Prosecution Insights
Last updated: April 19, 2026
Application No. 18/874,887

Mesh Segmentation and Mesh Segmentation Validation in Digital Dentistry

Non-Final OA: §101, §103
Filed: Dec 13, 2024
Examiner: GEDRA, OLIVIA ROSE
Art Unit: 3681
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Solventum Intellectual Properties Company
OA Round: 1 (Non-Final)

Grant Probability: 0% (At Risk)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 0%

Examiner Intelligence

Grants only 0% of cases.
Career Allow Rate: 0% (0 granted / 12 resolved; -52.0% vs TC avg)
Interview Lift: +0.0% (minimal +0% lift among resolved cases with interview)
Typical Timeline: 3y 0m avg prosecution; 39 currently pending
Career History: 51 total applications across all art units

Statute-Specific Performance

§101: 39.8% (-0.2% vs TC avg)
§103: 43.6% (+3.6% vs TC avg)
§102: 5.9% (-34.1% vs TC avg)
§112: 10.8% (-29.2% vs TC avg)

Based on career data from 12 resolved cases; deltas are vs. the Tech Center average estimate.

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This action is in reply to the current action filed on 12/13/2024. Claims 1-20 are currently pending and have been examined.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/13/2024 was filed before the mailing date of the first action on the merits. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claim 20 is objected to for stating “processor to g generate”. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 USC § 101 as being directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1 Analysis: Independent Claims 1 and 17 are within the four statutory categories. Claims 1 and 17 are directed to a method and a system, respectively. Dependent Claims 2-16 and 18-20 are further directed to a method and a system, respectively, and therefore also fall into one of the four statutory categories.
Step 2A Analysis – Prong One:

Claim 1, which is indicative of the inventive concept, recites the following:

A computer-implemented method for training one or more neural networks to automatically validate digitally generated tooth segmentation data used in digital oral care, the method comprising:

receiving, by one or more computer processors, a first digital 3D oral care representation of a patient's teeth, wherein one or more aspects of the first representation have been assigned labels by one or more machine learning models having been trained to predict one more labels describing a segmentation of the first representation;

receiving, by the one or more computer processors, a second 3D oral care digital representation of the patient's teeth, wherein one or more aspects of the second representation having predefined labels assigned thereto;

determining, by the one or more computer processors, whether the labels on the one or more aspects of the first representation are substantially similar to the labels on the corresponding one or more aspects of the second representation; and

automatically training, by the one or more computer processors, the one or more machine learning model based on the results of the comparison.

The limitations as shown in underline above, given the broadest reasonable interpretation, cover the abstract idea of certain methods of organizing human activity because they recite managing personal behavior or relationships or interactions between people (i.e., social activities, teaching, and following rules or instructions, and/or a mental process that a neurologist should follow when testing a patient for nervous system malfunctions – in this case, receiving a first 3D representation of a patient’s teeth which have been assigned labels, receiving a second representation having predefined labels, and determining whether the labels of the first representation are substantially similar to the second representation), e.g., see MPEP 2106.04(a)(2).
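Stripped of the legal framing, the "determining" step recited above is a per-aspect label comparison between a model-predicted segmentation and a reference segmentation. Below is a minimal sketch, assuming per-aspect (e.g., per-face) string labels and a hypothetical agreement threshold; nothing here comes from the application beyond the compare-then-decide structure, and all names are illustrative:

```python
# Hypothetical sketch of the claimed comparison step: labels predicted by a
# segmentation model vs. predefined reference labels on the same mesh aspects.
def labels_substantially_similar(predicted, reference, threshold=0.95):
    """Return (is_similar, agreement) for two equal-length label sequences.
    The 0.95 threshold is an assumed stand-in for "substantially similar"."""
    if len(predicted) != len(reference):
        raise ValueError("representations must cover the same aspects")
    matches = sum(p == r for p, r in zip(predicted, reference))
    agreement = matches / len(reference)
    return agreement >= threshold, agreement

# Example: per-face labels ("T1" = tooth 1, "G" = gingiva)
pred = ["T1", "T1", "G", "T2", "G"]
ref  = ["T1", "T1", "G", "T2", "T2"]
similar, score = labels_substantially_similar(pred, ref)  # score == 0.8, similar is False
```

A failed comparison would then feed the "automatically training" step, i.e., the mismatching aspects become training signal for the segmentation model.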
Any limitations not identified above as part of the abstract idea are deemed “additional elements” and will be discussed in further detail below.

Dependent Claims 2-12, 14-16, and 18-20 include other limitations directed toward the abstract idea. For example, Claim 2 recites the labels on the one or more aspects of the second representation are assigned by a domain expert; Claims 3 and 18 recite generating one or more suggestions of how to correct the first representation when it is determined that the first representation is not correctly labelled; Claim 4 recites the first representation describes at least one of: one or more teeth of the patient, one or more non-organic structures, and one or more gums of the patient; Claims 5 and 19 recite the labels on the aspects of the first representation describe a boundary between one or more teeth of the patient and one or more gums of the patient; Claim 6 recites the labels on the one or more aspects of the first representation describe a boundary between one or more teeth of the patient and one or more non-organic structures; Claim 7 recites the labels on the one or more aspects of the first representation describe a boundary between one portion of the gums of the patient and another portion of the gums of the patient; Claim 8 recites the labels on the one or more aspects of the first representation describe a boundary between one portion of a tooth of the patient and another portion of that tooth; Claim 9 recites the labels on the one or more aspects of the first representation describe a boundary between the facial side of a tooth of the patient and the lingual side of that tooth; Claims 10 and 20 recite generating one or more two-dimensional (2D) representations based at least in part on the first representation; Claim 11 recites classifying the one or more 2D representations; Claim 12 recites classifying one or more 3D oral care representations; Claim 14 recites generating output that specifies whether the aspects of the first representation have not been correctly labelled; Claim 15 recites determining that aspects of the first representation have not been correctly labeled; and Claim 16 recites the determining comprises computing a loss value.

These limitations only serve to further narrow the abstract idea, and a claim may not preempt abstract ideas, even if the judicial exception is narrow, e.g., see MPEP 2106.04. Additionally, any limitations in dependent Claims 2-16 and 18-20 not addressed above are deemed additional elements to the abstract idea and will be further addressed below. Hence, dependent Claims 2-12, 14-16, and 18-20 are nonetheless directed towards fundamentally the same abstract idea as independent Claims 1 and 17.

Step 2A Analysis – Prong Two:

Claims 1 and 17 are not integrated into a practical application because the additional elements (i.e., the non-underlined limitations above – in this case, the computer processors, machine learning model, and the digital [representation] of Claim 1, and the computer processors, non-transitory computer-readable storage, machine learning model, and the digital [representation] of Claim 17) are recited at a high level of generality (i.e., as a generic processor performing generic computer functions) such that they amount to no more than mere instructions to apply an exception using generic computer parts. For example, Applicant’s specification explains that the processing unit includes processing circuitry that may include one or more processors 104 and memory 106 that, in some examples, provide a computer platform for executing an operating system 116,…Processors 104 are coupled to one or more I/O interfaces 114, which provide I/O interfaces for communicating with devices such as a keyboard, controllers, display devices, image capture devices, other computing systems, and the like…Additionally, processors 104 may be coupled to electronic display 108 (Applicant’s specification, ¶ 0036).
In general, a machine learning model can be trained to validate datasets to be used for digital dentistry or digital orthodontics. In some implementations, a machine learning model, such as a neural network, can be used to validate 2D raster image views of the 3D data [0113]. If a sufficient number of aspects do not receive a passing accuracy score, the system 100 can generate information as to why one or more aspects of the representation failed, and in some implementations automatically train the one or more neural networks based on the results and then perform method 1500 again, leveraging the additional training of the neural networks to see if a passing score can be achieved [0145]. Storage units 134 may include a computer-readable storage medium or computer-readable storage device [0039].

Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on the abstract idea. Therefore, independent Claims 1 and 17 are directed to an abstract idea without a practical application.

Dependent Claims 3-15 and 18-20 recite additional elements. Claims 3 and 18 recite a digital representation and specify generating suggestions of how to correct the first digital representation. Claim 4 recites the previously recited digital representation and specifies the digital representation describes teeth, one or more non-organic structures, and one or more gums. Claims 5 and 19 recite the previously recited digital representation and specify the labels in the digital representation describe a boundary between teeth of the patient and gums. Claim 6 recites the previously recited digital representation and specifies the labels in the digital representation describe a boundary between teeth of the patient and non-organic structures. Claim 7 recites the previously recited digital representation and specifies the labels in the digital representation describe a boundary between the gums of the patient and another portion of gums of the patient. Claim 8 recites the previously recited digital representation and specifies the labels in the digital representation describe a boundary between one portion of a tooth and another portion of a tooth of the same patient. Claim 9 recites the previously recited digital representation and specifies the labels in the digital representation describe a boundary between the facial side of a tooth of the patient and the lingual side of that tooth. Claims 10 and 20 recite the previously recited computer processor and specify the computer processor generates 2D representations based on the first representation. Claim 11 recites the previously recited machine learning models and specifies the machine learning models are trained to classify the one or more 2D representations. Claim 12 recites the previously recited machine learning models and specifies the machine learning models are trained to classify the one or more 3D oral care representations. Claim 13 recites the previously recited machine learning models and the new neural network and specifies the machine learning model is a neural network. Claim 14 recites the previously recited computer processor and specifies the computer processor generates output that specifies whether the aspects of the first digital representation have not been labeled correctly. Claim 15 recites the previously recited computer processor and specifies the computer processor determines that the aspects of the first digital representation have not been labeled correctly.

However, these additional elements are used in their expected fashion, so they do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on the abstract idea.
These additional elements amount to no more than mere instructions to apply an exception, and hence do not integrate the aforementioned abstract idea into a practical application.

Step 2B Analysis:

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of the computer processors, machine learning model, and the digital [representation] of Claim 1, and the computer processors, non-transitory computer-readable storage, machine learning model, and the digital [representation] of Claim 17 amount to no more than mere instructions to apply an exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept (“significantly more”). MPEP 2106.05(I)(A) indicates that merely stating “apply it” or an equivalent to the abstract idea cannot provide an inventive concept (“significantly more”).

Dependent Claims 2 and 16 do not recite any additional elements and only serve to narrow the abstract idea. Claim 2 recites the labels on the one or more aspects of the second representation are assigned by a domain expert. Claim 16 recites the determining comprises computing a loss value.

Dependent Claims 3-12, 14-15, and 18-20 recite previously recited additional elements, which are not eligible for the reasons stated above, and further narrow the abstract idea. Claims 3 and 18 recite a digital representation and specify generating suggestions of how to correct the first digital representation. Claim 4 recites the previously recited digital representation and specifies the digital representation describes teeth, one or more non-organic structures, and one or more gums. Claims 5 and 19 recite the previously recited digital representation and specify the labels in the digital representation describe a boundary between teeth of the patient and gums. Claim 6 recites the previously recited digital representation and specifies the labels in the digital representation describe a boundary between teeth of the patient and non-organic structures. Claim 7 recites the previously recited digital representation and specifies the labels in the digital representation describe a boundary between the gums of the patient and another portion of gums of the patient. Claim 8 recites the previously recited digital representation and specifies the labels in the digital representation describe a boundary between one portion of a tooth and another portion of a tooth of the same patient. Claim 9 recites the previously recited digital representation and specifies the labels in the digital representation describe a boundary between the facial side of a tooth of the patient and the lingual side of that tooth. Claims 10 and 20 recite the previously recited computer processor and specify the computer processor generates 2D representations based on the first representation. Claim 11 recites the previously recited machine learning models and specifies the machine learning models are trained to classify the one or more 2D representations. Claim 12 recites the previously recited machine learning models and specifies the machine learning models are trained to classify the one or more 3D oral care representations. Claim 14 recites the previously recited computer processor and specifies the computer processor generates output that specifies whether the aspects of the first digital representation have not been labeled correctly. Claim 15 recites the previously recited computer processor and specifies the computer processor determines that the aspects of the first digital representation have not been labeled correctly. Claim 13 recites new additional elements.
Claim 13 recites the previously recited machine learning models and specifies the machine learning model is a neural network (a new additional element). Hence, Claims 1-20 do not include any additional elements that amount to “significantly more” than the judicial exception. Thus, taken alone, the additional elements do not amount to significantly more than the abstract idea identified above. Furthermore, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually, and there is no indication that the combination of elements improves the functioning of a computer or improves any other technology; their collective functions merely provide conventional computer implementation. Therefore, whether taken individually or as an ordered combination, Claims 1-20 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-6, 8-13, 16, and 19 are rejected under 35 U.S.C. § 103 as being unpatentable over Pei et al. (CN 113139908 A1) in view of Brown et al. (US 20210073998 A1).

Regarding Claim 1, Pei discloses the following:

A computer-implemented method for training one or more neural networks to automatically validate digitally generated tooth segmentation data used in digital oral care, the method comprising: (Pei discloses the invention claims a three-dimensional dentition segmentation and labeling method, automatic segmentation and labeling based on a three-dimensional dentition grid model, which can effectively realize the automatic segmentation and labeling of the three-dimensional dentition grid model (p. 3, ¶ 0001, Fig. 1).)

receiving, by one or more computer processors, a first digital 3D oral care representation of a patient's teeth, (Pei discloses the digital scanning model of the plaster model of human body dentition is used; the data form is a three-dimensional grid; the original model is down-sampled by means of a secondary side shrinkage simplification algorithm to obtain a model containing about 15000 vertices (p. 4, ¶ 0009).)

receiving, by the one or more computer processors, a second 3D oral care digital representation of the patient's teeth, wherein one or more aspects of the second representation…(Pei discloses the graph [uses] convolutional neural network module on characteristic guide provided by the method uses a supervised training method; the manual label of the three-dimensional dentition grid model vertex is used for optimizing the classification performance of the network (p. 5, ¶ 0007).)
determining, by the one or more computer processors, whether …aspects of the first representation are substantially similar to…the corresponding one or more aspects of the second representation; (Pei discloses the graph convolutional neural network module on characteristic guide provided by the method uses a supervised training method; the manual label of the three-dimensional dentition grid model vertex is used for optimizing the classification performance of the network. Lcls is cross entropy of network output label and manual label (p. 5, ¶ 0007). The cross entropy of the network output and the manual label is interpreted as determining the similarity between the two outputs.)

and…training, by the one or more computer processors, the one or more machine learning model (Pei discloses the graph convolutional neural network module on characteristic guide provided by the method uses a supervised training method; the manual label of the three-dimensional dentition grid model vertex is used for optimizing the classification performance of the network (p. 5, ¶ 0007).)

Pei does not disclose automatically training the machine learning model based on the comparison, which is met by Brown:

wherein one or more aspects of the first representation have been assigned labels… having predefined labels assigned thereto; (Brown teaches FIG. 4B shows the example projection of FIG. 4A after analyzing and labeling the 2D image of FIG. 4A (and others) and applying this analysis and labeling to a 3D model of the subject's dentition; in FIG. 4B just the segmented teeth are shown [0051].)

…one or more machine learning models having been trained to predict one more labels describing a segmentation of the first representation; (Brown teaches FIGS. 3A-3C illustrate training an agent, e.g., a machine learning agent, to recognize individual teeth from images of a subject's teeth. FIG. 3A illustrates mapping of height map inputs to manually identified segmented images (FIG. 3B), and using this information to predict labels from 2D height maps (FIG. 3C) [0049].)

…automatically training the one or more machine learning model (Brown teaches the 3D oral cavity modeling system 1910 may process the 2D images using manual, semi-manual, or automatic processing techniques…the processing may be driven, performed and/or guided by a machine learning agent. The machine learning agent may be trained on a variety of different datasets and may be adaptively trained, so that it may update/modify its behavior over time [0076].)

It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the method and system for a first label of a dentition assigned by machine learning models and a second representation having predefined labels, determining that the labels are substantially similar, and training the machine learning model as disclosed by Pei to incorporate the machine learning model being automatically trained as a result of the comparison as taught by Brown. This modification would create a method capable of accurately segmenting, modifying, updating, and processing dental models (see Brown, ¶ 0004).

Regarding Claim 17, this claim recites limitations that are substantially similar to those recited in Claim 1 above; thus, the same rejection applies. Pei further discloses:

A system (Pei discloses the system performs feature learning and training point classifier to the three-dimensional dentition surface grid model based on the feature-oriented graph convolutional neural network module on the basis of the multi-classification cross entropy loss function, providing shape consistency constraint, boundary consistency constraint and classification label smoothing constraint, the three-dimensional dentition grid model for efficient and accurate automatic segmentation and tooth position label.)
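Pei's supervised scheme, as quoted above, optimizes the network against the manual vertex labels via the cross-entropy term Lcls. A minimal sketch of such a loss over per-vertex class probabilities follows (plain Python; the variable names and toy values are illustrative, not taken from Pei):

```python
import math

def cross_entropy_loss(probs, manual_labels):
    """Mean cross-entropy between per-vertex predicted class probabilities
    and manual (ground-truth) class indices, in the spirit of Pei's Lcls."""
    total = 0.0
    for p, y in zip(probs, manual_labels):
        total += -math.log(p[y])  # negative log-probability of the true class
    return total / len(manual_labels)

# Two vertices, three classes; manual labels are class indices.
probs = [[0.7, 0.2, 0.1],
         [0.1, 0.8, 0.1]]
loss = cross_entropy_loss(probs, [0, 1])  # about 0.29; 0.0 for a perfect prediction
```

Because the loss shrinks as the network's output agrees with the manual labels, it doubles as the similarity measure the examiner reads into the "determining" limitation.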
Regarding Claim 4, Pei and Brown teach the limitations as seen in the rejection of Claim 1 above. Pei further discloses:

the first digital representation describes at least one of: one or more teeth of the patient, one or more non-organic structures, and one or more gums of the patient. (Pei discloses the network outputs the classification label of each kind of dental crown; dividing boundary between teeth and gum and dividing boundary between teeth; dividing the dental crown and boundary thereof for further enhancing feature extraction and vertex classification performance of the network model (p. 3, ¶ 0002). The Examiner interprets the dental crown as a non-organic structure.)

Regarding Claim 5, Pei and Brown teach the limitations as seen in the rejection of Claim 1 above. Pei further discloses:

the labels on the one or more aspects of the first digital representation describe a boundary between one or more teeth of the patient and one or more gums of the patient. (Pei discloses the invention uses graph convolution neural network module on feature guide to perform feature learning and classification for three-dimensional dentition grid model vertex;… the network outputs the classification label… dividing boundary between teeth and gum and dividing boundary between teeth;…(p. 3, ¶ 0002).)

Regarding Claim 19, this claim recites limitations that are substantially similar to those recited in Claim 5 above; thus, the same rejection applies.

Regarding Claim 6, Pei and Brown teach the limitations as seen in the rejection of Claim 1 above. Pei further discloses:

aspects of the first digital representation describe a boundary between… one or more non-organic structures. (Pei discloses the vertex in the model comprises a dental crown and a gum. Considering the dentition arrangement and tooth shape change, the invention designs the dental crown shape distribution and dental crown boundary curvature constraint to improve the dental crown boundary segmentation confusion (p. 3, ¶ 0002).)

Pei does not disclose the following limitations met by Brown:

the labels… describe a boundary between one or more teeth of the patient (Brown teaches in variations the segmentation agent may be a machine-learning agent that is trained on one or more datasets to recognize boundaries between teeth [0103].)

It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the method and system for a first label of a dentition assigned by machine learning models and a second representation having predefined labels, determining that the labels are substantially similar, and training the machine learning model as disclosed by Pei to incorporate boundaries being between teeth as taught by Brown. This modification would create a method capable of accurately segmenting, modifying, updating, and processing dental models (see Brown, ¶ 0004).

Regarding Claim 8, Pei and Brown teach the limitations as seen in the rejection of Claim 1 above. Pei further discloses:

the labels on the one or more aspects of the first digital representation describe a boundary between one portion of a tooth of the patient and another portion of that tooth. (Pei discloses the purpose of the method is to obtain the category label of each vertex on the grid model, wherein N is the three-dimensional dentition model vertex number and K is the number of category labels…, comprising a gum; left side cutting tooth; left side cutting tooth; left side tip tooth; left side first front grinding tooth; left side second front grinding tooth; left side first grinding tooth; left side second grinding tooth; right side middle cutting tooth; right side cutting tooth; right side sharp tooth; right side first front grinding tooth; right side second front grinding tooth; right side first grinding tooth; right side second grinding tooth;…(p. 4, ¶ 0009).)
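Pei's quoted vertex labeling assigns each of the N mesh vertices one of K category labels (gum plus the individual tooth positions). A classifier network typically realizes this as an argmax over K per-vertex scores; here is a minimal sketch with an abridged, hypothetical label set:

```python
# Hypothetical sketch: turning per-vertex class scores into category labels,
# one of K classes per vertex (gum plus tooth positions), as in Pei's
# per-vertex labeling. The label set below is abridged to K=4 for illustration.
LABELS = ["gum", "left incisor", "left canine", "right incisor"]

def label_vertices(scores):
    """scores: one list of K class scores per vertex.
    Returns the highest-scoring category label for every vertex."""
    return [LABELS[max(range(len(s)), key=s.__getitem__)] for s in scores]

verts = [[0.9, 0.05, 0.03, 0.02],   # -> "gum"
         [0.1, 0.7, 0.1, 0.1]]      # -> "left incisor"
labels = label_vertices(verts)
```

The tooth-to-tooth and tooth-to-gum boundaries the claims recite then fall out wherever adjacent vertices carry different labels.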
Regarding Claim 9, Pei and Brown teach the limitations as seen in the rejection of Claim 8 above. Pei further discloses:

the labels on the one or more aspects of the first digital representation… (Pei discloses the invention claims a three-dimensional dentition segmentation and labeling method, automatic segmentation and labeling based on a three-dimensional dentition grid model, which can effectively realize the automatic segmentation and labeling of the three-dimensional dentition grid model (p. 3, ¶ 0001, Fig. 1).)

Pei does not disclose the following limitations met by Brown:

…describe a boundary between the facial side of a tooth of the patient and the lingual side of that tooth. (Brown teaches generating a plurality of interproximal separation planes between teeth of a digital three-dimensional (3D) model of a subject’s oral cavity; collecting two-dimensional (2D) images corresponding to each of one or more of: buccal, lingual and occlusal views,…[0015, see Figs. 1A-D]. The Examiner interprets the buccal side of the tooth as the facial side.)

It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the method and system for a first label of a dentition assigned by machine learning models and a second representation having predefined labels, determining that the labels are substantially similar, and training the machine learning model as disclosed by Pei to incorporate labeling the facial and lingual sides of the tooth as taught by Brown. This modification would create a method capable of accurately studying the dentitions of subjects (see Brown, ¶ 0004).

Regarding Claim 10, Pei and Brown teach the limitations as seen in the rejection of Claim 1 above. Pei does not disclose the following limitations met by Brown:

generating, by the one or more computer processors, one or more two dimensional (2D) representations based on at least in part the first representation. (Brown teaches in some variations the apparatus may include identifying interproximals and calculating directions to view the 3D model in order to optimally see the interproximal space. The views that best (e.g., maximally) show the interproximal spacing between two or more teeth may be used to generate slices (e.g., 2D images, as described above) that may in turn be processed as described above…[0114].)

It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the method and system for a first label of a dentition assigned by machine learning models and a second representation having predefined labels, determining that the labels are substantially similar, and training the machine learning model as disclosed by Pei to incorporate generating a 2D representation of the model based on the 3D model as taught by Brown. This modification would create a method capable of accurately segmenting, modifying, updating, and processing dental models (see Brown, ¶ 0004).

Regarding Claim 20, this claim recites limitations that are substantially similar to those recited in Claim 10 above; thus, the same rejection applies.

Regarding Claim 11, Pei and Brown teach the limitations as seen in the rejection of Claim 10 above. Pei does not disclose the following limitations met by Brown:

the one or more machine learning models are trained to classify the one or more 2D representations. (Brown teaches the 3D oral cavity modeling system 1910 may use a conditional Generative Adversarial Network (cGAN) and/or any other machine learning system to classify data from dental scans and/or dental images into dental classes. As noted herein, the 3D oral cavity modeling system 1910 may be trained with a library of labeled and/or accurately modeled 2D dental scans and/or dental images [0079].)
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the method and system for a first label of a dentition assigned by machine learning models and a second representation having predefined labels, determining that the labels are substantially similar, and training the machine learning model as disclosed by Pei to incorporate classifying 2D representations of the model as taught by Brown. This modification would create a method capable of accurately segmenting, modifying, updating, and processing dental models (see Brown, ¶ 0004).

Regarding Claim 12, Pei and Brown teach the limitations as seen in the rejection of Claim 1 above. Pei further discloses:

the one or more machine learning models have been trained to classify one or more 3D oral care representations. (Pei discloses automatic tooth segmentation and annotation is a challenging problem in computer-assisted oral medical image processing. Existing geometry-based methods include a curvature threshold based approach and a motion profile tracking method. However, the method based on curvature threshold only obtains a coarse initial estimation of the tooth boundary, especially at the tongue-side tooth-gingival boundary where the curvature change is not obvious, resulting in noise in the local segmentation result (p. 2, ¶ 0004).)

Regarding Claim 13, Pei and Brown teach the limitations as seen in the rejection of Claim 12 above. Pei further discloses:

at least one of the one or more machine learning models is a neural network. (Pei discloses the invention uses graph convolutional neural network module on feature guide to perform feature learning and classification for three-dimensional dentition grid model vertex (p. 3, ¶ 0002).)

Regarding Claim 16, Pei and Brown teach the limitations as seen in the rejection of Claim 1 above.
Pei further discloses: the determining comprises computing a loss value that quantifies one or more differences between the first representation and the second representation. (Pei discloses the manual label of the three-dimensional dentition grid model vertex is used for optimizing the classification performance of the network; Lcls is the cross entropy of the network output label and the manual label…(p. 5, ¶ 0007).)

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Pei and Brown in view of Cramer et al. (US 20220262007 A1).

Regarding Claim 2, Pei and Brown teach the limitations as seen in the rejection of Claim 1 above.

Pei and Brown do not teach the following limitations met by Cramer: wherein the labels on the one or more aspects of the second representation are assigned by a domain expert. (Cramer teaches a human technician's manual segmentation of scan data can be input as a ground truth into the machine learning model,…[0063]. The Examiner interprets the human technician as being the domain expert.)

It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the method and system for a first label of a dentition assigned by machine learning models and a second representation having predefined labels, determining that the labels are substantially similar, and training the machine learning model as disclosed by Pei to incorporate the labels being assigned by a domain expert as taught by Cramer. This modification would create a method capable of ensuring additional dental features such as the gingiva or interproximal spaces between teeth are not misidentified or missed entirely (see Cramer, ¶ 0005).

Claims 3, 14-15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Pei and Brown in view of An et al. (US 20220300767 A1).

Regarding Claim 3, Pei and Brown teach the limitations as seen in the rejection of Claim 1 above.
Pei and Brown do not teach the following limitations met by An: generating, by the one or more computer processors, one or more suggestions of how to correct the first digital representation when it is determined, based on the analyzing, that the first digital representation is not correctly labelled. (An teaches the correcting the definition labels of the at least some images in the to-be-expanded images, includes: displaying a correction interface, wherein the correction interface includes a correction control, at least some images in the to-be-expanded images and corresponding definition labels;… [0012]. The correction interface displays at least some face images in the to-be-expanded images and corresponding definition labels obtained according to the extracted definition feature (i.e., face images with to-be-corrected definition labels and the to-be-corrected definition labels), and a correction control (for example, the correction control may include five selection controls representing definition levels 1-5 below the face image). In response to operations of the correction control by the annotator, a corrected definition label of the corresponding face image is obtained [0111].)

It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the method and system for a first label of a dentition assigned by machine learning models and a second representation having predefined labels, determining that the labels are substantially similar, and training the machine learning model as disclosed by Pei to incorporate determining if the label is not correct and how to fix it as taught by An. This modification would create a method capable of determining whether images meet definition requirements and whether they can be used for subsequent applications (see An, ¶ 0003).
Regarding Claim 18, this claim recites limitations that are substantially similar to those recited in Claim 3 above; thus, the same rejection applies.

Regarding Claim 14, Pei and Brown teach the limitations as seen in the rejection of Claim 11 above.

Pei and Brown do not teach the following limitations met by An: automatically generating, by the one or more computer processors, output… (An teaches the correcting the definition labels of the at least some images in the to-be-expanded images, includes: displaying a correction interface, wherein the correction interface includes a correction control, at least some images in the to-be-expanded images and corresponding definition labels; and in response to operation of the correction control, correcting the definition label of the corresponding image in the correction interface [0012].) …that specifies whether the one or more aspects of the first digital representation has not been correctly labelled. (An teaches in response to operations of the correction control by the annotator, a corrected definition label of the corresponding face image is obtained [0111].)

It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the method and system for a first label of a dentition assigned by machine learning models and a second representation having predefined labels, determining that the labels are substantially similar, and training the machine learning model as disclosed by Pei to incorporate outputting that the label is not correct as taught by An. This modification would create a method capable of determining whether images meet definition requirements and whether they can be used for subsequent applications (see An, ¶ 0003).

Regarding Claim 15, Pei and Brown teach the limitations as seen in the rejection of Claim 1 above.

Pei further discloses: performing, by the computer processor, the method of claim 1. (See the rejection of Claim 1 above.)
Pei and Brown do not teach the following limitations met by An: when it is determined, based on the analyzing, that one or more aspects of the first digital representation has not been correctly labeled, (An teaches the correcting the definition labels of the at least some images in the to-be-expanded images, includes: displaying a correction interface, wherein the correction interface includes a correction control, at least some images in the to-be-expanded images and corresponding definition labels;… [0012]. The correction interface displays at least some face images in the to-be-expanded images and corresponding definition labels obtained according to the extracted definition feature (i.e., face images with to-be-corrected definition labels and the to-be-corrected definition labels), and a correction control (for example, the correction control may include five selection controls representing definition levels 1-5 below the face image). In response to operations of the correction control by the annotator, a corrected definition label of the corresponding face image is obtained [0111].)

It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the method and system for a first label of a dentition assigned by machine learning models and a second representation having predefined labels, determining that the labels are substantially similar, and training the machine learning model as disclosed by Pei to incorporate determining if the label is not correct as taught by An. This modification would create a method capable of determining whether images meet definition requirements and whether they can be used for subsequent applications (see An, ¶ 0003).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Pei and Brown in view of Choi et al. (KR 20180049302 A).

Regarding Claim 7, Pei and Brown teach the limitation as seen in the rejection of Claim 1 above.
Pei further discloses: the labels on the one or more aspects of the first digital representation describe a boundary between one portion of the gums of the patient… (Pei discloses dividing boundary between teeth and gum …(p. 3, ¶ 0002). [T]he boundary of the dental crown and the gum and the dental crown has a concave boundary;…(p. 6, ¶ 0004).)

Pei and Brown do not teach the following limitations met by Choi: …and another portion of the gums of the patient. (Choi teaches the tongue area dividing unit 305 can divide the gum area obtained by dividing the tongue area into three equal parts in the up and down directions. The tongue area dividing unit 305 can calculate the initial centerline of the tongue by applying linear interpolation to the center points of the trisected lines (p. 5, ¶ 0003).)

It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the method and system for a first label of a dentition assigned by machine learning models and a second representation having predefined labels, determining that the labels are substantially similar, and training the machine learning model as disclosed by Pei to incorporate the boundaries being between portions of gums as taught by Choi. This modification would create a method capable of effectively acquiring and analyzing images of the mouth (see Choi, p. 9, ¶ 0002).

Relevant Art Not Currently Being Applied

The following reference is considered pertinent to Applicant's disclosure but is not currently being applied: Claessen et al. (EP-3462373-A1) teaches a system for the automated classification of 3D teeth images using a trained deep neural network to determine the most feasible assignment of labels to images.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLIVIA R GEDRA whose telephone number is (571)270-0944.
The examiner can normally be reached Monday - Friday 8:00am-5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Peter H Choi, can be reached at (469)295-9171. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OLIVIA R. GEDRA/
Examiner, Art Unit 3681

/PETER H CHOI/
Supervisory Patent Examiner, Art Unit 3681
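A note on the mechanics behind the Claim 16 rejection: the loss value the Examiner points to in Pei is a per-vertex cross entropy between the network's output labels and the manually assigned labels. A minimal sketch of that computation follows; the function name, class count, and example vertex data are illustrative assumptions for this page, not drawn from Pei or the Office Action.

```python
import math

def cross_entropy_loss(predicted_probs, manual_labels):
    """Mean cross entropy between per-vertex predicted class
    probabilities and manually assigned (ground-truth) labels.

    predicted_probs: one probability distribution per mesh vertex,
                     over the label classes (each sums to 1).
    manual_labels:   one ground-truth class index per vertex.
    """
    total = 0.0
    for probs, label in zip(predicted_probs, manual_labels):
        # Penalize low probability assigned to the correct class;
        # clamp to avoid log(0) for a confident wrong prediction.
        total += -math.log(max(probs[label], 1e-12))
    return total / len(manual_labels)

# Three mesh vertices, two hypothetical classes (e.g., tooth vs. gingiva).
preds = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]]
truth = [0, 1, 0]
loss = cross_entropy_loss(preds, truth)  # ≈ 0.2798
```

A loss of zero means every vertex's predicted distribution puts all its mass on the manual label; minimizing this quantity during training is what Pei describes as optimizing the classification performance of the network against the manual labels.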

Prosecution Timeline

- Dec 13, 2024: Application Filed
- Jan 22, 2026: Non-Final Rejection (§101, §103)
- Apr 09, 2026: Interview Requested


Prosecution Projections

- Expected OA Rounds: 1-2
- Grant Probability: 0%
- With Interview: 0% (+0.0%)
- Median Time to Grant: 3y 0m
- PTA Risk: Low

Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
