DETAILED ACTION
In Applicant’s Response dated 1/22/2026, Applicant amended claims 1, 3, and 5-17, and argued against all rejections previously set forth in the Office action dated 10/23/2025.
Response to Arguments
Upon careful review of the cited references, the examiner finds that the references in combination do teach the amended limitations. Somasundaram teaches the limitation wherein the machine learning system outputs the identification result based on the coordinates for each of the plurality of points included in the three-dimensional data: the system inputs a 3D mesh with a set of vertices V specified in a 3D coordinate system X, Y, and Z, and outputs the predicted or identified tooth type of the presented tooth (see paragraph 40). Anssari discloses the aspect wherein the machine learning system is a neural network for classifying teeth. Therefore, the amended limitation is taught by the cited references, and Applicant’s argument is unpersuasive.
Allowable Subject Matter
Claim 6 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
With regard to claim 6, the cited prior art references do not teach the limitation of "The identification device according to claim 1, wherein in a case where the tooth corresponding to the three-dimensional data received by the input unit is an incisor in an upper jaw, a three-dimensional image corresponding to the three-dimensional data includes at least: an image of an area on an upper lip side; an image of an area on a palate side; and an image of an area on an incisal edge side, in a case where the tooth corresponding to the three-dimensional data received by the input unit is each of a canine and a molar in the upper jaw, the three-dimensional image corresponding to the three-dimensional data includes at least: an image of an area on a buccal side, an image of an area on a palate side, and an image of an occlusion area, in a case where the tooth corresponding to the three-dimensional data received by the input unit is an incisor in a lower jaw, the three-dimensional image corresponding to the three-dimensional data includes at least: an image of an area on a lower lip side, an image of an area on a tongue side, and an image of an area on the incisal edge side, and in a case where the tooth corresponding to the three-dimensional data received by the input unit is each of a canine and a molar in the lower jaw, the three-dimensional image corresponding to the three-dimensional data includes at least: an image of an area on the buccal side; an image of an area on the tongue side; and an image of the occlusion area."
Claim Rejections – 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 5, 7, 10-13, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Somasundaram (US 2017/0169562 A1) in view of Anssari (US 2020/0320685 A1), and further in view of Chen (CN 108491850 B).
With regard to claim 1:
Somasundaram discloses an identification device that identifies a type of a tooth (Embodiments of the present invention include an approach to recognize or identify the tooth type of a given tooth, paragraph 24: “Embodiments of the present invention include an approach to recognize or identify the tooth type of a given tooth by computing shape features from the 3D scanned surface mesh of the tooth. The approach includes use of a classifier that can discriminate between the 32 different tooth types. In one approach, the input to the algorithm is a segmented individual tooth, and the 3D mesh is processed to extract different shape features at each vertex on the mesh. The shape features over the entire tooth are consolidated into a single covariance matrix, which is then used as the input to a classification algorithm. Since the covariance of the features is used, this approach is robust to the orientation and alignment of the tooth scan. Alternatively, other forms of aggregation with desirable properties can be used, for example feature averaging, feature histograms, sparse coding of features, bag of features, or others. In another approach not using segmentation, teeth within a digital 3D model of an arch are identified based upon tooth widths and locations within the arch.”), the identification device comprising: an input unit that receives three-dimensional data including three-dimensional coordinates, obtained for each of a plurality of points forming the tooth (the system 3D-scans the teeth and obtains 3D points, paragraph 32: “The steps of methods 80 and 87 of tooth identification by point classification (corresponding with step 26 in FIG. 4) can be implemented as follows. The 3D scans of teeth are represented as triangulated meshes, comprising faces and vertices. The triangular mesh is a common representation of 3D surfaces and has two components. 
The first component, referred to as the vertices of the mesh, are simply the coordinates of the 3D points that have been reconstructed on the surface—a point cloud. The second component, the mesh faces, encodes the connections between points on the object and is an efficient way of interpolating between the discrete sample points on the continuous surface. Each face is a triangle defined by three vertices, resulting in a surface that can be represented as a set of small triangular planar patches.”); an identification unit that identifies a type of the tooth based on the three-dimensional data received by the input unit (the system uses the point model to identify the tooth type, paragraph 42: “Given an input 3D scan of a patient's dental arch, the point classification of step 26 as described above uses 3D mesh features along with learned models of 3D tooth shapes to predict the tooth types of the individual teeth. In particular, each segmented tooth is passed to a tooth type classifier, which computes the covariance descriptor of 3D mesh features over the entire tooth shape, and classifies this feature to one of thirty-two tooth types based on the learned classification model. In the aforementioned approach, the individual teeth are being classified independently of each other. There is not necessarily any influence on a tooth's structure, location, and predicted tooth type on the predicted tooth types for the neighboring teeth, or any other teeth in that particular patient's mouth. However, since the teeth are arranged in a particular order, they can be considered as a chain-connected graph of mesh objects, where each object is an individual tooth. Based on this layout, the labels of individual teeth will affect the labels of adjacent teeth. If the independent tooth recognition algorithm provides as output probabilities of likely labels for a particular tooth, then the ranked ordering of likely labels can be used for further refinement. 
For example, if one tooth object is assigned a particular label with very high probability, it is equally highly unlikely that any other tooth in the mouth will be assigned that same label, meaning the probability of that label in the other teeth would be down-weighted. This contextual information can thus be used to develop rules to adjust the weighting of the predicted probability of tooth labels. For example, given a location of a particular tooth within an arch and the predicted labels of neighboring teeth, the predicted label (identification) of the particular tooth can be adjusted to refine the accuracy of the prediction.”), and an estimation model including machine learning (the system uses machine learning to train the estimation model, paragraph 40: “Table 1 provides exemplary pseudocode for implementing the point classification (machine learning) training data algorithm. Table 2 provides exemplary pseudocode for implementing the point classification (machine learning) algorithm for tooth identification. TABLE-US-00001 TABLE 1 Pseudocode for Machine Learning Training for Tooth Identification Input: Multiple 3D meshes with sets of vertices V specified in 3D coordinate system X, Y, and Z. The mesh also has a set of triangulations or faces F based on the vertices. Each mesh corresponds to an individual segmented tooth. Also the ground truth labels in the form of the tooth type names as indicated by manual annotation. Output: A predictive model that is capable of classifying each tooth mesh according to its tooth type. Assumptions: Each individual mesh corresponds to an individual segmented tooth, without any gingiva. Method steps: 1 For each vertex in every mesh in the training set of data, compute the following features: a. Normal directions b. Absolute, mean, and Gaussian curvatures, and directions of maximum and minimum curvature c. Shape context d. Mesh Fourier e. Spin image f. Mesh local covariance g. 
PCA features 2 For each tooth mesh, aggregate the features over all vertices in that mesh to form a feature descriptor for the entire tooth mesh. This aggregation may be done by computing the vectorized log-covariance of all features across all vertices in the mesh. (An alternate aggregation approach may be used, such as histograms, or means, or others.) 3 Construct a data matrix X which is M × N where M is the total number of segmented tooth meshes and N is the total number of feature dimensions from Step 2. 4 Train a RUSBoosted decision tree classifier that can predict the labels corresponding to the tooth type. (An alternate classifier can be used.) TABLE-US-00002 TABLE 2 Pseudocode for Machine Learning Prediction for Tooth Identification Input: a 3D mesh with a set of vertices V specified in 3D coordinate system X, Y and Z. The mesh also has a set of triangulations or faces F based on the vertices. The mesh corresponds to an individual segmented tooth. Output: Predicted or identified tooth type of the presented tooth. Assumptions: The mesh corresponds to an individual segmented tooth without any gingiva. Method steps: 1 For each vertex v.sub.i in V, compute the following features: a. Normal directions b. Absolute, mean, and Gaussian curvatures, and directions of maximum and minimum curvature c. Shape context d. Mesh Fourier e. Spin image f. Mesh local covariance g. PCA features 2 Aggregate the features over all vertices in this mesh, to form a feature descriptor for the entire tooth mesh. 3 Construct a data vector X which is N × 1 dimensional where N is the total number of feature dimensions from Step 2 4 Predict using the learned decision tree RUSBoost classifier the label corresponding to the tooth type”) and an output unit that outputs an identification result obtained by the identification unit (see fig. 11 for the output from the user interface with identified teeth, paragraph 51: “FIG. 
11 is a diagram of a user interface illustrating a digital 3D model of upper and lower arches of teeth along with numbers associated with each tooth in the model. Those numbers can be correlated with predicted types of each tooth in the model using any of the tooth identification or prediction methods described herein. The user interface in FIG. 11 can be displayed on, for example, display device 16.”); wherein the estimation model is learned using learning data configured with tooth information corresponding to a type of the tooth associated with the coordinates (paragraphs 32 and 33: “The steps of methods 80 and 87 of tooth identification by point classification (corresponding with step 26 in FIG. 4) can be implemented as follows. The 3D scans of teeth are represented as triangulated meshes, comprising faces and vertices. The triangular mesh is a common representation of 3D surfaces and has two components. The first component, referred to as the vertices of the mesh, are simply the coordinates of the 3D points that have been reconstructed on the surface—a point cloud. The second component, the mesh faces, encodes the connections between points on the object and is an efficient way of interpolating between the discrete sample points on the continuous surface. Each face is a triangle defined by three vertices, resulting in a surface that can be represented as a set of small triangular planar patches. Each vertex is represented by a 243-dimensional feature vector, comprising a combination of feature descriptors, namely: vertex coordinates; magnitude and direction of minimum and maximum curvature; mean-, absolute- and Gaussian-curvature; vertex normals; mesh local covariance and its eigenvalues and eigenvectors; spin image features; shape context features; principal component analysis (PCA) features; and mesh Fourier features. These features are consolidated into a 243-dimensional feature descriptor per vertex, including but not limited to these features. Any subset of these features, as well as optional additional features, can also be used for tooth classification. Additional features can include tooth cross-sectional area, perimeter of a cross-section, tooth length, width, and height, surface area, volume, profiles as viewed along any dental plane (occlusal, facial, etc.), Radon transform features, bag-of-words descriptors, or other features.”) for each of the plurality of points included in the three-dimensional data (the identification model is trained using labeled 3D models of the teeth in the training data, paragraph 31: “Method 80 for the training phase involves: receiving a segmented tooth mesh with faces and vertices (step 82); computing features at each vertex of the tooth (step 83); computing an aggregated feature for the entire tooth (step 84); training the classifier by associating a tooth label 81 with the computed aggregated feature (step 85); and providing the trained tooth model (step 86). Method 87 for the test phase involves: receiving a segmented tooth mesh with faces and vertices (step 88); computing features at each vertex of the tooth (step 89); computing an aggregated feature for the entire tooth (step 90); obtaining from the trained tooth model a label for the computed aggregated feature (step 91); and providing a predicted tooth label for the segmented tooth (step 92).”); and the identification unit directly inputs the coordinates (paragraphs 32 and 33, quoted above) included in the three-dimensional data received by the input unit to the machine learning included in the estimation model to identify the type of the tooth corresponding to the coordinates (paragraphs 32 and 33, quoted above) for each of the plurality of points (the trained model is then used to identify the teeth based on the positions of the points, paragraph 40, quoted above) and the machine learning system outputs the identification result (Somasundaram, see fig. 11 for the numbering of the teeth, paragraph 51, quoted above) based on the coordinates for each of the plurality of points included in the three-dimensional data (paragraph 40, quoted above).
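For context, the aggregation recited in Somasundaram's Tables 1 and 2 (per-vertex shape features pooled into a single vectorized log-covariance descriptor per tooth) can be sketched as follows. This is an illustrative reconstruction, not Somasundaram's actual code; it assumes the per-vertex features have already been computed and stacked into an array.

```python
import numpy as np
from scipy.linalg import logm

def tooth_descriptor(vertex_features):
    """Aggregate per-vertex shape features into one fixed-length
    descriptor via the vectorized log-covariance (Table 1, step 2)."""
    # vertex_features: (num_vertices, num_features) array of per-vertex
    # features (normals, curvatures, shape context, spin images, etc.)
    cov = np.cov(vertex_features, rowvar=False)    # (N, N) feature covariance
    cov += 1e-8 * np.eye(cov.shape[0])             # regularize so logm is stable
    log_cov = logm(cov).real                       # matrix logarithm of covariance
    iu = np.triu_indices(cov.shape[0])             # symmetric: keep upper triangle
    return log_cov[iu]                             # N*(N+1)/2 descriptor vector
```

Because the covariance is computed over all vertices, the descriptor has a fixed length regardless of mesh size, which is what lets a single classifier (e.g. a boosted decision tree) consume teeth with different vertex counts.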
Somasundaram does not disclose the aspect wherein the machine learning system is a neural network.
However, Anssari discloses the aspect wherein the machine learning system is a neural network for classifying teeth (the system uses a neural network to identify tooth classes, including 32 possible tooth types or a different classification such as incisor, canine, molar, etc., paragraph 62: “This way, two sets of classification results per individual tooth object may be identified, a first set of classification results classifying voxels into different voxel classes (e.g. individual tooth classes, or 32 possible tooth types) generated by the first 3D deep neural network and a second set of classification results classifying a voxel representation of an individual tooth into different tooth classes (e.g. again 32 possible tooth types, or a different classification such as incisor, canine, molar, etc.) generated by the second 3D deep neural network. The plurality of tooth objects forming (part of) a dentition may finally be post-processed in order to determine the most accurate taxonomy possible, making use of the predictions resulting from the first and, optionally, second neural network, which are both adapted to classify 3D data of individual teeth.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Anssari to Somasundaram so that the estimation model is trained as a neural network, enabling more precise estimation of the tooth type.
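As an illustrative sketch only (not Anssari's actual architecture, which uses 3D deep neural networks over voxel representations), a minimal feed-forward classifier producing a probability distribution over 32 tooth types from a pooled tooth descriptor might look like:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class ToothTypeNet:
    """Two-layer network mapping a tooth descriptor to a probability
    distribution over 32 tooth-type classes (illustrative only)."""
    def __init__(self, in_dim, hidden=64, classes=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, classes))
        self.b2 = np.zeros(classes)

    def predict_proba(self, x):
        h = np.maximum(0.0, x @ self.w1 + self.b1)  # ReLU hidden layer
        return softmax(h @ self.w2 + self.b2)       # per-class probabilities
```

The softmax output is what enables the down-weighting of already-assigned labels among neighboring teeth described by Somasundaram: each tooth yields a ranked list of class probabilities rather than a single hard label.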
Somasundaram and Anssari do not disclose an input unit that receives three-dimensional data including three-dimensional coordinates, each representing an absolute position (x, y, z) with respect to a predetermined position.
However, Chen discloses the aspect of an identification device comprising: an input unit that receives three-dimensional data including three-dimensional coordinates (“wherein the training set includes, for at least one set of teeth of a three-dimensional tooth mesh model, related data of manually extracted feature points; the related data comprises the coordinate of each feature point in the coordinate system of the whole jaw and the whole jaw itself. That is, a physician can collect multiple sets of related data of manually extracted tooth feature points and import these data into the device for automatically extracting feature points of a three-dimensional tooth mesh model according to an embodiment of the present invention for analysis; by operating on these data, the device converts the feature point coordinates in the coordinate system of the whole jaw into the corresponding coordinates of the three-dimensional tooth mesh model. The three-dimensional tooth mesh model corresponding to a feature point refers to the three-dimensional tooth mesh model in the training set to which that feature point belongs, that is, the three-dimensional tooth mesh model from which the feature point was extracted.”), each representing an absolute position (x, y, z) with respect to a predetermined position, obtained for one or more points forming the tooth (wherein (x′, y′, z′) is the absolute coordinate of the feature point in the tooth coordinate system to which it belongs; xmin, xmax, ymin, ymax, zmin, zmax are the minimum and maximum values of x′, y′, z′; and (x″, y″, z″) is the relative coordinate of the feature point in the tooth coordinate system after normalization).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Chen to Somasundaram and Anssari so that the system can use the predetermined position to express the absolute position of each identified tooth in an overall coordinate system, in order to learn the positions of the teeth with respect to each other and where each individual tooth lies within that absolute coordinate system.
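Chen's conversion of absolute feature-point coordinates (x′, y′, z′) into normalized relative coordinates (x″, y″, z″) using the per-axis minima and maxima reads as a min-max rescaling; the following is a sketch under that assumption, not Chen's code.

```python
import numpy as np

def to_relative_coords(points):
    """Map absolute (x', y', z') coordinates in a tooth coordinate
    system to normalized relative coordinates (x'', y'', z'')."""
    # points: (N, 3) array of absolute coordinates
    mins = points.min(axis=0)                # xmin, ymin, zmin
    maxs = points.max(axis=0)                # xmax, ymax, zmax
    return (points - mins) / (maxs - mins)   # each axis rescaled to [0, 1]
```

Normalizing this way makes the feature-point positions comparable across jaws of different sizes while preserving each point's relative location within its tooth coordinate system.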
With regard to claim 3:
Somasundaram, Anssari, and Chen disclose the identification device according to claim 1, wherein the input unit receives at least the three-dimensional data corresponding to a plurality of teeth adjacent to the tooth and gums in an oral cavity (Somasundaram, see figs. 3 and 9, wherein the 3D scan includes the gums and multiple teeth, paragraph 14: “FIG. 9 is a diagram illustrating fitting a polynomial to vertices of a 3D scan for use in tooth identification without segmentation.”), and the identification unit identifies a type of each of the teeth based on the three-dimensional data including a feature of each of the teeth (Somasundaram: the system uses the point model to identify the tooth type, paragraph 42: “Given an input 3D scan of a patient's dental arch, the point classification of step 26 as described above uses 3D mesh features along with learned models of 3D tooth shapes to predict the tooth types of the individual teeth. In particular, each segmented tooth is passed to a tooth type classifier, which computes the covariance descriptor of 3D mesh features over the entire tooth shape, and classifies this feature to one of thirty-two tooth types based on the learned classification model. In the aforementioned approach, the individual teeth are being classified independently of each other. There is not necessarily any influence on a tooth's structure, location, and predicted tooth type on the predicted tooth types for the neighboring teeth, or any other teeth in that particular patient's mouth. However, since the teeth are arranged in a particular order, they can be considered as a chain-connected graph of mesh objects, where each object is an individual tooth. Based on this layout, the labels of individual teeth will affect the labels of adjacent teeth. If the independent tooth recognition algorithm provides as output probabilities of likely labels for a particular tooth, then the ranked ordering of likely labels can be used for further refinement. 
For example, if one tooth object is assigned a particular label with very high probability, it is equally highly unlikely that any other tooth in the mouth will be assigned that same label, meaning the probability of that label in the other teeth would be down-weighted. This contextual information can thus be used to develop rules to adjust the weighting of the predicted probability of tooth labels. For example, given a location of a particular tooth within an arch and the predicted labels of neighboring teeth, the predicted label (identification) of the particular tooth can be adjusted to refine the accuracy of the prediction.”).
With regard to claim 5:
Somasundaram and Anssari and Chen disclose The identification device according to claim 1, wherein a type of the tooth is identified further based on a normal line generated for each of a plurality of points forming the tooth corresponding to the three-dimensional data (Somasundaram the system 3D-scans the teeth and obtains 3D points; the points form triangles made of lines, and these triangles are the meshes that form the 3D teeth, paragraph 32: “The steps of methods 80 and 87 of tooth identification by point classification (corresponding with step 26 in FIG. 4) can be implemented as follows. The 3D scans of teeth are represented as triangulated meshes, comprising faces and vertices. The triangular mesh is a common representation of 3D surfaces and has two components. The first component referred to as the vertices of the mesh, are simply the coordinates of the 3D points that have been reconstructed on the surface—a point cloud. The second component, the mesh faces, encodes the connections between points on the object and is an efficient way of interpolating between the discrete sample points on the continuous surface. Each face is a triangle defined by three vertices, resulting in a surface that can be represented as a set of small triangular planar patches.”).
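The per-vertex normal directions referenced above can be computed from the face/vertex mesh representation roughly as follows (a minimal sketch; Somasundaram does not publish an implementation, and the area-weighted accumulation shown here is one common choice):

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Accumulate each triangle's face normal (cross product of two edge
    vectors, with magnitude proportional to area) onto its three vertices,
    then normalize, yielding one unit normal per vertex."""
    normals = np.zeros_like(vertices, dtype=float)
    for i0, i1, i2 in faces:
        v0, v1, v2 = vertices[i0], vertices[i1], vertices[i2]
        face_n = np.cross(v1 - v0, v2 - v0)
        normals[[i0, i1, i2]] += face_n
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    lengths[lengths == 0] = 1.0  # leave isolated vertices at zero
    return normals / lengths

# A single triangle lying in the z = 0 plane: every vertex normal is +z.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tris = [(0, 1, 2)]
n = vertex_normals(verts, tris)
```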
With regard to claim 7:
Somasundaram and Anssari and Chen disclose The identification device according to claim 1, wherein the output unit outputs the identification result to a display unit, and the display unit shows at least one of an image, a character, a numeral, an icon, and a symbol that correspond to the identification result (Somasundaram see fig. 11 for the numbering of the teeth, paragraph 51: “FIG. 11 is a diagram of a user interface illustrating a digital 3D model of upper and lower arches of teeth along with numbers associated with each tooth in the model. Those numbers can be correlated with predicted types of each tooth in the model using any of the tooth identification or prediction methods described herein. The user interface in FIG. 11 can be displayed on, for example, display device 16.”).
With regard to claim 10:
Somasundaram and Anssari and Chen disclose the aspect wherein the estimation model includes at least one of a weighting factor and a determination value as a parameter used by the neural network (Anssari paragraph 108: “FIG. 5 depicts an example of a 3D deep neural network architecture for classification of individual teeth for use in the methods and systems for automated taxonomy of 3D image data as described in this application. The network may be implemented using 3D convolutional layers (3D CNNs). The convolutions may use an activation function as known in the field. A plurality of 3D convolutional layers, 504-508, may be used wherein minor variations in the number of layers and their defining parameters, e.g. differing activation functions, kernel amounts, use of subsampling and sizes, and additional functional layers such as dropout layers and batch normalization may be used in the implementation without losing the essence of the design of the deep neural network.”), and the estimation model is learned by updating the parameter based on the tooth information and the identification result (Anssari paragraph 112: “For each sample (being a 3D representation of a single tooth) a matching representation of the correct label 516 may be used to determine a loss between desired and actual output 514. This loss may be used during training as a measure to adjust parameters within the layers of the deep neural network. Optimizer functions may be used during training to aid in the efficiency of the training effort. The network may be trained for any number of iterations until the internal parameters lead to a desired accuracy of results. When appropriately trained, an unlabeled sample may be presented as input and the deep neural network may be used to derive a prediction for each potential label.”). 
It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Anssari to Somasundaram so the neural network can be refined using weighting factors related to the teeth, and the parameters can be updated to further improve the neural network model and provide more precise identifications.
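Anssari's loss-driven parameter adjustment (paragraph 112, quoted above) can be illustrated with a minimal sketch in which a one-parameter-layer logistic model stands in for the deep network; the learning rate, data, and model are hypothetical placeholders, not Anssari's architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(w, b, x, y, lr=0.5):
    """One training iteration: compare the actual output with the correct
    label, and use the loss gradient to adjust the weighting factors (w)
    and the bias term (b)."""
    p = sigmoid(x @ w + b)   # actual output for the sample
    grad = p - y             # d(cross-entropy loss)/d(logit)
    w = w - lr * grad * x    # adjust weighting factors
    b = b - lr * grad        # adjust bias
    return w, b

rng = np.random.default_rng(0)
w, b = rng.normal(size=2), 0.0
x, y = np.array([1.0, -1.0]), 1.0     # one labeled training sample
before = sigmoid(x @ w + b)
for _ in range(50):                   # iterate until accuracy improves
    w, b = train_step(w, b, x, y)
after = sigmoid(x @ w + b)
```

After repeated iterations the predicted probability for the correct label rises, matching the quoted description of training until the internal parameters reach the desired accuracy.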
With regard to claim 11:
Somasundaram and Anssari and Chen disclose the identification device according to claim 2, wherein the tooth information includes at least one piece of information of a color, a character, a numeral, and a symbol that are associated with a type of the tooth corresponding to the three-dimensional data. (Somasundaram see fig. 11, paragraph 51: “FIG. 11 is a diagram of a user interface illustrating a digital 3D model of upper and lower arches of teeth along with numbers associated with each tooth in the model. Those numbers can be correlated with predicted types of each tooth in the model using any of the tooth identification or prediction methods described herein. The user interface in FIG. 11 can be displayed on, for example, display device 16.”).
With regard to claim 12:
Somasundaram and Anssari and Chen disclose the identification device according to claim 2, wherein the tooth information is associated with the three-dimensional data to allow a range of each of a plurality of the teeth corresponding to the three-dimensional data to be specified (Somasundaram see fig. 11, paragraph 51: “FIG. 11 is a diagram of a user interface illustrating a digital 3D model of upper and lower arches of teeth along with numbers associated with each tooth in the model. Those numbers can be correlated with predicted types of each tooth in the model using any of the tooth identification or prediction methods described herein. The user interface in FIG. 11 can be displayed on, for example, display device 16.”).
With regard to claim 13:
Somasundaram and Anssari and Chen disclose the identification device according to claim 2, wherein the tooth information is associated with each of a plurality of points forming the tooth corresponding to the three-dimensional data (Somasundaram the system 3D-scans the teeth and obtains 3D points, paragraph 32: “The steps of methods 80 and 87 of tooth identification by point classification (corresponding with step 26 in FIG. 4) can be implemented as follows. The 3D scans of teeth are represented as triangulated meshes, comprising faces and vertices. The triangular mesh is a common representation of 3D surfaces and has two components. The first component referred to as the vertices of the mesh, are simply the coordinates of the 3D points that have been reconstructed on the surface—a point cloud. The second component, the mesh faces, encodes the connections between points on the object and is an efficient way of interpolating between the discrete sample points on the continuous surface. Each face is a triangle defined by three vertices, resulting in a surface that can be represented as a set of small triangular planar patches.”).
With regard to claim 16:
Somasundaram discloses a scanner system that acquires shape information about a tooth, the scanner system comprising: a three-dimensional scanner that acquires three-dimensional data including coordinates, obtained for each of a plurality of points forming the tooth using a three-dimensional camera (the system 3D-scans the teeth and obtains 3D points, paragraph 32: “The steps of methods 80 and 87 of tooth identification by point classification (corresponding with step 26 in FIG. 4) can be implemented as follows. The 3D scans of teeth are represented as triangulated meshes, comprising faces and vertices. The triangular mesh is a common representation of 3D surfaces and has two components. The first component referred to as the vertices of the mesh, are simply the coordinates of the 3D points that have been reconstructed on the surface—a point cloud. The second component, the mesh faces, encodes the connections between points on the object and is an efficient way of interpolating between the discrete sample points on the continuous surface. Each face is a triangle defined by three vertices, resulting in a surface that can be represented as a set of small triangular planar patches.”); and an identification device that identifies a type of the tooth based on the three-dimensional data acquired by the three-dimensional scanner (the system uses the point model to identify the tooth type, paragraph 42: “Given an input 3D scan of a patient's dental arch, the point classification of step 26 as described above uses 3D mesh features along with learned models of 3D tooth shapes to predict the tooth types of the individual teeth. In particular, each segmented tooth is passed to a tooth type classifier, which computes the covariance descriptor of 3D mesh features over the entire tooth shape, and classifies this feature to one of thirty-two tooth types based on the learned classification model. 
In the aforementioned approach, the individual teeth are being classified independently of each other. There is not necessarily any influence of a tooth's structure, location, and predicted tooth type on the predicted tooth types for the neighboring teeth, or any other teeth in that particular patient's mouth. However, since the teeth are arranged in a particular order, they can be considered as a chain-connected graph of mesh objects, where each object is an individual tooth. Based on this layout, the labels of individual teeth will affect the labels of adjacent teeth. If the independent tooth recognition algorithm provides as output probabilities of likely labels for a particular tooth, then the ranked ordering of likely labels can be used for further refinement. For example, if one tooth object is assigned a particular label with very high probability, it is equally highly unlikely that any other tooth in the mouth will be assigned that same label, meaning the probability of that label in the other teeth would be down-weighted. This contextual information can thus be used to develop rules to adjust the weighting of the predicted probability of tooth labels. For example, given a location of a particular tooth within an arch and the predicted labels of neighboring teeth, the predicted label (identification) of the particular tooth can be adjusted to refine the accuracy of the prediction.”), wherein the identification device includes: an input unit that receives the three-dimensional data (the system 3D-scans the teeth and obtains 3D points, paragraph 32: “The steps of methods 80 and 87 of tooth identification by point classification (corresponding with step 26 in FIG. 4) can be implemented as follows. The 3D scans of teeth are represented as triangulated meshes, comprising faces and vertices. The triangular mesh is a common representation of 3D surfaces and has two components. 
The first component referred to as the vertices of the mesh, are simply the coordinates of the 3D points that have been reconstructed on the surface—a point cloud. The second component, the mesh faces, encodes the connections between points on the object and is an efficient way of interpolating between the discrete sample points on the continuous surface. Each face is a triangle defined by three vertices, resulting in a surface that can be represented as a set of small triangular planar patches.”), an identification unit that identifies a type of the tooth based on the three-dimensional data including a feature of the tooth received by the input unit (the system uses the point model to identify the tooth type, paragraph 42: “Given an input 3D scan of a patient's dental arch, the point classification of step 26 as described above uses 3D mesh features along with learned models of 3D tooth shapes to predict the tooth types of the individual teeth. In particular, each segmented tooth is passed to a tooth type classifier, which computes the covariance descriptor of 3D mesh features over the entire tooth shape, and classifies this feature to one of thirty-two tooth types based on the learned classification model. In the aforementioned approach, the individual teeth are being classified independently of each other. There is not necessarily any influence of a tooth's structure, location, and predicted tooth type on the predicted tooth types for the neighboring teeth, or any other teeth in that particular patient's mouth. However, since the teeth are arranged in a particular order, they can be considered as a chain-connected graph of mesh objects, where each object is an individual tooth. Based on this layout, the labels of individual teeth will affect the labels of adjacent teeth. 
If the independent tooth recognition algorithm provides as output probabilities of likely labels for a particular tooth, then the ranked ordering of likely labels can be used for further refinement. For example, if one tooth object is assigned a particular label with very high probability, it is equally highly unlikely that any other tooth in the mouth will be assigned that same label, meaning the probability of that label in the other teeth would be down-weighted. This contextual information can thus be used to develop rules to adjust the weighting of the predicted probability of tooth labels. For example, given a location of a particular tooth within an arch and the predicted labels of neighboring teeth, the predicted label (identification) of the particular tooth can be adjusted to refine the accuracy of the prediction.”), and an estimation model including machine learning (the system uses machine learning to train the estimation model, paragraph 40: “Table 1 provides exemplary pseudocode for implementing the point classification (machine learning) training data algorithm. Table 2 provides exemplary pseudocode for implementing the point classification (machine learning) algorithm for tooth identification. TABLE-US-00001 TABLE 1 Pseudocode for Machine Learning Training for Tooth Identification Input: Multiple 3D meshes with a sets of vertices V specified in 3D coordinate system X, Y, and Z. The mesh also has a set of triangulations or faces F based on the vertices. Each mesh corresponds to an individual segmented tooth. Also the ground truth labels in the form of the tooth type names as indicated by manual annotation. Output: A predictive model that is capable of classifying each tooth mesh according to its tooth type. Assumptions: Each individual mesh corresponds to an individual segmented tooth, without any gingiva. Method steps: 1 For each vertex in every mesh in the training set of data, compute the following features: a. Normal directions b. 
Absolute, mean, and Gaussian curvatures, and directions of maximum and minimum curvature c. Shape context d. Mesh fourier e. Spin image f. Mesh local covariance g. PCA features 2 For each tooth mesh, aggregate the features over all vertices in that mesh to form a feature descriptor for the entire tooth mesh. This aggregation may be done by computing the vectorized log-covariance of all features across all vertices in the mesh. (An alternate aggregation approach may be used, such as histograms, or means, or others.) 3 Construct a data matrix X which is M × N where M is the total number of segmented tooth meshes and N is the total number of feature dimensions from Step 2. 4 Train a RUSBoosted decision tree classifier that can predict the labels corresponding to the tooth type. (An alternate classifier can be used.) TABLE-US-00002 TABLE 2 Pseudocode for Machine Learning Prediction for Tooth Identification Input: a 3D mesh with a set of vertices V specified in 3D coordinate system X, Y and Z. The mesh also has a set of triangulations or faces F based on the vertices. The mesh corresponds to an individual segmented tooth. Output: Predicted or identified tooth type of the presented tooth. Assumptions: The mesh corresponds to an individual segmented tooth without any gingiva. Method steps: 1 For each vertex v.sub.i in V, compute the following features: a. Normal directions b. Absolute, mean, and Gaussian curvatures, and directions of maximum and minimum curvature c. Shape context d. Mesh fourier e. Spin image f. Mesh local covariance g. PCA features 2 Aggregate the features over all vertices in this mesh, to form a feature descriptor for the entire tooth mesh. 
3 Construct a data vector X which is N × 1 dimensional where N is the total number of feature dimensions from Step 2 4 Predict using the learned decision tree RUSBoost classifier the label corresponding to the tooth type”); and an output unit that outputs an identification result obtained by the identification unit (see fig. 11 for the output from the user interface with identified teeth, paragraph 51: “FIG. 11 is a diagram of a user interface illustrating a digital 3D model of upper and lower arches of teeth along with numbers associated with each tooth in the model. Those numbers can be correlated with predicted types of each tooth in the model using any of the tooth identification or prediction methods described herein. The user interface in FIG. 11 can be displayed on, for example, display device 16.”); wherein the estimation model is learned using learning data configured with tooth information corresponding to a type of the tooth associated with the coordinates (paragraphs 32 and 33: “The steps of methods 80 and 87 of tooth identification by point classification (corresponding with step 26 in FIG. 4) can be implemented as follows. The 3D scans of teeth are represented as triangulated meshes, comprising faces and vertices. The triangular mesh is a common representation of 3D surfaces and has two components. The first component referred to as the vertices of the mesh, are simply the coordinates of the 3D points that have been reconstructed on the surface—a point cloud. The second component, the mesh faces, encodes the connections between points on the object and is an efficient way of interpolating between the discrete sample points on the continuous surface. Each face is a triangle defined by three vertices, resulting in a surface that can be represented as a set of small triangular planar patches. 
Each vertex is represented by a 243-dimensional feature vector, comprising a combination of feature descriptors, namely: vertex coordinates; magnitude and direction of minimum and maximum curvature; mean-, absolute- and Gaussian-curvature; vertex normals; mesh local covariance and its eigenvalues and eigenvectors; spin image features; shape context features; principal component analysis (PCA) features; and mesh Fourier features. These features are consolidated into a 243-dimensional feature descriptor per vertex, including but not limited to these features. Any subset of these features, as well as optional additional features can also be used for tooth classification. Additional features can include tooth cross-sectional area, perimeter of a cross-section, tooth length, width, and height, surface area, volume, profiles as viewed along any dental plane (occlusal, facial, etc.), Radon transform, features, bag-of-words descriptors, or other features.”) for each of the plurality of points included in the three- dimensional data (where the identification model is trained use labeled three models of the teeth in the training data, paragraph 31: “Method 80 for the training phase involves: receiving a segmented tooth mesh with faces and vertices (step 82); computing features at each vertex of the tooth (step 83); computing an aggregated feature for the entire tooth (step 84); training the classifier by associating a tooth label 81 with the computed aggregated feature (step 85); and providing the trained tooth model (step 86). 
Method 87 for the test phase involves: receiving a segmented tooth mesh with faces and vertices (step 88); computing features at each vertex of the tooth (step 89); computing an aggregated feature for the entire tooth (step 90); obtaining from the trained tooth model a label for the computed aggregated feature (step 91); and providing a predicted tooth label for the segmented tooth (step 92).”); and the identification unit directly inputs the coordinates (paragraphs 32 and 33: “The steps of methods 80 and 87 of tooth identification by point classification (corresponding with step 26 in FIG. 4) can be implemented as follows. The 3D scans of teeth are represented as triangulated meshes, comprising faces and vertices. The triangular mesh is a common representation of 3D surfaces and has two components. The first component referred to as the vertices of the mesh, are simply the coordinates of the 3D points that have been reconstructed on the surface—a point cloud. The second component, the mesh faces, encodes the connections between points on the object and is an efficient way of interpolating between the discrete sample points on the continuous surface. Each face is a triangle defined by three vertices, resulting in a surface that can be represented as a set of small triangular planar patches. Each vertex is represented by a 243-dimensional feature vector, comprising a combination of feature descriptors, namely: vertex coordinates; magnitude and direction of minimum and maximum curvature; mean-, absolute- and Gaussian-curvature; vertex normals; mesh local covariance and its eigenvalues and eigenvectors; spin image features; shape context features; principal component analysis (PCA) features; and mesh Fourier features. These features are consolidated into a 243-dimensional feature descriptor per vertex, including but not limited to these features. Any subset of these features, as well as optional additional features can also be used for tooth classification. 
Additional features can include tooth cross-sectional area, perimeter of a cross-section, tooth length, width, and height, surface area, volume, profiles as viewed along any dental plane (occlusal, facial, etc.), Radon transform, features, bag-of-words descriptors, or other features.”) included in the three-dimensional data received by the input unit to the machine learning included in the estimation model to identify the type of the tooth corresponding to the coordinates (paragraph 32 and 33: “The steps of methods 80 and 87 f tooth identification by point classification (corresponding with step 26 in FIG. 4) can be implemented as follows. The 3D scans of teeth are represented as triangulated meshes, comprising faces and vertices. The triangular mesh is a common representation of 3D surfaces and has two components. The first component referred to as the vertices of the mesh, are simply the coordinates of the 3D points that have been reconstructed on the surface—a point cloud. The second component, the mesh faces, encodes the connections between points on the object and is an efficient way of interpolating between the discrete sample points on the continuous surface. Each face is a triangle defined by three vertices, resulting in a surface that can be represented as a set of small triangular planar patches. Each vertex is represented by a 243-dimensional feature vector, comprising a combination of feature descriptors, namely: vertex coordinates; magnitude and direction of minimum and maximum curvature; mean-, absolute- and Gaussian-curvature; vertex normals; mesh local covariance and its eigenvalues and eigenvectors; spin image features; shape context features; principal component analysis (PCA) features; and mesh Fourier features. These features are consolidated into a 243-dimensional feature descriptor per vertex, including but not limited to these features. 
Any subset of these features, as well as optional additional features can also be used for tooth classification. Additional features can include tooth cross-sectional area, perimeter of a cross-section, tooth length, width, and height, surface area, volume, profiles as viewed along any dental plane (occlusal, facial, etc.), Radon transform, features, bag-of-words descriptors, or other features.”) for each of the plurality of points (3D data is used to train the estimation model using machine learning, paragraph 40: “Table 1 provides exemplary pseudocode for implementing the point classification (machine learning) training data algorithm. Table 2 provides exemplary pseudocode for implementing the point classification (machine learning) algorithm for tooth identification. TABLE-US-00001 TABLE 1 Pseudocode for Machine Learning Training for Tooth Identification Input: Multiple 3D meshes with a sets of vertices V specified in 3D coordinate system X, Y, and Z. The mesh also has a set of triangulations or faces F based on the vertices. Each mesh corresponds to an individual segmented tooth. Also the ground truth labels in the form of the tooth type names as indicated by manual annotation. Output: A predictive model that is capable of classifying each tooth mesh according to its tooth type. Assumptions: Each individual mesh corresponds to an individual segmented tooth, without any gingiva. Method steps: 1 For each vertex in every mesh in the training set of data, compute the following features: a. Normal directions b. Absolute, mean, and Gaussian curvatures, and directions of maximum and minimum curvature c. Shape context d. Mesh fourier e. Spin image f. Mesh local covariance g. PCA features 2 For each tooth mesh, aggregate the features over all vertices in that mesh to form a feature descriptor for the entire tooth mesh. This aggregation may be done by computing the vectorized log-covariance of all features across all vertices in the mesh. 
(An alternate aggregation approach may be used, such as histograms, or means, or others.) 3 Construct a data matrix X which is M × N where M is the total number of segmented tooth meshes and N is the total number of feature dimensions from Step 2. 4 Train a RUSBoosted decision tree classifier that can predict the labels corresponding to the tooth type. (An alternate classifier can be used.) TABLE-US-00002 TABLE 2 Pseudocode for Machine Learning Prediction for Tooth Identification Input: a 3D mesh with a set of vertices V specified in 3D coordinate system X, Y and Z. The mesh also has a set of triangulations or faces F based on the vertices. The mesh corresponds to an individual segmented tooth. Output: Predicted or identified tooth type of the presented tooth. Assumptions: The mesh corresponds to an individual segmented tooth without any gingiva. Method steps: 1 For each vertex v.sub.i in V, compute the following features: a. Normal directions b. Absolute, mean, and Gaussian curvatures, and directions of maximum and minimum curvature c. Shape context d. Mesh fourier e. Spin image f. Mesh local covariance g. PCA features 2 Aggregate the features over all vertices in this mesh, to form a feature descriptor for the entire tooth mesh. 3 Construct a data vector X which is N × 1 dimensional where N is the total number of feature dimensions from Step 2 4 Predict using the learned decision tree RUSBoost classifier the label corresponding to the tooth type”) and the machine learning system output the identification result (Somasundaram see fig. 11 for the numbering of the teeth, paragraph 51: “FIG. 11 is a diagram of a user interface illustrating a digital 3D model of upper and lower arches of teeth along with numbers associated with each tooth in the model. Those numbers can be correlated with predicted types of each tooth in the model using any of the tooth identification or prediction methods described herein. The user interface in FIG. 
11 can be displayed on, for example, display device 16.”) based on the coordinates for each of the plurality of points included in the three-dimensional data (the trained model is then used to identify the teeth based on the positions of the points, paragraph 40: “Table 1 provides exemplary pseudocode for implementing the point classification (machine learning) training data algorithm. Table 2 provides exemplary pseudocode for implementing the point classification (machine learning) algorithm for tooth identification. TABLE-US-00001 TABLE 1 Pseudocode for Machine Learning Training for Tooth Identification Input: Multiple 3D meshes with a sets of vertices V specified in 3D coordinate system X, Y, and Z. The mesh also has a set of triangulations or faces F based on the vertices. Each mesh corresponds to an individual segmented tooth. Also the ground truth labels in the form of the tooth type names as indicated by manual annotation. Output: A predictive model that is capable of classifying each tooth mesh according to its tooth type. Assumptions: Each individual mesh corresponds to an individual segmented tooth, without any gingiva. Method steps: 1 For each vertex in every mesh in the training set of data, compute the following features: a. Normal directions b. Absolute, mean, and Gaussian curvatures, and directions of maximum and minimum curvature c. Shape context d. Mesh fourier e. Spin image f. Mesh local covariance g. PCA features 2 For each tooth mesh, aggregate the features over all vertices in that mesh to form a feature descriptor for the entire tooth mesh. This aggregation may be done by computing the vectorized log-covariance of all features across all vertices in the mesh. (An alternate aggregation approach may be used, such as histograms, or means, or others.) 3 Construct a data matrix X which is M × N where M is the total number of segmented tooth meshes and N is the total number of feature dimensions from Step 2. 
4 Train a RUSBoosted decision tree classifier that can predict the labels corresponding to the tooth type. (An alternate classifier can be used.) TABLE-US-00002 TABLE 2 Pseudocode for Machine Learning Prediction for Tooth Identification Input: a 3D mesh with a set of vertices V specified in 3D coordinate system X, Y and Z. The mesh also has a set of triangulations or faces F based on the vertices. The mesh corresponds to an individual segmented tooth. Output: Predicted or identified tooth type of the presented tooth. Assumptions: The mesh corresponds to an individual segmented tooth without any gingiva. Method steps: 1 For each vertex v.sub.i in V, compute the following features: a. Normal directions b. Absolute, mean, and Gaussian curvatures, and directions of maximum and minimum curvature c. Shape context d. Mesh fourier e. Spin image f. Mesh local covariance g. PCA features 2 Aggregate the features over all vertices in this mesh, to form a feature descriptor for the entire tooth mesh. 3 Construct a data vector X which is N × 1 dimensional where N is the total number of feature dimensions from Step 2 4 Predict using the learned decision tree RUSBoost classifier the label corresponding to the tooth type”).
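The feature aggregation in Table 1 (the vectorized log-covariance over per-vertex features) can be sketched as follows; the small ridge term and upper-triangle vectorization are common implementation choices not specified in the quoted pseudocode, and a classifier such as RUSBoost would then be trained on the stacked descriptors:

```python
import numpy as np

def tooth_descriptor(vertex_features, eps=1e-6):
    """Aggregate an (n_vertices x n_features) matrix into one fixed-length
    descriptor per tooth: the vectorized matrix logarithm of the feature
    covariance, as in Step 2 of the quoted Table 1 pseudocode."""
    cov = np.cov(vertex_features, rowvar=False)
    cov += eps * np.eye(cov.shape[0])          # ensure positive-definite
    eigval, eigvec = np.linalg.eigh(cov)       # symmetric eigendecomposition
    log_cov = eigvec @ np.diag(np.log(eigval)) @ eigvec.T
    return log_cov[np.triu_indices_from(log_cov)]  # vectorize upper triangle

# Every tooth yields the same descriptor length regardless of vertex count,
# which is what lets meshes of different sizes share one data matrix X.
rng = np.random.default_rng(1)
d_small = tooth_descriptor(rng.normal(size=(40, 5)))   # 40-vertex tooth
d_large = tooth_descriptor(rng.normal(size=(900, 5)))  # 900-vertex tooth
```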
Somasundaram does not disclose the aspect wherein the machine learning system is a neural network.
However Anssari discloses the aspect wherein the machine learning system is a neural network which is used to classify teeth (the system uses a neural network to identify tooth classes, including 32 possible tooth types or a different classification such as incisor, canine, molar, etc., paragraph 62: “This way, two sets of classification results per individual tooth object may be identified, a first set of classification results classifying voxels into in different voxel classes (e.g. individual tooth classes, or 32 possible tooth types) generated by the first 3D deep neural network and a second set of classification results classifying a voxel representation of an individual tooth into different tooth classes (e.g. again 32 possible tooth types, or a different classification such as incisor, canine, molar, etc.) generated by the second 3D deep neural network. The plurality of tooth objects forming (part of) a dentition may finally be post-processed in order to determine the most accurate taxonomy possible, making use of the predictions resulting from the first and, optionally, second neural network, which are both adapted to classify 3D data of individual teeth.”). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Anssari to Somasundaram so the system can use a neural network as the estimation model and thereby make more precise identifications of tooth types.
Somasundaram and Anssari do not disclose the aspect wherein each representing an absolute position (x, y, z) with respect to a predetermined position.
However, Chen discloses the aspect wherein each representing an absolute position (x, y, z) with respect to a predetermined position, obtained for one or more points forming the tooth (wherein (x', y', z') is the absolute coordinate of the feature point in the tooth coordinate system; xmin, xmax, ymin, ymax, zmin, zmax are the minimum and maximum values of x', y', z'; and (x'', y'', z'') is the relative coordinate of the feature point in the tooth coordinate system after normalization). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Chen to Somasundaram and Anssari so that the system can use the predetermined location data to map out the absolute position of each identified tooth, producing an overall coordinate frame for the teeth in order to learn the positions of the teeth with respect to each other and with respect to the absolute coordinate system.
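For illustration only (not part of the cited reference or the record), the normalization Chen describes maps an absolute tooth-frame coordinate to a relative one; min-max scaling per axis is assumed here, since the passage names the per-axis minima and maxima but the exact formula is not quoted:

```python
def normalize_point(p, mins, maxs):
    """Map an absolute coordinate (x', y', z') in the tooth coordinate
    system to a relative coordinate (x'', y'', z'') via per-axis min-max
    scaling (assumed form of the normalization the cited passage describes).
    """
    return tuple((p[i] - mins[i]) / (maxs[i] - mins[i]) for i in range(3))

# Toy usage: a feature point midway along each axis of its bounding box.
p = (2.0, 5.0, 1.0)
mins = (0.0, 4.0, 0.0)
maxs = (4.0, 6.0, 2.0)
print(normalize_point(p, mins, maxs))  # (0.5, 0.5, 0.5)
```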
Claim 17 is rejected for the same reasons as claim 1.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Somasundaram, Pub. No.: US 20170169562 A1, in view of Anssari and Chen, and further in view of Hourmand et al., Pub. No.: US 20090098504 A1.
With regard to claim 8:
Somasundaram and Anssari and Chen do not disclose the aspect wherein the output unit outputs the identification result to an audio output unit, and the audio output unit outputs a sound corresponding to the identification result.
However, Hourmand discloses the aspect wherein the output unit outputs the identification result to an audio output unit, and the audio output unit outputs a sound corresponding to the identification result (paragraph 11: “In the present invention, the pocket depths are measured by reading markings inscribed onto the head of the instrument as it is normally done in a dentist's office, and data is entered into the probe via a rotary switch and an integrated pushbutton switch using one finger. The data is displayed on the probe for visual feedback. The probe also generates voice feedback and voice commands using voice synthesis techniques. Audio may include depth measurements, tooth number, "front", "back", "low battery", and other pertinent information. If desired, the audio may be turned off during dental examination. The probe supports several modes of operation in regards to the order of depth measurements. The modes of operation are selected via the rotary switch and pushbutton switch. For example, in one mode, the 32 teeth are divided into 4 quadrants. Each quadrant consists of 8 teeth. The probe guides the operator to measure the depths for front and back of each tooth in a quadrant. In another mode, the fronts of all 32 teeth are measured first followed by the backs. Upon completion of dental examination, the data can then be transferred to a Personal Computer (PC).”). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Hourmand to Somasundaram, Anssari, and Chen so that the system can provide audio output for the identified teeth, providing guidance to medical personnel and helping them identify the teeth without having to look at a screen.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Somasundaram, Pub. No.: US 20170169562 A1, in view of Anssari and Chen, and further in view of Azernikov et al., Pub. No.: US 2019/0282344 A1.
With regard to claim 9:
Somasundaram and Anssari and Chen do not disclose the identification device according to claim 1, wherein the output unit outputs the identification result to a server device, and the server device stores an accumulation of the identification result.
However, Azernikov discloses the aspect wherein the output unit outputs the identification result to a server device, and the server device stores an accumulation of the identification result (paragraph 42: “Dental restoration server 101 also includes a database 150 to store data related to the deep neural networks and the identified dental information associated with the dental models. Dental restoration server 101 may then feed the automatically identified dental information of the dental models to design device 103 or third party server 151 for facilitating the restoration design. Database 150 can also be remotely located from dental restoration server 101 or be distributedly located. In some embodiments, dental restoration server 101 may send the identified dental information of the dental models to client device 107 for the client's 175 review. Other embodiments of dental restoration server 101 may include different and/or additional components. Moreover, the functions may be distributed among the components in a different manner than described herein. Furthermore, system 100 may include a plurality of dental restoration servers 101 and/or other devices performing the work for a plurality of requesting clients 175.”). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Azernikov to Somasundaram, Anssari, and Chen so that the identification information can be stored remotely to provide better security and conserve local device resources.
Claims 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Somasundaram, Pub. No.: US 20170169562 A1, in view of Anssari and Chen, and further in view of Xue, Pub. No.: US 2018/0360567 A1.
With regard to claim 14:
Somasundaram, Anssari, and Chen do not disclose the identification device according to claim 1, wherein the estimation model is learned based on attribute information related to a subject having the teeth in addition to the tooth information and the identification result.
However, Xue discloses the identification device according to claim 1, wherein the estimation model is learned based on attribute information related to a subject having the teeth (paragraph 40: “The instructions may be further configured to receive the 3D model from a three-dimensional scanner. The instructions may be configured to get patient information, wherein the patient information includes one or more of: patient age, eruption sequence, measured space available for eruption, and patient gender; further wherein the instructions are configured to include the patient information with the normalized tooth shape features applied to the classifier.”) in addition to the tooth information and the identification result (paragraph 89: “The detector engine(s) 164 may implement one or more automated agents configured to predict tooth state (e.g., tooth type and/or eruption status) of a target tooth using extracted dental features. In some implementations, the detector engine(s) 164 assign physical and/or geometrical properties to a 3D dental mesh model that are related to physical/geometrical properties of dental arches or teeth. The detector engine(s) 164 may receive dental features from the feature extraction engine(s) 162 and apply machine learning algorithms to predict tooth type and/or eruption status of a target tooth using extracted dental features. In some implementations, the detector engine(s) 164 use a trained convolutional neural network and/or trained classifiers to classify a target tooth into one or more identified categories of teeth type, eruption status, tooth number, etc.
Examples of machine learning systems implemented by the detector engine(s) 164 may include Decision Tree, Random Forest, Logistic Regression, Support Vector Machine, AdaBOOST, K-Nearest Neighbor (KNN), Quadratic Discriminant Analysis, Neural Network, etc., to determine a tooth type (e.g., incisor, canine, pre-molar, molar, etc.), eruption status (e.g., permanent, permanent erupting, primary), and/or tooth number of the target tooth. The detector engine(s) 164 can incorporate predicted tooth type and/or eruption status into a final segmentation result. The detector engine(s) 164 may also output a final segmentation result to other modules, for example, the optional treatment modeling engine(s) 166. As an example, the detector engine(s) 164 may implement one or more automated segmentation agents that assign tooth identifiers (e.g., universal tooth numbers, tooth type, or eruption status) to specific portions of a 3D dental mesh model.”). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Xue to Somasundaram, Anssari, and Chen so that the neural network can be refined using weighting factors related to the teeth and the patient.
With regard to claim 15:
Somasundaram, Anssari, Chen, and Xue disclose the identification device according to claim 14, wherein the attribute information includes at least one piece of information of an age, a gender, a race, a height, a weight, and a place of residence about the subject (Xue, paragraph 40: “The instructions may be further configured to receive the 3D model from a three-dimensional scanner. The instructions may be configured to get patient information, wherein the patient information includes one or more of: patient age, eruption sequence, measured space available for eruption, and patient gender; further wherein the instructions are configured to include the patient information with the normalized tooth shape features applied to the classifier.”). It would have been obvious to one of ordinary skill in the art, at the time the filing was made, to apply Xue to Somasundaram, Anssari, and Chen so that the neural network can be refined using weighting factors related to the teeth and the patient.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DI XIAO whose telephone number is (571) 270-1758. The examiner can normally be reached 9 AM-5 PM EST, Monday-Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Hong can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DI XIAO/Primary Examiner, Art Unit 2178