Prosecution Insights
Last updated: April 19, 2026
Application No. 18/253,699

Automated Processing of Dental Scans Using Geometric Deep Learning

Non-Final OA (§102, §103, Other)
Filed
May 19, 2023
Examiner
BARRETT, RYAN S
Art Unit
2148
Tech Center
2100 — Computer Architecture & Software
Assignee
3M Company
OA Round
1 (Non-Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 64% (263 granted / 409 resolved cases), +9.3% vs Tech Center average
Interview Lift: +43.7% allowance rate among resolved cases with an interview
Typical Timeline: 3y 1m average prosecution; 24 applications currently pending
Career History: 433 total applications across all art units

Statute-Specific Performance

§101: 10.6% (-29.4% vs TC avg)
§103: 38.7% (-1.3% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 10.8% (-29.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 409 resolved cases.

Office Action

Grounds of rejection: §102, §103, Other
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the Preliminary Amendment filed on 5/19/2023. Claims 1-20 are pending in the case. Claims 1 and 13 are independent claims.

Claim Objections

Claims 1 and 13 are objected to because of the following informalities:

Claims 1 and 13 recite “the trained machine learning model” where “the first trained machine learning model” was apparently intended.

Claim 1 appears to contain a primary list of three clauses (receiving, applying, outputting) wherein the “applying” clause contains a secondary list of the remaining five clauses. If this is correct, an additional “and” is needed between the last two clauses of the secondary list to clarify that the implied conjunction is not “or.”

Claim 13 appears to contain a primary list of two clauses (memory, processors) wherein the “processors” clause contains a secondary list of three clauses (receive, apply, output) wherein the “apply” clause contains a tertiary list of the remaining five clauses. If this is correct:

- The indentation of the “output” clause should be adjusted to match the indentation of the “receive” and “apply” clauses.
- An additional “and” is needed between the two clauses of the primary list to clarify that the implied conjunction is not “or.”
- An additional “and” is needed between the last two clauses of the tertiary list to clarify that the implied conjunction is not “or.”

Appropriate correction is required.

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-5 and 11-15 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Minchenkov et al. (US 2020/0349698 A1, hereinafter Minchenkov).

As to independent claim 1, Minchenkov discloses a computer-implemented (“computing device 105,” paragraph 0042 line 1) method of digital 3D model modification, comprising steps of:

receiving a digital 3D representation of one or more intra-oral structures (“A set of many (e.g., thousands to millions) 3D models of dental arches with labeled dental classes 212 may be collected,” paragraph 0068 lines 3-5);

applying a first trained machine learning model on the digital 3D representation, wherein the trained machine learning model is trained using the steps comprising:

accessing a partially trained machine learning model (“If the stopping criterion has not been met, then the method returns to block 310 and another input is provided to the machine learning model,” paragraph 0092 lines 16-18);

receiving an associated ground truth (“the known probability map that was included in the training data item,” paragraph 0088 lines 2-3) segmentation (“A training dataset may be gathered, where each data item in the training dataset may include an image (e.g., an image comprising a height map) and an associated probability map. Additional data may also be included in the training data items. Accuracy of segmentation can be improved by means of additional classes, inputs and multiple views support,” paragraph 0070 lines 1-7);

using the partially trained machine learning model, generating a predicted segmentation (“At block 312, the machine learning model processes the input to generate an output. … For the artificial neural network being trained, there may be a first class (excess material), a second class (teeth), a third class (gums), and/or one or more additional dental classes. Moreover, the class is determined for each pixel in the image. For each pixel in the image, the final layer applies a probability that the pixel of the image belongs to the first class, a probability that the pixel belongs to the second class, a probability that the pixel belongs to the third class, and/or one or more additional probabilities that the pixel belongs to other classes. Accordingly, the output comprises a probability map comprising, for each pixel in the image, a first probability that the pixel belongs to a first dental class (e.g., an excess material dental class) and a second probability that the pixel belongs to a second dental class (e.g., a not excess material dental class),” paragraph 0086 line 1 to paragraph 0087 line 6), wherein the partially trained machine learning model is configured such that the generating is invariant to one or more rotation, scaling, or translation changes to the digital 3D representation (“Training of large-scale neural networks generally uses tens of thousands of images, which are not easy to acquire in many real-world applications. Data augmentation can be used to artificially increase the effective sample size. Common techniques include random rotation, shifts, shear, flips and so on to existing images to increase the sample size,” paragraph 0083 lines 15-21);

computing a loss value that quantifies a dissimilarity between the associated ground truth segmentation and the predicted segmentation (“At block 314, processing logic may then compare the generated probability map to the known probability map that was included in the training data item. At block 316, processing logic determines an error (i.e., a classification error) based on the differences between the output probability map and the provided probability map,” paragraph 0088 lines 1-6);

modifying one or more aspects of the partially trained machine learning model to generate the trained machine learning model (“At block 318, processing logic adjusts weights of one or more nodes in the machine learning model based on the error. An error term or delta may be determined for each node in the artificial neural network. Based on this error, the artificial neural network adjusts one or more of its parameters for one or more of its nodes (the weights for one or more inputs of a node),” paragraph 0088 lines 6-12); and

outputting via the trained machine learning model one or more labels for one or more aspects of the 3D representation (“The trained machine learning model 255 outputs a probability map 260, where each point in the probability map corresponds to a pixel in the image and indicates probabilities that the pixel represents one or more dental classes. In the case of teeth/gums/excess material segmentation, three valued labels are generated for each pixel. The corresponding predictions have a probability nature: for each pixel there are three numbers that sum up to 1.0 and can be interpreted as probabilities of the pixel to correspond to these three classes,” paragraph 0105 lines 1-10).
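For orientation, the training step the examiner maps onto claim 1 (Minchenkov paragraphs 0086-0088) is the standard loop: emit a per-pixel probability map, compute a loss against the ground-truth map, and adjust the weights based on the error. The sketch below is a minimal numpy illustration of that loop, not code from the application or the reference; the tiny linear "model", the shapes, and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 4, 4, 3                 # tiny "image" and three dental classes
                                  # (teeth / gums / excess material)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# "partially trained" stand-in model: one shared linear map per pixel
weights = rng.normal(size=(1, C))
height_map = rng.normal(size=(H, W, 1))               # input height-map image
ground_truth = np.eye(C)[rng.integers(0, C, size=(H, W))]  # one-hot labels

def forward(w):
    # per-pixel probability map: for each pixel, C probabilities summing to 1.0
    return softmax(height_map @ w)

def loss(w):
    # cross-entropy between predicted map and known (ground-truth) map
    p = forward(w)
    return -np.mean(np.sum(ground_truth * np.log(p + 1e-9), axis=-1))

# numerical gradient as a stand-in for backpropagation
grad = np.zeros_like(weights)
eps = 1e-5
for i in range(C):
    dw = np.zeros_like(weights)
    dw[0, i] = eps
    grad[0, i] = (loss(weights + dw) - loss(weights - dw)) / (2 * eps)

before = loss(weights)
weights = weights - 0.1 * grad    # "adjusts weights ... based on the error"
after = loss(weights)
probs = forward(weights)
```

As in the quoted passage, each pixel's class probabilities sum to 1.0, and one weight update reduces the loss on this training item.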
As to dependent claim 2, Minchenkov further discloses a method wherein the first machine learning model is a neural network (“One type of machine learning model that may be used is an artificial neural network, such as a deep neural network,” paragraph 0053 lines 1-3).

As to dependent claim 3, Minchenkov further discloses a method wherein the one or more aspects of the neural network are weights and modifying the one or more aspects comprises modifying the one or more weights based, at least in part, on the loss value (“At block 318, processing logic adjusts weights of one or more nodes in the machine learning model based on the error. An error term or delta may be determined for each node in the artificial neural network. Based on this error, the artificial neural network adjusts one or more of its parameters for one or more of its nodes (the weights for one or more inputs of a node),” paragraph 0088 lines 6-12).

As to dependent claim 4, Minchenkov further discloses a method wherein the neural network comprises at least one of one or more convolution layers, one or more pooling layers, or one or more unpooling layers (“A convolutional neural network (CNN), for example, hosts multiple layers of convolutional filters. Pooling is performed, and non-linearities may be addressed, at lower layers, on top of which a multi-layer perceptron is commonly appended, mapping top layer features extracted by the convolutional layers to decisions (e.g. classification outputs),” paragraph 0053 lines 5-11).

As to dependent claim 5, Minchenkov further discloses a method wherein the predicted segmentation comprises at least one tooth pertaining to a patient's dental anatomy (“the probability map may include probabilities of pixels belonging to dental classes representing an upper palate, a gingival line, a scan body, a finger, or a preparation tooth,” paragraph 0087 lines 17-20).
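The pooling/unpooling layer pair recited in claim 4 can be illustrated in miniature: 2x2 max pooling keeps each block's maximum and records where it came from, so a later unpooling layer can place the value back at its original position. This is a generic sketch of the technique, not code from Minchenkov; the helper names are hypothetical.

```python
import numpy as np

def max_pool_2x2(a):
    # downsample by keeping each 2x2 block's maximum, remembering its location
    h, w = a.shape
    pooled = np.zeros((h // 2, w // 2))
    where = np.zeros((h // 2, w // 2, 2), dtype=int)
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            block = a[i:i + 2, j:j + 2]
            r, c = np.unravel_index(block.argmax(), block.shape)
            pooled[i // 2, j // 2] = block[r, c]
            where[i // 2, j // 2] = (i + r, j + c)
    return pooled, where

def unpool_2x2(pooled, where, shape):
    # upsample by restoring each maximum to its recorded position
    out = np.zeros(shape)
    for bi in range(pooled.shape[0]):
        for bj in range(pooled.shape[1]):
            r, c = where[bi, bj]
            out[r, c] = pooled[bi, bj]
    return out

x = np.array([[1., 3., 2., 0.],
              [4., 2., 1., 5.],
              [0., 6., 3., 2.],
              [7., 1., 0., 4.]])
p, idx = max_pool_2x2(x)     # maxima of each 2x2 block: [[4, 5], [7, 4]]
u = unpool_2x2(p, idx, x.shape)
```

In a real encoder-decoder segmentation network the recorded indices flow from each pooling layer to its paired unpooling layer; here the pair simply round-trips one array.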
As to dependent claim 11, Minchenkov further discloses a method wherein the first trained machine learning model is configured to infer at least one feature using a combination of a plurality of non-linear functions of higher dimensional latent or hidden features (“The next layer is called a hidden layer, and nodes at the hidden layer each receive one or more of the input values. Each node contains parameters (e.g., weights) to apply to the input values. Each node therefore essentially inputs the input values into a multivariate function (e.g., a non-linear mathematical transformation) to produce an output value. A next layer may be another hidden layer or an output layer. In either case, the nodes at the next layer receive the output values from the nodes at the previous layer, and each node applies weights to those values and then generates its own output value. This may be performed at each layer,” paragraph 0086 lines 5-16).

As to dependent claim 12, Minchenkov further discloses a method wherein a second machine learning model is initially generated based on at least one weight of the first machine learning model (“If the stopping criterion has not been met, then the method returns to block 310 and another input is provided to the machine learning model,” paragraph 0092 lines 16-18).
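The layered composition quoted for claim 11 (weights, then a non-linear transformation, repeated per layer) and the weight-sharing of claim 12 reduce to a few lines. The sketch below is an illustrative numpy forward pass only; the sizes and names are hypothetical, not drawn from either document.

```python
import numpy as np

rng = np.random.default_rng(3)

x = rng.normal(size=(1, 4))       # input features
W1 = rng.normal(size=(4, 16))     # lift into a higher-dimensional latent space
W2 = rng.normal(size=(16, 3))     # map hidden features to 3 class scores

# each hidden node applies weights then a non-linear transformation (tanh);
# the output is a non-linear function of the higher-dimensional hidden features
hidden = np.tanh(x @ W1)
logits = hidden @ W2
probs = np.exp(logits) / np.exp(logits).sum()

# claim 12 flavor: a second model initialized from the first model's weights
W1_second = W1.copy()
```

Stacking more `hidden = np.tanh(hidden @ Wk)` steps gives the "performed at each layer" composition the quoted passage describes.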
As to independent claim 13, Minchenkov discloses a computer system (“computing device 105,” paragraph 0042 line 1) comprising:

a non-transitory computer-readable memory (“computing device 105,” paragraph 0042 line 1);

one or more computer processors in communication with the memory (“computing device 105,” paragraph 0042 line 1), wherein the one or more processors are configured to:

receive a digital 3D representation of one or more intra-oral structures (“A set of many (e.g., thousands to millions) 3D models of dental arches with labeled dental classes 212 may be collected,” paragraph 0068 lines 3-5);

apply a first trained machine learning model on the digital 3D representation, wherein the first trained machine learning model is trained using the steps comprising:

accessing a partially trained machine learning model (“If the stopping criterion has not been met, then the method returns to block 310 and another input is provided to the machine learning model,” paragraph 0092 lines 16-18);

receiving an associated ground truth (“the known probability map that was included in the training data item,” paragraph 0088 lines 2-3) segmentation (“A training dataset may be gathered, where each data item in the training dataset may include an image (e.g., an image comprising a height map) and an associated probability map. Additional data may also be included in the training data items. Accuracy of segmentation can be improved by means of additional classes, inputs and multiple views support,” paragraph 0070 lines 1-7);

using the partially trained machine learning model, generating a predicted segmentation (“At block 312, the machine learning model processes the input to generate an output. … For the artificial neural network being trained, there may be a first class (excess material), a second class (teeth), a third class (gums), and/or one or more additional dental classes. Moreover, the class is determined for each pixel in the image. For each pixel in the image, the final layer applies a probability that the pixel of the image belongs to the first class, a probability that the pixel belongs to the second class, a probability that the pixel belongs to the third class, and/or one or more additional probabilities that the pixel belongs to other classes. Accordingly, the output comprises a probability map comprising, for each pixel in the image, a first probability that the pixel belongs to a first dental class (e.g., an excess material dental class) and a second probability that the pixel belongs to a second dental class (e.g., a not excess material dental class),” paragraph 0086 line 1 to paragraph 0087 line 6);

computing a loss value that quantifies a dissimilarity between the associated ground truth segmentation and the predicted segmentation (“At block 314, processing logic may then compare the generated probability map to the known probability map that was included in the training data item. At block 316, processing logic determines an error (i.e., a classification error) based on the differences between the output probability map and the provided probability map,” paragraph 0088 lines 1-6);

modifying one or more aspects of the partially trained machine learning model to generate the trained machine learning model (“At block 318, processing logic adjusts weights of one or more nodes in the machine learning model based on the error. An error term or delta may be determined for each node in the artificial neural network. Based on this error, the artificial neural network adjusts one or more of its parameters for one or more of its nodes (the weights for one or more inputs of a node),” paragraph 0088 lines 6-12); and

output via the trained machine learning model one or more labels for one or more aspects of the 3D representation (“The trained machine learning model 255 outputs a probability map 260, where each point in the probability map corresponds to a pixel in the image and indicates probabilities that the pixel represents one or more dental classes. In the case of teeth/gums/excess material segmentation, three valued labels are generated for each pixel. The corresponding predictions have a probability nature: for each pixel there are three numbers that sum up to 1.0 and can be interpreted as probabilities of the pixel to correspond to these three classes,” paragraph 0105 lines 1-10).

As to dependent claim 14, Minchenkov further discloses a system wherein the first machine learning model is a neural network (“One type of machine learning model that may be used is an artificial neural network, such as a deep neural network,” paragraph 0053 lines 1-3).

As to dependent claim 15, Minchenkov further discloses a system wherein the neural network comprises at least one of one or more convolution layers, one or more pooling layers, or one or more unpooling layers (“A convolutional neural network (CNN), for example, hosts multiple layers of convolutional filters. Pooling is performed, and non-linearities may be addressed, at lower layers, on top of which a multi-layer perceptron is commonly appended, mapping top layer features extracted by the convolutional layers to decisions (e.g. classification outputs),” paragraph 0053 lines 5-11).

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 C.F.R. § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.

Claims 6-7, 9, 17-18, and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over Minchenkov in view of Azernikov et al. (US 2019/0282344 A1, hereinafter Azernikov).

As to dependent claim 6, the rejection of claim 1 is incorporated.
Minchenkov does not appear to expressly teach a method comprising training a second machine learning model for the validation of at least one of a dental appliance or an orthodontic appliance.

Azernikov teaches a method comprising training a second machine learning model for the validation of at least one of a dental appliance or an orthodontic appliance (“At 660, and at substantially the same [sic], training module 120 may also train a discriminating deep neural network (e.g., discriminator 620) to recognize that the dental restoration generated by the generative deep neural network is a model versus a digital model of a real dental restoration. In the recognition process, the discriminating deep neural network can generate a loss function based on comparison of a real dental restoration and the generated model of the dental restoration. The loss function provides a feedback mechanism for the generative deep neural network. Using information from the outputted loss function, the generative deep neural network may generate a better model that can better trick the discriminating neural network to think the generated model is a real model,” paragraph 0072 lines 1-14).

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Minchenkov to comprise the second machine learning model of Azernikov.

(1) The Examiner finds that the prior art included each claim element listed above, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference.

(2) The Examiner finds that one of ordinary skill in the art could have combined the elements as claimed by known software development methods, and that in combination, each element merely performs the same function as it does separately.

(3) The Examiner finds that one of ordinary skill in the art would have recognized that the results of the combination were predictable, namely validation of at least one of a dental appliance or an orthodontic appliance (“At 660, and at substantially the same [sic], training module 120 may also train a discriminating deep neural network (e.g., discriminator 620) to recognize that the dental restoration generated by the generative deep neural network is a model versus a digital model of a real dental restoration. In the recognition process, the discriminating deep neural network can generate a loss function based on comparison of a real dental restoration and the generated model of the dental restoration. The loss function provides a feedback mechanism for the generative deep neural network. Using information from the outputted loss function, the generative deep neural network may generate a better model that can better trick the discriminating neural network to think the generated model is a real model,” Azernikov paragraph 0072 lines 1-14).

Therefore, the rationale to support a conclusion that the claim would have been obvious is that combining prior art elements according to known methods yields predictable results to one of ordinary skill in the art. See MPEP § 2143(I)(A).

As to dependent claim 7, the rejection of claim 1 is incorporated.

Minchenkov does not appear to expressly teach a method comprising training a second machine learning model for modifying one or more aspects of the digital 3D representation.

Azernikov teaches a method comprising training a second machine learning model for modifying one or more aspects of the digital 3D representation (“At 655, training module 120 may train a generative deep neural network (e.g., GAN generator 610) using unlabeled dentition data sets to generate a 3D model of a dental prosthesis such as a crown,” paragraph 0071 lines 1-4).
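The generator/discriminator feedback loop quoted from Azernikov (paragraph 0072) can be sketched in miniature: a discriminator learns to separate "real" examples from generated ones, and its verdict on generated examples is the loss signal a generator would train against. The sketch below uses one-dimensional stand-in features and a logistic-regression discriminator; it is an illustration of the technique only, with hypothetical names, not Azernikov's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 1-D stand-in feature: "real" dental restorations vs. generator output
real = rng.normal(loc=2.0, scale=0.3, size=200)
fake = rng.normal(loc=0.0, scale=0.3, size=200)

# discriminator: logistic regression, label 1 = real, 0 = generated
w, b = 0.0, 0.0
for _ in range(2000):
    p_real = sigmoid(w * real + b)
    p_fake = sigmoid(w * fake + b)
    # binary cross-entropy gradients for the two sample sets
    gw = np.mean((p_real - 1.0) * real) + np.mean(p_fake * fake)
    gb = np.mean(p_real - 1.0) + np.mean(p_fake)
    w -= 0.1 * gw
    b -= 0.1 * gb

d_real = sigmoid(w * real + b).mean()   # should approach 1 (recognized as real)
d_fake = sigmoid(w * fake + b).mean()   # should approach 0 (recognized as fake)

# the discriminator's loss on generated samples is the feedback signal the
# generative network would use to "better trick" the discriminator
gen_loss = -np.mean(np.log(sigmoid(w * fake + b) + 1e-9))
```

In a full GAN the generator would then take a gradient step to reduce `gen_loss`; here only the discriminator side is shown.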
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Minchenkov to comprise the second machine learning model of Azernikov.

(1) The Examiner finds that the prior art included each claim element listed above, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference.

(2) The Examiner finds that one of ordinary skill in the art could have combined the elements as claimed by known software development methods, and that in combination, each element merely performs the same function as it does separately.

(3) The Examiner finds that one of ordinary skill in the art would have recognized that the results of the combination were predictable, namely modifying one or more aspects of the digital 3D representation (“At 655, training module 120 may train a generative deep neural network (e.g., GAN generator 610) using unlabeled dentition data sets to generate a 3D model of a dental prosthesis such as a crown,” Azernikov paragraph 0071 lines 1-4).

Therefore, the rationale to support a conclusion that the claim would have been obvious is that combining prior art elements according to known methods yields predictable results to one of ordinary skill in the art. See MPEP § 2143(I)(A).

As to dependent claim 9, the rejection of claim 1 is incorporated.

Minchenkov does not appear to expressly teach a method comprising training a second machine learning model for predicting one or more tooth shapes resulting from one or more dental restoration procedures.

Azernikov teaches a method comprising training a second machine learning model for predicting one or more tooth shapes resulting from one or more dental restoration procedures (“At 655, training module 120 may train a generative deep neural network (e.g., GAN generator 610) using unlabeled dentition data sets to generate a 3D model of a dental prosthesis such as a crown,” paragraph 0071 lines 1-4).

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Minchenkov to comprise the second machine learning model of Azernikov.

(1) The Examiner finds that the prior art included each claim element listed above, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference.

(2) The Examiner finds that one of ordinary skill in the art could have combined the elements as claimed by known software development methods, and that in combination, each element merely performs the same function as it does separately.

(3) The Examiner finds that one of ordinary skill in the art would have recognized that the results of the combination were predictable, namely predicting one or more tooth shapes resulting from one or more dental restoration procedures (“At 655, training module 120 may train a generative deep neural network (e.g., GAN generator 610) using unlabeled dentition data sets to generate a 3D model of a dental prosthesis such as a crown,” Azernikov paragraph 0071 lines 1-4).

Therefore, the rationale to support a conclusion that the claim would have been obvious is that combining prior art elements according to known methods yields predictable results to one of ordinary skill in the art. See MPEP § 2143(I)(A).

As to dependent claim 17, the rejection of claim 13 is incorporated.
Minchenkov does not appear to expressly teach a system wherein the one or more processors are further configured to train a second machine learning model for the validation of at least one of a dental appliance or an orthodontic appliance.

Azernikov teaches a system wherein the one or more processors are further configured to train a second machine learning model for the validation of at least one of a dental appliance or an orthodontic appliance (“At 660, and at substantially the same [sic], training module 120 may also train a discriminating deep neural network (e.g., discriminator 620) to recognize that the dental restoration generated by the generative deep neural network is a model versus a digital model of a real dental restoration. In the recognition process, the discriminating deep neural network can generate a loss function based on comparison of a real dental restoration and the generated model of the dental restoration. The loss function provides a feedback mechanism for the generative deep neural network. Using information from the outputted loss function, the generative deep neural network may generate a better model that can better trick the discriminating neural network to think the generated model is a real model,” paragraph 0072 lines 1-14).

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Minchenkov to comprise the second machine learning model of Azernikov.

(1) The Examiner finds that the prior art included each claim element listed above, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference.

(2) The Examiner finds that one of ordinary skill in the art could have combined the elements as claimed by known software development methods, and that in combination, each element merely performs the same function as it does separately.

(3) The Examiner finds that one of ordinary skill in the art would have recognized that the results of the combination were predictable, namely validation of at least one of a dental appliance or an orthodontic appliance (“At 660, and at substantially the same [sic], training module 120 may also train a discriminating deep neural network (e.g., discriminator 620) to recognize that the dental restoration generated by the generative deep neural network is a model versus a digital model of a real dental restoration. In the recognition process, the discriminating deep neural network can generate a loss function based on comparison of a real dental restoration and the generated model of the dental restoration. The loss function provides a feedback mechanism for the generative deep neural network. Using information from the outputted loss function, the generative deep neural network may generate a better model that can better trick the discriminating neural network to think the generated model is a real model,” Azernikov paragraph 0072 lines 1-14).

Therefore, the rationale to support a conclusion that the claim would have been obvious is that combining prior art elements according to known methods yields predictable results to one of ordinary skill in the art. See MPEP § 2143(I)(A).

As to dependent claim 18, the rejection of claim 13 is incorporated.

Minchenkov does not appear to expressly teach a system wherein the one or more processors are further configured to train a second machine learning model for modifying one or more aspects of the digital 3D representation.
Azernikov teaches a system wherein the one or more processors are further configured to train a second machine learning model for modifying one or more aspects of the digital 3D representation (“At 655, training module 120 may train a generative deep neural network (e.g., GAN generator 610) using unlabeled dentition data sets to generate a 3D model of a dental prosthesis such as a crown,” paragraph 0071 lines 1-4).

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Minchenkov to comprise the second machine learning model of Azernikov.

(1) The Examiner finds that the prior art included each claim element listed above, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference.

(2) The Examiner finds that one of ordinary skill in the art could have combined the elements as claimed by known software development methods, and that in combination, each element merely performs the same function as it does separately.

(3) The Examiner finds that one of ordinary skill in the art would have recognized that the results of the combination were predictable, namely modifying one or more aspects of the digital 3D representation (“At 655, training module 120 may train a generative deep neural network (e.g., GAN generator 610) using unlabeled dentition data sets to generate a 3D model of a dental prosthesis such as a crown,” Azernikov paragraph 0071 lines 1-4).

Therefore, the rationale to support a conclusion that the claim would have been obvious is that combining prior art elements according to known methods yields predictable results to one of ordinary skill in the art. See MPEP § 2143(I)(A).

As to dependent claim 20, the rejection of claim 13 is incorporated.

Minchenkov does not appear to expressly teach a system wherein the one or more processors are further configured to train a second machine learning model for predicting one or more tooth shapes resulting from one or more dental restoration procedures.

Azernikov teaches a system wherein the one or more processors are further configured to train a second machine learning model for predicting one or more tooth shapes resulting from one or more dental restoration procedures (“At 655, training module 120 may train a generative deep neural network (e.g., GAN generator 610) using unlabeled dentition data sets to generate a 3D model of a dental prosthesis such as a crown,” paragraph 0071 lines 1-4).

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Minchenkov to comprise the second machine learning model of Azernikov.

(1) The Examiner finds that the prior art included each claim element listed above, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference.

(2) The Examiner finds that one of ordinary skill in the art could have combined the elements as claimed by known software development methods, and that in combination, each element merely performs the same function as it does separately.

(3) The Examiner finds that one of ordinary skill in the art would have recognized that the results of the combination were predictable, namely predicting one or more tooth shapes resulting from one or more dental restoration procedures (“At 655, training module 120 may train a generative deep neural network (e.g., GAN generator 610) using unlabeled dentition data sets to generate a 3D model of a dental prosthesis such as a crown,” Azernikov paragraph 0071 lines 1-4).
Therefore, the rationale to support a conclusion that the claim would have been obvious is the combining of prior art elements according to known methods to yield predictable results to one of ordinary skill in the art. See MPEP § 2143(I)(A).

Claims 8 and 19 are rejected under 35 U.S.C. § 103 as being unpatentable over Minchenkov in view of Feng et al. (US 2021/0217233 A1, hereinafter Feng).

As to dependent claim 8, the rejection of claim 1 is incorporated. Minchenkov does not appear to expressly teach a method comprising training a second machine learning model for predicting one or more local coordinate axes for at least one tooth in the digital 3D representation, wherein at least one of the X, Y, or Z axes of the local coordinate axes are predicted.

Feng teaches a method comprising training a second machine learning model (“setting a local coordinate system for it using a first artificial neural network, wherein the first artificial neural network is a trained deep learning artificial neural network,” paragraph 0006 lines 6-9) for predicting one or more local coordinate axes for at least one tooth in the digital 3D representation, wherein at least one of the X, Y, or Z axes of the local coordinate axes are predicted (“the computer-implemented method for setting a local coordinate system of a tooth 3D digital model may further comprise: obtaining a first predicted vector using the first artificial neural network based on the first 3D digital model, wherein the first predicted vector corresponds to a first coordinate axis of the local coordinate system, the first coordinate axis is one of the y-axis and z-axis of the local coordinate system, and the other of the y-axis and z-axis of the local coordinate system is a second coordinate axis; determining the x-axis of the local coordinate system using a principal component analysis method based on the first 3D digital model; determining the second coordinate axis based on the determined x-axis and first predicted vector; and determining the first coordinate axis based on the determined x-axis and second coordinate axis,” paragraph 0008 lines 1-16). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Minchenkov to comprise the local coordinates of Feng.

(1) The Examiner finds that the prior art included each claim element listed above, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference.

(2) The Examiner finds that one of ordinary skill in the art could have combined the elements as claimed by known software development methods, and that in combination, each element merely performs the same function as it does separately.

(3) The Examiner finds that one of ordinary skill in the art would have recognized that the results of the combination were predictable, namely automating the setting of local coordinates (Feng paragraphs 0002-0005).

Therefore, the rationale to support a conclusion that the claim would have been obvious is the combining of prior art elements according to known methods to yield predictable results to one of ordinary skill in the art. See MPEP § 2143(I)(A).

As to dependent claim 19, the rejection of claim 13 is incorporated. Minchenkov does not appear to expressly teach a system wherein the one or more processors are further configured to train a second machine learning model for predicting one or more local coordinate axes for at least one tooth in the digital 3D representation, wherein at least one of the X, Y, or Z axes of the local coordinate axes are predicted.
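As a brief technical aside, the axis-setting procedure quoted from Feng above (a network-predicted vector for one axis, principal component analysis for the x-axis, and cross products to complete the frame) can be sketched in plain Python. Everything below is a hypothetical illustration: the point set stands in for a segmented tooth model and `predicted` stands in for the neural network's output vector; none of it is taken from Feng's actual implementation.

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def principal_axis(points):
    """Dominant PCA direction of a 3D point set, via power iteration
    on the 3x3 covariance matrix."""
    n = len(points)
    mean = [sum(p[i] for p in points) / n for i in range(3)]
    centered = [[p[i] - mean[i] for i in range(3)] for p in points]
    cov = [[sum(q[i] * q[j] for q in centered) / n for j in range(3)]
           for i in range(3)]
    v = normalize([1.0, 1.0, 1.0])
    for _ in range(200):
        v = normalize([sum(cov[i][j] * v[j] for j in range(3))
                       for i in range(3)])
    return v

def local_axes(points, predicted):
    """Feng-style local frame: x-axis from PCA, the 'second' axis from the
    x-axis and the predicted vector, the 'first' axis from the other two."""
    x_axis = principal_axis(points)
    z_axis = normalize(cross(x_axis, predicted))  # second coordinate axis
    y_axis = cross(z_axis, x_axis)                # first axis; already unit
    return x_axis, y_axis, z_axis
```

Given any predicted vector not parallel to the PCA axis, the sketch returns three mutually orthogonal unit axes, which is the geometric property the quoted method relies on.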
Feng teaches a system wherein the one or more processors are further configured to train a second machine learning model (“setting a local coordinate system for it using a first artificial neural network, wherein the first artificial neural network is a trained deep learning artificial neural network,” paragraph 0006 lines 6-9) for predicting one or more local coordinate axes for at least one tooth in the digital 3D representation, wherein at least one of the X, Y, or Z axes of the local coordinate axes are predicted (“the computer-implemented method for setting a local coordinate system of a tooth 3D digital model may further comprise: obtaining a first predicted vector using the first artificial neural network based on the first 3D digital model, wherein the first predicted vector corresponds to a first coordinate axis of the local coordinate system, the first coordinate axis is one of the y-axis and z-axis of the local coordinate system, and the other of the y-axis and z-axis of the local coordinate system is a second coordinate axis; determining the x-axis of the local coordinate system using a principal component analysis method based on the first 3D digital model; determining the second coordinate axis based on the determined x-axis and first predicted vector; and determining the first coordinate axis based on the determined x-axis and second coordinate axis,” paragraph 0008 lines 1-16). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Minchenkov to comprise the local coordinates of Feng.

(1) The Examiner finds that the prior art included each claim element listed above, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference.
(2) The Examiner finds that one of ordinary skill in the art could have combined the elements as claimed by known software development methods, and that in combination, each element merely performs the same function as it does separately.

(3) The Examiner finds that one of ordinary skill in the art would have recognized that the results of the combination were predictable, namely automating the setting of local coordinates (Feng paragraphs 0002-0005).

Therefore, the rationale to support a conclusion that the claim would have been obvious is the combining of prior art elements according to known methods to yield predictable results to one of ordinary skill in the art. See MPEP § 2143(I)(A).

Claims 10 and 16 are rejected under 35 U.S.C. § 103 as being unpatentable over Minchenkov in view of Zhou et al. (US 2019/0205606 A1, hereinafter Zhou).

As to dependent claim 10, the rejection of claim 1 is incorporated. Minchenkov does not appear to expressly teach a method wherein at least one weight of the first machine learning model is trained, at least in part, by transfer learning.

Zhou teaches a method wherein at least one weight of the first machine learning model is trained, at least in part, by transfer learning (“There [are] two types of transfer learning approaches that are typically used. In a first type of transfer learning approach, CNN-A is used as a fixed feature extractor. This is done by removing the last fully-connected layer(s) and taking the feature values from the intermediate layers as a fixed feature extractor for the new dataset. These features are then fed into other machine learning methods (e.g., support vector machine (SVM), boosting, etc.) for final decisions. Variants of this approach include using only the features from one intermediate layer or aggregating features from all intermediate layers. Further, feature selection can be applied before feeding the features into other machine learning methods. In a second type of transfer learning approach, CNN-A is fine tuned. This is done by retraining CNN-A using the small-size database of medical images from domain B, with the previously trained weights of CNN-A used for initialization. Also, it is possible to keep earlier layers fixed (due to overfitting concerns) and only fine-tune the remaining layers,” paragraph 0137 lines 10-28). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the training of Minchenkov to comprise the transfer learning of Zhou.

(1) The Examiner finds that the prior art included each claim element listed above, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference.

(2) The Examiner finds that one of ordinary skill in the art could have combined the elements as claimed by known software development methods, and that in combination, each element merely performs the same function as it does separately.

(3) The Examiner finds that one of ordinary skill in the art would have recognized that the results of the combination were predictable, namely training via transfer learning (“There [are] two types of transfer learning approaches that are typically used. In a first type of transfer learning approach, CNN-A is used as a fixed feature extractor. This is done by removing the last fully-connected layer(s) and taking the feature values from the intermediate layers as a fixed feature extractor for the new dataset. These features are then fed into other machine learning methods (e.g., support vector machine (SVM), boosting, etc.) for final decisions. Variants of this approach include using only the features from one intermediate layer or aggregating features from all intermediate layers. Further, feature selection can be applied before feeding the features into other machine learning methods. In a second type of transfer learning approach, CNN-A is fine tuned. This is done by retraining CNN-A using the small-size database of medical images from domain B, with the previously trained weights of CNN-A used for initialization. Also, it is possible to keep earlier layers fixed (due to overfitting concerns) and only fine-tune the remaining layers,” Zhou paragraph 0137 lines 10-28).

Therefore, the rationale to support a conclusion that the claim would have been obvious is the combining of prior art elements according to known methods to yield predictable results to one of ordinary skill in the art. See MPEP § 2143(I)(A).

As to dependent claim 16, the rejection of claim 14 is incorporated. Minchenkov does not appear to expressly teach a system wherein the neural network is trained, at least in part, by transfer learning.

Zhou teaches a system wherein the neural network is trained, at least in part, by transfer learning (“There [are] two types of transfer learning approaches that are typically used. In a first type of transfer learning approach, CNN-A is used as a fixed feature extractor. This is done by removing the last fully-connected layer(s) and taking the feature values from the intermediate layers as a fixed feature extractor for the new dataset. These features are then fed into other machine learning methods (e.g., support vector machine (SVM), boosting, etc.) for final decisions. Variants of this approach include using only the features from one intermediate layer or aggregating features from all intermediate layers. Further, feature selection can be applied before feeding the features into other machine learning methods. In a second type of transfer learning approach, CNN-A is fine tuned. This is done by retraining CNN-A using the small-size database of medical images from domain B, with the previously trained weights of CNN-A used for initialization. Also, it is possible to keep earlier layers fixed (due to overfitting concerns) and only fine-tune the remaining layers,” paragraph 0137 lines 10-28). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the training of Minchenkov to comprise the transfer learning of Zhou.

(1) The Examiner finds that the prior art included each claim element listed above, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference.

(2) The Examiner finds that one of ordinary skill in the art could have combined the elements as claimed by known software development methods, and that in combination, each element merely performs the same function as it does separately.

(3) The Examiner finds that one of ordinary skill in the art would have recognized that the results of the combination were predictable, namely training via transfer learning (“There [are] two types of transfer learning approaches that are typically used. In a first type of transfer learning approach, CNN-A is used as a fixed feature extractor. This is done by removing the last fully-connected layer(s) and taking the feature values from the intermediate layers as a fixed feature extractor for the new dataset. These features are then fed into other machine learning methods (e.g., support vector machine (SVM), boosting, etc.) for final decisions. Variants of this approach include using only the features from one intermediate layer or aggregating features from all intermediate layers. Further, feature selection can be applied before feeding the features into other machine learning methods. In a second type of transfer learning approach, CNN-A is fine tuned. This is done by retraining CNN-A using the small-size database of medical images from domain B, with the previously trained weights of CNN-A used for initialization. Also, it is possible to keep earlier layers fixed (due to overfitting concerns) and only fine-tune the remaining layers,” Zhou paragraph 0137 lines 10-28).

Therefore, the rationale to support a conclusion that the claim would have been obvious is the combining of prior art elements according to known methods to yield predictable results to one of ordinary skill in the art. See MPEP § 2143(I)(A).

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure: Jurdi et al., “High-level prior-based loss functions for medical image segmentation: A survey,” Computer Vision and Image Understanding, Volume 210, September 2021, 103248, ISSN 1077-3142, https://doi.org/10.1016/j.cviu.2021.103248, https://www.sciencedirect.com/science/article/pii/S1077314221000928, disclosing machine learning segmentation of digital 3D medical scans.

Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

In the interests of compact prosecution, Applicant is invited to contact the examiner via electronic media pursuant to USPTO policy outlined in MPEP § 502.03. All electronic communication must be authorized in writing. Applicant may wish to file an Internet Communications Authorization Form PTO/SB/439.
Applicant may wish to request an interview using the Interview Practice website: http://www.uspto.gov/patent/laws-and-regulations/interview-practice.

Applicant is reminded that Internet e-mail may not be used for communication on matters under 35 U.S.C. § 132 or which otherwise require a signature. A reply to an Office action may NOT be communicated by Applicant to the USPTO via Internet e-mail. If such a reply is submitted by Applicant via Internet e-mail, a paper copy will be placed in the appropriate patent application file with an indication that the reply is NOT ENTERED. See MPEP § 502.03(II).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ryan Barrett, whose telephone number is 571-270-3311. The examiner can normally be reached from 9:00 am to 5:30 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold, can be reached at 571-431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Ryan Barrett/
Primary Examiner, Art Unit 2148
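As an editorial aside on the Zhou reference quoted in the §103 rejections above: the two transfer-learning approaches it describes (reusing a pretrained network as a frozen feature extractor, versus fine-tuning from the pretrained weights) can be illustrated with a deliberately tiny numeric sketch. The 'network' here is a single weight and the data are made up; nothing below comes from Zhou, Minchenkov, or the application.

```python
# Toy stand-in for the two transfer-learning approaches Zhou describes.
# "CNN-A" is modeled as one pretrained weight; all names/values are hypothetical.
PRETRAINED = 1.5              # backbone weight learned on a source domain
xs = [0.5, 1.0, 2.0, 3.0]     # small target-domain dataset
ys = [2.0 * x for x in xs]    # target task: y = 2x

def loss(a, b):
    """Mean squared error of the head-on-backbone model y ~ b * (a * x)."""
    return sum((b * a * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def fixed_extractor_head():
    """Approach 1: freeze the backbone; features h = PRETRAINED * x are
    fixed, and only a new linear head b is fit (closed-form least squares)."""
    hs = [PRETRAINED * x for x in xs]
    return sum(h * y for h, y in zip(hs, ys)) / sum(h * h for h in hs)

def fine_tune(steps=500, lr=0.01):
    """Approach 2: fine-tune. The backbone weight a is initialized from the
    pretrained value and updated along with the head b by gradient descent."""
    a, b = PRETRAINED, 1.0
    for _ in range(steps):
        grad_a = sum(2 * (b * a * x - y) * b * x
                     for x, y in zip(xs, ys)) / len(xs)
        grad_b = sum(2 * (b * a * x - y) * a * x
                     for x, y in zip(xs, ys)) / len(xs)
        a, b = a - lr * grad_a, b - lr * grad_b
    return a, b
```

In the first approach only the new head changes; in the second, the pretrained weight itself is updated from its pretrained initialization, mirroring the fixed-extractor and fine-tuning variants in the quoted passage.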

Prosecution Timeline

May 19, 2023
Application Filed
Feb 07, 2026
Non-Final Rejection — §102, §103, §Other (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602612
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Apr 14, 2026
Patent 12585525
BUSINESS LANGUAGE PROCESSING USING LoQoS AND rb-LSTM
2y 5m to grant Granted Mar 24, 2026
Patent 12585506
SYSTEM AND METHOD FOR DETERMINATION OF MODEL FITNESS AND STABILITY FOR MODEL DEPLOYMENT IN AUTOMATED MODEL GENERATION
2y 5m to grant Granted Mar 24, 2026
Patent 12585990
HETEROGENEOUS COMPUTE-BASED ARTIFICIAL INTELLIGENCE MODEL PARTITIONING
2y 5m to grant Granted Mar 24, 2026
Patent 12585975
STATE MAPS FOR QUANTUM COMPUTING
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
64%
Grant Probability
99%
With Interview (+43.7%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 409 resolved cases by this examiner. Grant probability derived from career allow rate.
