Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office Action has been issued in response to the amendment filed 11/25/2025.
Response to Arguments
Applicant's arguments filed 11/25/2025 have been carefully and fully considered. With respect to Applicant's argument regarding the rejection under 35 U.S.C. § 103, which recites:
“There is no disclosure in Azernikov of multiple CNNs used to reinforce each other’s output to provide an optimized output. Instead, Azernikov discloses a single-stage neural network that outputs a design, while Lee handles printer allocation, neither provides the claimed first-mode vs second-mode hierarchy driven by distinct prioritization algorithms.”
Examiner points to Azernikov [0101], "training module 123 may use dentition training data sets to train one or more deep neural networks with such a structure 500 (FIG. 5A) or 550 (FIG. 5B) for recognizing dental information and/or features within each of the training data set," and to FIG. 5B. As clearly shown in FIG. 5B, the neural network is not a single-stage neural network.
“Further, the claimed limitation requires feedback provided to the dental configuration recommendation module that is explicitly indicative of the accuracy of its proposals and used to reinforce that module… As presently recited in claim 1, “the dental configuration recommendation module uses the feedback to strengthen or weaken parameters of the machine learning engine,” which is a mechanism for reinforcement machine learning. In contrast, Azernikov’s qualitative evaluation module outputs a grade and provides that feedback to a design technician, not back into a recommendation engine. And while Azernikov describes a generative adversarial network, the GAN uses two neural networks against each other to generate an output model. The generator network uses a loss function from a discriminator network to update internal weights. Applicant respectfully submits that this internal model building loop is not presented as feedback about proposal accuracy, directed to a dental configuration recommendation module for reinforcement as claimed… This is not reinforcement training, but dataset building. Thus, Azernikov does not disclose a user or system-generated feedback loop of proposal accuracy that retrains or reinforces a recommendation engine in a dental workflow (e.g., by adjusting input parameter weights). The Examiner’s combination of separate passages regarding qualitative feedback and generative adversarial network weight updates does not show that Azernikov teaches feedback to a recommendation module as required. Therefore, Azernikov in view of Lee does not teach “providing feedback for the dental configuration recommendation module, the feedback indicative of an accuracy of proposals in order to reinforce the dental configuration recommendation module”.
Examiner notes that the network is employed to model dental anatomical features and restorations, and that the loss function provides feedback to the generator neural network, which improves each sample by changing weights and generating another output using the patient's dentition scans (support can be found in [0113]-[0115]). This is interpreted as indicating the accuracy of a proposal, since the network improves the model of dental anatomical features, thereby reinforcing the recommendation module. The claim language does not require reinforcement training; under the broadest reasonable interpretation (BRI), it simply states "in order to reinforce the ... module," which is interpreted as to strengthen or support the module. Additionally, nowhere in the specification is "reinforce" defined as reinforcement training. Although feedback is provided to a design technician, the user submits the design back through the deep neural network for a recheck until it is acceptable, as described in [0145]; this is therefore interpreted as feedback to a recommendation engine.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-6, 8-9, 11, 13-15, and 17-21 are rejected under 35 U.S.C. 103 as being unpatentable over Azernikov et al. (US20220218449A1, herein Azernikov), in view of Lee et al. (US20160167310A1, herein Lee).
Regarding claim 1, Azernikov teaches A computer-implemented method comprising the steps of: receiving a three-dimensional (3D) scan of a dental cavity of a patient for a dental workflow ([0007] receiving a patient's scan data representing at least one portion of the patient's dentition data set); automatically identifying, by a computer-aided design/computer-aided manufacturing (CAD/CAM) resource module configured to automatically obtain information about the dental workflow, input data, including at least CAD/CAM input data, for a dental configuration recommendation module ([0007] identifying, using the trained deep neural network, one or more dental features in the patient's scan data based on one or more output probability values of the deep neural network, [0002] dental computer-aided design (CAD), and specifically to dental CAD automation using deep learning, [0104] the deep neural network is trained such that when a patient's scan data (patient's scanned dental 3D model) is input into the deep neural network, [0009] receive, from a user interface, an input indicating a selection of a dental restoration type to be fabricated, [0070] a user interface (UI), such as physical and/or on-screen buttons, with which human client 175 may interact with client device 107 to perform functions such as submitting a new dental restoration request, receiving and reviewing identified dental information associated with dental models, receiving and reviewing a completed dental restoration design, etc.); automatically identifying, by the CAD/CAM resource module ([0007] identifying, using the trained deep neural network, one or more dental features in the patient's scan data based on one or more output probability values of the deep neural network, [0002] dental computer-aided design (CAD), and specifically to dental CAD automation using deep learning, [0104] the deep neural network is trained such that when a patient's scan data (patient's scanned dental 3D model) is input into 
the deep neural network), preferences of a specified user as part of the input data ([0009] The dental restoration client is configured to receive, from a user interface, an input indicating a selection of a dental restoration type to be fabricated, [0063] Dental restoration server 101 receives dental restoration requests from client device 107 operated by client 175 such as a human client. In one embodiment, the dental restoration requests include scan dental models generated by scanner 109. In other embodiments, client 175 may send a physical model or impression of a patient's teeth along with the requests to dental restoration server 101, [0013] The restoration types can include crowns, inlays, bridges, and implants); automatically generating, as part of the input data, a set of dependencies that affect an effectiveness of a first output proposal to be proposed by the dental configuration recommendation module ([0114] in response to the loss function, generator 510 can change one or more of the weights and/or bias variables and generate another output, [0121] data associated with restoration design 1212 is preprocessed and is provided as an input to a deep neural network… three depth maps corresponding with the occlusal, buccal, and lingual views of restoration design 1212 are provided. 
Also provided is a preprocessed scan data for a portion of a patient's dentition 1210); extracting, in a first mode, one or more first features from the input data based on an algorithm of prioritization or dependencies ([0122] The algorithms and instructions of method 1000, when executed by processor 202, enable computing device 200 to train one or more deep neural networks for recognizing or identifying dental information or features in dental models or scanned data sets of a patient's dentition), the one or more first features representative of a request for completing a restoration design; proposing, in the first mode, using a first convolutional neural network of the dental configuration recommendation module and the extracted one or more first features (Fig. 10B, [0023] performing qualitative evaluations of restoration designs and using the output of the evaluation to assign a final evaluation grade that is used by the system to determine a further processing option for the restoration design, [0055] Using the electronic image, a computer-implemented dental information or features recognition system is used to automatically identify useful dental structure and restoration information and detect features and margin lines of dentition, thus facilitating automatic dental restoration design and fabrication in the following steps, [0063] dental restoration requests include scan dental models generated by scanner 109), the first output proposal as at least one restoration geometry proposal that is compatible with the automatically generated set of dependencies ([0010] The 3D modeling can then generate an output restoration model using the selected pre-trained deep neural network based on the patient's dentition data); extracting, in a second mode responsive to proposing in the first mode, one or more second features from the input data based on an algorithm of prioritization or dependencies ([0122] The algorithms and instructions of method 1000, when executed by 
processor 202, enable computing device 200 to train one or more deep neural networks for recognizing or identifying dental information or features in dental models or scanned data sets of a patient's dentition), the one or more second features representative of a request for completing a machining process ([0063] client 175 may send a physical model or impression of a patient's teeth along with the requests to dental restoration server 101 and the digital dental model can be created accordingly on the server side 101, [0075] provide dental restoration design and/or fabrication services); proposing, in the second mode, and in response to the request for completing the machining process, using a second convolutional neural network of the dental configuration recommendation module, which uses as input: a) the one or more second features, b) the restoration geometry proposal from the first mode,… (Fig. 5B, [0011] The 3D modeling can then use the patient's dentition data set as an input to the selected pre-trained deep neural network. The 3D modeling can then generate an output restoration model using the selected pre-trained deep neural network based on the patient's dentition data, [0013] dentition or dental feature can include a lower jaw, an upper jaw, a prepared jaw, an opposing jaw, a set of tooth numbers, and dental restoration types. The restoration types can include crowns, inlays, bridges, and implants) dental printer, a second output proposal as at least one configuration for the machining process that is compatible with the automatically generated set of dependencies ([0011] the 3D fabricator is configured to fabricate a 3D model of the selected dental restoration type using the output restoration model generated by the 3D modeling module, [0013] dentition or dental feature can include a lower jaw, an upper jaw, a prepared jaw, an opposing jaw, a set of tooth numbers, and dental restoration types. 
The restoration types can include crowns, inlays, bridges, and implants, [0010] The 3D modeling module is also configured to select a deep neural network pre-trained by a group of dentition training data sets designed for a restoration model that matches the selected dental restoration type. For example, if the selected dental restoration type is a crown, then a deep neural network that is pretrained by a group of dentition training data sets specifically designed for mapping crowns is selected. The 3D modeling can then use the patient's dentition data set as an input to the selected pre-trained deep neural network. The 3D modeling can then generate an output restoration model using the selected pre-trained deep neural network based on the patient's dentition data, [0055] input to a machine learning engine in order to obtain dental parameter output proposals.); providing feedback for the dental configuration recommendation module indicative of an accuracy of proposals in order to reinforce the dental configuration recommendation module (Azernikov, [0145] qualitative evaluation module 1630 may provide an output that has one of three levels: accept, minor adjustment, or complete remake. In an embodiment, the system may provide feedback to design device 103 based upon qualitative evaluation module 135 output that certain areas require improvement. 
Based upon this feedback, the restoration design will be improved by design device 103 and checked again until the qualitative evaluation module indicates that the restoration design is acceptable, or until a maximum number of iterations has been reached, [0114] feedback required for generator 510 to improve each succeeding sample, [0022] The deep neural network is trained so that, once a defect is found, the system will provide feedback to a restoration design module that an identified design aspect requires improvement); and optimizing the first output proposal with the first convolutional neural network based on the feedback and the second output proposal, or optimizing the second output proposal with the second convolutional neural network based on the first output proposal; wherein the dental configuration recommendation module operates as a machine learning engine (Fig. 5B, [0101] training module 123 may use dentition training data sets to train one or more deep neural networks with such a structure 500 (FIG. 5A) or 550 (FIG. 5B) for recognizing dental information and/or features within each of the training data set, [0114] in response to the loss function, generator 510 can change one or more of the weights and/or bias variables and generate another output, [0121] data associated with restoration design 1212 is preprocessed and is provided as an input to a deep neural network… three depth maps corresponding with the occlusal, buccal, and lingual views of restoration design 1212 are provided. 
Also provided is a preprocessed scan data for a portion of a patient's dentition 1210, [0065] dental restoration server 101 may include a neural network module (not shown) that contains various deep learning neural networks such as convolutional networks, convolutional deep belief networks, recurrent neural networks, and generative adversarial networks, [0022] The deep neural network is trained so that, once a defect is found, the system will provide feedback to a restoration design module that an identified design aspect requires improvement), and wherein the dental configuration recommendation module uses the feedback to strengthen or weaken parameters of the machine learning engine ([0114] Loss function 540 can be used to provide the feedback required for generator 510 to improve each succeeding sample. In one embodiment, in response to the loss function, generator 510 can change one or more of the weights and/or bias variables and generate another output, [0065] use many training data sets from a database 133 (e.g., scans of patients' dentition impressions) to train one or more deep neural networks, which can be a part of training module 123. In some embodiments, dental restoration server 101 may include a neural network module (not shown) that contains various deep learning neural networks such as convolutional networks, convolutional deep belief networks, recurrent neural networks, and generative adversarial networks, [0145] the system may provide feedback to design device 103 based upon qualitative evaluation module 135 output that certain areas require improvement. Based upon this feedback, the restoration design will be improved by design device 103 and checked again until the qualitative evaluation module indicates that the restoration design is acceptable, or until a maximum number of iterations has been reached).
Azernikov does not teach c) identification of a … printer, and d) tools connected to the … printer.
Lee teaches c) identification of a …printer, and d) tools connected to the … printer ([0011] the printer characteristic information may include at least one information of 3D printer specification, material/color/file supportable by a 3D printer, maximum size of an object that can be printed by a 3D printer, resolution of a 3D printer and accuracy of a 3D printer, [0044] An apparatus for recommending a 3D printer 100 may receive printer characteristic information of corresponding 3D printers 200 from a plurality of 3D printers 200 existing in cloud and further store the received printer characteristic information, [0061] Here, the 3D printer specification may include information about manufacturer, brand name, model, number of printer heads, printing speed and nozzle of the corresponding 3D printer).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Azernikov's teaching of inputting data into a dental recommendation model with Lee's teaching of the data including printer characteristic information. The combined teaching provides the expected result of inputting data, including printer characteristic information, into a dental recommendation model. One of ordinary skill in the art would have been motivated to make this combination in order to improve the accuracy of the recommendation model.
Regarding claim 2, the combination of Azernikov and Lee teach The method of claim 1, wherein said proposing the first or second output proposal includes at least one of (a) proposing a user-specific 3D restoration design as a first configuration (Azernikov, [0063] the dental restoration requests include scan dental models generated by scanner 109. In other embodiments, client 175 may send a physical model or impression of a patient's teeth along with the requests to dental restoration server 101 and the digital dental model can be created accordingly on the server side 101, e.g., by an administrator or technician operating on the server 101. Dental restoration server 101 creates and manages dental restoration cases based upon the received requests from client 107. For example, dental restoration server 101 may assign the created cases to appropriate design device 103 or third party server 151 to design a dental restoration according to the client's requests, [0056] A user (e.g., a design technician) operating on the user device can view the dental model and design or refine a restoration model based on the dental model.), (b) proposing at least one user-specific restoration parameter as a second configuration, and (c) proposing at least one user-specific manufacturing parameter as a third configuration.
Regarding claim 3, the combination of Azernikov and Lee teach The method of claim 2, further comprising: providing, using an adaptation module, a modification of a dental workflow setting or of a proposed user-specific configuration from a plurality of proposed user-specific configurations and computing an acceptance decision of the modification based on the set of dependencies (Azernikov, [0145] qualitative evaluation module 1630 may provide an output that has one of three levels: accept, minor adjustment, or complete remake. In an embodiment, the system may provide feedback to design device 103 based upon qualitative evaluation module 135 output that certain areas require improvement. Based upon this feedback, the restoration design will be improved by design device 103 and checked again until the qualitative evaluation module indicates that the restoration design is acceptable, or until a maximum number of iterations has been reached, [0056] A user (e.g., a design technician) operating on the user device can view the dental model and design or refine a restoration model based on the dental model.).
Regarding claim 4, the combination of Azernikov and Lee teach The method of claim 3, further comprising: recomputing, the proposed user-specific configuration upon determining that the modification is acceptable (Azernikov, [0145] qualitative evaluation module 1630 may provide an output that has one of three levels: accept, minor adjustment, or complete remake. In an embodiment, the system may provide feedback to design device 103 based upon qualitative evaluation module 135 output that certain areas require improvement. Based upon this feedback, the restoration design will be improved by design device 103 and checked again until the qualitative evaluation module indicates that the restoration design is acceptable, or until a maximum number of iterations has been reached, [0056] A user (e.g., a design technician) operating on the user device can view the dental model and design or refine a restoration model based on the dental model).
Regarding claim 5, the combination of Azernikov and Lee teach The method of claim 1, wherein the input data is automatically obtained from devices contained in or connected to the dental manufacturing system (Azernikov, Fig. 1, [0011] The 3D fabricator can be a milling machine or a 3D printer. One or more of the system components/modules (e.g., the dental restoration client, the 3D modeling module, and the 3D module fabricator) can reside on the same local network or can be remotely located from each other, [0080] Client 175 may then upload the generated model received from restoration server 101 to a fabricator such as a 3D printer).
Regarding claim 6, the combination of Azernikov and Lee teach The method of claim 1, wherein at least a portion of the input data is automatically retrieved from the 3D scan of the dental cavity (Azernikov, [0007] identifying, using the trained deep neural network, one or more dental features in the patient's scan data based on one or more output probability values of the deep neural network).
Regarding claim 8, the combination of Azernikov and Lee teach The method of claim 1, further comprising: training the dental configuration recommendation module based on information comprising static CAD/CAM input data (Azernikov, [0011] The 3D fabricator can be a milling machine or a 3D printer, [0077] client device 107 may include a miniature 3D fabricator such as a milling machine or a 3D printer… client device 107 may pre-process the 3D impression scan data… the preprocessing of the scanned impression data can be performed by training module 123), dynamic CAD/CAM input data ([0057] may train deep neural networks, using many dentition training data sets, to automatically recognize dental information and/or features from the dental model representing at least a portion of a patient's dentition. The systems and methods may display the recognized dental information and/or features before the user starts doing manual design. For example, the present system may identify and label upper and lower jaws, prepared and opposing jaws, the number of a tooth on which the preparation is to be used, or the type of restoration to be designed), and user preferences ([0056] A user (e.g., a design technician) operating on the user device can view the dental model and design or refine a restoration model based on the dental model.).
Regarding claim 9, the combination of Azernikov and Lee teach The method of claim 8, wherein the information includes historical data selected from a database of a dental manufacturing system (Azernikov, [0066] Database 133 can have many groups of training data sets, one group for each tooth number and/or for each dental feature, for example. Database 133 can also have a dedicated group of training data sets for the upper jaw, the opposing jaw, bridges, implants, bone graft locations, prepared tooth, and each attribute or aspect of a tooth surface anatomy (e.g., buccal and lingual cusps, distobuccal and mesiobuccal inclines, distal and mesial cusp ridges, distolingual and mesiolingual inclines, occlusal surface, and buccal and lingual arcs), [0067] training module 123 may pre-train one or more deep neural networks using training data sets from database 133, [0068] Database 133 can store data related to the deep neural networks and the identified dental information associated with the dental models, [0128] Training module 123 can also train one or more deep neural networks to predict the shape and size of a missing tooth based on the unsupervised learning of hundreds or thousands of sample dentition data sets).
Regarding claim 11, the combination of Azernikov and Lee teach The method of claim 1, further comprising locally or remotely providing the dental configuration recommendation module with new instructions and requirements as the set of dependencies (Azernikov, [0145] qualitative evaluation module 1630 may provide an output that has one of three levels: accept, minor adjustment, or complete remake. In an embodiment, the system may provide feedback to design device 103 based upon qualitative evaluation module 135 output that certain areas require improvement. Based upon this feedback, the restoration design will be improved by design device 103 and checked again until the qualitative evaluation module indicates that the restoration design is acceptable, or until a maximum number of iterations has been reached, [0022] the deep neural network is trained so that, once a defect is found, the system will provide feedback to a restoration design module that an identified design aspect requires improvement, [0114] in response to the loss function, generator 510 can change one or more of the weights and/or bias variables and generate another output, [0121] data associated with restoration design 1212 is preprocessed and is provided as an input to a deep neural network… three depth maps corresponding with the occlusal, buccal, and lingual views of restoration design 1212 are provided. Also provided is a preprocessed scan data for a portion of a patient's dentition 1210).
Regarding claim 13, Azernikov teaches A computer system comprising a processor configured to perform operations including ([0012] implemented by one or more processors of a system): receiving a three-dimensional (3D) scan of a dental cavity of a patient for a dental workflow ([0007] receiving a patient's scan data representing at least one portion of the patient's dentition data set); automatically identifying, by a computer-aided design/computer-aided manufacturing (CAD/CAM) resource configured to automatically obtain information about the dental workflow, input data, including at least CAD/CAM input data, for a dental configuration recommendation module ([0007] identifying, using the trained deep neural network, one or more dental features in the patient's scan data based on one or more output probability values of the deep neural network, [0002] dental computer-aided design (CAD), and specifically to dental CAD automation using deep learning, [0104] the deep neural network is trained such that when a patient's scan data (patient's scanned dental 3D model) is input into the deep neural network, [0009] receive, from a user interface, an input indicating a selection of a dental restoration type to be fabricated, [0070] a user interface (UI), such as physical and/or on-screen buttons, with which human client 175 may interact with client device 107 to perform functions such as submitting a new dental restoration request, receiving and reviewing identified dental information associated with dental models, receiving and reviewing a completed dental restoration design, etc. 
); automatically identifying, by the CAD/CAM resource module ([0007] identifying, using the trained deep neural network, one or more dental features in the patient's scan data based on one or more output probability values of the deep neural network, [0002] dental computer-aided design (CAD), and specifically to dental CAD automation using deep learning, [0104] the deep neural network is trained such that when a patient's scan data (patient's scanned dental 3D model) is input into the deep neural network), preferences of a specified user as part of the input data ([0009] The dental restoration client is configured to receive, from a user interface, an input indicating a selection of a dental restoration type to be fabricated, [0063] Dental restoration server 101 receives dental restoration requests from client device 107 operated by client 175 such as a human client. In one embodiment, the dental restoration requests include scan dental models generated by scanner 109. In other embodiments, client 175 may send a physical model or impression of a patient's teeth along with the requests to dental restoration server 101, [0013] The restoration types can include crowns, inlays, bridges, and implants); automatically generating, as part of the input data, a set of dependencies that affect an effectiveness of a first output proposal to be proposed by the dental configuration recommendation module ([0114] in response to the loss function, generator 510 can change one or more of the weights and/or bias variables and generate another output, [0121] data associated with restoration design 1212 is preprocessed and is provided as an input to a deep neural network… three depth maps corresponding with the occlusal, buccal, and lingual views of restoration design 1212 are provided. 
Also provided is a preprocessed scan data for a portion of a patient's dentition 1210); extracting, in a first mode, one or more first features from the input data based on an algorithm of prioritization or dependencies ([0122] The algorithms and instructions of method 1000, when executed by processor 202, enable computing device 200 to train one or more deep neural networks for recognizing or identifying dental information or features in dental models or scanned data sets of a patient's dentition), the one or more first features representative of a request for completing a restoration design; proposing, in the first mode, using a first convolutional neural network of the dental configuration recommendation module and the extracted one or more first features (Fig. 10B, [0023] performing qualitative evaluations of restoration designs and using the output of the evaluation to assign a final evaluation grade that is used by the system to determine a further processing option for the restoration design, [0055] Using the electronic image, a computer-implemented dental information or features recognition system is used to automatically identify useful dental structure and restoration information and detect features and margin lines of dentition, thus facilitating automatic dental restoration design and fabrication in the following steps, [0063] dental restoration requests include scan dental models generated by scanner 109), the first output proposal as at least one restoration geometry proposal that is compatible with the automatically generated set of dependencies ([0010] The 3D modeling can then generate an output restoration model using the selected pre-trained deep neural network based on the patient's dentition data); extracting, in a second mode responsive to proposing in the first mode, one or more second features from the input data based on an algorithm of prioritization or dependencies ([0122] The algorithms and instructions of method 1000, when executed by 
processor 202, enable computing device 200 to train one or more deep neural networks for recognizing or identifying dental information or features in dental models or scanned data sets of a patient's dentition), the one or more second features representative of a request for completing a machining process ([0063] client 175 may send a physical model or impression of a patient's teeth along with the requests to dental restoration server 101 and the digital dental model can be created accordingly on the server side 101, [0075] provide dental restoration design and/or fabrication services), and proposing, in the second mode and in response to the request for completing the machining process, using a second convolutional neural network of the dental configuration recommendation module, which uses as input: a) the one or more second features, b) the restoration geometry proposal from the first mode (Fig. 5B, [0011] The 3D modeling can then use the patient's dentition data set as an input to the selected pre-trained deep neural network. The 3D modeling can then generate an output restoration model using the selected pre-trained deep neural network based on the patient's dentition data, [0013] dentition or dental feature can include a lower jaw, an upper jaw, a prepared jaw, an opposing jaw, a set of tooth numbers, and dental restoration types. The restoration types can include crowns, inlays, bridges, and implants), a … dental printer… second output proposal as at least one configuration for the machining process that is compatible with the automatically generated set of dependencies ([0011] the 3D fabricator is configured to fabricate a 3D model of the selected dental restoration type using the output restoration model generated by the 3D modeling module, [0013] dentition or dental feature can include a lower jaw, an upper jaw, a prepared jaw, an opposing jaw, a set of tooth numbers, and dental restoration types.
The restoration types can include crowns, inlays, bridges, and implants, [0010] The 3D modeling module is also configured to select a deep neural network pre-trained by a group of dentition training data sets designed for a restoration model that matches the selected dental restoration type. For example, if the selected dental restoration type is a crown, then a deep neural network that is pretrained by a group of dentition training data sets specifically designed for mapping crowns is selected. The 3D modeling can then use the patient's dentition data set as an input to the selected pre-trained deep neural network. The 3D modeling can then generate an output restoration model using the selected pre-trained deep neural network based on the patient's dentition data, [0055] input to a machine learning engine in order to obtain dental parameter output proposals); and providing feedback for the dental configuration recommendation module, the feedback indicative of an accuracy of proposals in order to reinforce the dental configuration recommendation module (Azernikov, [0145] qualitative evaluation module 1630 may provide an output that has one of three levels: accept, minor adjustment, or complete remake. In an embodiment, the system may provide feedback to design device 103 based upon qualitative evaluation module 135 output that certain areas require improvement. 
Based upon this feedback, the restoration design will be improved by design device 103 and checked again until the qualitative evaluation module indicates that the restoration design is acceptable, or until a maximum number of iterations has been reached, [0114] feedback required for generator 510 to improve each succeeding sample, [0022] The deep neural network is trained so that, once a defect is found, the system will provide feedback to a restoration design module that an identified design aspect requires improvement); and optimizing the first output proposal with the first convolutional neural network based on the feedback and the second output proposal, or optimizing the second output proposal with the second convolutional neural network based on the first output proposal, wherein the dental configuration recommendation module operates as a machine learning engine (Fig. 5B, [0101] training module 123 may use dentition training data sets to train one or more deep neural networks with such a structure 500 (FIG. 5A) or 550 (FIG. 5B) for recognizing dental information and/or features within each of the training data set, [0114] in response to the loss function, generator 510 can change one or more of the weights and/or bias variables and generate another output, [0121] data associated with restoration design 1212 is preprocessed and is provided as an input to a deep neural network… three depth maps corresponding with the occlusal, buccal, and lingual views of restoration design 1212 are provided.
Also provided is a preprocessed scan data for a portion of a patient's dentition 1210, [0065] dental restoration server 101 may include a neural network module (not shown) that contains various deep learning neural networks such as convolutional networks, convolutional deep belief networks, recurrent neural networks, and generative adversarial networks), and wherein the dental configuration recommendation module uses the feedback to strengthen or weaken parameters of the machine learning engine ([0114] Loss function 540 can be used to provide the feedback required for generator 510 to improve each succeeding sample. In one embodiment, in response to the loss function, generator 510 can change one or more of the weights and/or bias variables and generate another output, [0065] use many training data sets from a database 133 (e.g., scans of patients' dentition impressions) to train one or more deep neural networks, which can be a part of training module 123. In some embodiments, dental restoration server 101 may include a neural network module (not shown) that contains various deep learning neural networks such as convolutional networks, convolutional deep belief networks, recurrent neural networks, and generative adversarial networks, [0145] the system may provide feedback to design device 103 based upon qualitative evaluation module 135 output that certain areas require improvement. Based upon this feedback, the restoration design will be improved by design device 103 and checked again until the qualitative evaluation module indicates that the restoration design is acceptable, or until a maximum number of iterations has been reached).
Azernikov does not teach c) identification of a …printer, and d) tools connected to the … printer.
Lee teaches c) identification of a …printer, and d) tools connected to the … printer ([0011] the printer characteristic information may include at least one information of 3D printer specification, material/color/file supportable by a 3D printer, maximum size of an object that can be printed by a 3D printer, resolution of a 3D printer and accuracy of a 3D printer, [0044] An apparatus for recommending a 3D printer 100 may receive printer characteristic information of corresponding 3D printers 200 from a plurality of 3D printers 200 existing in cloud and further store the received printer characteristic information, [0061] Here, the 3D printer specification may include information about manufacturer, brand name, model, number of printer heads, printing speed and nozzle of the corresponding 3D printer).
Regarding claim 14, the combination of Azernikov and Lee teach The computer system of claim 13, wherein said proposing the first or second output proposal includes the processor to perform (Azernikov, [0012] implemented by one or more processors of a system) at least one of (a) proposing a user-specific restoration design as a first configuration ([0063] the dental restoration requests include scan dental models generated by scanner 109. In other embodiments, client 175 may send a physical model or impression of a patient's teeth along with the requests to dental restoration server 101 and the digital dental model can be created accordingly on the server side 101, e.g., by an administrator or technician operating on the server 101. Dental restoration server 101 creates and manages dental restoration cases based upon the received requests from client 107. For example, dental restoration server 101 may assign the created cases to appropriate design device 103 or third party server 151 to design a dental restoration according to the client's requests, [0056] A user (e.g., a design technician) operating on the user device can view the dental model and design or refine a restoration model based on the dental model.), (b) proposing at least one restoration parameter as a second configuration, and (c) proposing at least one user-specific manufacturing parameter as a third configuration.
Regarding claim 15, the combination of Azernikov and Lee teach The computer system of claim 13, wherein at least a portion of the input data is automatically retrieved from the 3D scan of the dental cavity (Azernikov, Fig. 1, [0011] The 3D fabricator can be a milling machine or a 3D printer. One or more of the system components/modules (e.g., the dental restoration client, the 3D modeling module, and the 3D module fabricator) can reside on the same local network or can be remotely located from each other, [0080] Client 175 may then upload the generated model received from restoration server 101 to a fabricator such as a 3D printer).
Regarding claim 17, Azernikov teaches A non-transitory computer-readable storage medium storing a program which ([0082] The storage device 208 is any non-transitory computer-readable storage medium, such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device. The memory 206 holds instructions and data used by the processor 202), when executed by a computer system ([0084] and executed by processor 202), causes the computer system to perform a procedure comprising the steps of: receiving a three-dimensional (3D) scan of a dental cavity of a patient for a dental workflow ([0007] receiving a patient's scan data representing at least one portion of the patient's dentition data set); automatically identifying, by a computer-aided design/computer-aided manufacturing (CAD/CAM) resource module configured to automatically obtain information about the dental workflow, input data, including at least CAD/CAM input data and preferences, for a dental configuration recommendation module ([0007] identifying, using the trained deep neural network, one or more dental features in the patient's scan data based on one or more output probability values of the deep neural network, [0002] dental computer-aided design (CAD), and specifically to dental CAD automation using deep learning, [0104] the deep neural network is trained such that when a patient's scan data (patient's scanned dental 3D model) is input into the deep neural network, [0009] receive, from a user interface, an input indicating a selection of a dental restoration type to be fabricated, [0070] a user interface (UI), such as physical and/or on-screen buttons, with which human client 175 may interact with client device 107 to perform functions such as submitting a new dental restoration request, receiving and reviewing identified dental information associated with dental models, receiving and reviewing a completed dental restoration design, etc.); automatically identifying, by the
CAD/CAM resource module ([0007] identifying, using the trained deep neural network, one or more dental features in the patient's scan data based on one or more output probability values of the deep neural network, [0002] dental computer-aided design (CAD), and specifically to dental CAD automation using deep learning, [0104] the deep neural network is trained such that when a patient's scan data (patient's scanned dental 3D model) is input into the deep neural network), preferences of a specified user as part of the input data ([0009] The dental restoration client is configured to receive, from a user interface, an input indicating a selection of a dental restoration type to be fabricated, [0063] Dental restoration server 101 receives dental restoration requests from client device 107 operated by client 175 such as a human client. In one embodiment, the dental restoration requests include scan dental models generated by scanner 109. In other embodiments, client 175 may send a physical model or impression of a patient's teeth along with the requests to dental restoration server 101, [0013] The restoration types can include crowns, inlays, bridges, and implants); automatically generating, as part of the input data, a set of dependencies that affect an effectiveness of a first output proposal to be proposed by the dental configuration recommendation module ([0114] in response to the loss function, generator 510 can change one or more of the weights and/or bias variables and generate another output, [0121] data associated with restoration design 1212 is preprocessed and is provided as an input to a deep neural network… three depth maps corresponding with the occlusal, buccal, and lingual views of restoration design 1212 are provided.
Also provided is a preprocessed scan data for a portion of a patient's dentition 1210); extracting, in a first mode, one or more first features from the input data based on an algorithm of prioritization of dependencies ([0122] The algorithms and instructions of method 1000, when executed by processor 202, enable computing device 200 to train one or more deep neural networks for recognizing or identifying dental information or features in dental models or scanned data sets of a patient's dentition), the one or more first features representative of a request for completing a restoration design; proposing, in the first mode, using a first convolutional neural network of the dental configuration recommendation module and the extracted one or more first features (Fig. 10B, [0023] performing qualitative evaluations of restoration designs and using the output of the evaluation to assign a final evaluation grade that is used by the system to determine a further processing option for the restoration design, [0055] Using the electronic image, a computer-implemented dental information or features recognition system is used to automatically identify useful dental structure and restoration information and detect features and margin lines of dentition, thus facilitating automatic dental restoration design and fabrication in the following steps, [0063] dental restoration requests include scan dental models generated by scanner 109), the first output proposal as at least one restoration geometry proposal that is compatible with the automatically generated set of dependencies ([0010] The 3D modeling can then generate an output restoration model using the selected pre-trained deep neural network based on the patient's dentition data); extracting, in a second mode responsive to proposing in the first mode, one or more second features from the input data based on an algorithm of prioritization or dependencies ([0122] The algorithms and instructions of method 1000, when executed by 
processor 202, enable computing device 200 to train one or more deep neural networks for recognizing or identifying dental information or features in dental models or scanned data sets of a patient's dentition), the one or more second features representative of a request for completing a machining process ([0063] client 175 may send a physical model or impression of a patient's teeth along with the requests to dental restoration server 101 and the digital dental model can be created accordingly on the server side 101, [0075] provide dental restoration design and/or fabrication services), and proposing, in the second mode and in response to the request for completing the machining process, using a second convolutional neural network of the dental configuration recommendation module, which uses as input: a) the one or more second features, b) the restoration geometry proposal from the first mode (Fig. 5B, [0011] The 3D modeling can then use the patient's dentition data set as an input to the selected pre-trained deep neural network. The 3D modeling can then generate an output restoration model using the selected pre-trained deep neural network based on the patient's dentition data, [0013] dentition or dental feature can include a lower jaw, an upper jaw, a prepared jaw, an opposing jaw, a set of tooth numbers, and dental restoration types. The restoration types can include crowns, inlays, bridges, and implants), … dental printer… a second output proposal as at least one configuration for the machining process that is compatible with the automatically generated set of dependencies ([0011] the 3D fabricator is configured to fabricate a 3D model of the selected dental restoration type using the output restoration model generated by the 3D modeling module, [0013] dentition or dental feature can include a lower jaw, an upper jaw, a prepared jaw, an opposing jaw, a set of tooth numbers, and dental restoration types.
The restoration types can include crowns, inlays, bridges, and implants, [0010] The 3D modeling module is also configured to select a deep neural network pre-trained by a group of dentition training data sets designed for a restoration model that matches the selected dental restoration type. For example, if the selected dental restoration type is a crown, then a deep neural network that is pretrained by a group of dentition training data sets specifically designed for mapping crowns is selected. The 3D modeling can then use the patient's dentition data set as an input to the selected pre-trained deep neural network. The 3D modeling can then generate an output restoration model using the selected pre-trained deep neural network based on the patient's dentition data, [0055] input to a machine learning engine in order to obtain dental parameter output proposals); and providing feedback for the dental configuration recommendation module, the feedback indicative of an accuracy of proposals in order to reinforce the dental configuration recommendation module (Azernikov, [0145] qualitative evaluation module 1630 may provide an output that has one of three levels: accept, minor adjustment, or complete remake. In an embodiment, the system may provide feedback to design device 103 based upon qualitative evaluation module 135 output that certain areas require improvement. 
Based upon this feedback, the restoration design will be improved by design device 103 and checked again until the qualitative evaluation module indicates that the restoration design is acceptable, or until a maximum number of iterations has been reached, [0114] feedback required for generator 510 to improve each succeeding sample, [0022] The deep neural network is trained so that, once a defect is found, the system will provide feedback to a restoration design module that an identified design aspect requires improvement); and optimizing the first output proposal with the first convolutional neural network based on the feedback and the second output proposal, or optimizing the second output proposal with the second convolutional neural network based on the first output proposal; wherein the dental configuration recommendation module operates as a machine learning engine (Fig. 5B, [0101] training module 123 may use dentition training data sets to train one or more deep neural networks with such a structure 500 (FIG. 5A) or 550 (FIG. 5B) for recognizing dental information and/or features within each of the training data set, [0114] in response to the loss function, generator 510 can change one or more of the weights and/or bias variables and generate another output, [0121] data associated with restoration design 1212 is preprocessed and is provided as an input to a deep neural network… three depth maps corresponding with the occlusal, buccal, and lingual views of restoration design 1212 are provided.
Also provided is a preprocessed scan data for a portion of a patient's dentition 1210, [0065] dental restoration server 101 may include a neural network module (not shown) that contains various deep learning neural networks such as convolutional networks, convolutional deep belief networks, recurrent neural networks, and generative adversarial networks), and wherein the dental configuration recommendation module uses the feedback to strengthen or weaken parameters of the machine learning engine ([0114] Loss function 540 can be used to provide the feedback required for generator 510 to improve each succeeding sample. In one embodiment, in response to the loss function, generator 510 can change one or more of the weights and/or bias variables and generate another output, [0065] use many training data sets from a database 133 (e.g., scans of patients' dentition impressions) to train one or more deep neural networks, which can be a part of training module 123. In some embodiments, dental restoration server 101 may include a neural network module (not shown) that contains various deep learning neural networks such as convolutional networks, convolutional deep belief networks, recurrent neural networks, and generative adversarial networks, [0145] the system may provide feedback to design device 103 based upon qualitative evaluation module 135 output that certain areas require improvement. Based upon this feedback, the restoration design will be improved by design device 103 and checked again until the qualitative evaluation module indicates that the restoration design is acceptable, or until a maximum number of iterations has been reached).
Azernikov does not teach c) identification of a …printer, and d) tools connected to the … printer.
Lee teaches c) identification of a …printer, and d) tools connected to the … printer ([0011] the printer characteristic information may include at least one information of 3D printer specification, material/color/file supportable by a 3D printer, maximum size of an object that can be printed by a 3D printer, resolution of a 3D printer and accuracy of a 3D printer, [0044] An apparatus for recommending a 3D printer 100 may receive printer characteristic information of corresponding 3D printers 200 from a plurality of 3D printers 200 existing in cloud and further store the received printer characteristic information, [0061] Here, the 3D printer specification may include information about manufacturer, brand name, model, number of printer heads, printing speed and nozzle of the corresponding 3D printer).
Regarding claim 18, the combination of Azernikov and Lee teach The non-transitory computer-readable storage medium of claim 17, wherein said proposing includes at least one of (a) proposing a user-specific restoration design as a first configuration (Azernikov, [0063] the dental restoration requests include scan dental models generated by scanner 109. In other embodiments, client 175 may send a physical model or impression of a patient's teeth along with the requests to dental restoration server 101 and the digital dental model can be created accordingly on the server side 101, e.g., by an administrator or technician operating on the server 101. Dental restoration server 101 creates and manages dental restoration cases based upon the received requests from client 107. For example, dental restoration server 101 may assign the created cases to appropriate design device 103 or third party server 151 to design a dental restoration according to the client's requests, [0056] A user (e.g., a design technician) operating on the user device can view the dental model and design or refine a restoration model based on the dental model.), (b) proposing at least one restoration parameter as a second configuration, and (c) proposing at least one user-specific manufacturing parameter as a third configuration.
Regarding claim 19, the combination of Azernikov and Lee teach The non-transitory computer-readable storage medium of claim 17, wherein at least a portion of the input data is automatically retrieved from the 3D scan of the dental cavity (Azernikov, [0007] identifying, using the trained deep neural network, one or more dental features in the patient's scan data based on one or more output probability values of the deep neural network).
Regarding claim 20, the combination of Azernikov and Lee teach The non-transitory computer-readable storage medium of claim 17, wherein the dental configuration recommendation module operates as a machine learning engine (Azernikov, [0007] identifying, using the trained deep neural network, one or more dental features in the patient's scan data based on one or more output probability values of the deep neural network, [0002] dental computer-aided design (CAD), and specifically to dental CAD automation using deep learning).
Regarding claim 21, the combination of Azernikov and Lee teach The method of claim 1, further comprising: automatically identifying the input data by the image analysis of the 3D scan of the dental cavity to obtain at least a tooth number and a tooth type (Azernikov, [0013] dentition or dental feature can include a lower jaw, an upper jaw, a prepared jaw, an opposing jaw, a set of tooth numbers, and dental restoration types. The restoration types can include crowns, inlays, bridges, and implants, [0059] the training data sets can be certain types of dentition data such as crown data, implant data, margin line data, cusp data, tooth surface anatomy data, dental restorations data, etc, [0007] identifying, using the trained deep neural network, one or more dental features in the patient's scan data based on one or more output probability values of the deep neural network).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Lipnik (US20230149135) discloses modeling dental structures from a dental scan using convolutional neural networks.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YVONNE T FOLLANSBEE whose telephone number is (571)272-0634. The examiner can normally be reached Monday - Friday 1pm - 9pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Fennema can be reached at (571) 272-2748. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YVONNE TRANG FOLLANSBEE/Examiner, Art Unit 2117
/ROBERT E FENNEMA/Supervisory Patent Examiner, Art Unit 2117