DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments, in the amendment filed 8/6/2025, with respect to the rejections of claims 1-20 under 35 U.S.C. 103 have been fully considered but are moot in view of the new ground(s) of rejection necessitated by the amendments. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Anssari Moin et al. (US 2020/0320685).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-12, 15, and 17-23 are rejected under 35 U.S.C. 103 as being unpatentable over Salah et al. (US 2019/0026893) in view of Anssari Moin et al. (US 2020/0320685).
As to claim 1, Salah et al. teaches a method comprising:
receiving a two-dimensional image of a face (figure 12A);
processing the 2D image of the face (figures 12B-12C; note that Salah teaches in paragraph [0057] that "image" should be understood to be an image in two dimensions, like a photograph or an image taken from a film, and that an image is formed by pixels) using one or more trained machine learning model to determine a bounding shape around teeth in the 2D image (training of at least one deep learning device, preferably a neural network, by means of the learning base; submission of the analysis image to the deep learning device for it to determine, for said analysis image, at least one probability relating to said image attribute value, and determination, as a function of said probability, of a value for said image attribute for the analysis image; paragraphs [0044]-[0045]);
cropping the 2D image based on the bounding shape to produce a cropped image; processing the cropped image using an edge detection operation to generate edge data for the 2D cropped image; and processing the 2D cropped image and the edge data using a second trained machine learning model to label edges in the cropped 2D image (all said analysis tooth zones are identified, and the value of said tooth attribute is determined for each analysis tooth zone, and, in the step c″), the suitability of the aligner is determined as a function of said tooth attribute values; preferably, said tooth attribute is chosen from the group formed by a maximal separation along the free edge of the tooth and a mean separation along the free edge of the tooth, and said image attribute is chosen from the group formed by a maximal separation along all of the teeth represented, a mean separation along the free edges of all of the teeth represented, and an overall acceptability of the separation of the teeth represented; paragraphs [0417]-[0418]). Additionally, Salah teaches that a second deep learning device, preferably a neural network, may in particular be implemented to assess a probability relating to the type of tooth represented in an analysis tooth zone (paragraph [0013], where the training of at least one deep learning device, preferably a neural network, is by means of the learning base; [0433] b″3′) submission of the analysis image to the deep learning device for it to determine, for said analysis image, at least one probability relating to said image attribute value, and determination, as a function of said probability, of the presence of a separation between the aligner and the tooth or teeth represented on the analysis image, and/or of an amplitude of said separation (paragraphs [0430], [0444]);
While Salah meets the limitations above, Salah fails to teach "classify the image as depicting one of an anterior view, a side view or an occlusal view."
However, Anssari Moin et al. teaches that the tooth structure may be sliced at a point on the longitudinal axis which is at a predetermined distance from the bottom side of the tooth structure. This way a 2D slice of data may be determined. In this 2D slice the two points with the greatest distance from each other may be determined. The line between these points may be referred to as the lateral axis of the tooth structure. The sample may then be rotated in such a way that the lateral axis 440 is parallel to a pre-determined axis (e.g. the y-axis). This may leave two possible rotations along the longitudinal axis 434 (since there are two possibilities of line 440 being parallel to the y-axis). Selection between these two rotations may be determined on the basis of the two areas defined by the slice and the lateral axis. Thereafter, the structure may be rotated along the longitudinal axis such that the larger area is oriented towards a pre-determined direction, for example as shown in 442, towards the side of the negative x-axis 444. When considering different methods of unifying the orientation between samples, it may be beneficial for training accuracy to train separate 3D neural networks for classification of individual teeth for these different methods (paragraphs [0105]-[0106]). Anssari Moin clearly teaches that, once trained, the deep neural network may receive a 3D image data stack of a dento-maxillofacial structure and classify the voxels of the 3D image data stack. Before the data is presented to the trained deep neural network, the data may be pre-processed so that the neural network can efficiently and accurately classify voxels. The output of the neural network may include different collections of voxel data, wherein each collection may represent a distinct part, e.g. teeth or jaw bone, of the 3D image data.
The classified voxels may be post-processed in order to reconstruct an accurate 3D model of the dento-maxillofacial structure (paragraph [0123]). It would have been obvious before the effective filing date of the claimed invention to use the illustrated architecture and classify the view of the image in order to accurately and timely localize, classify, and taxonomize 3D teeth data into teeth types in a data structure which links sets of data representing teeth in 3D to objects corresponding to the 32 possible teeth of an adult. Therefore, the claimed invention would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention.
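For context only, and not as part of the record of this action, the pipeline recited in claim 1 (bounding-shape detection, cropping, edge detection, and edge labeling) might be sketched in outline as below. Every function here is a hypothetical stand-in chosen for illustration, not anything disclosed by Salah or Anssari Moin; in particular, the "models" are trivial placeholders where trained machine learning models would sit.

```python
import numpy as np

def detect_teeth_bounding_box(image):
    # Hypothetical stand-in for the first trained model:
    # returns a fixed (row0, row1, col0, col1) box around
    # the center of the image.
    h, w = image.shape[:2]
    return (h // 4, 3 * h // 4, w // 4, 3 * w // 4)

def crop(image, box):
    # Crop the 2D image to the bounding shape.
    r0, r1, c0, c1 = box
    return image[r0:r1, c0:c1]

def edge_map(gray):
    # Simple gradient-magnitude edge detector standing in
    # for the claimed edge detection operation.
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    return (mag > mag.mean()).astype(np.uint8)

def label_edges(cropped, edges):
    # Hypothetical stand-in for the second trained model:
    # labels every edge pixel as "tooth"; a real model would
    # also emit e.g. "appliance" edge labels.
    return {"tooth": int(edges.sum())}

# Synthetic 2D image: a bright square standing in for teeth.
image = np.zeros((64, 64), dtype=np.uint8)
image[20:44, 20:44] = 255

box = detect_teeth_bounding_box(image)
cropped = crop(image, box)
edges = edge_map(cropped)
labels = label_edges(cropped, edges)
```

The sketch only mirrors the order of the claimed steps; any resemblance to the cited references' actual implementations is not asserted.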
As to claim 2, Salah et al. teaches the method of claim 1, wherein the image is a 2D image of a patient wearing a dental appliance over teeth (acquisition of at least one analysis image at least partially representing the aligner in a service position in which it is worn by a patient; abstract), and wherein the labeled edges comprise one or more first labeled edges having a tooth edge classification and one or more second labeled edges having a dental appliance edge classification (all said analysis tooth zones are identified, and the value of said tooth attribute is determined for each analysis tooth zone, and, in the step c″), the suitability of the aligner is determined as a function of said tooth attribute values; preferably, said tooth attribute is chosen from the group formed by a maximal separation along the free edge of the tooth and a mean separation along the free edge of the tooth, and said image attribute is chosen from the group formed by a maximal separation along all of the teeth represented, a mean separation along the free edges of all of the teeth represented, and an overall acceptability of the separation of the teeth represented; paragraphs [0369], [0370], [0417], [0418]).
As to claim 3, Salah et al. teaches the method of claim 2, further comprising: determining a fit of the dental appliance on the teeth based on a comparison of the one or more first labeled edges to the one or more second labeled edges (paragraphs [0149], [0291]-[0293], [0300]).
As to claim 4, Salah et al. teaches the method of claim 2, further comprising: determining a distance between a portion of a dental appliance edge and a portion of an adjacent tooth edge (distance, [0294]); determining whether the distance exceeds a threshold; and responsive to determining that the distance exceeds the threshold, determining that the dental appliance does not properly fit the teeth ([0240]; an assessment is made, as a function of the results of the preceding step, as to the compatibility of the aligner; for example, a search is conducted to see if the separation of the aligner with at least one tooth exceeds an acceptability threshold and, in this case, a decision is made as to the replacement of the aligner by a better suited aligner; paragraph [0448]).
As to claim 5, Salah et al. teaches the method of claim 4, comprising: identifying one or more teeth in the image; registering each tooth of the one or more teeth with a respective tooth label; and for each tooth of the one or more teeth, using the respective tooth label associated with the tooth to identify a specific tooth associated with the determined distance between the portion of the aligner edge and the portion of the adjacent tooth edge (each historical tooth model examined is thus associated with a particular minimal distance, which measures its proximity of shape with the analysis tooth zone; paragraphs [0082], [0232], [0240], [0296]-[0298]).
As to claim 6, Salah et al. teaches the method of claim 4, further comprising: generating a notification indicating that the dental appliance does not properly fit the teeth (paragraph [0414], preferably, assessment of the suitability of the aligner as a function of the value of said tooth or image attribute; paragraph [0415], preferably, sending of an information message as a function of said assessment).
As to claim 7, Anssari Moin et al. teaches the method of claim 1, wherein the labeled edges comprise edge classification probabilities of a plurality of edge classifications, the plurality of edge classifications comprising a tooth edge classification, the method further comprising: determining the edge classification probabilities for each edge pixel of a plurality of edge pixels in the edge data; and applying a path finding operation to the labeled edges using the tooth edge classification as a cost basis to update the edge classification probabilities for one or more of the edge pixels (paragraphs [0113]-[0116]).
As to claim 8, Anssari Moin et al. teaches the method of claim 1, wherein the image is a two-dimensional (2D) color image, the method further comprising: classifying, by the one or more trained machine learning model, based on an input of the 2D color image, the 2D color image as depicting one of an anterior view, a side view or an occlusal view (paragraphs [0105]-[0106]).
As to claim 8, Anssari Moin et al. teaches the method of claim 1, wherein labeling the edges in the cropped 2D image comprises assigning a separate label to each of a plurality of teeth in the cropped image (label activations as gathered from (one of the) deep neural network(s) may be limited to a tooth type class; paragraphs [0115]-[0119]).
The limitations of claims 9-12, 15, and 17-23 have been addressed above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANCY BITAR whose telephone number is (571) 270-1041. The examiner can normally be reached Monday-Friday from 8:00 a.m. to 5:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Mahmood, can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
NANCY BITAR
Examiner
Art Unit 2664
/NANCY BITAR/Primary Examiner, Art Unit 2664