DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-19 are pending for examination in the application filed 12/01/2023.
Priority
Acknowledgment is made of Applicant's claim of priority to provisional application No. 63/429,545, filed 12/01/2022.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 04/04/2024 and 07/25/2025 have been considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 9, and 13-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kearney (US20220180447A1).
Regarding claim 1, Kearney teaches a method for assessing whether a patient is a candidate for a dental treatment, the method comprising ([0111] Embodiments in accordance with the invention may be embodied as an apparatus, method, or computer program product. [0117] Referring to FIG. 1, a method 100 may be performed by a computer system in order to select an outcome for a set of input data. The outcome may be a determination whether a particular course of treatment is correct or incorrect. The method 100 may include receiving 102 an image. The image may be an image of patient anatomy indicating the periodontal condition of the patient):
receiving one or more panoramic dental x-ray images ([0117] The method 100 may include receiving 102 an image. The image may be an image of patient anatomy indicating the periodontal condition of the patient. Accordingly, the image may be of a of a patient's mouth obtained by means of an X-ray (intra-oral or extra-oral, full mouth series (FMX), panoramic, cephalometric), computed tomography (CT) scan, cone-beam computed tomography (CBCT) scan, intra-oral image capture using an optical camera, magnetic resonance imaging (MRI), or other imaging modality);
preprocessing the one or more panoramic dental x-ray images ([0595] The reference image 4600 and the input image 4604 may be processed by a segmentation network 4606. The output of the segmentation network 4606 may be labels 4608 of reference points labeling points on the input image 4604 corresponding to the reference point labels 4602 in the reference image);
determining one or more dental characteristics based on the preprocessed one or more panoramic dental x-ray images using a trained neural network ([0602] The inputs to the encoder 4704 may include an input image 4604 (either for training or utilization) and labels 4608 of points in the input image 4604 corresponding to the reference point labels 4602, such as in the form of segmentation masks obtained as described above, each segmentation masks corresponding to one of the reference point labels 4602. [0368] The image 2804 and the one or more anatomical masks 2806 may be concatenated and processed using the machine learning model 2802. The machine learning model 2802 may be trained to output measurements 2808 of the anatomy labeled by the masks),
wherein the trained neural network is trained using a plurality of training x-ray images and corresponding dental attribute data ([0368] Accordingly, training data entries may each include an image 2804 and one or more anatomical masks 2806 as inputs and one or more measurements as desired outputs. The training algorithm may then train the machine learning model 2802 to output a measurement for a given input image 2804 and corresponding anatomical masks 2806);
comparing the one or more dental characteristics to one or more treatment thresholds; and outputting a recommendation for at least one dental treatment based on the comparison of the one or more dental characteristics to the one or more treatment thresholds ([0124] The result of steps 108, 110, 112, and 114 is an image that may have been corrected, labels, e.g. pixel masks, indicating the location of anatomy and detected features and a measurement for each detected feature. This intermediate data may then be evaluated 116 with respect to a threshold. In particular, this may include an automated analysis of the detected and measured features with respect to thresholds. For example, CAL or PD measured using the machine-learning approaches described below may be compared to thresholds to see if treatment may be needed. Step 116 may also include evaluating some or all of the images, labels, detected features, and measurements for detected features in a machine learning model to determine whether a diagnosis is appropriate (see FIG. 11)).
Regarding claim 2, Kearney teaches the method of claim 1. Kearney further teaches wherein the one or more dental characteristics describes one or more of: a degree of tooth crowding described in millimeters, a degree of tooth spacing described in millimeters, an Angle's classification of malocclusion, a deep bite, an open bite, a presence of root collisions, and an estimated bone density ([0327] Measurements of an item of dental anatomy may include its center of mass, relative distance to other anatomy, size distortion, and density. [0330] Machine learning models may be trained to identify and measure dental anatomy that may be used to determine the appropriateness of root canal therapy at a given tooth position such as crown-to-root-ratio, calculus, root length, relative distance to adjacent teeth, furcation, fracture, and whether the tooth at that tooth position is missing).
Regarding claim 3, Kearney teaches the method of claim 1. Kearney further teaches wherein the corresponding dental attribute data on which the trained neural network is trained includes one or more of: tooth crowding in millimeters, tooth spacing in millimeters, Angle's classification of malocclusion, a deep bite, an open bite, root collisions, and bone density ([0368] Accordingly, training data entries may each include an image 2804 and one or more anatomical masks 2806 as inputs and one or more measurements as desired outputs. The training algorithm may then train the machine learning model 2802 to output a measurement for a given input image 2804 and corresponding anatomical masks 2806. [0327] Measurements of an item of dental anatomy may include its center of mass, relative distance to other anatomy, size distortion, and density. [0330] Machine learning models may be trained to identify and measure dental anatomy that may be used to determine the appropriateness of root canal therapy at a given tooth position such as crown-to-root-ratio, calculus, root length, relative distance to adjacent teeth, furcation, fracture, and whether the tooth at that tooth position is missing).
Regarding claim 9, Kearney teaches the method of claim 1. Kearney further teaches wherein the one or more panoramic dental x-ray images include a plurality of x-ray images from a plurality of patients and the recommendation includes a recommendation for each of the plurality of patients ([0421] FIG. 32A illustrates a system 3200a for identifying dental images that originate from the same patient or different patients through the entire life cycle of the patient's dental history. [0124] The result of steps 108, 110, 112, and 114 is an image that may have been corrected, labels, e.g. pixel masks, indicating the location of anatomy and detected features and a measurement for each detected feature. This intermediate data may then be evaluated 116 with respect to a threshold. In particular, this may include an automated analysis of the detected and measured features with respect to thresholds. For example, CAL or PD measured using the machine-learning approaches described below may be compared to thresholds to see if treatment may be needed. Step 116 may also include evaluating some or all of the images, labels, detected features, and measurements for detected features in a machine learning model to determine whether a diagnosis is appropriate (see FIG. 11). [0125] If the result of step 116 is affirmative, then the method 100 may include processing 118 the feature metric from step 114 according to a decision hierarchy. The decision hierarchy may further operate with respect to patient demographic data from step 104 and the patient treatment history from step 106. The result of the processing according to the decision hierarchy may be evaluated at step 120. If the result is affirmative, then an affirmative response may be output 122. An affirmative response may indicate that a course of treatment corresponding to the decision hierarchy is determined to be appropriate).
Regarding claim 13, Kearney teaches the method of claim 1. Kearney further teaches wherein preprocessing comprises segmenting the one or more panoramic dental x-ray images to identify individual teeth, spaces between the teeth and/or overlapping teeth (See Fig. 36 and Fig. 37D. [0122] The method 100 may further include processing 110 the image to identify patient anatomy. Anatomy identified may be represented as a pixel mask identifying pixels of the image that correspond to the identified anatomy and labeled as corresponding to the identified anatomy. This may include identifying individual teeth. [0595] The reference image 4600 and the input image 4604 may be processed by a segmentation network 4606. The output of the segmentation network 4606 may be labels 4608 of reference points labeling points on the input image 4604 corresponding to the reference point labels 4602 in the reference image. [0602] The inputs to the encoder 4704 may include an input image 4604 (either for training or utilization) and labels 4608 of points in the input image 4604 corresponding to the reference point labels 4602, such as in the form of segmentation masks obtained as described above, each segmentation masks corresponding to one of the reference point labels 4602).
Regarding claim 14, Kearney teaches the method of claim 13. Kearney further teaches where segmenting comprises segmenting using a second trained neural network ([0751] During utilization an input image may be processed using the labeling model 6110 to obtain a segmentation mask 6112. The segmentation mask 6112, and possibly the input image, may then be processed using the trained 2D-to-3D model 6114 to obtain a 3D estimate. The 3D estimate, e.g., non-linear pixel spacing estimate for pixels of the segmentation mask, may then be used to obtain a correct measurement of a dental feature labeled by the segmentation mask. [0772] FIG. 63 illustrates a machine learning model 6300 that may be used to determine dental readiness data for a patient. The machine learning model 6300 may include a plurality of models 6302, 6304, 6306. In the illustrated embodiment, these models may include a CNN 6302, a plurality of LSTM 6304, one or more transformer neural networks 6306).
Regarding claim 15, Kearney teaches the method of claim 1. Kearney further teaches wherein preprocessing comprises segmenting the one or more panoramic dental x-ray images to identify individual teeth (See Fig. 36 and Fig. 37D. [0122] The method 100 may further include processing 110 the image to identify patient anatomy. Anatomy identified may be represented as a pixel mask identifying pixels of the image that correspond to the identified anatomy and labeled as corresponding to the identified anatomy. This may include identifying individual teeth. [0595] The reference image 4600 and the input image 4604 may be processed by a segmentation network 4606. The output of the segmentation network 4606 may be labels 4608 of reference points labeling points on the input image 4604 corresponding to the reference point labels 4602 in the reference image. [0602] The inputs to the encoder 4704 may include an input image 4604 (either for training or utilization) and labels 4608 of points in the input image 4604 corresponding to the reference point labels 4602, such as in the form of segmentation masks obtained as described above, each segmentation masks corresponding to one of the reference point labels 4602)
and normalizing the one or more panoramic dental x-ray images to the identified individual teeth ([0478] The method 3700 may include evaluating 3708 the input mask with respect to a mask repository, i.e. a repository of dental images 3412, each with its corresponding masks 3414. Step 3708 may include comparing the shape 3606 to shapes present in the mask 3414 corresponding to the classification from step 3706 of each dental image 3412 evaluated, i.e. associated with the same dental anatomy or dental treatment as the modified mask. [0479] The method 3700 may further include fitting 3712 the shape 3606 to a shape in the matching mask 3414. Fitting 3712 may include performing steps such as isolating the shape in the matching mask 3414 corresponding closest to the shape 3606 (“the matching shape”). The matching shape may then be scaled, panned, stretched, and/or rotated to match the size, shape, and orientation of the shape 3606 to obtain a fitted shape. For example, FIG. 37C illustrates a fitted shape 3720 obtained by panning, rotating, scaling, and stretching the shape 3718 in order to conform to the shape 3606. [0482] The image 3412 presented at step 3702 with masks 3414 including the trimmed shape added to the mask 3414 selected at step 3706 may then be processed 3716 with the generator 3402 to obtain a synthetic image 3416. The shape 3606 will then be represented in the synthetic image 3416 in a manner approximating a feature conforming to the trimmed shape as if captured using the imaging modality used to obtain the original image 3412).
Regarding claim 16, Kearney teaches the method of claim 15. Kearney further teaches where normalizing comprises cropping the one or more panoramic dental x-ray images to exclude regions outside of the identified individual teeth ([0480] The method 3700 may further include trimming 3714 the fitted shape 3720 according to anatomy represented in the dental image 3412 presented at step 3702. For example, where the shape 3606 is classified as a caries, the fitted shape may be trimmed by removing portions of the fitted shape that extend beyond the mask 3414 for a tooth with which a major portion of the matching shape overlaps following the fitting step 3712. For example, FIG. 37D illustrates a trimmed shape 3722 obtained by trimming the shape 3720 to lie within the outline of the tooth 3724 overlapped by the shape 3606. [0481] Matching shapes for crowns, inlays, onlays, fillings, or other features that would normally be within the outline of a tooth may likewise be trimmed. Other features that are not bounded by the outline of a tooth may remain untrimmed or be trimmed with respect to outlines indicated in masks 3414 for other anatomy, such as bone, gums, or other anatomical features).
Regarding claim 17, Kearney teaches a method for assessing whether one or more patients of a group of patients are a candidate for a dental treatment, the method comprising ([0111] Embodiments in accordance with the invention may be embodied as an apparatus, method, or computer program product. [0117] Referring to FIG. 1, a method 100 may be performed by a computer system in order to select an outcome for a set of input data. The outcome may be a determination whether a particular course of treatment is correct or incorrect. The method 100 may include receiving 102 an image. The image may be an image of patient anatomy indicating the periodontal condition of the patient. [0421] FIG. 32A illustrates a system 3200a for identifying dental images that originate from the same patient or different patients through the entire life cycle of the patient's dental history):
receiving a batch of dental x-ray images corresponding to a group of patients, wherein for each patient there comprises one or more x-ray images including one or more panoramic x-ray images ([0117] The method 100 may include receiving 102 an image. The image may be an image of patient anatomy indicating the periodontal condition of the patient. Accordingly, the image may be of a of a patient's mouth obtained by means of an X-ray (intra-oral or extra-oral, full mouth series (FMX), panoramic, cephalometric), computed tomography (CT) scan, cone-beam computed tomography (CBCT) scan, intra-oral image capture using an optical camera, magnetic resonance imaging (MRI), or other imaging modality. [0787] The practice management system 6502 may be any system known in the art for enabling a dental service provider to create, store, and access records of dental patients, the records describing dental anatomy, dental pathologies, proposed or administered dental procedures, and appointments. The practice management system 6502 may include or access a provider imaging database 6504. The imaging database 6504 may store dental images according to any of the imaging modalities disclosed herein. The practice management system and provider imaging database 6504 may be hosted on or accessed through a local server 6506. [0788] For each dental procedure, there may be images and documentation used to support a finding that the proposed procedure is helpful (“images” is used in the following description but a single image may be used in the same manner)).
determining, for each patient of the group of patients, one or more dental characteristics based on the one or more x-ray images corresponding to each patient, using a trained neural network ([0602] The inputs to the encoder 4704 may include an input image 4604 (either for training or utilization) and labels 4608 of points in the input image 4604 corresponding to the reference point labels 4602, such as in the form of segmentation masks obtained as described above, each segmentation masks corresponding to one of the reference point labels 4602. [0368] The image 2804 and the one or more anatomical masks 2806 may be concatenated and processed using the machine learning model 2802. The machine learning model 2802 may be trained to output measurements 2808 of the anatomy labeled by the masks),
wherein the trained neural network is trained using a plurality of training x-ray images and corresponding dental attribute data ([0368] Accordingly, training data entries may each include an image 2804 and one or more anatomical masks 2806 as inputs and one or more measurements as desired outputs. The training algorithm may then train the machine learning model 2802 to output a measurement for a given input image 2804 and corresponding anatomical masks 2806);
comparing, for each patient of the group of patients, the one or more dental characteristics to one or more treatment thresholds; and outputting a dataset comprising recommendations for at least one dental treatment for one or more of the patients of the group of patients based on the comparison of the one or more dental characteristics for each patient of the group of patients to the one or more treatment thresholds ([0124] The result of steps 108, 110, 112, and 114 is an image that may have been corrected, labels, e.g. pixel masks, indicating the location of anatomy and detected features and a measurement for each detected feature. This intermediate data may then be evaluated 116 with respect to a threshold. In particular, this may include an automated analysis of the detected and measured features with respect to thresholds. For example, CAL or PD measured using the machine-learning approaches described below may be compared to thresholds to see if treatment may be needed. Step 116 may also include evaluating some or all of the images, labels, detected features, and measurements for detected features in a machine learning model to determine whether a diagnosis is appropriate (see FIG. 11)).
Regarding claim 18, Kearney teaches an apparatus for assessing a dental x-ray image, the apparatus comprising ([0111] Embodiments in accordance with the invention may be embodied as an apparatus, method, or computer program product. [0117] Referring to FIG. 1, a method 100 may be performed by a computer system in order to select an outcome for a set of input data. The outcome may be a determination whether a particular course of treatment is correct or incorrect. The method 100 may include receiving 102 an image. The image may be an image of patient anatomy indicating the periodontal condition of the patient):
a communication interface (interface 3600); one or more processors; and a memory coupled to the one or more processors, the memory storing computer-program instructions that, when executed by the one or more processors, cause the one or more processors to ([0114] It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions or code. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks):
receive one or more panoramic dental x-ray images ([0117] The method 100 may include receiving 102 an image. The image may be an image of patient anatomy indicating the periodontal condition of the patient. Accordingly, the image may be of a of a patient's mouth obtained by means of an X-ray (intra-oral or extra-oral, full mouth series (FMX), panoramic, cephalometric), computed tomography (CT) scan, cone-beam computed tomography (CBCT) scan, intra-oral image capture using an optical camera, magnetic resonance imaging (MRI), or other imaging modality);
pre-process the one or more panoramic dental x-ray images ([0595] The reference image 4600 and the input image 4604 may be processed by a segmentation network 4606. The output of the segmentation network 4606 may be labels 4608 of reference points labeling points on the input image 4604 corresponding to the reference point labels 4602 in the reference image);
determine one or more dental characteristics based on the one or more panoramic dental x-ray images using a trained neural network ([0602] The inputs to the encoder 4704 may include an input image 4604 (either for training or utilization) and labels 4608 of points in the input image 4604 corresponding to the reference point labels 4602, such as in the form of segmentation masks obtained as described above, each segmentation masks corresponding to one of the reference point labels 4602. [0368] The image 2804 and the one or more anatomical masks 2806 may be concatenated and processed using the machine learning model 2802. The machine learning model 2802 may be trained to output measurements 2808 of the anatomy labeled by the masks),
wherein the trained neural network is trained using a plurality of training x-ray images and corresponding dental attribute data ([0368] Accordingly, training data entries may each include an image 2804 and one or more anatomical masks 2806 as inputs and one or more measurements as desired outputs. The training algorithm may then train the machine learning model 2802 to output a measurement for a given input image 2804 and corresponding anatomical masks 2806);
compare the one or more dental characteristics to one or more treatment thresholds; and output a recommendation for at least one dental treatment based on the comparison of the one or more dental characteristics to the one or more treatment thresholds ([0124] The result of steps 108, 110, 112, and 114 is an image that may have been corrected, labels, e.g. pixel masks, indicating the location of anatomy and detected features and a measurement for each detected feature. This intermediate data may then be evaluated 116 with respect to a threshold. In particular, this may include an automated analysis of the detected and measured features with respect to thresholds. For example, CAL or PD measured using the machine-learning approaches described below may be compared to thresholds to see if treatment may be needed. Step 116 may also include evaluating some or all of the images, labels, detected features, and measurements for detected features in a machine learning model to determine whether a diagnosis is appropriate (see FIG. 11)).
Regarding claim 19, Kearney teaches a non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors of a device, cause the device to ([0111] Embodiments in accordance with the invention may be embodied as an apparatus, method, or computer program product. [0114] It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions or code):
receive one or more panoramic dental x-ray images ([0117] The method 100 may include receiving 102 an image. The image may be an image of patient anatomy indicating the periodontal condition of the patient. Accordingly, the image may be of a of a patient's mouth obtained by means of an X-ray (intra-oral or extra-oral, full mouth series (FMX), panoramic, cephalometric), computed tomography (CT) scan, cone-beam computed tomography (CBCT) scan, intra-oral image capture using an optical camera, magnetic resonance imaging (MRI), or other imaging modality);
preprocess the one or more panoramic dental x-ray images ([0595] The reference image 4600 and the input image 4604 may be processed by a segmentation network 4606. The output of the segmentation network 4606 may be labels 4608 of reference points labeling points on the input image 4604 corresponding to the reference point labels 4602 in the reference image);
determine one or more dental characteristics based on the one or more panoramic dental x-ray images using a trained neural network ([0602] The inputs to the encoder 4704 may include an input image 4604 (either for training or utilization) and labels 4608 of points in the input image 4604 corresponding to the reference point labels 4602, such as in the form of segmentation masks obtained as described above, each segmentation masks corresponding to one of the reference point labels 4602. [0368] The image 2804 and the one or more anatomical masks 2806 may be concatenated and processed using the machine learning model 2802. The machine learning model 2802 may be trained to output measurements 2808 of the anatomy labeled by the masks),
wherein the trained neural network is trained using a plurality of training x-ray images and corresponding dental attribute data ([0368] Accordingly, training data entries may each include an image 2804 and one or more anatomical masks 2806 as inputs and one or more measurements as desired outputs. The training algorithm may then train the machine learning model 2802 to output a measurement for a given input image 2804 and corresponding anatomical masks 2806);
compare the one or more dental characteristics to one or more treatment thresholds; and output a recommendation for at least one dental treatment based on the comparison of the one or more dental characteristics to the one or more treatment thresholds ([0124] The result of steps 108, 110, 112, and 114 is an image that may have been corrected, labels, e.g. pixel masks, indicating the location of anatomy and detected features and a measurement for each detected feature. This intermediate data may then be evaluated 116 with respect to a threshold. In particular, this may include an automated analysis of the detected and measured features with respect to thresholds. For example, CAL or PD measured using the machine-learning approaches described below may be compared to thresholds to see if treatment may be needed. Step 116 may also include evaluating some or all of the images, labels, detected features, and measurements for detected features in a machine learning model to determine whether a diagnosis is appropriate (see FIG. 11)).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Kearney in view of Kim (Kim, J., Hwang, J. J., Jeong, T., Cho, B. H., & Shin, J. (2022). Deep learning-based identification of mesiodens using automatic maxillary anterior region estimation in panoramic radiography of children. Dentomaxillofacial Radiology, 51(7), 20210528).
Regarding claim 4, Kearney teaches the method of claim 3. Kim, in the same field of endeavor of dental x-ray analysis, teaches wherein the plurality of training x-ray images is limited to include images of anterior teeth ([Abstract] The first network (DeeplabV3plus) is a segmentation model that uses the posterior molar space to set the ROI in the maxillary anterior region with the mesiodens in the panoramic radiograph. The second network (Inception-resnet-v2) is a classification model that uses cropped maxillary anterior teeth to determine the presence of mesiodens).
[Kim figure reproduced: media_image1.png (greyscale)]
PNG
media_image2.png
686
831
media_image2.png
Greyscale
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Kearney with the teachings of Kim to limit the training x-ray images to images of the anterior teeth because "In particular, in the anterior region of the panoramic radiograph of the mixed dentition, there are unerupted permanent teeth and many anatomical overlaps, so accurate interpretation can be difficult. Therefore, it is difficult to set the ROI in the anterior region of the panoramic radiograph. In diagnosis using medical images, ROI setting is an important process that affects the performance of deep learning; therefore to improve accuracy, adequate preprocessing is needed to obtain as small a ROI as possible that provides enough context" [Kim pg. 2 para. 5].
Regarding claim 5, Kearney teaches the method of claim 1. Kim teaches wherein the plurality of training x-ray images is limited to upper anterior teeth, lower anterior teeth, or a combination thereof. (See Kim Figs. 1 and 6 above. [Abstract] The first network (DeeplabV3plus) is a segmentation model that uses the posterior molar space to set the ROI in the maxillary anterior region with the mesiodens in the panoramic radiograph. The second network (Inception-resnet-v2) is a classification model that uses cropped maxillary anterior teeth to determine the presence of mesiodens).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Kearney with the teachings of Kim to limit the training x-ray images to images of upper anterior teeth, lower anterior teeth, or a combination thereof because "In particular, in the anterior region of the panoramic radiograph of the mixed dentition, there are unerupted permanent teeth and many anatomical overlaps, so accurate interpretation can be difficult. Therefore, it is difficult to set the ROI in the anterior region of the panoramic radiograph. In diagnosis using medical images, ROI setting is an important process that affects the performance of deep learning; therefore to improve accuracy, adequate preprocessing is needed to obtain as small a ROI as possible that provides enough context" [Kim pg. 2 para. 5].
Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Kearney in view of Takabayashi (US20210342947A1).
Regarding claim 6, Kearney teaches the method of claim 1. Takabayashi, in the same field of endeavor of dental x-ray analysis, teaches wherein the one or more panoramic dental x-ray images include a plurality of bitewing and periapical x-ray images and determining the one or more dental characteristics include applying the trained neural network on the bitewing and periapical x-ray images separately from each other ([0009] In some embodiments, after receiving the dental image, the system detects an image type of the dental image (e.g., bitewings, panoramics, etc). The system then determines dental image information based on the image type, and determines one or more features associated with the dental image. [0051] the classification techniques employ machine learning or other artificial intelligence processes or models trained to classify images into distinct categories based on a visual appearance of each category. In some embodiments, the categories of image type can be, e.g., bitewing x-ray image, periapical x-ray image, panoramic x-ray image, intra-oral image, computed tomography image, any combination thereof, or any other suitable image type for the dental image).
[Takabayashi Fig. 3A reproduced: media_image3.png (greyscale)]
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Kearney with the teachings of Takabayashi to apply the trained neural network on the bitewing and periapical x-ray images separately from each other because "services provided are supported by correct image type (e.g., if an implant is provided, an insurance company will likely require both pre- and post-operative bitewing images)" [Takabayashi 0049].
Regarding claim 7, Kearney teaches the method of claim 1. Takabayashi teaches wherein the one or more panoramic dental x-ray images are received through an application programming interface (See Takabayashi Fig. 3A above. [0070] Input data and x-ray images 302 are submitted by an input device 110 using one or more dashboards within a user interface 304. [0067] The image type may include, e.g., panoramic, intra-oral, bitewing, or other form of image type. [0071] The input information is then sent to Application Programming Interfaces (“APIs”) 306).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Kearney with the teachings of Takabayashi to receive images through an API because "the APIs can be integrated to one or more existing pieces of software utilized by insurance companies, billing providers and/or medical billing clearinghouses" [Takabayashi 0071].
Regarding claim 8, Kearney teaches the method of claim 1. Takabayashi teaches wherein the recommendation is output through an application programming interface (See Takabayashi Fig. 3A above. [0072] The outputs of each model component are then sent back to the pieces of software through the APIs 306. [0041] Services module 162 functions to perform analysis and assessment of services (e.g., treatments and procedures) provided by the dentist or specialist. [0062] In some embodiments, the comparison involves processing and analysis of services provided by the dental professionals. These may include, for example, detecting pathologies within the image (e.g., cavities, gum disease), detecting procedures within the image (e.g., implants, crowns, fillings)…the system may find overtreatment if, for example, the procedure rendered by a treating doctor goes beyond the standard procedure for the existing condition (e.g., pathologies) of the patient detected within the image. The system reports any discrepancies as described in step 210).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Kearney with the teachings of Takabayashi to output recommendations through an API because "the APIs can be integrated to one or more existing pieces of software utilized by insurance companies, billing providers and/or medical billing clearinghouses" [Takabayashi 0071].
Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Kearney in view of Kelleher (US20210244504A1).
Regarding claim 10, Kearney teaches the method of claim 1. Kelleher, in the same field of endeavor of dental treatment analysis, teaches wherein the one or more treatment thresholds are based on a user's preferences associated with the user's preferred dental treatments ([0121] The method 1000 can optionally include a process 1025 to receive, e.g., by the software application on the computing device, one or more user preferences associated with orthodontic treatment procedures for the patient. The user preferences would typically be entered by a practitioner, e.g., an orthodontist, dentist or periodontist. In some examples, the user preferences can include max- or min-constraints for certain parameters associated with the practitioner's tentative treatment choices, which can include an extraction, IPR and/or installation of a TAD. Also, the user preferences can include a pre-treatment diagnostic values or constraint thereof determined by the user (e.g., practitioner) for a tentative treatment choice as a prospective resolution to the patient's problem(s). In implementations of the optional process 1025, the received user preferences can be incorporated in the process 1020 to determine the set of quantitative prospective pre-treatment values).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Kearney with the teachings of Kelleher for the treatment thresholds to be based on the user's preferences "to provide an automated predictive tool for orthodontic treatment planning that addresses long-term effects of prospective plans contemplated (or not) by the practitioner" [Kelleher 0124].
Regarding claim 11, Kearney teaches the method of claim 1. Kelleher teaches wherein comparing the one or more dental characteristics comprises adjusting the one or more treatment thresholds based on a dental practitioner associated with the patient ([0121] The method 1000 can optionally include a process 1025 to receive, e.g., by the software application on the computing device, one or more user preferences associated with orthodontic treatment procedures for the patient. The user preferences would typically be entered by a practitioner, e.g., an orthodontist, dentist or periodontist. In some examples, the user preferences can include max- or min-constraints for certain parameters associated with the practitioner's tentative treatment choices, which can include an extraction, IPR and/or installation of a TAD. Also, the user preferences can include a pre-treatment diagnostic values or constraint thereof determined by the user (e.g., practitioner) for a tentative treatment choice as a prospective resolution to the patient's problem(s). In implementations of the optional process 1025, the received user preferences can be incorporated in the process 1020 to determine the set of quantitative prospective pre-treatment values).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Kearney with the teachings of Kelleher to adjust treatment thresholds based on the dental practitioner "to provide an automated predictive tool for orthodontic treatment planning that addresses long-term effects of prospective plans contemplated (or not) by the practitioner" [Kelleher 0124].
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Kearney in view of Inam (US20210343400A1).
Regarding claim 12, Kearney teaches the method of claim 1. Inam, in the same field of endeavor of dental x-ray analysis, teaches wherein the one or more panoramic dental x-ray images include a plurality of bitewing and periapical x-ray images, wherein determining the one or more dental characteristics are performed for all x-ray images simultaneously ([0092] Referring back to FIG. 2, in step 208, masks and points on the radiographic image 120 are predicted by a segmenter and object detector. Depending on the image type as determined in step 206, appropriate pairs of segmenters and object detectors are chosen from a set of segmenters and object detectors 210-214, including selecting among Bitewing Segmenter and Object Detector 210 (in response to the image type being determined, at 206, to be a bitewing image type), Periapical Segmenter and Object Detector 212 (in response to the image type being determined, at 206, to be a periapical image type), and Panoramic Segmenter and Object Detector 214 (in response to the image type being determined, at 206, to be a panoramic image type) in order to provide the desired masks and points prediction…The selected DL model may predict masks for many labels such as tooth number, general tooth area, bone, enamel, restorations such as crown, filling/inlay, onlay, bridge, implants etc. The DL model itself may be an amalgam of several detection architectures (implementing multiple DL models that are combined to provide meaningful results). The best model identified with the help of metrics such as Intersection over Union (IoU) and bone level (distance between Cemento Enamel Junction (CEJ) and bone point) against a test set is chosen for the particular label. Further, the model may also directly predict the CEJ, and alveolar bone crest (hereafter called bone) points per tooth number. The model provides two ways of getting CEJ and bone points, which can be used to improve the confidence of the measurements. 
[0276] In embodiments, a computer may enable execution of computer program instructions including multiple programs or threads. The multiple programs or threads may be processed approximately simultaneously to enhance utilization of the processor and to facilitate substantially simultaneous functions. By way of implementation, any and all methods, program codes, program instructions, and the like described herein may be implemented in one or more threads which may in turn spawn other threads, which may themselves have priorities associated with them).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Kearney with the teachings of Inam to determine dental characteristics for bitewing and periapical x-ray images simultaneously because "The Bitewing Segmenter and Object Detector 210, Periapical Segmenter and Object Detector 212, and Panoramic Segmenter and Object Detector 214 may use different Deep Learning (DL) based ML models for prediction. For example, bitewing specialized anatomy training models may be used to achieve better prediction (e.g., better labelling) of bitewing images" [Inam 0092].
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Lang (US11751944B2) teaches dental x-ray analysis and measurement using neural networks.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jacqueline R Zak whose telephone number is (571)272-4077. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JACQUELINE R ZAK/Examiner, Art Unit 2666
/EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666