DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-21 are pending and have been examined.
Claim Objections
Claims 3, 5, and 6 are objected to because of the following informalities: they recite "3D model," but the first use of an abbreviation in a claim should indicate what the abbreviation stands for (e.g., "three-dimensional (3D) model"; see claim 10 as an example). Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1-21 are directed to a system, method, or product, which are statutory categories of invention. (Step 1: YES).
The Examiner has identified method claim 20 as representative of the claimed invention for the following analysis; claim 20 is similar to system claim 1 and product claim 21.
Claim 20 recites the limitations of:
A method comprising:
receiving data of a current state of a dental site of a patient, the data comprising a plurality of data items generated from a plurality of oral state capture modalities;
processing the data using a plurality of trained machine learning models, wherein each trained machine learning model of the plurality of trained machine learning models is trained to process one or more data items generated from one or more oral state capture modalities of the plurality of oral state capture modalities, wherein the plurality of trained machine learning models output estimations of one or more oral conditions;
processing at least one of the data or the estimations of the one or more oral conditions to generate at least one of a) one or more actionable symptom recommendations for one or more oral health problems associated with the one or more oral conditions or b) one or more diagnoses of the one or more oral health problems; and
generating one or more treatment recommendations for treatment of at least one oral health problem of the one or more oral health problems based on at least one of the one or more actionable symptom recommendations or the one or more diagnoses.
The above limitations, under their broadest reasonable interpretation, cover performance of the limitations as certain methods of organizing human activity. Specifically, the claim recites elements that cover managing personal behavior and interactions between people. Receiving patient data, processing the data, generating actionable symptom recommendations or diagnoses of oral health problems (teaching), and generating treatment recommendations (teaching) based on those recommendations or diagnoses amounts to interacting with a patient to provide a diagnosis. Therefore, the claim as a whole is directed to "treating a patient," which is an abstract idea. "Treating a patient" is considered to be a method of organizing human activity because it is an example of managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). The broadest reasonable interpretation of the claims includes the interaction between a healthcare provider and a patient. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as managing personal behavior or interactions between people, then it falls within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas. Accordingly, the claim recites an abstract idea. Claims 1 and 21 are also abstract for similar reasons. (Step 2A-Prong 1: YES. The claims are abstract)
This judicial exception is not integrated into a practical application. In particular, the claims only recite: a computing device, memory, processing devices, and a machine (claim 1); a machine (claim 20); and a non-transitory computer readable medium, a processor, and a machine (claim 21). The computer hardware is recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. The machine is a generic machine and could be merely a computer. The use of trained machine learning models is likewise recited at a high level of generality and amounts to using some type of generic model. Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, claims 1, 20, and 21 are directed to an abstract idea without a practical application. (Step 2A-Prong 2: NO. The additional claimed elements are not integrated into a practical application)
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an "inventive concept") to the exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using computer hardware amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Accordingly, these additional elements, when considered separately and as an ordered combination, do not amount to significantly more because they do not impose any meaningful limits on practicing the abstract idea. Steps such as receiving data are considered insignificant extra-solution activity and mere instructions to apply the exception using generic computer components (see MPEP 2106.05(d), II). Thus, claims 1, 20, and 21 are not patent eligible. (Step 2B: NO. The claims do not provide significantly more)
Dependent claims 2-19 further define the abstract idea that is present in their respective independent claim 1 and thus correspond to Certain Methods of Organizing Human Activity; hence they are abstract for the reasons presented above. The dependent claims do not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception when considered both individually and as an ordered combination. Claims 3, 5, and 6 recite a 3D model at a high level of generality. Claims 4, 7-9, and 17 recite using a machine learning model, which amounts to applying a generic model at a high level of generality. Claim 8 recites a patient device, which is a generic device recited at a high level of generality. Claim 18 recites insurance, which is also abstract as a fundamental economic practice. Therefore, claims 2-19 are directed to an abstract idea. Thus, claims 1-21 are not patent-eligible.
Examiner Request
The Applicant is requested to indicate where in the specification there is support for any amendments to the claims, should the Applicant amend. The purpose of this request is to reduce potential 35 U.S.C. §112(a) or §112, first paragraph, issues that can arise when claims are amended without support in the specification. The Examiner thanks the Applicant in advance.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 4, 13, and 17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Pub. No. US 2022/0189611 to Farkash et al.
Regarding claims 1, 20, and 21
(claim 1) A system comprising:
a computing device comprising a memory and one or more processing devices, wherein the computing device is configured to:
Farkash et al. teaches:
Processor and memory (computing device)…
“For example, a system may include: one or more processors; a memory, accessible by the one or more processors and storing computer-program instructions, that, when executed by the one or more processors, perform a computer-implemented method comprising: receiving or accessing data collected from an oral scan of the subject's oral cavity, the data including at least three of: 3D surface data, color image data, near-infrared (NIR) data, and fluorescence imaging data; identifying one or more features indicative of gingival inflammation in the collected data using a trained machine learning model, wherein the trained machine learning model is trained on image data including at least three of: previous 3D surface data, previous color image data, previous near-infrared (NIR) data, and previous fluorescence imaging data, wherein the scan data used to train the machine learning model is filtered based on a threshold angle between images of the image data and a threshold distance between the images of the image data; and outputting an indication of gingival inflammation gingival inflammation based on the identified one or more features indicative of gingival inflammation.” [0025]
receive data of a current state of a dental site of a patient, the data comprising a plurality of data items generated from a plurality of oral state capture modalities:
Multiple modalities for collecting (receive) images (data)…
“The intraoral scanning systems may be capable of collecting images of the subject's oral cavity using multiple imaging modalities, including 3D volumetric imaging, color imaging, infrared (e.g., near infrared (NIR)) imaging, color imaging spectroscopy, and/or NIR spectroscopy. In some variations, the intraoral scanning apparatuses may include aspects of one or more iTero oral scanning systems (e.g., iTero 5D) manufactured and sold by Align Technology, Inc. headquartered in San Jose, Calif., U.S.A. Various features and methods of using such intraoral scanning apparatuses are described, for example, in U.S. Pat. Nos. 10,123,706 and 10,390,913, each of which is herein incorporated by reference in its entirety.” [0008]
Example of detecting tooth conditions (current dental state)…
“In some examples, the apparatuses and methods include fluorescence imaging in conjunction with other imaging modalities (e.g., 3D volumetric imaging, color imaging, infrared NIR imaging and/or NIR spectroscopy) to provide a more comprehensive assessment of the subject's oral condition. Fluorescence imaging may be used to provide information related to health of soft tissues of the mouth and be used to detect precancerous or cancerous lesions. Thus, in addition to detecting tooth conditions (e.g., cavities, cracks, etc.) and gingival conditions (e.g., mild, moderate or severe gingival inflammation, etc.) using other imaging modalities (e.g., volumetric, color, NIR), fluorescence imaging can be used to detect cancerous and/or precancerous lesions in the soft tissues around the teeth. Further, when combined with information provided by the other imaging modalities, the fluorescence imaging data may provide a more accurate and faster diagnosis of oral cancer and precancers.” [0010]
Current scan…
“FIG. 4 is a flowchart illustrating an example method for tracking changes to a patient's oral condition over time. Such method may be performed to monitor the patient's oral health to assure that mild gingival inflammation does not progress, or to monitor the patient's recovery during treatment of gingival inflammation. At 401, features indicative of gingival inflammation are identified in the collected images from a current intraoral scan of the patient. At 403, the collected images from the current scan are compared to collected images from one or more previous scans of the patient to determine changes to the features over time. For instance, collected images from a current scan of the patient's oral cavity can be compared to collected images from scans previously performed on the same patient's oral cavity (e.g., from previous dental office visits). In one example where the patient's oral health is being monitored to assure that the symptoms of gingival inflammation do not worsen, the images from the current scan may be compared with images from previous scans to determine whether CEJ and pocket depth measurements indicate improving or worsening gum recession and whether gum color/size measurements indicate improving or worsening gum inflammation/irritation. In another example where the patient is being treated for gingival inflammation and monitored to determine whether the treatment is effective, the images from the current scan may be compared with images from previous scans to assure that CEJ and pocket depth measurements indicate improving gum recession and that gum color/size measurements indicate less gum inflammation/irritation. In some cases, the extent of gingival inflammation can be updated based on the changes.” [0082]
process the data using a plurality of trained machine learning models, wherein each trained machine learning model of the plurality of trained machine learning models is trained to process one or more data items generated from one or more oral state capture modalities of the plurality of oral state capture modalities, wherein the plurality of trained machine learning models output estimations of one or more oral conditions;
Train machine learning models (plural) using datasets from oral scans (oral state capture modalities)…
“The dataset(s) for building the machine learning may be collected and iteratively modified over time based on a particular patient's oral scans and/or a library of oral scans of different patient. In some variations, the system 258 may be configured to build a questionnaire to identify clinical issues and recommend treatment, identify doctors that would qualify and annotate the dataset, and train machine learning models using the datasets.” [0068]
Machine learning to assess (estimate) and diagnose dental (oral) conditions and output dental conditions…
“Any of the apparatuses and/or methods can implement machine learning techniques and classification models to automatically assess and/or diagnose periodontal or dental conditions. Examples of machine learning systems that may be used include, but are not limited to, Convolutional Neural Networks (CNN), Decision Tree, Random Forest, Logistic Regression, Support Vector Machine, AdaBoosT, K-Nearest Neighbor (KNN), Quadratic Discriminant Analysis, Neural Network, etc. The machine learning classification models can be configured to generate an output data set that includes a probability that the data set includes one or more or periodontal and/or dental conditions. In some examples, the machine learning classification model can output a linear scale rating (e.g., a probability between 0.0 to 1.0).” [0013]
process at least one of the data or the estimations of the one or more oral conditions to generate at least one of a) one or more actionable symptom recommendations for one or more oral health problems associated with the one or more oral conditions or b) one or more diagnoses of the one or more oral health problems; and
Automatically assess (process) and diagnose dental conditions…
“Any of the apparatuses and/or methods can implement machine learning techniques and classification models to automatically assess and/or diagnose periodontal or dental conditions. Examples of machine learning systems that may be used include, but are not limited to, Convolutional Neural Networks (CNN), Decision Tree, Random Forest, Logistic Regression, Support Vector Machine, AdaBoosT, K-Nearest Neighbor (KNN), Quadratic Discriminant Analysis, Neural Network, etc. The machine learning classification models can be configured to generate an output data set that includes a probability that the data set includes one or more or periodontal and/or dental conditions. In some examples, the machine learning classification model can output a linear scale rating (e.g., a probability between 0.0 to 1.0).” [0013]
“The processing system 258 may include one or more processors configured to process scan data from the scanning system 254. The processing system 258 may include one or more of: feature extraction engine(s) 260, labeling engine(s) 269, machine learning engine(s) 262, segmentation engine(s) 264, diagnosis engine(s) 266, and treatment recommendation engine(s) 268. The feature extraction engine(s) 262 may extract features from the oral scan data. The extracted features can be used as input by the machine learning engine 264 to train a machine learning model. In some cases, a labeling engine 262 is used to label the different features related to one or more diseases or conditions. The machine learning engine 264 may train the machine learning model based upon patient data from, for example, a patient database 270, which can include historical patient data, patient demographics, tooth measurements, tooth surface mesh, processed tooth features, and/or other patient information. The patient database 270 may be part of a computing device which includes the processing system 258 or may be part of a separate computing device.” [0063]
Treatment recommendations…
“The segmentation engine 260 can use the trained machine learning model to segment the data into individual components. In some examples the data is segmented into different tissue types (e.g., tooth, periodontium, bone, plaque, etc.), features related to different diseases or conditions (e.g., gingival inflammation, cancer lesion, precancer lesion, etc.), and/or different tooth diseases or conditions (e.g., cavities, caries, cracks, etc.). The processing system 258 may store historical or new image data in, for example, the patent database 270. The diagnosis engine(s) 268 can generate one or more diagnoses based on learned features associated with different diseases and conditions (e.g., gingival inflammation, cancer, precancer). Optionally, the treatment recommendation engine 268 can generate one or more treatment recommendations based on the one or more diagnoses. The processing system 258 can send the diagnosis and/or treatment recommendations to the display 256 (and/or other output device) for presentation to a user. In some variations, the displayed images (and/or 3D model) includes color-coded features based on the identified features indicative of a disease or condition. For example, gums effected by gingivitis , cancerous lesions, precancerous lesions, tooth cavities, tooth cracks and/or plaque may each be identified with distinctive colors.” [0065]
generate one or more treatment recommendations for treatment of at least one oral health problem of the one or more oral health problems based on at least one of the one or more actionable symptom recommendations or the one or more diagnoses.
Suggest a diagnosis and generate a treatment plan with follow-up appointment/scan, night guard (actionable recommendations)…
“The processing system 258 can be configured to automatically generate one or more diagnoses based on machine learning analysis of the scans. The system 258 may be configured to automatically chart, maintain notes, and highlight potential problems related to a diagnosis. The processing system 258 may be configured to generate 3D time lapse videos to help identify and illustrate areas in the patient's anatomy which change over time, suggest a diagnosis (e.g. chipped tooth, gingival recession, caries, etc.) based on machine learning, and generate a treatment plan (e.g., follow up appointment/scan in 6 months, night guard, etc.). The processing system 258 may be configured to analyze a 3D model and/or 2D images (e.g., 2D color and 2D NIR) to provide a diagnosis using machine learning. The processing system 258 may be configured to: identify clinical issues based on single tooth 2D color and NIR images (e.g. caries); provide a full-mouth machine learning diagnosis (e.g., identify clinical issues based on full jaw 2D and 3D data (e.g. malocclusion, tooth wear, acid reflux, etc.)); provide auto gum recession identification based on single 3D scan and 2D images; automatic chart all teeth, crowns, fillings, missing teeth etc. based on 3D scan; and/or automatically identify prepped teeth and type of restoration (crown, inlay, bridge, etc.) based on 3D scan.” [0067]
Regarding claim 2
The system of claim 1, wherein the computing device is further configured to:
predict a future state of the dental site based on processing at least one of the data, the estimations of the one or more oral conditions, the one or more actionable symptom recommendations, or the one or more diagnoses of the one or more oral health problems, wherein the predicted future state of the dental site comprises a future state of at least one of the one or more oral conditions or the one or more oral health problems.
Farkash et al. teaches:
Final positions (future state) of teeth (dental site) and move teeth to final position (future state of oral conditions)…
“Various alternatives, modifications, and equivalents may be used in lieu of the above components. Although the final position of the teeth may be determined using computer-aided techniques, a user may move the teeth into their final positions by independently manipulating one or more teeth while satisfying the constraints of the prescription.” [0108]
Inflammation and recheck (future state) of oral health problems…
“In some variations, the features indicative of gingival inflammation may be marked on images or on a 3D model 307. For example, the features may be highlighted (e.g., using one or more colors) and/or labeled (e.g., using with symbols and/or lettering) on images or the 3D model as displayed on a display of the intraoral scanning system. In some variations, the system can be figured to provide one or more recommendations for treating the gingival inflammation based on the identified level of gum inflammation 309. In some case, the recommendation may include a recommendation for one or more follow up appointments to recheck the patient's oral health.” [0081]
Regarding claim 4
The system of claim 1, wherein the computing device is further configured to:
receive additional data of one or more prior states of the dental site of the patient;
Farkash et al. teaches:
Longitudinal oral conditions (one or more prior states)…
“The methods and apparatuses described herein may relate to oral scanners and methods of their use, and particularly for generating three-dimensional (3D) representations of the teeth and gingiva and other soft tissues of the mouth. In particular, described herein are methods and apparatuses that may be useful in scanning, including 3D scanning, and analyzing the intraoral cavity for detection, diagnosis, treatment, and longitudinal tracking of oral conditions.” [0003]
process the additional data using the plurality of trained machine learning models, wherein the plurality of trained machine learning models output additional estimations of prior states of the one or more oral conditions;
Machine learning using historical patient data (prior states of conditions)…
“The processing system 258 may include one or more processors configured to process scan data from the scanning system 254. The processing system 258 may include one or more of: feature extraction engine(s) 260, labeling engine(s) 269, machine learning engine(s) 262, segmentation engine(s) 264, diagnosis engine(s) 266, and treatment recommendation engine(s) 268. The feature extraction engine(s) 262 may extract features from the oral scan data. The extracted features can be used as input by the machine learning engine 264 to train a machine learning model. In some cases, a labeling engine 262 is used to label the different features related to one or more diseases or conditions. The machine learning engine 264 may train the machine learning model based upon patient data from, for example, a patient database 270, which can include historical patient data, patient demographics, tooth measurements, tooth surface mesh, processed tooth features, and/or other patient information. The patient database 270 may be part of a computing device which includes the processing system 258 or may be part of a separate computing device.” [0063]
process at least one of the additional data or the additional estimations of the prior states of the one or more oral conditions to generate at least one of a) one or more prior actionable symptom recommendations or b) one or more prior state diagnoses of the one or more oral health problems; and
Diagnosis and treatment recommendation using historical patient data…
“The processing system 258 may include one or more processors configured to process scan data from the scanning system 254. The processing system 258 may include one or more of: feature extraction engine(s) 260, labeling engine(s) 269, machine learning engine(s) 262, segmentation engine(s) 264, diagnosis engine(s) 266, and treatment recommendation engine(s) 268. The feature extraction engine(s) 262 may extract features from the oral scan data. The extracted features can be used as input by the machine learning engine 264 to train a machine learning model. In some cases, a labeling engine 262 is used to label the different features related to one or more diseases or conditions. The machine learning engine 264 may train the machine learning model based upon patient data from, for example, a patient database 270, which can include historical patient data, patient demographics, tooth measurements, tooth surface mesh, processed tooth features, and/or other patient information. The patient database 270 may be part of a computing device which includes the processing system 258 or may be part of a separate computing device.” [0063]
determine a change in the one or more oral health problems over a time period based on a comparison of the one or more oral conditions to the prior states of the one or more oral conditions.
Compare to previously collected data (determine change over time)…
“Once the oral scan data is received or otherwise accessed, one or more features in the scan data indicative of gingival inflammation are identified 303. This can involve comparing the scan image data to previously collected data and using machine learning model (e.g., trained network) to identify and distinguish anatomical features (e.g., teeth, gums, connective tissue, bone etc.) as well as features of indicative of gingival inflammation. Such comparison may be automatic, semi-automatic or manual. As described above, the previously collected data used to train the machine learning model may include image data from previous 3D scans (e.g., including surface, color, NIR image data, and/or NIR spectroscopy data), X-ray images, periodontal charts (e.g., including probe depths), and/or visual inspection/tactile data from a dental professional.” [0078]
Regarding claim 13
The system of claim 1, wherein the computing device is further configured to:
determine a severity of each of the one or more oral conditions and/or oral health problems; and
Farkash et al. teaches:
Detecting (determining) tooth conditions and gingival conditions (severity of oral conditions)…
“In some examples, the apparatuses and methods include fluorescence imaging in conjunction with other imaging modalities (e.g., 3D volumetric imaging, color imaging, infrared NIR imaging and/or NIR spectroscopy) to provide a more comprehensive assessment of the subject's oral condition. Fluorescence imaging may be used to provide information related to health of soft tissues of the mouth and be used to detect precancerous or cancerous lesions. Thus, in addition to detecting tooth conditions (e.g., cavities, cracks, etc.) and gingival conditions (e.g., mild, moderate or severe gingival inflammation, etc.) using other imaging modalities (e.g., volumetric, color, NIR), fluorescence imaging can be used to detect cancerous and/or precancerous lesions in the soft tissues around the teeth. Further, when combined with information provided by the other imaging modalities, the fluorescence imaging data may provide a more accurate and faster diagnosis of oral cancer and precancers.” [0010]
rank the one or more oral conditions and/or oral health problems based at least in part on the severity.
Mild, moderate or severe (rank the conditions)…
“In some examples, the apparatuses and methods include fluorescence imaging in conjunction with other imaging modalities (e.g., 3D volumetric imaging, color imaging, infrared NIR imaging and/or NIR spectroscopy) to provide a more comprehensive assessment of the subject's oral condition. Fluorescence imaging may be used to provide information related to health of soft tissues of the mouth and be used to detect precancerous or cancerous lesions. Thus, in addition to detecting tooth conditions (e.g., cavities, cracks, etc.) and gingival conditions (e.g., mild, moderate or severe gingival inflammation, etc.) using other imaging modalities (e.g., volumetric, color, NIR), fluorescence imaging can be used to detect cancerous and/or precancerous lesions in the soft tissues around the teeth. Further, when combined with information provided by the other imaging modalities, the fluorescence imaging data may provide a more accurate and faster diagnosis of oral cancer and precancers.” [0010]
Regarding claim 17
The system of claim 1, wherein processing at least one of the data or the estimations of the one or more oral conditions to generate at least one of a) the one or more actionable symptom recommendations or b) the one or more diagnoses of one or more oral health problems associated with the one or more oral conditions comprises:
processing a plurality of the estimations of the one or more oral conditions using a first trained machine learning model that outputs a first actionable symptom recommendation or a first diagnosis of a first dental health condition; and
Farkash et al. teaches:
Example of tooth conditions (current dental state)…
“In some examples, the apparatuses and methods include fluorescence imaging in conjunction with other imaging modalities (e.g., 3D volumetric imaging, color imaging, infrared NIR imaging and/or NIR spectroscopy) to provide a more comprehensive assessment of the subject's oral condition. Fluorescence imaging may be used to provide information related to health of soft tissues of the mouth and be used to detect precancerous or cancerous lesions. Thus, in addition to detecting tooth conditions (e.g., cavities, cracks, etc.) and gingival conditions (e.g., mild, moderate or severe gingival inflammation, etc.) using other imaging modalities (e.g., volumetric, color, NIR), fluorescence imaging can be used to detect cancerous and/or precancerous lesions in the soft tissues around the teeth. Further, when combined with information provided by the other imaging modalities, the fluorescence imaging data may provide a more accurate and faster diagnosis of oral cancer and precancers.” [0010]
Machine learning engines with diagnosis engine and treatment recommendation…
“The processing system 258 may include one or more processors configured to process scan data from the scanning system 254. The processing system 258 may include one or more of: feature extraction engine(s) 260, labeling engine(s) 269, machine learning engine(s) 262, segmentation engine(s) 264, diagnosis engine(s) 266, and treatment recommendation engine(s) 268. The feature extraction engine(s) 262 may extract features from the oral scan data. The extracted features can be used as input by the machine learning engine 264 to train a machine learning model. In some cases, a labeling engine 262 is used to label the different features related to one or more diseases or conditions. The machine learning engine 264 may train the machine learning model based upon patient data from, for example, a patient database 270, which can include historical patient data, patient demographics, tooth measurements, tooth surface mesh, processed tooth features, and/or other patient information. The patient database 270 may be part of a computing device which includes the processing system 258 or may be part of a separate computing device.” [0063]
processing the plurality of the estimations of the one or more oral conditions using a second trained machine learning model that outputs a second actionable symptom recommendation or a second diagnosis of a second dental health condition.
Machine learning to assess (estimate) dental (oral) conditions…
“Any of the apparatuses and/or methods can implement machine learning techniques and classification models to automatically assess and/or diagnose periodontal or dental conditions. Examples of machine learning systems that may be used include, but are not limited to, Convolutional Neural Networks (CNN), Decision Tree, Random Forest, Logistic Regression, Support Vector Machine, AdaBoosT, K-Nearest Neighbor (KNN), Quadratic Discriminant Analysis, Neural Network, etc. The machine learning classification models can be configured to generate an output data set that includes a probability that the data set includes one or more or periodontal and/or dental conditions. In some examples, the machine learning classification model can output a linear scale rating (e.g., a probability between 0.0 to 1.0).” [0013]
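As an illustration only of the kind of classification output described in the paragraph quoted above (a linear-scale probability between 0.0 and 1.0), the following minimal Python sketch uses a logistic model; the feature values, weights, and the condition being scored are hypothetical placeholders, not the reference's implementation:

```python
import numpy as np

def sigmoid(z):
    """Map a raw linear score to a probability in (0.0, 1.0)."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_condition_probability(features, weights, bias):
    """Linear-scale rating for one hypothetical oral-condition classifier."""
    return sigmoid(np.dot(features, weights) + bias)

# Hypothetical extracted features (e.g., NIR intensity, color contrast, depth).
features = np.array([0.8, 0.3, 0.5])
weights = np.array([1.2, -0.4, 0.9])
prob = predict_condition_probability(features, weights, bias=-0.5)
assert 0.0 <= prob <= 1.0  # output is a probability on a 0.0-to-1.0 scale
```

Any of the listed classifier families (CNN, Random Forest, SVM, etc.) could stand in for the logistic model; the common property is the probabilistic output.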
Train machine learning models (plural) using datasets from oral scans (oral state capture modalities)…
“The dataset(s) for building the machine learning may be collected and iteratively modified over time based on a particular patient's oral scans and/or a library of oral scans of different patient. In some variations, the system 258 may be configured to build a questionnaire to identify clinical issues and recommend treatment, identify doctors that would qualify and annotate the dataset, and train machine learning models using the datasets.” [0068]
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 3, 5, 6, 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Pub. No. US 2022/0189611 to Farkash et al. in view of Pub. No. US 2018/0206940 to Kopelman et al.
Regarding claim 3
The system of claim 2, wherein the computing device is further configured to:
generate a first simulation of at least one of an image of the predicted future state of the dental site, a 3D model of the predicted future state of the dental site, or a video showing a progression over time to the predicted future state of the dental site;
Farkash et al. teaches:
Example of generating 3D time lapse videos…
“The processing system 258 can be configured to automatically generate one or more diagnoses based on machine learning analysis of the scans. The system 258 may be configured to automatically chart, maintain notes, and highlight potential problems related to a diagnosis. The processing system 258 may be configured to generate 3D time lapse videos to help identify and illustrate areas in the patient's anatomy which change over time, suggest a diagnosis (e.g. chipped tooth, gingival recession, caries, etc.) based on machine learning, and generate a treatment plan (e.g., follow up appointment/scan in 6 months, night guard, etc.). The processing system 258 may be configured to analyze a 3D model and/or 2D images (e.g., 2D color and 2D NIR) to provide a diagnosis using machine learning. The processing system 258 may be configured to: identify clinical issues based on single tooth 2D color and NIR images (e.g. caries); provide a full-mouth machine learning diagnosis (e.g., identify clinical issues based on full jaw 2D and 3D data (e.g. malocclusion, tooth wear, acid reflux, etc.)); provide auto gum recession identification based on single 3D scan and 2D images; automatic chart all teeth, crowns, fillings, missing teeth etc. based on 3D scan; and/or automatically identify prepped teeth and type of restoration (crown, inlay, bridge, etc.) based on 3D scan.” [0067]
See 3D Model below.
estimate a second future state of the dental site expected to occur after a treatment of at least one of the one or more oral conditions or the one or more oral health problems;
3D to identify changes over time (second future state)…
“The processing system 258 can be configured to automatically generate one or more diagnoses based on machine learning analysis of the scans. The system 258 may be configured to automatically chart, maintain notes, and highlight potential problems related to a diagnosis. The processing system 258 may be configured to generate 3D time lapse videos to help identify and illustrate areas in the patient's anatomy which change over time, suggest a diagnosis (e.g. chipped tooth, gingival recession, caries, etc.) based on machine learning, and generate a treatment plan (e.g., follow up appointment/scan in 6 months, night guard, etc.). The processing system 258 may be configured to analyze a 3D model and/or 2D images (e.g., 2D color and 2D NIR) to provide a diagnosis using machine learning. The processing system 258 may be configured to: identify clinical issues based on single tooth 2D color and NIR images (e.g. caries); provide a full-mouth machine learning diagnosis (e.g., identify clinical issues based on full jaw 2D and 3D data (e.g. malocclusion, tooth wear, acid reflux, etc.)); provide auto gum recession identification based on single 3D scan and 2D images; automatic chart all teeth, crowns, fillings, missing teeth etc. based on 3D scan; and/or automatically identify prepped teeth and type of restoration (crown, inlay, bridge, etc.) based on 3D scan.” [0067]
See 3D Model below.
generate a second simulation of at least one of an image of the second future state of the dental site or a 3D model of the second future state of the dental site; and
“The processing system 258 can be configured to automatically generate one or more diagnoses based on machine learning analysis of the scans. The system 258 may be configured to automatically chart, maintain notes, and highlight potential problems related to a diagnosis. The processing system 258 may be configured to generate 3D time lapse videos to help identify and illustrate areas in the patient's anatomy which change over time, suggest a diagnosis (e.g. chipped tooth, gingival recession, caries, etc.) based on machine learning, and generate a treatment plan (e.g., follow up appointment/scan in 6 months, night guard, etc.). The processing system 258 may be configured to analyze a 3D model and/or 2D images (e.g., 2D color and 2D NIR) to provide a diagnosis using machine learning. The processing system 258 may be configured to: identify clinical issues based on single tooth 2D color and NIR images (e.g. caries); provide a full-mouth machine learning diagnosis (e.g., identify clinical issues based on full jaw 2D and 3D data (e.g. malocclusion, tooth wear, acid reflux, etc.)); provide auto gum recession identification based on single 3D scan and 2D images; automatic chart all teeth, crowns, fillings, missing teeth etc. based on 3D scan; and/or automatically identify prepped teeth and type of restoration (crown, inlay, bridge, etc.) based on 3D scan.” [0067]
See 3D Model below.
generate a presentation showing the first simulation and the second simulation.
[No patentable weight is given to the non-functional descriptive claim language of showing the first and second simulations, as nothing is done (no interaction) with the shown simulations.]
See 3D Model below.
3D Model
Farkash et al. teaches 3D models. Farkash et al. does not teach a 3D model of a second future state.
Kopelman et al., also in the field of 3D dental models, teaches:
Image data of current and planned treatment and final position for multi-stage treatment plan, and where data is presented to practitioner…
“Embodiments provide a method and system for assessing the actual progress of an orthodontic treatment plan that has a target end position (e.g., of assessing a patient's teeth during intermediate stages of a multi-stage orthodontic treatment plan). Image data for an actual condition of the patient's dental arch may be compared with a planned condition of the patient's dental arch (e.g., for an intermediate stage of the multi-stage orthodontic treatment plan). Based on this comparison, one or more clinical signs that the actual condition of the patient's dental arch has a deviation from the planned condition of the patient's dental arch (e.g., for the intermediate stage of a multi-stage orthodontic treatment plan) are identified. One or more probable root causes for the deviation are then determined based on the one or more clinical signs. Additionally, the clinical signs and/or the root causes may be used to determine whether a planned final position of the dental arch is achievable without corrective action. This may include checking a position of the teeth in each arch as well as the progress of the treatment plan, which may include additional parameters including occlusion, bite relation, arch expansion, and so on. Finally, one or more corrective actions for the orthodontic treatment plan may be determined based on the one or more probable root causes. The determined clinical signs, probable root causes and/or corrective actions may be presented to the dental practitioner for his or her consideration.” [0018]
3D model generated for target of each treatment stage…
“Multiple treatment stages may then be generated based on the determined movement path. Each of the treatment stages can be incremental repositioning stages of an orthodontic treatment procedure designed to move one or more of the patient's teeth from a starting tooth arrangement for that treatment stage to a target arrangement for that treatment stage. A different 3D model of a target condition for a treatment stage may be generated for each of the treatment stages. One or a set of orthodontic appliances (e.g., aligners) are then fabricated based on the generated treatment stages (e.g., based on the 3D models of the target conditions for each of the treatment stages). For example, a set of appliances can be fabricated, each shaped to accommodate a tooth arrangement specified by one of the treatment stages, such that the appliances can be sequentially worn by the patient to incrementally reposition the teeth from the initial arrangement to the target arrangement. The configuration of the aligners can be selected to elicit the tooth movements specified by the corresponding treatment stage.” [0039]
Image data and compare virtual 3D model of current condition to second virtual 3D model of planned (estimate second future state)…
“The image data 135 received during the intermediate stage in the multi-stage orthodontic treatment plan 186 may be compared by an adaptive treatment module 115 to data in the treatment plan 186. In one embodiment, adaptive treatment module 115 compares a first virtual 3D model of the actual current condition of the patient's dental arch that is included in the image data (e.g., that was generated based on an intraoral scan of the patient's dental arch) to a second virtual 3D model of the planned condition of the patient's dental arch for the current intermediate stage of the multi-stage orthodontic treatment plan. Based on the comparison of the image data 135 to the intermediate stage of the orthodontic treatment plan 186, adaptive treatment module 115 determines any clinical signs of deviation between the actual condition of the patient's dental arch and the planned condition of the patient's dental arch for the current treatment stage. The adaptive treatment module 115 determines one or more root causes associated with the one or more clinical signs. The adaptive treatment module 115 may additionally determine one or more corrective actions from the one or more root causes. In some instances, a determined corrective action may include switching from a first treatment plan that was started to a second treatment plan, where both treatment plans were generated prior to initiation of the first treatment plan. Additionally, the adaptive treatment module 115 may determine whether a planned final condition of the dental arch remains achievable in view of the determined clinical signs, root causes and/or corrective actions.” [0041]
Planned final condition (second future state) based on corrective actions (after treatment)…
“The image data 135 received during the intermediate stage in the multi-stage orthodontic treatment plan 186 may be compared by an adaptive treatment module 115 to data in the treatment plan 186. In one embodiment, adaptive treatment module 115 compares a first virtual 3D model of the actual current condition of the patient's dental arch that is included in the image data (e.g., that was generated based on an intraoral scan of the patient's dental arch) to a second virtual 3D model of the planned condition of the patient's dental arch for the current intermediate stage of the multi-stage orthodontic treatment plan. Based on the comparison of the image data 135 to the intermediate stage of the orthodontic treatment plan 186, adaptive treatment module 115 determines any clinical signs of deviation between the actual condition of the patient's dental arch and the planned condition of the patient's dental arch for the current treatment stage. The adaptive treatment module 115 determines one or more root causes associated with the one or more clinical signs. The adaptive treatment module 115 may additionally determine one or more corrective actions from the one or more root causes. In some instances, a determined corrective action may include switching from a first treatment plan that was started to a second treatment plan, where both treatment plans were generated prior to initiation of the first treatment plan. Additionally, the adaptive treatment module 115 may determine whether a planned final condition of the dental arch remains achievable in view of the determined clinical signs, root causes and/or corrective actions.” [0041]
Image for intermediate stage used to update treatment plan…
“In some embodiments, the image data 135 received during the intermediate stage in the multi-stage orthodontic treatment plan 186 may be used to analyze a fit of a next aligner based on the actual current condition of the dental arch (e.g., based on current teeth positions, occlusion, arch width, and so on). If the next aligner will not have an optimal fit on the patient's dental arch (e.g., will not fit onto the dental arch or will fit but will not apply the desired forces on one or more teeth), then new aligners may be designed based on updating the treatment plan staging.” [0042]
Image data with 3D model of current and planned dental condition and representation (presentation) of the image data…
“Data comparator 166 compares received image data 162 with a treatment plan 186. The image data may represent an actual condition of a patient's dental arch during a current stage in the treatment plan 186, and the treatment plan 186 may be a multi-stage treatment plan that includes a planned condition of the dental arch for the current stage. A representation of the dental arch in the image data 162 may be registered with a representation of the dental arch in the treatment plan. For example, the image data 162 may include a virtual 3D model of the actual condition of the patient's dental arch and the treatment plan may include an additional virtual 3D model of the planned condition of the patient's dental arch for the current stage of treatment.” [0044]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the method and system of Farkash et al. the ability to use 3D models of a second future state as taught by Kopelman et al., since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Kopelman et al., who teaches using 3D models and that treatment plans change over time based on updated dental conditions from prior treatment. It would have been obvious to analyze the patient's current dental condition, perform treatment based on a treatment plan, and then update the plan at a second time based on the patient's response to the initial treatment plan.
The combined references teach multiple treatment plans and 3D models over time. They do not explicitly teach generating a presentation of the first and second simulations. However, one of ordinary skill in the art would recognize that both models/simulations have already been created and that presenting them together would provide useful information.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined references with the knowledge, available to such an artisan, that presenting models/simulations together provides useful information. This would have been known work in the same field of endeavor, prompting variations of it based on the practice of updating dental models/simulations, and would have yielded predictable results.
Regarding claim 5
The system of claim 4, wherein the computing device is further configured to:
generate at least one of an image, a 3D model, or a video showing the change in the one or more oral health problems over the time period,
Farkash et al. teaches:
Example of generating 3D time lapse videos that show change over time…
“The processing system 258 can be configured to automatically generate one or more diagnoses based on machine learning analysis of the scans. The system 258 may be configured to automatically chart, maintain notes, and highlight potential problems related to a diagnosis. The processing system 258 may be configured to generate 3D time lapse videos to help identify and illustrate areas in the patient's anatomy which change over time, suggest a diagnosis (e.g. chipped tooth, gingival recession, caries, etc.) based on machine learning, and generate a treatment plan (e.g., follow up appointment/scan in 6 months, night guard, etc.). The processing system 258 may be configured to analyze a 3D model and/or 2D images (e.g., 2D color and 2D NIR) to provide a diagnosis using machine learning. The processing system 258 may be configured to: identify clinical issues based on single tooth 2D color and NIR images (e.g. caries); provide a full-mouth machine learning diagnosis (e.g., identify clinical issues based on full jaw 2D and 3D data (e.g. malocclusion, tooth wear, acid reflux, etc.)); provide auto gum recession identification based on single 3D scan and 2D images; automatic chart all teeth, crowns, fillings, missing teeth etc. based on 3D scan; and/or automatically identify prepped teeth and type of restoration (crown, inlay, bridge, etc.) based on 3D scan.” [0067]
wherein the at least one of the image, the 3D model or the video is generated by processing at least one of the data, the estimations of the one or more oral conditions, the one or more actionable symptom recommendations, or the one or more diagnoses of the one or more oral health problems, and further based on processing at least one of the additional data, the additional estimations of the prior states of the one or more oral conditions, the one or more prior actionable symptom recommendations, the one or more prior state diagnoses of the one or more oral health problems, or the change in the one or more oral health problems over the time period, using a generative model.
See Generative Model below.
Generative Model
Farkash et al. teaches models. Farkash et al. does not teach a generative model.
Kopelman et al., also in the field of dental models, teaches:
Generate a virtual model (generative model)…
“The image data 135 may be used to generate a virtual model (e.g., a virtual 2D model or virtual 3D model) of the actual condition of the patient's dental arch in some embodiments. To generate the virtual model, intraoral scan application 108 may register (i.e., “stitch” together) the intraoral images generated from the intraoral scan session. In one embodiment, performing image registration includes capturing 3D data of various points of a surface in multiple images (views from a camera), and registering the images by computing transformations between the images. The images may then be integrated into a common reference frame by applying appropriate transformations to points of each registered image.” [0029]
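The registration step quoted above (computing transformations between overlapping views and integrating them into a common reference frame) can be illustrated with a standard least-squares rigid alignment (Kabsch algorithm). This sketch is illustrative only, with synthetic point data; a real scanner pipeline would operate on matched surface features from actual intraoral images:

```python
import numpy as np

def register_rigid(source, target):
    """Return rotation R and translation t such that R @ p + t maps each
    source point p onto the corresponding target point (Kabsch algorithm)."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c                  # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # guard against reflections
    R = Vt.T @ D @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# Synthetic check: rotate and shift a small point cloud, then recover the transform.
rng = np.random.default_rng(0)
pts = rng.normal(size=(20, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
moved = pts @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = register_rigid(pts, moved)
assert np.allclose(R, R_true, atol=1e-6)
```

With noise-free correspondences the true rotation is recovered exactly; with real scan data the same computation gives the least-squares best fit.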
3D model of a target (estimated) condition for treatment stage (actionable symptom recommendations)…
“Multiple treatment stages may then be generated based on the determined movement path. Each of the treatment stages can be incremental repositioning stages of an orthodontic treatment procedure designed to move one or more of the patient's teeth from a starting tooth arrangement for that treatment stage to a target arrangement for that treatment stage. A different 3D model of a target condition for a treatment stage may be generated for each of the treatment stages. One or a set of orthodontic appliances (e.g., aligners) are then fabricated based on the generated treatment stages (e.g., based on the 3D models of the target conditions for each of the treatment stages). For example, a set of appliances can be fabricated, each shaped to accommodate a tooth arrangement specified by one of the treatment stages, such that the appliances can be sequentially worn by the patient to incrementally reposition the teeth from the initial arrangement to the target arrangement. The configuration of the aligners can be selected to elicit the tooth movements specified by the corresponding treatment stage.” [0039]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the method and system of Farkash et al. the ability to use a generative model as taught by Kopelman et al., since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Kopelman et al., who teaches the advantages of generative models for evaluating dental conditions and treatment plans over time. Farkash et al. would benefit, as it also treats patients over time using 3D models.
Regarding claim 6
The system of claim 4, wherein the computing device is further configured to:
predict a future state of the dental site based on processing at least one of the data, the estimations of the one or more oral conditions, the one or more actionable symptom recommendations, or the one or more diagnoses of the one or more oral health problems, and further based on processing at least one of the additional data, the additional estimations of the prior states of the one or more oral conditions, the one or more prior actionable symptom recommendations, or the one or more prior state diagnoses of the one or more oral health problems, or the change in the one or more oral health problems over the time period, wherein the predicted future state of the dental site comprises a future state of at least one of the one or more oral conditions or the one or more oral health problems; and
See Future State below.
generate at least one of an image of the predicted future state of the dental site, a 3D model of the predicted future state of the dental site, or a video showing a progression from the one or more prior states of the dental site of the patient to the predicted future state of the dental site.
See Future State below.
Future State
Farkash et al. teaches a dental condition (state). Farkash et al. does not teach the details of a future state.
Kopelman et al., also in the field of dental conditions, teaches:
Assessing treatment plan with target end position (predict future state) and intermediate stages…
“Embodiments provide a method and system for assessing the actual progress of an orthodontic treatment plan that has a target end position (e.g., of assessing a patient's teeth during intermediate stages of a multi-stage orthodontic treatment plan). Image data for an actual condition of the patient's dental arch may be compared with a planned condition of the patient's dental arch (e.g., for an intermediate stage of the multi-stage orthodontic treatment plan). Based on this comparison, one or more clinical signs that the actual condition of the patient's dental arch has a deviation from the planned condition of the patient's dental arch (e.g., for the intermediate stage of a multi-stage orthodontic treatment plan) are identified. One or more probable root causes for the deviation are then determined based on the one or more clinical signs. Additionally, the clinical signs and/or the root causes may be used to determine whether a planned final position of the dental arch is achievable without corrective action. This may include checking a position of the teeth in each arch as well as the progress of the treatment plan, which may include additional parameters including occlusion, bite relation, arch expansion, and so on. Finally, one or more corrective actions for the orthodontic treatment plan may be determined based on the one or more probable root causes. The determined clinical signs, probable root causes and/or corrective actions may be presented to the dental practitioner for his or her consideration.” [0018]
Planned (estimated) dental arch (oral condition) and dental arch deviation (diagnoses of oral problem)…
“Embodiments provide a method and system for assessing the actual progress of an orthodontic treatment plan that has a target end position (e.g., of assessing a patient's teeth during intermediate stages of a multi-stage orthodontic treatment plan). Image data for an actual condition of the patient's dental arch may be compared with a planned condition of the patient's dental arch (e.g., for an intermediate stage of the multi-stage orthodontic treatment plan). Based on this comparison, one or more clinical signs that the actual condition of the patient's dental arch has a deviation from the planned condition of the patient's dental arch (e.g., for the intermediate stage of a multi-stage orthodontic treatment plan) are identified. One or more probable root causes for the deviation are then determined based on the one or more clinical signs. Additionally, the clinical signs and/or the root causes may be used to determine whether a planned final position of the dental arch is achievable without corrective action. This may include checking a position of the teeth in each arch as well as the progress of the treatment plan, which may include additional parameters including occlusion, bite relation, arch expansion, and so on. Finally, one or more corrective actions for the orthodontic treatment plan may be determined based on the one or more probable root causes. The determined clinical signs, probable root causes and/or corrective actions may be presented to the dental practitioner for his or her consideration.” [0018]
Modifications to treatment plan (recommendations)…
“Some corrective actions may be modifications to the final treatment plan (e.g., to final teeth positions) and/or staging of the teeth positions in the treatment plan (if the treatment plan is a multi-stage treatment plan) that may be made automatically without any input from the dental practitioner. Staging refers to the sequence of movements from current or initial teeth positions to new teeth positions. Staging includes determining which tooth movements will be performed at different phases of treatment. Other corrective actions may be modifications to the treatment plan that are made after approval from the dental practitioner. Other corrective actions may require one or more actions or operations to be performed by the dental practitioner.” [0019]
Generate 3D model of a target (future) condition…
“Multiple treatment stages may then be generated based on the determined movement path. Each of the treatment stages can be incremental repositioning stages of an orthodontic treatment procedure designed to move one or more of the patient's teeth from a starting tooth arrangement for that treatment stage to a target arrangement for that treatment stage. A different 3D model of a target condition for a treatment stage may be generated for each of the treatment stages. One or a set of orthodontic appliances (e.g., aligners) are then fabricated based on the generated treatment stages (e.g., based on the 3D models of the target conditions for each of the treatment stages). For example, a set of appliances can be fabricated, each shaped to accommodate a tooth arrangement specified by one of the treatment stages, such that the appliances can be sequentially worn by the patient to incrementally reposition the teeth from the initial arrangement to the target arrangement. The configuration of the aligners can be selected to elicit the tooth movements specified by the corresponding treatment stage.” [0039]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of Farkash et al. the ability to use 3D models for future states as taught by Kopelman et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Kopelman et al., who teaches the advantages of 3D models for evaluating dental conditions and treatment plans over time. Farkash et al. would benefit from this teaching, as Farkash et al. likewise treats patients over time using 3D models.
Regarding claim 9
The system of claim 1, wherein the data comprises first data generated using a first oral state capture modality and second data generated using a second oral state capture modality, and wherein the computing device is further configured to:
process the first data using one or more of the plurality of trained machine learning models, wherein the one or more of the plurality of trained machine learning models output first estimations of the one or more oral conditions;
Farkash et al. teaches:
Example of training machine learning model and outputting indication (estimation) of gingival inflammation…
“For example, described herein are methods that include: receiving or accessing data collected from an oral scan of the subject's oral cavity, the data including at least three of: 3D surface data, color image data, near-infrared (NIR) data, and fluorescence imaging data; identifying one or more features indicative of gingival inflammation in the collected data using a trained machine learning model, wherein the trained machine learning model is trained on image data including at least three of: previous 3D surface data, previous color image data, previous near-infrared (NIR) data, and previous fluorescence imaging data, wherein the scan data used to train the machine learning model is filtered based on a threshold angle between images of the image data and a threshold distance between the images of the image data; and outputting an indication of gingival inflammation based on the identified one or more features indicative of gingival inflammation.” [0017]
process at least one of the first data or the first estimations of the one or more oral conditions to output at least one of a) one or more initial actionable symptom recommendations or b) one or more initial diagnoses of the one or more oral health problems, and further to output a recommendation to generate the second data;
Output gingival inflammation (initial diagnoses)…
“For example, described herein are methods that include: receiving or accessing data collected from an oral scan of the subject's oral cavity, the data including at least three of: 3D surface data, color image data, near-infrared (NIR) data, and fluorescence imaging data; identifying one or more features indicative of gingival inflammation in the collected data using a trained machine learning model, wherein the trained machine learning model is trained on image data including at least three of: previous 3D surface data, previous color image data, previous near-infrared (NIR) data, and previous fluorescence imaging data, wherein the scan data used to train the machine learning model is filtered based on a threshold angle between images of the image data and a threshold distance between the images of the image data; and outputting an indication of gingival inflammation based on the identified one or more features indicative of gingival inflammation.” [0017]
Treatment recommendation engine (therefore, output a recommendation)…
“The processing system 258 may include one or more processors configured to process scan data from the scanning system 254. The processing system 258 may include one or more of: feature extraction engine(s) 260, labeling engine(s) 269, machine learning engine(s) 262, segmentation engine(s) 264, diagnosis engine(s) 266, and treatment recommendation engine(s) 268. The feature extraction engine(s) 262 may extract features from the oral scan data. The extracted features can be used as input by the machine learning engine 264 to train a machine learning model. In some cases, a labeling engine 262 is used to label the different features related to one or more diseases or conditions. The machine learning engine 264 may train the machine learning model based upon patient data from, for example, a patient database 270, which can include historical patient data, patient demographics, tooth measurements, tooth surface mesh, processed tooth features, and/or other patient information. The patient database 270 may be part of a computing device which includes the processing system 258 or may be part of a separate computing device.” [0063]
receive the second data responsive to outputting the recommendation to generate the second data; and
Generate one or more recommendations (second data) based on diagnoses…
“The segmentation engine 260 can use the trained machine learning model to segment the data into individual components. In some examples the data is segmented into different tissue types (e.g., tooth, periodontium, bone, plaque, etc.), features related to different diseases or conditions (e.g., gingival inflammation, cancer lesion, precancer lesion, etc.), and/or different tooth diseases or conditions (e.g., cavities, caries, cracks, etc.). The processing system 258 may store historical or new image data in, for example, the patent database 270. The diagnosis engine(s) 268 can generate one or more diagnoses based on learned features associated with different diseases and conditions (e.g., gingival inflammation, cancer, precancer). Optionally, the treatment recommendation engine 268 can generate one or more treatment recommendations based on the one or more diagnoses. The processing system 258 can send the diagnosis and/or treatment recommendations to the display 256 (and/or other output device) for presentation to a user. In some variations, the displayed images (and/or 3D model) includes color-coded features based on the identified features indicative of a disease or condition. For example, gums effected by gingivitis , cancerous lesions, precancerous lesions, tooth cavities, tooth cracks and/or plaque may each be identified with distinctive colors.” [0065]
replace at least one of a) the first estimations with the estimations, b) the one or more initial actionable symptom recommendations with the one or more actionable symptom recommendations, or c) the one or more initial diagnoses with the one or more diagnoses based on additional processing of the second data.
See Replace below.
Replace
Farkash et al. teaches treatment recommendation. They do not teach replacing a recommendation.
Kopelman et al., also in the business of treatment recommendation, teaches:
Example of identifying clinical signs (additional processing of the second data)…
“At block 215, processing logic compares the image data of the actual condition of the patient's dental arch to a planned condition of the patient's dental arch for the intermediate stage of the multistage orthodontic treatment plan. At block 218, processing logic identifies any clinical signs that the actual condition of the patient's dental arch has a deviation from the planned condition of the patient's dental arch for the intermediate stage of the multistage orthodontic treatment plan based on a result of the comparing. At block 220, processing logic determines whether any clinical signs have been identified. If no clinical signs are identified, then the method ends. If one or more clinical signs have been identified, the method proceeds to block 222.” [0084]
“At block 222, processing logic determines one or more probable root causes for the deviation based on the one or more clinical signs. In other words, processing logic determines one or more root causes associated with the one or more clinical signs. At block 225, processing logic determines whether a planned final condition of the dental arch is achievable without corrective action. If the planned final condition of the dental arch is achievable without corrective action, then the method may end. However, if the planned final condition of the dental arch is not achievable without corrective action, then the method proceeds to block 230.” [0085]
Example of update (replace) treatment plan (recommendation) based on root causes…
“At block 230, processing logic determines one or more corrective actions for the multistage orthodontic treatment plan based on the one or more probable root causes. If multiple treatment plans were generated before treatment was begun, then the one or more corrective actions may include a corrective action to switch from implementation of the multistage orthodontic treatment plan to a different multistage orthodontic treatment plan that has already been generated. For example, a first corrective action may be to switch to a second treatment plan, and a second corrective action may be to switch to a third treatment plan. At block 235, processing logic outputs information of the determined root causes and determined corrective actions, and may additionally output information on the clinical signs that have been identified. Alternatively, or additionally, processing logic may output a graphical representation of the clinical signs, propose an updated treatment plan (e.g., propose one or more corrective actions that will modify the treatment plan and/or propose a new final condition for the dental arch), and/or automatically perform one or more of the determined corrective actions. Any updates to the final condition of the dental arch that may be easier to achieve should still address chief concerns of the patient.” [0086]
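The compare-identify-diagnose flow of blocks 215-230 quoted above from [0084]-[0086] can be sketched as follows. The 0.5 mm tolerance, the sign-to-cause mapping, and all names are hypothetical; the reference does not give numeric thresholds.

```python
# Sketch of Kopelman et al. [0084]-[0086]: compare actual vs. planned
# tooth positions for an intermediate stage, flag clinical signs where
# the deviation exceeds a tolerance, and map signs to probable root
# causes. The tolerance and the cause table are assumptions.

import math

DEVIATION_TOLERANCE_MM = 0.5  # hypothetical clinical threshold

ROOT_CAUSES = {  # hypothetical mapping of clinical signs to causes
    "tooth_off_track": ["aligner not seating", "insufficient wear time"],
}

def identify_clinical_signs(actual, planned):
    """Return signs for teeth deviating from the planned stage.

    actual/planned: dict mapping tooth id -> (x, y, z) position in mm.
    """
    signs = []
    for tooth, planned_pos in planned.items():
        dist = math.dist(actual[tooth], planned_pos)
        if dist > DEVIATION_TOLERANCE_MM:
            signs.append((tooth, "tooth_off_track", dist))
    return signs

def probable_root_causes(signs):
    causes = set()
    for _tooth, sign, _dist in signs:
        causes.update(ROOT_CAUSES.get(sign, []))
    return causes

# Hypothetical example: one tooth (LL3) is 0.8 mm off its planned position.
signs = identify_clinical_signs(
    actual={"LL2": (0.0, 0.0, 0.0), "LL3": (1.0, 0.0, 0.0)},
    planned={"LL2": (0.0, 0.0, 0.0), "LL3": (0.2, 0.0, 0.0)},
)
```

If no signs are identified the flow ends (block 220); otherwise the identified causes feed the corrective-action determination of block 230.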
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of Farkash et al. the ability to replace recommendations as taught by Kopelman et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Kopelman et al. who teaches the advantages of evaluating dental conditions and treatment plans over time and updating such plans.
Regarding claim 19
The system of claim 1, wherein the one or more treatment recommendations comprise at least one of one or more restorative treatment recommendations or one or more orthodontic treatment recommendations, and wherein the computing device is further configured to:
receive a selection of at least one of a restorative treatment recommendation of the one or more restorative treatment recommendations or an orthodontic treatment recommendation of the one or more orthodontic treatment recommendations; and
Farkash et al. teaches:
Generate (receive) treatment recommendation…
“The segmentation engine 260 can use the trained machine learning model to segment the data into individual components. In some examples the data is segmented into different tissue types (e.g., tooth, periodontium, bone, plaque, etc.), features related to different diseases or conditions (e.g., gingival inflammation, cancer lesion, precancer lesion, etc.), and/or different tooth diseases or conditions (e.g., cavities, caries, cracks, etc.). The processing system 258 may store historical or new image data in, for example, the patent database 270. The diagnosis engine(s) 268 can generate one or more diagnoses based on learned features associated with different diseases and conditions (e.g., gingival inflammation, cancer, precancer). Optionally, the treatment recommendation engine 268 can generate one or more treatment recommendations based on the one or more diagnoses. The processing system 258 can send the diagnosis and/or treatment recommendations to the display 256 (and/or other output device) for presentation to a user. In some variations, the displayed images (and/or 3D model) includes color-coded features based on the identified features indicative of a disease or condition. For example, gums effected by gingivitis , cancerous lesions, precancerous lesions, tooth cavities, tooth cracks and/or plaque may each be identified with distinctive colors.” [0065]
Dental restoration…
“The processing system 258 can be configured to automatically generate one or more diagnoses based on machine learning analysis of the scans. The system 258 may be configured to automatically chart, maintain notes, and highlight potential problems related to a diagnosis. The processing system 258 may be configured to generate 3D time lapse videos to help identify and illustrate areas in the patient's anatomy which change over time, suggest a diagnosis (e.g. chipped tooth, gingival recession, caries, etc.) based on machine learning, and generate a treatment plan (e.g., follow up appointment/scan in 6 months, night guard, etc.). The processing system 258 may be configured to analyze a 3D model and/or 2D images (e.g., 2D color and 2D NIR) to provide a diagnosis using machine learning. The processing system 258 may be configured to: identify clinical issues based on single tooth 2D color and NIR images (e.g. caries); provide a full-mouth machine learning diagnosis (e.g., identify clinical issues based on full jaw 2D and 3D data (e.g. malocclusion, tooth wear, acid reflux, etc.)); provide auto gum recession identification based on single 3D scan and 2D images; automatic chart all teeth, crowns, fillings, missing teeth etc. based on 3D scan; and/or automatically identify prepped teeth and type of restoration (crown, inlay, bridge, etc.) based on 3D scan.” [0067]
Example of scanning by orthodontist…
“Scanner 1520 is responsible for scanning casts of the patient's teeth obtained either from the patient or from an orthodontist and providing the scanned digital data set information to computing system 1500 for further processing. In a distributed environment, scanner 1520 may be located at a remote location and communicate scanned digital data set information to computing system 1500 over network interface 1524. The system 1500 can be used to provide one more proposed diagnoses 1522 for one or more oral conditions/diseases based on processing of data set information acquired from computing system 1500.” [0107]
See Treatment Plan below.
generate a treatment plan that is one of a restorative treatment plan, an orthodontic treatment plan, or an ortho-restorative treatment plan based on the selection, the generating comprising:
Identify (generate) the type of restoration (restorative treatment)…
“…The processing system 258 may be configured to: identify clinical issues based on single tooth 2D color and NIR images (e.g. caries); provide a full-mouth machine learning diagnosis (e.g., identify clinical issues based on full jaw 2D and 3D data (e.g. malocclusion, tooth wear, acid reflux, etc.)); provide auto gum recession identification based on single 3D scan and 2D images; automatic chart all teeth, crowns, fillings, missing teeth etc. based on 3D scan; and/or automatically identify prepped teeth and type of restoration (crown, inlay, bridge, etc.) based on 3D scan.” [0067]
See Treatment Plan below.
determining staging for the treatment plan;
Treatment plan with follow-up appointment (determining staging)…
“The processing system 258 can be configured to automatically generate one or more diagnoses based on machine learning analysis of the scans. The system 258 may be configured to automatically chart, maintain notes, and highlight potential problems related to a diagnosis. The processing system 258 may be configured to generate 3D time lapse videos to help identify and illustrate areas in the patient's anatomy which change over time, suggest a diagnosis (e.g. chipped tooth, gingival recession, caries, etc.) based on machine learning, and generate a treatment plan (e.g., follow up appointment/scan in 6 months, night guard, etc.). The processing system 258 may be configured to analyze a 3D model and/or 2D images (e.g., 2D color and 2D NIR) to provide a diagnosis using machine learning. The processing system 258 may be configured to: identify clinical issues based on single tooth 2D color and NIR images (e.g. caries); provide a full-mouth machine learning diagnosis (e.g., identify clinical issues based on full jaw 2D and 3D data (e.g. malocclusion, tooth wear, acid reflux, etc.)); provide auto gum recession identification based on single 3D scan and 2D images; automatic chart all teeth, crowns, fillings, missing teeth etc. based on 3D scan; and/or automatically identify prepped teeth and type of restoration (crown, inlay, bridge, etc.) based on 3D scan.” [0067]
See Treatment Plan below.
receiving modifications to one or more stages of the treatment plan; and
See Treatment Plan below.
outputting an updated treatment plan.
See Treatment Plan below.
Treatment Plan
The combined references teach recommendation. They do not teach a treatment plan with selection, staging, modification, and outputting of an updated treatment plan.
Kopelman et al., also in the business of recommendation, teaches:
Staging treatment plan…
“Some corrective actions may be modifications to the final treatment plan (e.g., to final teeth positions) and/or staging of the teeth positions in the treatment plan (if the treatment plan is a multi-stage treatment plan) that may be made automatically without any input from the dental practitioner. Staging refers to the sequence of movements from current or initial teeth positions to new teeth positions. Staging includes determining which tooth movements will be performed at different phases of treatment. Other corrective actions may be modifications to the treatment plan that are made after approval from the dental practitioner. Other corrective actions may require one or more actions or operations to be performed by the dental practitioner.” [0019]
Example of input (selection) for reduction, extraction (restorative treatment) and alterations (modifications) to treatment plan…
“At block 335 of method 300, processing logic determines whether user input is called for based on one or more determined corrective actions. For example, some corrective actions may require a procedure or operation to be performed by a dental practitioner. Examples of such corrective actions include interproximal reduction, tooth extraction, and attachment placement. Other corrective actions may be performed without any procedure or operation on the part of the dental practitioner. Such corrective actions may be achieved through modification of the treatment plan. In some instances it may be preferable to consult with the dental practitioner before implementing corrective actions. Alternatively, some corrective actions may be automatically performed without input from the dental practitioner. For example, the dental practitioner may select an automatic implementation option. In such an instance, alterations to the treatment plan that affect a final position and/or intermediate staging of the treatment plan may be implemented automatically without input from the dental practitioner. If user input is called for, then the method proceeds to block 340. Otherwise, the method proceeds to block 345.” [0088]
Alterations of stages of treatment plan and outputting corrective actions…
“At block 345, processing logic automatically performs an alteration to one or more stages of the multistage orthodontic treatment plan without first receiving user input to perform any corrective actions. Processing logic may additionally change a final outcome of the orthodontic treatment plan (e.g., a final condition of the dental arch). Accordingly, processing logic may automatically perform one or multiple corrective actions without first receiving input to do so. A notice may be output to a dental practitioner identifying the one or more corrective actions that were automatically performed.” [0089]
Output treatment plan…
“At block 530, processing logic determines one or more corrective actions for the multistage orthodontic treatment plan based on the one or more probable root causes. At block 535, processing logic outputs information of the determined root causes and determined corrective actions. Alternatively, or additionally, processing logic may output a graphical representation of the clinical signs, propose an updated treatment plan (e.g., propose one or more corrective actions that will modify the treatment plan and or propose a new final condition for the dental arch), and/or automatically perform one or more of the determined corrective actions. Any updates to the final condition of the dental arch that may be easier to achieve should still address chief concerns of the patient.” [0098]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to use a treatment plan as taught by Kopelman et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Kopelman et al. who teaches the advantages of treatment plans for treatments with multiple stages.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Pub. No. US 2022/0189611 to Farkash et al. in view of Pub. No. US 2021/0134440 to Menavsky et al.
Regarding claim 7
The system of claim 1, wherein the computing device is further configured to:
receive initial image data of the current state of the dental site;
Farkash et al. teaches:
Example of image captured (current state) of dental arch (site)…
“In some cases, the machine learning model is trained using a subset of collected images of a scan. Using a subset of the images can improve the efficiency of training the machine learning model. FIGS. 17A-17D schematically illustrate how a subset of images from a scan may be selected. FIG. 17A shows various positions of a scanner sensor (e.g., camera) as it progressively scanned around an object (e.g., dental arch). For every image captured by the scanner sensor, there is center point along the projection of the center pixel with a fixed distance. The fixed distance typically ends around the target being imaged, but not necessarily. At an initial stage (stage 0), all the images captured by the scanner are available. When training the machine learning model, a first image that is captured by the scanner sensor at a first position of the scanner sensor is selected. If a second image captured by the scanner sensor at a second position is determined to be too close to the first image at the first position, then the image collected at the second position is not used in training machine learning model. In some cases, a first position of the scanner sensor is too close to the second position if a first angle between a first projection from the center pixel of the first image and a second projection from the center pixel of the second image is smaller than a threshold angle and if and a first distance between the first center point distance of the first pixel and the second center point distance of the second pixel is less than a threshold distance. In this case, the first angle is determined to be less than the threshold angle and the first distance is determined to be less than the threshold distance. Therefore, the second image is determined to be too close to the first image and is not used in training of the machine learning model.” [0071]
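The image-subset filtering described in [0071] — dropping a candidate image when both the angle between center-pixel projections and the difference in center-point distances fall below thresholds — can be sketched as follows. Threshold values, the vector representation, and all names are assumptions for illustration.

```python
# Sketch of the training-image filter in Farkash et al. [0071]: a
# candidate image is kept unless it is "too close" to an already-kept
# image, i.e. both the angle between their center-pixel projections and
# the difference in their center-point distances are below thresholds.
# Threshold values are hypothetical.

import math

ANGLE_THRESHOLD_DEG = 5.0    # hypothetical
DISTANCE_THRESHOLD_MM = 2.0  # hypothetical

def _angle_deg(u, v):
    """Angle in degrees between two 3D projection vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def select_training_images(images):
    """images: list of (projection_vector, center_point_distance_mm)."""
    kept = []
    for proj, dist in images:
        too_close = any(
            _angle_deg(proj, kept_proj) < ANGLE_THRESHOLD_DEG
            and abs(dist - kept_dist) < DISTANCE_THRESHOLD_MM
            for kept_proj, kept_dist in kept
        )
        if not too_close:
            kept.append((proj, dist))
    return kept

# Hypothetical scan: the second image nearly duplicates the first and is
# dropped; the third is 90 degrees away and is kept.
candidates = [
    ((1.0, 0.0, 0.0), 10.0),
    ((1.0, 0.01, 0.0), 10.5),
    ((0.0, 1.0, 0.0), 10.0),
]
kept = select_training_images(candidates)
```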
process the initial image data using a first trained machine learning model, wherein the first trained machine learning model outputs an initial estimation of the one or more oral conditions that are insufficient to diagnose the one or more oral health problems; and
Training machine learning model (first trained)…
“In some cases, the machine learning model is trained using a subset of collected images of a scan. Using a subset of the images can improve the efficiency of training the machine learning model. FIGS. 17A-17D schematically illustrate how a subset of images from a scan may be selected. FIG. 17A shows various positions of a scanner sensor (e.g., camera) as it progressively scanned around an object (e.g., dental arch). For every image captured by the scanner sensor, there is center point along the projection of the center pixel with a fixed distance. The fixed distance typically ends around the target being imaged, but not necessarily. At an initial stage (stage 0), all the images captured by the scanner are available. When training the machine learning model, a first image that is captured by the scanner sensor at a first position of the scanner sensor is selected. If a second image captured by the scanner sensor at a second position is determined to be too close to the first image at the first position, then the image collected at the second position is not used in training machine learning model. In some cases, a first position of the scanner sensor is too close to the second position if a first angle between a first projection from the center pixel of the first image and a second projection from the center pixel of the second image is smaller than a threshold angle and if and a first distance between the first center point distance of the first pixel and the second center point distance of the second pixel is less than a threshold distance. In this case, the first angle is determined to be less than the threshold angle and the first distance is determined to be less than the threshold distance. Therefore, the second image is determined to be too close to the first image and is not used in training of the machine learning model.” [0071]
See Insufficient below.
determine, based on the initial estimation, to at least one of perform additional analysis of the initial image data or recommend generation of the data of a current state of a dental site.
See Insufficient below.
Insufficient
Farkash et al. teaches machine learning and training. They do not teach an output that is insufficient to diagnose.
Menavsky et al., also in the business of machine learning, teaches:
Adding more input data and more training (perform additional analysis) based on threshold percentage and accuracy results…
“In step 240, the artificial intelligence engine 112 is tested using a validation set of test dental images, data generated by dental professionals at step 200 and one or more respective test dental survey response sets. The output of each test (e.g., dental test image(s) and respective dental survey response set) is evaluated in step 250. If the artificial intelligence engine 112 correctly interprets the tests (or a threshold percentage of the tests, such as 95% or more), the training is completed in step 260 otherwise the training process will be continued. This process can be repeated many times in order to improve the efficiency and accuracy of the results, by adding more input data for the training or by using additional algorithms to tune optimal hyperparameters (e.g. Bayesian algorithm) for each condition and input data prepared by those skilled in the art. In some embodiments, re-training can include adjusting one or more parameters of the artificial intelligence engine 112 to improve its accuracy. This loop is repeated until the artificial intelligence engine 112 correctly interprets the tests (or a threshold percentage of the tests) in step 250 at which point the training is completed in step 260.” [0037]
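The iterative train-validate loop of steps 240-260 quoted above from [0037] can be sketched as follows. The callable interfaces, the round guard, and the exact use of the 95% threshold are illustrative stand-ins, not the reference's implementation.

```python
# Sketch of the validation loop in Menavsky et al. [0037]: train,
# evaluate on a held-out test set, and repeat (adding input data or
# re-tuning parameters) until the engine correctly interprets at least
# a threshold percentage of the tests (e.g., 95%). The callables are
# hypothetical stand-ins; MAX_ROUNDS is a guard not in the reference.

ACCURACY_THRESHOLD = 0.95
MAX_ROUNDS = 10

def train_until_accurate(train_fn, evaluate_fn, augment_fn):
    """train_fn() -> model; evaluate_fn(model) -> fraction of tests
    interpreted correctly; augment_fn() adds data / adjusts parameters."""
    for round_num in range(1, MAX_ROUNDS + 1):
        model = train_fn()
        accuracy = evaluate_fn(model)
        if accuracy >= ACCURACY_THRESHOLD:
            return model, accuracy, round_num
        augment_fn()  # add input data or tune hyperparameters, retry
    raise RuntimeError("accuracy threshold not reached")

# Hypothetical simulation: accuracy improves by 3 points per round, so
# the loop converges on the third round.
state = {"accuracy": 0.90}
model, accuracy, rounds = train_until_accurate(
    train_fn=lambda: "engine-112",
    evaluate_fn=lambda m: state["accuracy"],
    augment_fn=lambda: state.update(accuracy=state["accuracy"] + 0.03),
)
```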
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of Farkash et al. the ability to perform additional analysis based on insufficient data as taught by Menavsky et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Menavsky et al., who teaches the need for accuracy in machine learning models; it would be obvious that training such models may require additional data to meet an accuracy requirement.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over the combined references in section (7) above in further view of Pub. No. US 2021/0350530 to Ricci et al.
Regarding claim 8
The system of claim 7, wherein processing of the initial image data is performed on a patient device, wherein processing the data using the plurality of trained machine learning models is performed on a server device, and wherein the initial image data is generated by the patient device, wherein the patient device comprises a mobile computing device of the patient.
Farkash et al. teaches:
“Thus, any of the methods (including user interfaces) described herein may be implemented as software, hardware or firmware, and may be described as a non-transitory computer-readable storage medium storing a set of instructions capable of being executed by a processor (e.g., computer, tablet, smartphone, etc.), that when executed by the processor causes the processor to control perform any of the steps, including but not limited to: displaying, communicating with the user, analyzing, modifying parameters (including timing, frequency, intensity, etc.), determining, alerting, or the like.” [0112]
Farkash et al. teaches a mobile device (e.g., smartphone). They do not teach image data generation and processing performed by a patient device.
Ricci et al., also in the business of mobile devices, teaches:
Cell phone with imaging using toothbrush (patient device)…
“At least one of: a dental image 1002, a dental image landmark 1016 may be obtained from at least one of: a digital x-ray, a digital image, a cell phone captured image, a photographic image, a toothbrush with imaging device, a toothbrush with imaging device being a camera, a film based x-ray, a digitally scanned x-ray, a digitally captured x-ray, an intraoral scanner, a scintillator technology based image, a trans-illumination image, a fluorescence technology based image, a blue fluorescence technology based image, a laser based technology based image, a magnetic resonance image (MRI), a computed tomography (CT) scan based image, a cone beam computed tomography (CBCT) image. Further, a dental professional 1126, a health care professional 1124, an expert, an individual 1128, an e-commerce organization 1130, a researcher, a manufacturer, a business, an application software, a patient portal, a system software, a client device, a processing device may utilize an image capture device or a data storage device to obtain at least one of: a dental image 1002, a dental image landmark 1016 wherein the image capture device includes one or more of: an image capture device, an x-ray equipment, a digital camera, an intraoral camera, a cell phone camera, an intraoral scanner, a scintillator counter, an indirect or direct flat panel detector (FPD), a charged couple device (CCD), a phosphor plate radiography device, a picture archiving and communication system (PACS), a photo-stimulable phosphor (PSP) device, a wireless complementary metal-oxide-semiconductor (CMOS) device. Further at least one of: a dental image 108, a dental image landmark, an image, an image landmark may be obtained from at least one of: Facebook, Inc., Instagram, Snap Inc., Apple Inc., Microsoft, Inc., Alphabet, Inc., Snowflake, Inc., Datadog, Inc., Amazon.com, Inc., Align Technology Inc., Smile Direct Club, Inc., Cube Click, Inc.” [0084]
Server with machine learning…
“The process may be followed by training an aggregate server to use at least one of: machine learning, deep learning to match and identify a real time confidence score for a real time dental image dataset 1030 to at least one of: a supervised annotated dental treatment recommendation dataset, an unsupervised annotated dental treatment recommendation dataset 1026 to produce at least one of: a first real time confidence score for a real time dental treatment recommendation 1028, a second real time confidence score for a real time dental treatment recommendation 1029, a multiple real time confidence score for a real time dental treatment recommendation 1031 and provide a real time confidence score to a real time dental treatment recommendation dataset 1014. At least one of: a supervised annotated dental treatment recommendation dataset 1018, an unsupervised annotated dental treatment recommendation dataset 1026 may be at least one of: obtained, annotated from at least one of: a dental professional 1126, a health care professional 1124, an expert, an individual 1128, an e-commerce organization 1130, an artificial intelligence system 1106, a researcher, a manufacturer, a business, an application software, a patient portal, a system software, a client device, a processing device.” [0087]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to use various devices to collect data and perform data analysis as taught by Ricci et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Ricci et al. who teaches the various devices are capable of performing various functions, and the combined references benefit by using existing devices to perform their various functions.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Pub. No. US 2022/0189611 to Farkash et al. in view of Pub. No. US 2014/0324919 to Badawi and in view of Pub. No. US 2008/0062429 to Liang et al.
Regarding claim 10
The system of claim 1, wherein the one or more oral conditions and the one or more oral health problems comprise a caries, wherein the data comprises a) at least an occlusal portion of a three- dimensional (3D) surface of the dental site or a color image of the dental site and b) an x-ray image of the dental site, and wherein the computing device is further configured to:
Farkash et al. teaches:
X-ray image…
“In any of these methods, a trained machine learning model is further trained based on X-ray image data, periodontal chart data and visual inspection/tactile data. The trained machine learning model may be further trained based on NIR spectroscopy data.” [0010]
3D scans with dental arch (surface of dental site)…
“The scanning system 254 may include a computer system configured to scan a patient's oral cavity, including the periodontium and/or the teeth. In some instances, the scanning system 254 is configured to scan a dental arch of the patient, which includes at least a portion of a patient's dentition formed by the patient's maxillary and/or mandibular teeth, and which may be viewed from an occlusal perspective. The scanning system 254 may include memory, one or more processors, and/or sensors to detect contours on a patient's dental arch. The scanning system 254 may be implemented as a camera, an intraoral scanner, an x-ray device, an infrared device, fluorescence imaging device, etc. The scanning system 254 may be configured to produce 3D and/or 2D scans of the patient's dental arch. The scanning system 254 may be configured to receive 2D or 3D scan data taken previously or by another system. The display system 256 may include a computer system configured to display at least a portion of the periodontium and/or teeth. The display 256 may be implemented as part of a computer system and/or as a display of a dedicated intraoral scanner.” [0062] Inherent with scan of dental arch is surface of dental site.
“The machine learning model may be trained based on any of a number of data set and may be customized based on the target conditions/diseases for detection and/or a particular patient. For example, the machine learning model may be trained based on input from: image data from previous 3D scans of the oral cavity (e.g., surface, color and NIR data) of the same patient or of one or more other patients; X-ray images of the oral cavity of the same patient and/or of one or more other patients,..” [0064]
determine a depth of the caries based on analysis of the x-ray image;
Farkash et al. teaches:
Depth…
“The intraoral scanner 101 can be configured to generate a volumetric model, which includes a virtual representation of an object in 3D in which internal regions (structures, etc.) are arranged within the volume in three physical dimensions in proportion and relative to the other internal and surface features of the object which is being modeled. For example, a volumetric representation of the teeth, gums and/or bone may include the outer surface as well as internal structures within the teeth and gums (beneath the surfaces of the teeth and gums) proportionately arranged relative to the teeth, gums and/or bone. The volumetric model can include a combination of 2D color images (surface images) and infrared (e.g., NIR) images captured during one or more scans of the patient's oral cavity. The volumetric model can be that a section in a way that substantially corresponds to a section through the teeth, gums and/or bone, showing position and size of internal structures. A volumetric model may be section from any (e.g., arbitrary) direction and correspond to equivalent sections through the object being modeled. A volumetric model may be electronic or physical. A physical volumetric model may be formed, e.g., by 3D printing and/or using one or more other manufacturing technologies. The volumetric models described herein may extend into the volume completely (e.g., through the entire volume of the teeth, gums and/or bone) or partially (e.g., into the volume being modeled for some minimum depth, e.g., 2 mm, 3 mm, 4 mm, 5 mm, 6 mm, 7 mm, 8 mm, 9 mm, 10 mm, 12 mm, etc.).” [0057]
See Depth and Area below.
determine a surface area of the caries based on analysis of the occlusal portion of the 3D surface or the color image; and
Outer surface with 3D object…
See paragraph [0057] of Farkash et al., quoted above, which describes the outer surface of the object represented in the volumetric model.
3D and areas that change over time…
“The processing system 258 can be configured to automatically generate one or more diagnoses based on machine learning analysis of the scans. The system 258 may be configured to automatically chart, maintain notes, and highlight potential problems related to a diagnosis. The processing system 258 may be configured to generate 3D time lapse videos to help identify and illustrate areas in the patient's anatomy which change over time, suggest a diagnosis (e.g. chipped tooth, gingival recession, caries, etc.) based on machine learning, and generate a treatment plan (e.g., follow up appointment/scan in 6 months, night guard, etc.). The processing system 258 may be configured to analyze a 3D model and/or 2D images (e.g., 2D color and 2D NIR) to provide a diagnosis using machine learning. The processing system 258 may be configured to: identify clinical issues based on single tooth 2D color and NIR images (e.g. caries); provide a full-mouth machine learning diagnosis (e.g., identify clinical issues based on full jaw 2D and 3D data (e.g. malocclusion, tooth wear, acid reflux, etc.)); provide auto gum recession identification based on single 3D scan and 2D images; automatic chart all teeth, crowns, fillings, missing teeth etc. based on 3D scan; and/or automatically identify prepped teeth and type of restoration (crown, inlay, bridge, etc.) based on 3D scan.” [0067]
See Depth and Area below.
estimate a volume of the caries based on the depth of the caries and the surface area of the caries.
Volumetric models…
See paragraph [0057] of Farkash et al., quoted above, which describes the volumetric model.
See Volume below.
Depth and Area
Farkash et al. teaches x-ray imaging. They also teach 3D imaging of caries. They do not teach determining the depth and area of caries.
Badawi also in the business of x-ray teaches:
Data from x-ray or image or other information…
“In some examples, the dental data set can be linked to an x-ray, image or other information regarding a patient's dentition.” [0156]
Tooth decay with area, size, depth, etc…
“As noted in the example above, in some instances, the generated charting text may be less specific than the actual information in the data set. For example, while an area of tooth decay may be specifically defined by way of one or more parameters in the data set in terms of location, size, depth, etc., the generated charting text may only indicate that the caries is detected on a particular face of a particular tooth. In some instances, this may provide simpler, charting information without extraneous details.” [0075]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of Farkash et al. the ability to use various imaging devices to measure caries as taught by Badawi since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Badawi who teaches using images to measure various parameters of caries.
Volume
The combined references teach area and depth. They do not teach volume.
Liang et al., also in the business of measuring area and depth, teaches:
Determine volume from size (area) and depth…
“Optical coherence tomography (OCT) is a non-invasive imaging technique that employs interferometric principles to obtain high resolution, cross-sectional tomographic images of internal microstructures of the tooth and other tissue that cannot be obtained using conventional imaging techniques. Due to differences in the backscattering from carious and healthy dental enamel OCT can determine the depth of penetration of the caries into the tooth and determine if it has reached the dentin enamel junction. From area OCT data it is possible to quantify the size, shape, depth and determine the volume of carious regions in a tooth.” [0125]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to determine volume as taught by Liang et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Liang et al. who teaches that volume can be determined from various parameters including the shape and depth of caries.
The combined references teach area, depth and volume. They do not explicitly teach how volume is estimated. However, one of ordinary skill in the art would recognize that volume may be estimated as area times depth.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined references with the knowledge available to such an artisan that volume is the result of area times depth. This would have been known work in the field of endeavor prompting variations of it in the same field, based on the use of existing known mathematical formulas, and would have provided predictable results.
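For illustration only, the area-times-depth rationale stated above can be rendered as a minimal sketch. The function name, units, and values below are hypothetical and are not drawn from any cited reference:

```python
# Illustrative sketch only: a minimal rendering of the rationale that a
# caries volume can be estimated as surface area multiplied by depth.
# The function name, units, and values are hypothetical.
def estimate_caries_volume(surface_area_mm2: float, depth_mm: float) -> float:
    """Approximate lesion volume (mm^3) as surface area (mm^2) times depth (mm)."""
    if surface_area_mm2 < 0 or depth_mm < 0:
        raise ValueError("measurements must be non-negative")
    return surface_area_mm2 * depth_mm

# Example: a lesion with a 4.0 mm^2 occlusal surface area and 1.5 mm depth
print(estimate_caries_volume(4.0, 1.5))  # prints 6.0
```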
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over the combined references in section (9) above in further view of Pub. No. US 2018/0263733 to Pokotilov et al.
Regarding claim 11
The system of claim 10, wherein the computing device is further configured to:
determine whether to treat the caries with a crown or a filling based on the estimated volume of the caries.
The combined references teach treatment and estimating volume. They do not teach determining a treatment based on the estimated volume of the caries.
Pokotilov et al. also in the business of treatment teaches:
Determining volume loss of a tooth (caries) for a restorative object (determining whether to treat with a crown or filling)…
“In some implementations, determining tooth mass loss includes determining the volume loss of the tooth. In some implementations, determining the volume loss of the tooth for the at least one of the patient's teeth includes: determining an initial volume of the patient's tooth before preparing the tooth for the interim restorative object; determining a prepared volume of the patient's tooth after preparing the tooth for the interim restorative object; and subtracting the prepared volume from the initial mass.” [0020]
Restorative dentistry includes inlays (fillings for cavities) and crowns…
“…. Alternatively, a target arrangement can be one of some intermediate arrangements for the patient's teeth during the course of orthodontic treatment, which may include various different treatment scenarios, including, but not limited to, instances where surgery is recommended, where interproximal reduction (IPR) is appropriate, where a progress check is scheduled, where anchor placement is best, where palatal expansion is desirable, where restorative dentistry is involved (e.g., inlays, onlays, crowns, bridges, implants, veneers, and the like), etc. As such, it is understood that a target tooth arrangement can be any planned resulting arrangement for the patient's teeth that follows one or more incremental repositioning stages. Likewise, an initial tooth arrangement can be any initial arrangement for the patient's teeth that is followed by one or more incremental repositioning stages.” [0143] Inherent with an inlay is a cavity.
Restorative object that minimize tooth volume loss…
“… At block 4033 the final orthodontic position and the restorative object position is determined. In some embodiments, the final orthodontic position and the restorative object position is determined by selecting the final teeth positions that minimize tooth mass or volume loss among the evaluated positions. In some embodiments, the positions may be determined based on minimaxing certain restorative procedures, such as minimizing the number of crowns. In some embodiments, less invasive procedures, such as the use of veneers, are given priority over more invasive restorative procedures, such as crowns and root canals or whole tooth extraction and prosthetics. This process allows for the evaluation of a new restorative position at each stage of the orthodontic treatment plan. The various interim restorative positions provide multiple options during the orthodontic treatment plan of when to apply the restorative object to the patient. The amount of tooth mass reduction can be different at each interim orthodontic position, thus providing more options.” [0340]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to determine treatment as taught by Pokotilov et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Pokotilov et al. who teaches the advantages of procedures that minimize volume loss.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Pub. No. US 2022/0189611 to Farkash et al. in view of Pub. No. US 2010/0106518 to Kuo.
Regarding claim 12
The system of claim 1, wherein the computing device is further configured to:
determine at least one of a doctor or a group practice treating the patient; and
Farkash et al. teaches:
Identify (determine) doctor and recommend treatment…
“The dataset(s) for building the machine learning may be collected and iteratively modified over time based on a particular patient's oral scans and/or a library of oral scans of different patient. In some variations, the system 258 may be configured to build a questionnaire to identify clinical issues and recommend treatment, identify doctors that would qualify and annotate the dataset, and train machine learning models using the datasets.” [0068]
See Doctor below.
determine one or more treatment preferences associated with at least one of the doctor or the group practice, wherein the one or more treatment recommendations are generated in view of the one or more treatment preferences.
See Doctor below.
Doctor
Farkash et al. teaches treatment. They do not teach determining a doctor or the associated treatment preferences.
Kuo also in the business of treatment teaches:
Treatment goal (recommendation) and based on preferences…
“FIG. 5 is a flowchart illustrating an optimized patient referral routine in accordance with one embodiment of the present disclosure. Referring to FIG. 5, in particular embodiments, a patient profile is generated at step 510 for a prospective orthodontic patient. In one aspect, the patient profile may be automatically or manually generated based on one or more of, for example, the prospective patient's initial orthodontic conditions, the treatment goal, or one or more other treatment parameters including, for example, the desired time period for treatment, the type of treatment plan associated with the desired treatment, cost associated with the treatment, the type of appliance or technique for the treatment, or the prospective patient's desired or preference of treating doctor, for example.” [0027]
Determine doctor for treating a patient…
“Referring still to FIG. 5, at step 530, based on the initial orthodontic condition and/or other treatment parameters associated with the patient profile, a doctor database is queried. For example, in one embodiment, the doctor database may include the types of cases that the doctor is trained to treat, the types of cases that the doctor is willing to treat, historical information of treatment profiles and/or parameters related to the doctor's treatment history including, for example, the treated patients' initial condition, the goal or target condition, the actual outcome of the treated patients, treatment success rates, and/or parameters related to the to treatments performed including, for example, the duration of treatment, the treatment methodology, any ancillary treatment, types of appliances used in the treatment, and the like.” [0030]
Types of treatment goals (treatment preferences) doctors have treated…
“In one aspect, the doctor profiles stored in the doctor database may be represented in a numerical format, organized by descriptive categorical information, or represented in a combination of descriptive and numerical data associated with each doctor profile. Accordingly, the doctor database in one embodiment includes profiles of doctors and associated treatment parameters including, for example, the range and number of cases that the doctor is willing or trained to perform, the types of treatment goals the doctors have treated, and the statistical history of the doctors' treatment success rates. The doctor database may be used to compare an individual doctor's performance on one or more case performance metrics such as treatment time and level of success achieved relative to the general pool of similar cases found in the case database of step 520.” [0031]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of Farkash et al. the ability to determine a doctor as taught by Kuo since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Kuo who teaches the advantages of determining a doctor for treating different treatment goals.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Pub. No. US 2022/0189611 to Farkash et al. in view of Pub. No. US 2004/0073092 to Miles.
Regarding claim 14
The system of claim 1, wherein generating the one or more diagnoses of the one or more oral health problems comprises performing a differential diagnosis of the one or more oral health problems.
Farkash et al. teaches diagnosis. They do not teach differential diagnosis.
Miles also in the business of diagnosis teaches:
Differential diagnosis with lesion (health problem)…
“These objects and others are achieved by various forms of the present invention. According to one aspect of the invention, a method for diagnosing oral lesions using a computer system is disclosed. An image of the oral lesion to be diagnosed is captured, and the user selects one ore more descriptor terms to describe the lesion. The system processes the selection and returns a differential diagnosis list that includes the most probable lesions that should be considered. The user can then view details about each of the lesions in the differential diagnosis list, such as the signs, symptoms, and behavior of the selected lesion to determine which lesion in the list is the best match with the lesion being diagnosed. If the user is not able to find a satisfactory match, the user can either refine the descriptor terms to potentially receive a new differential diagnosis list, or in the alternative, generate a referral report to send the patient out of the office.” [0006]
“If the user determines the current selected lesion is not a good match with the lesion to be diagnosed 20 and there are more lesions in the differential diagnosis list 22, the user can select another lesion from the list and review details about the newly selected lesion 18. If the current selected lesion is not a match 20 and there are no additional lesions in the differential diagnosis list 22, the user can decide to revise the descriptor terms 24 and then again selects the descriptor terms to best describe the lesion and indicate when finished 14. In the alternative, if the user does not want to revise the descriptor terms 24, the user can select an option to have the system generate a referral report 26 to refer the patient out of the office and end the diagnosis process 52.” [0031]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to perform differential diagnosis as taught by Miles since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Miles who teaches the advantages of using differential diagnosis for evaluating patients.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Pub. No. US 2022/0189611 to Farkash et al. in view of Pub. No. US 2022/0319711 to Maji et al.
Regarding claim 15
The system of claim 1, wherein the computing device is further configured to:
determine one or more questions for a doctor to ask the patient based on processing the at least one of the data or the estimations of the one or more oral conditions;
Farkash et al. teaches:
Build a questionnaire…
“The dataset(s) for building the machine learning may be collected and iteratively modified over time based on a particular patient's oral scans and/or a library of oral scans of different patient. In some variations, the system 258 may be configured to build a questionnaire to identify clinical issues and recommend treatment, identify doctors that would qualify and annotate the dataset, and train machine learning models using the datasets.” [0068]
See Questions and Answers below.
output the one or more questions;
See Questions and Answers below.
receive answers to the one or more questions; and
See Questions and Answers below.
update at least one of a) the one or more actionable symptom recommendations, b) the one or more diagnoses of the one or more oral health problems or c) the one or more treatment recommendations based on the received answers.
See Questions and Answers below.
Questions and Answers
The combined references teach recommendations. They do not teach questions and answers.
Maji et al. also in the business of recommendation teaches:
Medical interview with dentist questions (output) and patient answers…
“FIG. 6 is an explanatory view showing examples of questions for evaluation by the instructor. The questions shown in FIG. 6 are examples of the question items when a medical interview by the dentist or the dental hygienist is performed, and the answers of the user (patient) are recorded in the medical record. By obtaining answers from the user, the user's compliance as the instructor's impression can be estimated. The compliance as an impression can be expressed by a compliance index (for example, three classes of high, unstable and low, or ten classes of 1 to 10). The questions and the choices of answers shown in FIG. 6 are examples and the present invention is not limited to the examples of FIG. 6.” [0049]
Display (output) the questions and answers…
“The questions and the choices of answers as shown in FIG. 6 are displayed on the display panel 21 of the instructor terminal 20, and by the instructor selecting answers, the answers are transmitted to the server 50 as subjective data.” [0050]
Advice message (recommendation) updated based on compliance (responses to answers)…
“When the compliance index is comparatively high, an advice message advising to maintain the current health maintenance and promotion actions can be outputted. Further, respecting the user's own consciousness, high-level information such as the decrease in the frequency and the fulfillment of intellectual curiosity can be provided. When the compliance index is comparatively low, an advice message recommending actions that can eliminate the motivation decreasing factor can be outputted. By doing this, the user can receive support information in accordance with the compliance index of his/her own health maintenance and promotion actions, so that the decrease in the motivation can be prevented and the quality of the health maintenance and promotion actions can be increased. The support information can be outputted in the form of letters (text), charts, graphs, characters or avatars, or sound.” [0065]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to provide questions and answers as taught by Maji et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Maji et al. who teaches determining patient compliance for treatment purposes.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Pub. No. US 2022/0189611 to Farkash et al. in view of Pub. No. US 2022/0012815 to Kearney et al.
Regarding claim 16
The system of claim 1, wherein the computing device is further configured to:
receive a selection of one or more recommended treatments; and
See Treatments below.
generate a presentation comprising the one or more selected treatments and associated prognoses of the one or more selected treatments, the presentation comprising talking points for a doctor.
[No Patentable Weight is given to non-functional descriptive claim language of “presentation comprising talking points for a doctor” as there is no interaction with the presentation, and it is just presenting information.]
See Treatments below.
Treatments
Farkash et al. teaches treatments. They do not teach generating a presentation of treatments and associated prognoses.
Kearney et al. also in the business of treatment teaches:
Example of receiving a list of proposed treatments…
“At step 6418, patient data 6402 is received. The data may be received through an interface that enables a user to drag and drop dental images, treatment history, or other data. In particular, inputs 6316 may be output by machine learning models according to the above-described embodiments. Accordingly, any of the data described above as being used by the machine learning models used to obtain the inputs 6316 may be received at step 6418, such as patient demographic data or comorbidities. Step 6418 may further include receiving a list of proposed treatments, such as in the form of CDT codes, textual description, or other descriptor. The data 6402 may also be retrieved from a screen capture, image acquisition hardware retrieval, electronic health records (EHR), integration with practice management software (PMS), or other source of data.” [0774]
Output (generate) text (presentation) as to proposed treatment and whether it is appropriate (prognosis)…
“Each output block 5408 is processed if the output (positive, negative) of the decision block 5406 to which it is connected is produced. Each output block 5408 may include a decision statement 5418 that is either a coded or human-understandable statement specifying the result of the workflow if that output block 5408 is reached. For example, the decision statement 5418 may indicate that an administered or proposed treatment is suitable based on the data associated with the patient treatment data block. The decision statement 5418 may state that an administered or proposed treatment is not deemed appropriate. The output block 5408 may include internal text 5420 viewable to an enterprise executing the workflow but not viewable to an entity submitting the data associated with the patient treatment data block 5404 for evaluation according to the workflow 5402. The output block 5408 may also include external text 5422 that is viewable by this entity. The internal and external text 5420, 5422 may provide a human-readable explanation for the decision statement 5418.” [0668]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to generate a presentation of treatments and prognoses as taught by Kearney et al., since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Kearney et al., who teaches the advantages of relating proposed treatments to prognoses.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Pub. No. US 2022/0189611 to Farkash et al. in view of Pub. No. US 2020/0015764 to Brooks et al.
Regarding claim 18
The system of claim 1, wherein the computing device is further configured to:
determine that a treatment of the one or more treatment recommendations was performed on the patient;
automatically generate an insurance claim for the treatment, wherein automatically generating the insurance claim comprises:
See Insurance Claim below.
selecting or generating an image of the dental site of the patient; and
See Insurance Claim below.
annotating the image based on at least one of the estimations of the one or more oral conditions, the one or more actionable symptom recommendation, the one or more diagnoses of the one or more oral health problems, or the treatment performed on the patient; and
See Insurance Claim below.
submitting the insurance claim to an insurance carrier.
See Insurance Claim below.
Insurance Claim
The combined references teach dental treatment. They do not teach insurance claims.
Brooks et al., also in the business of dental treatment, teaches:
Computer assisted tool for insurance claims…
“The present invention provides a computer-assisted periodontal disease assessment tool that allows experts and expert systems to evaluate the presence and extent of periodontal disease in a patient based on a remotely generated, typically pre-standardized set of patient information including at least radiographic images of the patient dentition and typically also including other patient information. In some examples, these tools can provide periodontal insurers with a uniform, consistent, evidence-based decision-aid for insurance claims examination.” [0006]
Example of select image…
“In specific embodiments of this method, the location on the bone boundary is selected so that the line segment joining the adjacent CEJ-endpoint and the location accurately reflect the maximum bone loss adjacent to the tooth, and marking the digitized radiographic image comprises presenting the image on a monitor in communication with the processor and using an interface in communication with the processor to manually mark the location on the bone boundary and the locations of the CEJ-endpoints on the image. Alternatively, marking the digitized radiographic image may comprise automatically annotating the location on the bone boundary and the locations of the CEJ-endpoints on the image using an instruction set implemented by the processor. Such automated marking may be based on a machine-learning analysis of the results of manual marking of large numbers of patient images.” [0010]
Insurance claim assessment…
“For these reasons, it would be desirable to provide improved methods and systems for periodontal disease assessment. In particular, it would be desirable to provide methods and systems for implementing and facilitating the evaluation of dental images and other patient information by experts and expert systems. Such methods and systems may find use in a variety of circumstances, including but not limited to periodontal insurance claim assessment, where evidence-based claim examination processes can be made more consistent, more accurate, less expensive and more reliable. At least some of these objectives will be met by the inventions described and claimed herein.” [0004]
Tagged (annotated) for insurance dental claim…
“Semantic labeling of the scanned, image-formatted data is performed automatically through the use of appropriate algorithms. This allows individual documents to be tagged as a type of form (e.g., insurer-specific dental claim form), free-form text (e.g., correspondence), a radiograph, photograph or probe depth-chart. Further classification refinement also can be achieved where radiographs are tagged as bitewing, periapical, or panorama, photographs tagged as color, grey-scale or binary and forms tagged by specific layout identifiers (e.g., DD-Form 2017).” [0048]
Data submitted to insurers…
“Step 2: Retrieve and Display Radiographic Evidence. Radiographic data submitted to insurers is prepared in a variety of ways, including computer screenshot, photocopy and direct digital readout. As a result, the radiographs are not of uniform image quality or orientation and are not normally optimized for human interpretation.” [0050]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to file a claim for insurance as taught by Brooks et al., since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Brooks et al., who teaches the advantages of filing claims with insurers and providing supporting information.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
The following prior art teaches at least dental treatment and imaging:
US-20210338387-A1; US-20210343400-A1; US-20230248243-A1; US-20240212153-A1; US-20240335162-A1; US-20250037834-A1; US-20210321872-A1; US-20220202295-A1; US-20210353393-A1; US-20160038092-A1; US-20220180447-A1; US-20220280104-A1; US-20200146646-A1; US-20200305808-A1; US-20230215001-A1; WO-2021211871-A1; WO-2022011342-A1; CN-114418989-A; KR-20210006244-A; KR-20230007124-A; KR-20230024047-A; US-12033742-B2
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KENNETH BARTLEY whose telephone number is (571)272-5230. The examiner can normally be reached Mon-Fri: 7:30 - 4:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SHAHID MERCHANT can be reached at (571) 270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KENNETH BARTLEY/Primary Examiner, Art Unit 3684