Prosecution Insights
Last updated: April 19, 2026
Application No. 18/294,374

Method, apparatus, and computer-readable recording medium for providing orthodontic status and orthodontic treatment evaluation information based on dental scan data of patient

Non-Final OA: §101, §102
Filed
Feb 01, 2024
Examiner
ELLIOTT, JORDAN MCKENZIE
Art Unit
2666
Tech Center
2600 — Communications
Assignee
Innodtech Inc.
OA Round
1 (Non-Final)
Grant Probability: 45% (Moderate)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 31%

Examiner Intelligence

Career Allow Rate: 45% (9 granted / 20 resolved; -17.0% vs TC avg)
Interview Lift: -13.7% (minimal; 31% allow rate with interview vs 45% without, across resolved cases with interview)
Typical timeline: 2y 10m avg prosecution; 40 applications currently pending
Career history: 60 total applications across all art units
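The headline metrics above are simple derived ratios. A minimal sketch reproducing them from the raw counts shown on this page (the 31% with-interview figure is read off the dashboard, not recomputed; the displayed -13.7% lift presumably reflects unrounded inputs):

```python
granted, resolved = 9, 20                # examiner's career totals shown above
allow_rate = granted / resolved          # 9/20 = 0.45 career allow rate
with_interview = 0.31                    # allow rate for resolved cases with an interview

# lift of interviewing vs the examiner's overall allow rate
interview_lift = with_interview - allow_rate      # roughly -14%

# the "-17.0% vs TC avg" delta implies a Tech Center average allow rate
implied_tc_avg = allow_rate + 0.17                # roughly 62%

print(f"allow rate {allow_rate:.0%}, interview lift {interview_lift:+.0%}, "
      f"implied TC average {implied_tc_avg:.0%}")
```

The implied ~62% Tech Center average is back-calculated from the dashboard's own delta, not an independently sourced figure.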

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 53.3% (+13.3% vs TC avg)
§102: 27.1% (-12.9% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)
Tech Center average is an estimate. Based on career data from 20 resolved cases.
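The per-statute deltas are internally consistent with a single Tech Center baseline; a quick sanity check (rates and deltas taken from the table above):

```python
# per-statute rate and stated delta vs Tech Center average, from the table above
stats = {"101": (0.089, -0.311), "103": (0.533, +0.133),
         "102": (0.271, -0.129), "112": (0.107, -0.293)}

# rate - delta recovers the baseline; every row implies the same TC average (~40%)
implied = {statute: rate - delta for statute, (rate, delta) in stats.items()}
print(implied)
```

Note this ~40% statute-level baseline differs from the ~62% overall allow-rate baseline implied above; the dashboard appears to use different denominators for the two comparisons.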

Office Action

§101 §102
DETAILED ACTION

Claims 1-18 are pending in this application and have been examined using the priority date of 8/10/2021 in accordance with the applicant's claim to foreign priority.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 2/01/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 18 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Regarding claim 18, a "computer readable recording medium" is defined in the specification to include "magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD ROM disks and DVDs" (Originally Filed Specification [0219]), which does not disavow coverage of transitory propagating signals per se by the claimed computer readable recording medium, since the phrase "in at least one embodiment" implies the machine-readable medium is not always a "non-transitory computer-readable recording medium that excludes transitory signals" for every embodiment disclosed.
The broadest reasonable interpretation of a claim drawn to a computer readable medium (also called a machine readable medium, among other variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se, in view of the ordinary and customary meaning of computer readable media. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. 101 as covering non-statutory subject matter. The claims, as defined in the specification, cover both non-statutory and statutory subject matter. A claim drawn to such a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments by adding the limitation "non-transitory" to the claim.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being so interpreted, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

"computing device" in claim 18
"initial image acquisition step" in claim 18
"orthodontic image acquisition step" in claim 18
"orthodontic appliance design generation step" in claim 18
"intermediate image acquisition step" in claim 18
"orthodontic status information provision step" in claim 18
"initial image acquisition unit" of claim 17
"orthodontic image acquisition unit" of claim 17
"orthodontic appliance design generation unit" of claim 17
"intermediate image acquisition unit" of claim 17
"orthodontic status information provision unit" of claim 17

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sherwood (US 20170100212 A1).
Regarding claim 1, Sherwood discloses: A method for providing an orthodontic status and orthodontic treatment evaluation information based on tooth part scan data of a patient (Sherwood, [0008]-[0009], the system takes multiple images of teeth and determines orthodontic alignment shifts based upon them), which is implemented by a computing device including one or more processors and one or more memories storing instructions executable by the processors (Sherwood, [0150], the system has a memory unit in which instructions are stored to be executed; [0147], the system may be run on a computer, which inherently has processors to execute the program), the method comprising:

an initial image acquisition step of acquiring a first tooth image (Sherwood, [0098], multiple high resolution tooth scans can be acquired; [0108], a first step is acquiring either a mold or an image/scan of the patient's teeth (initial image)), which is an image for the patient's teeth arrangement (Sherwood, [0108]), based on received first tooth part scan data when first tooth part scan data, which is three-dimensional scan data acquired by capturing a patient's head, is received (Sherwood, [0108], the scan data can include a radiographic, tomographic, or sonographic scan of the patient's teeth/jaw/gums, etc., which would be obtained from a scan of the head and may be three dimensional);

an orthodontic image acquisition step of, when the acquisition of the first tooth part image is completed, confirming a teeth arrangement state based on the first tooth image through a pre-stored algorithm (Sherwood, [0108], from the data obtained (first tooth images) the pre-treatment tooth arrangement is obtained; [0107], the methods of tooth analysis and arrangement can be performed using computer programs (pre-stored algorithm)), acquiring treatment solution information for correcting the teeth arrangement based on the confirmed teeth arrangement state (Sherwood, [0109]-[0110], the tooth movement paths are determined based upon the captured tooth data to move the teeth into optimal alignment), and acquiring a second tooth image, which is an image for predicted teeth arrangement upon orthodontic completion based on the acquired treatment solution information (Sherwood, [0109]-[0110], the final position of the teeth can be determined after the pre-treatment data is processed, where the final position is the desired and predicted final/post-treatment position);

an orthodontic appliance design generation step of, when the acquisition of the second tooth image is completed, generating a design of a transparent orthodontic appliance for correcting the patient's teeth arrangement into teeth arrangement corresponding to the second tooth image (Sherwood, [0112], after the second tooth image/predicted final result image is computed, the appliance can be generated based on the paths which the teeth must move to reach the corrected/final tooth result image; Figure 2A, emphasis added);

an intermediate image acquisition step of acquiring a third tooth image, which is an image for arrangement of the patient's teeth being corrected (Sherwood, [0125], the aligner can be changed to correct one or more target teeth using input data; [0128], the actual vs. expected position of the target tooth is modeled using a tooth model, where per [0136]-[0137] an x-ray or CBCT is captured to generate parts of the tooth model during a step in which a new aligner is generated to correct a target tooth during treatment), based on received second tooth part scan data when second tooth part scan data, which is new three-dimensional scan data, is received in a process of correcting the patient's teeth arrangement as the patient wears the transparent orthodontic appliance based on the generated design (Sherwood, [0125], the aligner can be changed to correct one or more target teeth using input data which is patient scan data from multiple scans; [0128], [0136]-[0137], an x-ray or CBCT (third tooth image, 3D data) is captured during generation of a new aligner; [0114], the treatment is performed as successive aligners to gradually shift the teeth, so any input images taken after the initial treatment would therefore be analogous to a second and third tooth image taken during corrective treatment); and

an orthodontic status information provision step of, when the acquisition of the third tooth image is completed, acquiring tooth movement vector information about the patient through the first tooth image, the second tooth image, and the third tooth image to generate orthodontic status information for orthodontic treatment of the patient based on the acquired tooth movement vector information to provide the orthodontic status information to a medical personnel account (Sherwood, [0120], for each new aligner in the process, tooth movement paths (tooth vectors) are generated; because this occurs iteratively for each step of treatment, it would therefore occur after all images are captured; [0121], tooth movement paths including angular velocity and maximum allowable displacement (tooth movement vector information) are determined and compared to actual tooth position based on the real-time treatment (orthodontic status and treatment plan adjustments)).
Regarding claim 2, Sherwood discloses: The method of claim 1, wherein the orthodontic image acquisition step includes:

a process start step of starting a malocclusion confirmation process when the acquisition of the first tooth image is completed (Sherwood, [0153], patient tooth data (first tooth images) can be used to look for malocclusions from an indexed database);

a malocclusion classification step of, when the malocclusion confirming process starts, acquiring teeth arrangement state information about the patient by analyzing the first tooth image through the pre-stored algorithm to classify the acquired teeth arrangement state information as one of a plurality of malocclusion type information (Sherwood, [0155]-[0156], the system stores the patient's dentitions (tooth image data and arrangements) and stores a set of parameters for classifying a malocclusion based on the tooth positions and angles; emphasis added); and

a solution information acquisition step of, when the teeth arrangement state information is classified as one of the plurality of malocclusion type information (Sherwood, [0156], the tooth dimensions are classified as a type of malocclusion, such as an overbite, based on the tooth arrangement data acquired from the patient), acquiring treatment solution information about the classified malocclusion type information through a machine learning-based artificial intelligence solution generation algorithm that derives a solution for orthodontic treatment (Sherwood, [0169]-[0174], based on the determination of the malocclusion, the system may be queried to determine if treatment is needed and what treatments are needed based upon four different goals outlined in paragraphs [0170]-[0174]).
Regarding claim 3, Sherwood discloses: The method of claim 1, wherein in the orthodontic status information provision step, tooth movement vector information about each of the patient's teeth is acquired based on a common point included in a first cephalometric image corresponding to the first tooth part scan image and a second cephalometric image corresponding to the second tooth part scan data (Sherwood, [0068], tooth movement vector information is generated for each of the teeth based upon a centerline of the patient's dentition obtained from multiple scans; [0070], a tooth superposition algorithm is used to determine common tooth features across multiple scans to measure movement of each tooth).

Regarding claim 4, Sherwood teaches: The method of claim 3, wherein the common point is a common location located on a patient's cephalic part included in the first cephalometric image and the second cephalometric image (Sherwood, [0070], based on the patient's arch scans, a reference or references are selected to relate the two together, the references being common to the two images), in which the common point includes at least three locations that are not changed even when the patient's teeth arrangement is corrected (Sherwood, [0070], the plurality of teeth that do not move are selected as references (multiple, indicating at least three)), and serves as a reference point for overlapping the first tooth image, the second tooth image, and the third tooth image (Sherwood, [0070], a tooth superposition algorithm is then used to determine common tooth features across multiple scans to measure movement of each tooth).

Regarding claim 5, Sherwood discloses: The method of claim 3, wherein the orthodontic status information provision step includes:

an image overlapping step of generating a prognostic image by overlapping the first tooth image, the second tooth image, and the third tooth image based on the common point included in the first cephalometric image and the second cephalometric image (Sherwood, [0069], the system collects a pretreatment image (first tooth image), a predicted outcome image (second tooth image), and an actual tooth image during correction (third tooth image) to compare during treatment and generate new aligners and treatment plans (prognosis); [0070], each scan is matched using superimposition (image overlay/overlapping) using reference points common to the scans);

an orthodontic progress confirmation step of, when the generation of the prognostic image is completed, generating first tooth movement vector information including first tooth movement direction information and first tooth movement distance information based on the prognostic image by confirming a direction and distance in which each of the teeth is moved by comparing teeth arrangement corresponding to the first tooth image with teeth arrangement corresponding to the third tooth image (Sherwood, [0070], tooth movement is measured by looking at changes between the superimposed scans (completed generation of prognostic image); [0069], the tool compares the actual achieved tooth movement (outcome/third tooth image) to the initial image (first tooth image) and the achieved tooth positions to the predicted tooth positions; [0111], using the initial and final tooth positions, the tooth motion paths (tooth movement vectors), rotations, and translations are calculated, as well as tooth movement direction information for each tooth (a plurality of vectors for each tooth, indicating at least a first tooth movement vector));

an orthodontic progress prediction step of, when a function of the orthodontic progress confirmation step is performed, generating second tooth movement vector information including second tooth movement direction information and second tooth movement distance information based on the prognostic image by confirming a direction and distance in which each of the teeth is expected to be moved by comparing the teeth arrangement corresponding to the first tooth image with teeth arrangement corresponding to the second tooth image (Sherwood, [0070], tooth movement is measured by looking at changes between the superimposed scans; [0111], using the initial and final tooth positions from the images (first and second tooth images), the tooth motion paths (tooth movement vectors), rotations, and translations are calculated, as well as tooth movement direction information for each tooth (a plurality of vectors for each tooth, indicating at least a first and a second tooth movement vector)); and

an information generation step of, when the acquisition of the first tooth movement vector information and the second tooth movement vector information is completed, starting an information generation process by generating orthodontic status information based on the first tooth movement vector information and the second tooth movement vector information (Sherwood, [0113], at various stages in the process after the tooth paths have been defined, the clinician may assess the treatment of the patient and review the treatment plan and paths of the teeth, which the examiner is interpreting as orthodontic status information).
Regarding claim 6, Sherwood discloses: The method of claim 5, wherein in the image overlapping step, the prognostic image is generated by applying a graphic effect such that the first tooth image, the second tooth image, and the third tooth image, which overlap each other based on the common point (Sherwood, [0069], the system collects a pretreatment image (first tooth image), a predicted outcome image (second tooth image), and an actual tooth image during correction (third tooth image) to compare during treatment and generate new aligners and treatment plans (prognosis); [0070], each scan is matched using superimposition (image overlay/overlapping) using reference points common to the scans), are visually distinguished (Sherwood, [0255], when the plurality of images are displayed and overlapped, the images may be displayed with varying levels of opacity so as to visually distinguish the images).

Regarding claim 7, Sherwood discloses: The method of claim 5, wherein the orthodontic progress prediction step includes:

a fourth tooth image acquisition step of, when the direction and distance in which each of the teeth is expected to be moved, generating a plurality of fourth tooth images corresponding to each of a plurality of time points based on the treatment solution information from the first tooth image and the second tooth image (Sherwood, [0121], the system can use finite element analysis to generate the tooth repositioning model for the amount of real elapsed time for each segment of the tooth data; [0113], throughout the process of modeling the iterative position shifts of the teeth to the final alignment, the clinician may have an animated or visual model of the tooth positions displayed to them at each time point, which the examiner is interpreting as a step of generating an image/display of the tooth positions at each point in tooth movement); and

a second tooth movement vector information generation step of, when the generation of the plurality of fourth tooth images is completed, generating the second tooth movement vector information about teeth arrangement included in each of the plurality of fourth tooth images by comparing each of the plurality of fourth tooth images in order of progress, and each of the plurality of time points is at least two time points input by the medical personnel account (Sherwood, [0113], the tooth movement vectors are computed and visually modeled successively for each stage of the tooth alignment process, where the model is a generation of successive "fourth tooth images" and the movement paths are being interpreted as a plurality of "movement vector information"; given that there is a plurality of movement steps, there is at least a first and a second tooth movement path being generated).

Regarding claim 8, Sherwood discloses: The method of claim 7, wherein the information generation step includes:

a direction confirmation step of, when the generation of the first tooth movement vector information and the second tooth movement vector information is completed, confirming that an error rate of a movement axis direction based on the first tooth movement direction information with respect to a movement axis direction based on the second tooth movement direction information is equal to or less than a specified error rate by comparing the second tooth movement direction information with the first tooth movement direction information (Sherwood, [0111], once the plurality of tooth movement paths are generated (after the movement vectors are determined), the tooth motion paths are verified to stay within a rotational and linear translational threshold to verify that the movements are clinically viable and do not result in tooth collision; [0112], the thresholds are set as limits, which indicates that the tooth movement must be less than or equal to the specific movement threshold to be viable; further, this threshold is determined based upon the comparison of the initial position to the final movement position (comparison of the first to the second movement vector information));

a distance confirmation step of confirming that an error rate of a movement distance based on the first tooth movement distance information with respect to a movement distance based on the second tooth movement distance information is equal to or less than the specified error rate by comparing the second tooth movement distance information with the first tooth movement distance information during the direction confirmation step (Sherwood, [0111]-[0112], as mapped for the direction confirmation step, where the comparison of the first to the second movement vector information also includes a comparison of tooth translation (distance)); and

an orthodontic status information generation step of, when result information based on the direction confirmation step and the distance confirmation step is acquired, generating the orthodontic status information indicating a status of orthodontic treatment for the patient's teeth arrangement based on the result information (Sherwood, [0113], the determined and constrained tooth paths (as determined in the mappings of [0111]-[0112]) are then displayed to the clinician for verification, which is analogous to status information being generated/displayed on the patient's teeth).
Regarding claim 9 Sherwood discloses; The method of claim 1, wherein the method for providing an orthodontic status and orthodontic treatment evaluation information based on tooth part scan data of a patient further comprises (Sherwood, [0113] the clinician is displayed a visualization of the tooth movement plan based on initial, current and predicted information in order to adjust treatment): an orthodontic treatment evaluation information provision step, and the orthodontic treatment evaluation information provision step includes: a target value acquisition step of, when a function of the initial image acquisition step is completed, acquiring a malocclusion image for patient's teeth based on the first tooth part scan data (Sherwood, [0179] the patient’s dentition images (first tooth images) that were previously captured are compared to a plurality of malocclusion types, and the type of malocclusion that patient has is selected and diagnosed from the images (malocclusion image is acquired based on the first tooth image)), PNG media_image9.png 72 258 media_image9.png Greyscale PNG media_image10.png 78 258 media_image10.png Greyscale (Sherwood, [0179]) acquiring treatment solution information based on the acquired malocclusion image (Sherwood, [0228] the method includes determining and initial malocclusion and obtaining a reference malocclusion image, and then determining a treatment and final goal dentition from these images), and acquiring an orthodontic target value for correcting the patient's malocclusion based on the acquired treatment solution information (Sherwood, [0291] based on the orthodontic condition or malocclusion information a treatment goal is determined, and [0292] therapies and dental appliances are generated to treat the condition, [0074]-[0075] in generating the aligners to treat a condition, target tooth movement values are determined); an orthodontic completion image acquisition step of, in a state in which the acquisition of the orthodontic 
target value is completed (Sherwood, [0207]-[0209] the target dentition values may be obtained and compared with the actual achieved position values to determine the outcome), acquiring an orthodontic completion image that is an image for corrected teeth arrangement based on the third tooth part scan data when third tooth part scan data (Sherwood, [0066] the “achieved outcome” which is the outcome after completion of orthodontic treatment, is capturing using a scan of the patient’s dentition), which is new three-dimensional scan data acquired by capturing a patient who has completed the orthodontic treatment by the transparent orthodontic appliance (Sherwood, [0066] the “achieved outcome” which is the outcome after completion of orthodontic treatment, is capturing using a scan of the patient’s dentition, this is a scan taken at completion, which would be a new dental scan); and an evaluation information provision step of acquiring an orthodontic achievement value for each of the corrected teeth based on the acquired orthodontic completion image (Sherwood, [0116] the achieved positions of the teeth are evaluated to determine if the position values meet the end condition position values (achievement values) for treatment, if the positions meet the final criteria finite element analysis is performed based on the input data (which includes patient scan data) to determine if they are orthodontically acceptable), and when error information is acquired by comparing the orthodontic achievement value with the orthodontic target value (Sherwood, [0098] the system models and determines discrepancies/errors between target tooth position values and achieved position values (comparing the orthodontic achievement value with the orthodontic target value), generating orthodontic treatment evaluation information (Sherwood, [0098] the system revised the intended/expected position value information based on the comparison), which is evaluation information about the orthodontic 
treatment, based on the acquired error rate to provide the orthodontic treatment evaluation information to the medical personnel account (Sherwood, [0098] the system revises the intended/expected position value information based on the comparison, [0111] once the plurality of tooth movement paths are generated, the tooth motion paths and final positions are verified to stay within a rotational and linear translational threshold to verify that the movements are clinically viable and do not result in tooth collision; this verification is based on the tooth movement thresholds (error rates)). Regarding claim 10 Sherwood discloses: The method of claim 9, wherein the target value acquisition step includes: a malocclusion confirmation start step of starting a malocclusion confirmation process when the first tooth part scan data is received from the medical personnel account (Sherwood, [0179] the patient's dentition images (first tooth images) that were previously captured are compared to a plurality of malocclusion types, and the type of malocclusion that the patient has is selected and diagnosed from the images (malocclusion image is acquired based on the first tooth image)); a malocclusion determination step of, when the malocclusion confirmation process is started, determining patient's malocclusion by analyzing the malocclusion image through a pre-stored malocclusion confirmation algorithm to confirm the patient's teeth arrangement through the analyzed malocclusion image (Sherwood, [0179] the patient's dentition images (first tooth images) that were previously captured are compared to a plurality of malocclusion types, and the type of malocclusion that the patient has is selected and diagnosed from the images (malocclusion image is acquired based on the first tooth image); the patient's malocclusion type is confirmed through comparative assessment of malocclusion images), and classifying the confirmed teeth arrangement as any one of a plurality of malocclusion information
(Sherwood, [0196] the type of malocclusion can be identified, such as overbite, moderate crowding, etc., based on the patient's tooth arrangement); and a solution acquisition step of, when the determination on the patient's malocclusion is completed, acquiring treatment solution information about the patient's malocclusion through a machine learning-based artificial intelligence solution generation algorithm that derives a solution for orthodontic treatment (Sherwood, [0080] in one embodiment artificial neural networks may be trained/used to assess the patient's treatment, conditions and outcomes, [0081] the network may be trained to recognize and recommend dental treatment plans based on dental input data from the patient, [0098] the system may be trained to consider the patient's dental diagnoses, which include malocclusion type). Regarding claim 11 Sherwood discloses: The method of claim 10, wherein in the malocclusion determination step, when the patient's teeth arrangement is confirmed, at least one of a position, a contact relationship between adjacent teeth, a vertical relationship, rotation, and inclination for each of the patient's teeth included in the malocclusion image is confirmed through the pre-stored malocclusion confirmation algorithm (Sherwood, [0068] the system determines tooth movements in all three directions; the rotation of the tooth centerline for the patient's teeth and the vertical centerline of each tooth are determined to be used in generating the orthodontic appliance to correct the patient's teeth. This is a part of the treatment plan generation step, which occurs after malocclusion classification as detailed in [0200]-[0206], where in [0203] this process of iteratively generating rotations and positions of each tooth is described as well).
Regarding claim 12 Sherwood discloses: The method of claim 11, wherein the target value acquisition step includes: a guide application step of, when the acquisition of the treatment solution information is completed by performing a function of the solution acquisition step, applying an orthodontic guide based on the treatment solution information to the malocclusion image through the solution generation algorithm (Sherwood, [0121] the initial untreated tooth image (malocclusion image) has finite element analysis performed on it to simulate the application of the tooth movement paths (tooth movement vectors) to the misaligned teeth; the model performing the analysis is functionally equivalent to the solution generation algorithm); a virtual orthodontic image acquisition step of, as the orthodontic guide based on the treatment solution information is applied to the malocclusion image, acquiring a virtual orthodontic image, which is a virtual image corresponding to teeth arrangement in which the patient's orthodontic treatment is completed, by arranging each of the patient's teeth so as to be in a state in which the patient's orthodontic treatment is completed (Sherwood, [0121] the system generates a virtual animation showing the teeth being repositioned (virtual orthodontic image) according to the movement path information (orthodontic guide) to simulate the end positions of the teeth); and an orthodontic value acquisition step of, when the acquisition of the virtual orthodontic image is completed, acquiring orthodontic target direction information and orthodontic target distance information about each of the teeth by comparing the virtual orthodontic image with the malocclusion image to acquire the orthodontic target value, which is a reference value for correcting the patient's malocclusion (Sherwood, [0121] the maximum allowable displacement (target distance and direction of movement) for each tooth is determined from the step of using the initial tooth position image
(malocclusion image) and the final simulated/predicted image (virtual orthodontic image); this is used to determine the aligner shape to move the teeth to the target position which will correct the misalignment). Regarding claim 13 Sherwood discloses: The method of claim 12, wherein the evaluation information provision step includes: an achievement value acquisition step of, when the orthodontic completion image is acquired by performing a function of the orthodontic completion image acquisition step, acquiring the orthodontic achievement value for each of the corrected patient's teeth by analyzing the orthodontic completion image through the pre-stored malocclusion confirmation algorithm (Sherwood, [0116] the achieved positions of the teeth are evaluated to determine if the position values (achievement values) meet the end condition position values for treatment; if the positions meet the final criteria, finite element analysis is performed based on the input data (which includes patient scan data) to determine if they are orthodontically acceptable; this is based on the final image (orthodontic completion image)); an error value confirmation step of, when the acquisition of the orthodontic achievement value is completed, determining whether the acquired error value is within a range of a specified error value by comparing the orthodontic target value with the orthodontic achievement value to acquire an error value of the orthodontic achievement value for the orthodontic target value (Sherwood, [0116] the achieved positions of the teeth are evaluated to determine if the position values (achievement values) meet the end condition position values for treatment; if the positions meet the final criteria, finite element analysis is performed based on the input data (which includes patient scan data) to determine if they are orthodontically acceptable; this is based on the final image (orthodontic completion image), [0127] a difference/disparity is computed between the
actual end position value for each tooth and the desired end values (achieved vs target value error), [0116] the achieved tooth position is compared to the target tooth position value to confirm whether the two values are sufficiently close to one another or not); and an error information analysis step of, based on a result determined according to a function of performing the error value confirmation step, generating the orthodontic treatment evaluation information by generating error information and analyzing the generated error information through the artificial intelligence solution generation algorithm (Sherwood, [0080] the system may have trained artificial neural networks for performing the orthodontic diagnosis and treatment outcome determination, [0097] this machine learning system updates and modifies treatment plans based upon outcome data as it is received, therefore the machine learning/artificial neural network system would be capable of performing the treatment outcome assessments of [0116], wherein the achieved tooth position is compared to the target tooth position value to confirm whether the two values are sufficiently close to one another or not, and based on this disparity/error value the status of the patient’s treatment is determined (orthodontic treatment evaluation information)). 
Regarding claim 14 Sherwood discloses: The method of claim 13, wherein the orthodontic treatment evaluation information is information obtained by determining whether or not orthodontic treatment for each of the patient's teeth is successful based on the error information (Sherwood, [0116] the end position values are compared to the desired end position values (error information) and then the system determines whether an acceptable end position has been reached based upon this value information; if an acceptable position is not reached, a new aligner is computed to continue treatment), and includes orthodontic improvement information for correcting each of the teeth that has failed the orthodontic treatment when it is determined that the orthodontic treatment for each of the patient's teeth based on the error information has failed (Sherwood, [0116] the end position values are compared to the desired end position values (error information) and then the system determines whether an acceptable end position has been reached based upon this value information; if an acceptable position is not reached, a new aligner is computed to continue treatment such that the new aligner will continue to move the teeth towards the target final position (improvement information for correcting each tooth)), and the orthodontic improvement information is information generated through the artificial intelligence solution generation algorithm (Sherwood, [0080] the system may have trained artificial neural networks for performing the orthodontic diagnosis and treatment outcome determination, [0097] this machine learning system updates and modifies treatment plans based upon outcome data as it is received; therefore the machine learning/artificial neural network system would be capable of performing the treatment outcome assessments of [0116], where the system assesses the tooth position to determine if a new aligner is needed), and is exemplary treatment information derived based on
orthodontic treatment history information about another patient learned through the artificial intelligence solution generation algorithm (Sherwood, [0080] the system may have trained artificial neural networks for performing the orthodontic diagnosis and treatment outcome determination, [0097] this machine learning system updates and modifies treatment plans based upon outcome data as it is received, [0066]-[0067] the system is trained on a dataset of dental input data, treatment data and outcome data to predict and assess current patient data from treatments and outcomes). Regarding claim 15 Sherwood discloses: The method of claim 14, wherein the evaluation information provision step includes: an orthodontic appliance design generation step of, when the orthodontic treatment evaluation information including the exemplary treatment information is provided to the medical personnel account, generating a first design, which is design information about the transparent orthodontic appliance based on the exemplary treatment information (Sherwood, [0116] the system uses the input data and computed tooth movement paths, along with finite element analysis, to generate a design for an aligner to move the teeth along the tooth movement paths; the achieved tooth positions after each step are compared with the target tooth position values, which is analogous to the treatment evaluation information being provided to the clinician/clinician account; then the tooth position differences between the target and achieved values are compared, and if the position value is not acceptable, a new aligner design is generated based on the treatment information); and an orthodontic appliance design provision step of, when the generation of the first design is completed, providing an interface capable of comparing the first design with the second design to the medical personnel account by acquiring second design information, which is design information about the transparent orthodontic
appliance based on the orthodontic target value (Sherwood, [0116] the system uses the input data and computed tooth movement paths, along with finite element analysis, to generate a design for an aligner to move the teeth along the tooth movement paths; the achieved tooth positions after each step are compared with the target tooth position values, which is analogous to the treatment evaluation information being provided to the clinician/clinician account; then the tooth position differences between the target and achieved values are compared, and if the position value is not acceptable, a new aligner design is generated based on the treatment information; further, the first and second aligner shapes are compared if the teeth have reached an acceptable position to determine the best aligner shape). Regarding claim 16 Sherwood discloses: The method of claim 15, wherein when doctor opinion information for changing the design of the transparent orthodontic appliance is received from the medical personnel account, the interface modifies the first design based on the received doctor opinion information (Sherwood, [0113] the clinician can interact with the treatment plan at different stages in the process so that they can view the tooth path information and make changes to the final positions of the teeth, [0114] the clinician can also update the aligner design information prior to manufacture, [0145] the aligner is a polymeric sheet aligner (transparent) made by updating the tooth model and manufacturing according to the tooth model generated).
Regarding claim 17 Sherwood discloses: An apparatus for providing an orthodontic status and orthodontic treatment evaluation information based on tooth part scan data of a patient (Sherwood, [0008]-[0009] the system takes multiple images of teeth and determines orthodontic alignment shifts based upon them), which includes a computing device including one or more processors and one or more memories storing instructions executable by the processors (Sherwood, [0150] the system has a memory unit in which instructions are stored to be executed, [0147] the system may be run on a computer, which inherently has processors to execute the program), the apparatus comprising: an initial image acquisition unit which acquires a first tooth image (Sherwood, [0098] multiple high resolution tooth scans can be acquired, [0108] a first step is acquiring either a mold or an image/scan of the patient's teeth (initial image)), which is an image for patient's teeth arrangement (Sherwood, [0108] a first step is acquiring either a mold or an image/scan of the patient's teeth (initial image)), based on received first tooth part scan data when first tooth part scan data, which is three-dimensional scan data acquired by capturing a patient's head, is received (Sherwood, [0108] a first step is acquiring either a mold or an image/scan of the patient's teeth (initial image); the scan data can include a radiographic, tomographic or sonographic scan of the patient's teeth/jaw/gums etc., which would be obtained from a scan of the head and may be three dimensional); an orthodontic image acquisition unit which, when the acquisition of the first tooth part image is completed, confirms a teeth arrangement state based on the first tooth image through a pre-stored algorithm (Sherwood, [0108] from the data obtained (first tooth images) the tooth arrangement pre-treatment is obtained, [0107] the methods of tooth analysis and arrangements can be performed using computer programs (pre-stored algorithm)), acquires
treatment solution information for correcting the teeth arrangement based on the confirmed teeth arrangement state (Sherwood, [0109] and [0110] the tooth movement paths are determined based upon the captured tooth data to move the teeth into optimal alignment), and acquires a second tooth image, which is an image for predicted teeth arrangement upon orthodontic completion based on the acquired treatment solution information (Sherwood, [0109]-[0110] the final position of the teeth can be determined after the pre-treatment data is processed, where the final position is the desired and predicted final position/post treatment position); an orthodontic appliance design generation unit which, when the acquisition of the second tooth image is completed, generates a design of a transparent orthodontic appliance for correcting the patient's teeth arrangement into teeth arrangement corresponding to the second tooth image (Sherwood, [0112] after the second tooth image/predicted final result image is computed, the appliance can be generated based on the paths which the teeth must move to reach the corrected/final tooth result image); an intermediate image acquisition unit which acquires a third tooth image, which is an image for arrangement of patient's teeth being corrected (Sherwood, [0125] the aligner can be changed to correct one or more target teeth using input data, [0128] the actual vs expected position of the target tooth is modeled using a tooth model, where [0136] and [0137] an x-ray or CBCT is captured to generate parts of the tooth model during a step in which a new aligner is generated to correct a target tooth during treatment), based on received second tooth part scan data when second tooth part scan data, which is new three-dimensional scan data, is received in a process of correcting the patient's teeth arrangement as the patient wears the transparent orthodontic appliance based on the generated design (Sherwood, [0125] the aligner can be changed to correct
one or more target teeth using input data which is patient scan data from multiple scans, [0128] the actual vs expected position of the target tooth is modeled using a tooth model, where [0136] and [0137] an x-ray or CBCT (third tooth image, 3D data) is captured to generate parts of the tooth model during a step in which a new aligner is generated to correct a target tooth during treatment, [0114] the treatment is performed as successive aligners to gradually shift the teeth, so any input images taken after the initial treatment would therefore be analogous to a second and third tooth image taken during corrective treatment); and an orthodontic status information provision unit which, when the acquisition of the third tooth image is completed, acquires tooth movement vector information about the patient through the first tooth image, the second tooth image, and the third tooth image to generate orthodontic status information for orthodontic treatment of the patient based on the acquired tooth movement vector information to provide the orthodontic status information to a medical personnel account (Sherwood, [0120] for each new aligner in the process, tooth movement paths (tooth vectors) are generated; because this occurs for each step of treatment iteratively, it would therefore occur after all images are captured, [0121] tooth movement paths including angular velocity and maximum allowable displacement (tooth movement vector information) are determined and compared to actual tooth position based on the real time treatment (orthodontic status and treatment plan adjustments)).
Regarding claim 18 Sherwood discloses: A computer-readable recording medium that stores instructions for allowing a computing device to perform the following steps (Sherwood, [0150] the system has a memory unit in which instructions are stored to be executed, [0147] the system may be run on a computer, which inherently has processors to execute the program), wherein the steps comprise: an initial image acquisition step of acquiring a first tooth image (Sherwood, [0098] multiple high resolution tooth scans can be acquired, [0108] a first step is acquiring either a mold or an image/scan of the patient's teeth (initial image)), which is an image for patient's teeth arrangement (Sherwood, [0108] a first step is acquiring either a mold or an image/scan of the patient's teeth (initial image)), based on received first tooth part scan data when first tooth part scan data, which is three-dimensional scan data acquired by capturing a patient's head, is received (Sherwood, [0108] a first step is acquiring either a mold or an image/scan of the patient's teeth (initial image); the scan data can include a radiographic, tomographic or sonographic scan of the patient's teeth/jaw/gums etc., which would be obtained from a scan of the head and may be three dimensional); an orthodontic image acquisition step of, when the acquisition of the first tooth part image is completed, confirming a teeth arrangement state based on the first tooth image through a pre-stored algorithm (Sherwood, [0108] from the data obtained (first tooth images) the tooth arrangement pre-treatment is obtained, [0107] the methods of tooth analysis and arrangements can be performed using computer programs (pre-stored algorithm)), acquiring treatment solution information for correcting the teeth arrangement based on the confirmed teeth arrangement state (Sherwood, [0109] and [0110] the tooth movement paths are determined based upon the captured tooth data to move the teeth into optimal alignment), and acquiring a second
tooth image, which is an image for predicted teeth arrangement upon orthodontic completion based on the acquired treatment solution information (Sherwood, [0109]-[0110] the final position of the teeth can be determined after the pre-treatment data is processed, where the final position is the desired and predicted final position/post treatment position); an orthodontic appliance design generation step of, when the acquisition of the second tooth image is completed, generating a design of a transparent orthodontic appliance for correcting the patient's teeth arrangement into teeth arrangement corresponding to the second tooth image (Sherwood, [0112] after the second tooth image/predicted final result image is computed, the appliance can be generated based on the paths which the teeth must move to reach the corrected/final tooth result image); an intermediate image acquisition step of acquiring a third tooth image, which is an image for arrangement of patient's teeth being corrected (Sherwood, [0125] the aligner can be changed to correct one or more target teeth using input data, [0128] the actual vs expected position of the target tooth is modeled using a tooth model, where [0136] and [0137] an x-ray or CBCT is captured to generate parts of the tooth model during a step in which a new aligner is generated to correct a target tooth during treatment), based on received second tooth part scan data when second tooth part scan data, which is new three-dimensional scan data, is received in a process of correcting the patient's teeth arrangement as the patient wears the transparent orthodontic appliance based on the generated design (Sherwood, [0125] the aligner can be changed to correct one or more target teeth using input data which is patient scan data from multiple scans, [0128] the actual vs expected position of the target tooth is modeled using a tooth model, where [0136] and [0137] an x-ray or CBCT (third tooth image, 3D data) is captured to generate parts of the
tooth model during a step in which a new aligner is generated to correct a target tooth during treatment, [0114] the treatment is performed as successive aligners to gradually shift the teeth, so any input images taken after the initial treatment would therefore be analogous to a second and third tooth image taken during corrective treatment); and an orthodontic status information provision step of, when the acquisition of the third tooth image is completed, acquiring tooth movement vector information about the patient through the first tooth image, the second tooth image, and the third tooth image to generate orthodontic status information for orthodontic treatment of the patient based on the acquired tooth movement vector information to provide the orthodontic status information to a medical personnel account (Sherwood, [0120] for each new aligner in the process, tooth movement paths (tooth vectors) are generated; because this occurs for each step of treatment iteratively, it would therefore occur after all images are captured, [0121] tooth movement paths including angular velocity and maximum allowable displacement (tooth movement vector information) are determined and compared to actual tooth position based on the real time treatment (orthodontic status and treatment plan adjustments)). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Nguyen, US 20210259808, teaches an automated generation of an orthodontic treatment based on provided malocclusion tooth scans for treatment of misalignment. Relevant figures 1-4 and relevant paragraphs [001]-[0012], [0024]-[0027], [0029] and [0053]. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN M ELLIOTT whose telephone number is (703)756-5463. The examiner can normally be reached M-F 8AM-5PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /J.M.E./Examiner, Art Unit 2666 /EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Feb 01, 2024
Application Filed
Mar 19, 2026
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573117
METHOD AND DEVICE FOR DEEP LEARNING-BASED PATCHWISE RECONSTRUCTION FROM CLINICAL CT SCAN DATA
2y 5m to grant Granted Mar 10, 2026
Patent 12475998
SYSTEMS AND METHODS OF ADAPTIVELY GENERATING FACIAL DEVICE SELECTIONS BASED ON VISUALLY DETERMINED ANATOMICAL DIMENSION DATA
2y 5m to grant Granted Nov 18, 2025
Patent 12450918
AUTOMATIC LANE MARKING EXTRACTION AND CLASSIFICATION FROM LIDAR SCANS
2y 5m to grant Granted Oct 21, 2025
Patent 12437415
METHODS AND SYSTEMS FOR NON-DESTRUCTIVE EVALUATION OF STATOR INSULATION CONDITION
2y 5m to grant Granted Oct 07, 2025
Patent 12406358
METHODS AND SYSTEMS FOR AUTOMATED SATURATION BAND PLACEMENT
2y 5m to grant Granted Sep 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
45%
Grant Probability
31%
With Interview (-13.7%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 20 resolved cases by this examiner. Grant probability derived from career allow rate.
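The figures above are consistent with the interview lift being applied additively to the career allow rate (45% base, -13.7 point lift, 31% displayed). The sketch below is a hypothetical reconstruction of that arithmetic; the dashboard's actual formula is not disclosed, and the function name and clamping behavior are assumptions for illustration.

```python
def grant_probability_with_interview(career_allow_rate: float,
                                     interview_lift: float) -> float:
    """Hypothetical sketch: adjust the examiner's career allow rate (in
    percent) by the observed interview lift (in percentage points),
    clamped to the valid 0-100% range. The additive model is an
    assumption inferred from the displayed figures, not a documented
    formula."""
    return max(0.0, min(100.0, career_allow_rate + interview_lift))

# 45% career allow rate with a -13.7 point interview lift
print(grant_probability_with_interview(45.0, -13.7))  # ~31.3, displayed as 31%
```

Rounding the result to the nearest whole percent reproduces the 31% "With Interview" figure shown in the projections.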
