Prosecution Insights
Last updated: April 19, 2026
Application No. 18/542,589

VIEWFINDER IMAGE SELECTION FOR INTRAORAL SCANNING

Non-Final Office Action: §102, §103, §112
Filed: Dec 15, 2023
Examiner: TSAI, TSUNG YIN
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: Align Technology, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 82% (favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82% (804 granted / 984 resolved; +19.7% vs TC avg, above average)
Interview Lift: +10.9% among resolved cases with interview (moderate lift)
Typical Timeline: 2y 11m average prosecution; 31 applications currently pending
Career History: 1,015 total applications across all art units

Statute-Specific Performance

§101: 3.6% (-36.4% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§112: 4.3% (-35.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 984 resolved cases.

Office Action

Rejections under §102, §103, and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status: The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Status of claims: claims 1-22 are examined below.

Information Disclosure Statement: The information disclosure statement (IDS) submitted on 5/28/2024 has been considered. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 112: The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claim 2 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 2 recites "unique" and "relative," which are viewed as indefinite because the claim provides no solid reference values for determining what is considered unique and relative. Please amend to clarify.

Claim Rejections - 35 USC § 102: The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Atiya et al (US 2019/0388194).

Claim 1: Atiya et al (US 2019/0388194) teaches the following subject matter: An intraoral scanning system (0010 teaches intraoral scanning with one or more cameras), comprising: an intraoral scanner comprising a plurality of cameras configured to generate a first set of intraoral images, each intraoral image from the first set of intraoral images being associated with a respective camera of the plurality of cameras (0010-0013 teach a scanner with one or more cameras for a plurality of images); and a computing device configured to (figure 1, processor 96): receive the first set of intraoral images (0013-0015 teach that each camera captures a plurality of images); select a first camera of the plurality of cameras that is associated with a first intraoral image of the first set of intraoral images that satisfies one or more criteria (0013-0015 teach a plurality of cameras, where each camera captures a plurality of images (first set of images)); and output the first intraoral image associated with the camera to a display (figure 28b teaches display of an image).

Claim 2: The intraoral scanning system of claim 1, wherein the plurality of cameras comprises an array of cameras, each camera in the array of cameras having a unique position and orientation in the intraoral scanner relative to other cameras in the array of cameras (figure 2A and 0288 teach each camera with an angle θ (theta) between two respective optical axes 46 of at least two cameras 24 of 90 degrees or less, e.g., 35 degrees or less).
Claim 3: The intraoral scanning system of claim 1, wherein the first set of intraoral images is to be generated at a first time during intraoral scanning, and wherein the computing device is further to: receive a second set of intraoral images generated by the intraoral scanner at a second time; select a second camera of the plurality of cameras that is associated with a second intraoral image of the second set of intraoral images that satisfies the one or more criteria; and output the second intraoral image associated with the second camera to the display (0013-0015 teach a plurality of cameras, where each camera captures a plurality of images: the first camera captures the first set, the next camera captures the second set, and so on for each subsequent camera and set; figure 28b teaches display of an image).

Claim 4: The intraoral scanning system of claim 1, wherein the first set of intraoral images comprises at least one of near infrared (NIR) images or color images (paragraph 0319 teaches two-dimensional color images of object 32; 0368 further teaches color capability of the intraoral scanner 1020; 0374 teaches use of infrared).

Claim 5: The intraoral scanning system of claim 1, wherein the computing device is further to: determine, for each intraoral image of the first set of intraoral images, a tooth area depicted in the intraoral image; and select the first camera responsive to determining that the first intraoral image associated with the first camera has a largest tooth area as compared to a remainder of the first set of intraoral images (0018 details various cameras whose fields of view output images of tooth features such as curves, where a 3-D feature (regardless of size) is used to improve accuracy for stitching of the overlap between scans; 0191-0195 teach light field cameras and tooth imaging).

Claim Rejections - 35 USC § 103: The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 6-22 are rejected under 35 U.S.C. 103 as being obvious over Atiya et al (US 2019/0388194) in view of Minchenkov et al (US 2020/0349698).

Claim 6: Atiya et al (US 2019/0388194) teaches the following subject matter above: The intraoral scanning system of claim 5, wherein the computing device is further to perform the following for each intraoral image of the first set of intraoral images. Atiya et al (US 2019/0388194) does not teach the following: input the intraoral image into a trained machine learning model that performs classification of the intraoral image to identify teeth in the intraoral image, wherein the tooth area for the intraoral image is based on a result of the classification. Minchenkov et al (US 2020/0349698) teaches the following: input the intraoral image into a trained machine learning model that performs classification of the intraoral image to identify teeth in the intraoral image, wherein the tooth area for the intraoral image is based on a result of the classification (figure 2, block 238, and paragraph 0080 teach use of trained machine learning for classification to predict dental (tooth) classes with height maps and a probability map; 0077 details the use of a recurrent neural network).

Atiya et al and Minchenkov et al are both in the field of image analysis, especially intraoral scanning with cameras for image data sets, such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill before the effective filing date to modify Atiya et al with Minchenkov et al regarding use of a neural network to improve the accuracy of 3D models of dental arches or other dental sites produced from an intraoral scan, as disclosed by Minchenkov et al in 0067.

Claim 7: Minchenkov et al teach: The intraoral scanning system of claim 6, wherein the classification comprises pixel-level classification or patch-level classification, and wherein the tooth area for the intraoral image is determined based on a number of pixels classified as teeth (0005-0006 teach identifying which pixels belong to dental classes).

Claim 8: Minchenkov et al teach: The intraoral scanning system of claim 6, wherein the computing device is further to: input the first set of intraoral images into a trained machine learning model, wherein the trained machine learning model outputs an indication to select the first camera associated with the first intraoral image (0053, 0055, figure 8, and 0083 teach inputting images to a recurrent neural network for classification).

Claim 9: Minchenkov et al teach: The intraoral scanning system of claim 6, wherein the trained machine learning model comprises a recurrent neural network (figures A-B and 0077 detail the use of a recurrent neural network).
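The selection logic recited in claims 5 and 7 (choose the camera whose image shows the largest tooth area, with area measured as the count of pixels classified as teeth) can be sketched in a few lines. This is an illustrative sketch under stated assumptions: the function name, the boolean-mask input format, and the tie-breaking behavior are not taken from the application or the cited references.

```python
def select_camera(tooth_masks):
    """Return the id of the camera whose image has the largest tooth area.

    tooth_masks maps a camera id to a 2-D list of booleans, where True
    marks a pixel classified as a tooth. The tooth area of an image is
    its count of True pixels (the pixel-level measure of claim 7).
    """
    def tooth_area(mask):
        return sum(row.count(True) for row in mask)

    # max() keeps the first camera encountered on ties (dict insertion order).
    return max(tooth_masks, key=lambda cam: tooth_area(tooth_masks[cam]))
```

For example, a camera whose image contains three tooth pixels would be selected over one whose image contains a single tooth pixel.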
Claim 10: Atiya et al (US 2019/0388194) does not teach the following subject matter: The intraoral scanning system of claim 1, wherein the computing device is further to: determine that the first intraoral image associated with the first camera satisfies the one or more criteria; output a recommendation for selection of the first camera; and receive user input to select the first camera.

Minchenkov et al teach the following subject matter: The intraoral scanning system of claim 1, wherein the computing device is further to: determine that the first intraoral image associated with the first camera satisfies the one or more criteria; output a recommendation for selection of the first camera; and receive user input to select the first camera (the above teaches camera images to display and the criteria, where 0047-0049 detail a user interface with controls and input to enable viewing of the model from any desired direction, and an automatically generated segmentation image (recommendation) for operation of the workflow).

Atiya et al and Minchenkov et al are both in the field of image analysis, especially intraoral scanning with cameras for image data sets, such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill before the effective filing date to modify Atiya et al with Minchenkov et al because such user input would assist in providing an acceptable and accurate representation of the 3D model, as disclosed by Minchenkov et al in paragraph 0048.
Claim 11: Atiya et al (US 2019/0388194) does not teach the following subject matter: The intraoral scanning system of claim 1, wherein the computing device is further to: determine that the first intraoral image associated with the first camera satisfies the one or more criteria, wherein the first camera is automatically selected without user input.

Minchenkov et al teach the following subject matter: The intraoral scanning system of claim 1, wherein the computing device is further to: determine that the first intraoral image associated with the first camera satisfies the one or more criteria, wherein the first camera is automatically selected without user input (0047-0049 detail a user interface with controls and input to enable viewing of the model from any desired direction).

Atiya et al and Minchenkov et al are both in the field of image analysis, especially intraoral scanning with cameras for image data sets, such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill before the effective filing date to modify Atiya et al with Minchenkov et al because such user input would assist in providing an acceptable and accurate representation of the 3D model, as disclosed by Minchenkov et al in paragraph 0048.
Claim 12: Atiya et al (US 2019/0388194) does not teach the following subject matter: The intraoral scanning system of claim 1, wherein the computing device is further to: determine, for each intraoral image of the first set of intraoral images, a score based at least in part on a number of pixels in the intraoral image classified as teeth, wherein the one or more criteria comprise one or more scoring criteria.

Minchenkov et al teach the following subject matter: The intraoral scanning system of claim 1, wherein the computing device is further to: determine, for each intraoral image of the first set of intraoral images, a score based at least in part on a number of pixels in the intraoral image classified as teeth, wherein the one or more criteria comprise one or more scoring criteria (0069-0070, 0078-0079, 0085-0086, and 0107 detail pixel values (scores) for dental-classification probability, relating a pixel to a height).

Atiya et al and Minchenkov et al are both in the field of image analysis, especially intraoral scanning with cameras for image data sets, such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill before the effective filing date to modify Atiya et al with Minchenkov et al regarding the use of pixels and size in a probability map or mask to provide results more accurate than traditional image and signal processing, as disclosed by Minchenkov et al in 0069-0070.

Claim 13: Minchenkov et al teach: The intraoral scanning system of claim 12, wherein the computing device is further to: adjust scores for one or more intraoral images of the first set of intraoral images based on scores of one or more other intraoral images of the first set of intraoral images (0112 details further adjustment of the probability threshold regarding a pixel's classification value; figure 5, 0125, and 0132 teach other adjustments of values, such as per-pixel height values to indicate a surface).
Claim 14: Minchenkov et al teach: The intraoral scanning system of claim 13, wherein the one or more scores are adjusted using a weighting matrix (0062 teaches building a lightweight deep neural network; 0065 details tuning weights via backpropagation across all the layers and nodes to minimize error; figure 3, block 312, and 0086 detail values applied to weights to generate output values).

Claim 15: Minchenkov et al teach: The intraoral scanning system of claim 14, wherein the computing device is further to: determine an area of an oral cavity being scanned based on processing of the first set of intraoral images; and select the weighting matrix based on the area of the oral cavity being scanned (figure 3, block 312, and 0086 teach node weights (weight matrix) across layers for different classes such as excess material, teeth, gum, etc.).

Claim 16: Minchenkov et al teach: The intraoral scanning system of claim 15, wherein the computing device is further to: input the first set of intraoral images into a trained machine learning model, wherein the trained machine learning model outputs an indication of the area of the oral cavity being scanned (figure 3, block 312, and 0086 teach node weights (weight matrix) across layers for different classes such as excess material, teeth, gum, etc.).

Claim 17: Minchenkov et al teach: The intraoral scanning system of claim 15, wherein the area of the oral cavity being scanned comprises one of an upper dental arch, a lower dental arch, or a bite (figure 3A and 0083 detail classification regions for dental sites such as a dental arch).
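Claims 12, 14, and 15 together describe a scoring pipeline: a per-camera raw score from the count of tooth pixels, adjusted by a weighting matrix chosen for the region being scanned (upper arch, lower arch, or bite). A minimal sketch follows, assuming a per-camera weight vector plays the role of the "weighting matrix"; the names and weight values are invented for illustration and come neither from the application nor from the cited references.

```python
# Hypothetical per-region weight vectors; entry i scales camera i's raw score.
WEIGHTS = {
    "upper_arch": [1.0, 1.2, 1.2, 1.0],
    "lower_arch": [1.2, 1.0, 1.0, 1.2],
    "bite": [1.0, 1.0, 1.0, 1.0],
}

def score_cameras(tooth_pixel_counts, region):
    """Weight each camera's tooth-pixel count by the region-specific vector.

    tooth_pixel_counts: raw scores, one per camera, in the same order as
    the weight vector for the given region.
    """
    weights = WEIGHTS[region]
    return [count * w for count, w in zip(tooth_pixel_counts, weights)]
```

With counts [100, 100, 50, 50] and the "upper_arch" weights above, camera 1's score is boosted to 120, so it would win the selection even though camera 0 saw as many tooth pixels.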
Claim 18: Minchenkov et al teach: The intraoral scanning system of claim 15, wherein the computing device is further to: determine, for each intraoral image of the first set of intraoral images, a restorative object area depicted in the intraoral image; and select the first camera responsive to determining that the first intraoral image associated with the first camera has a largest restorative object area as compared to a remainder of the first set of intraoral images (figure 3C and 0098-0099 teach processing with correction (restoration) of the generated 3D model of soft tissue as well as removal of artifacts).

Claim 19: Minchenkov et al teach: The intraoral scanning system of claim 15, wherein the computing device is further to: determine, for each intraoral image of the first set of intraoral images, a margin line area depicted in the intraoral image; and select the first camera responsive to determining that the first intraoral image associated with the first camera has a largest margin line area as compared to a remainder of the first set of intraoral images (0048 details that margin lines accurately represent the model; 0091-0092 teach a threshold (margin line area) for accuracy of machine learning improvement from the processing of a dataset (set of images); paragraph 0112 also teaches a threshold for the probability map for pixel classification; figure 5A and 0114 teach use of a threshold for classifying regions; paragraphs 0123-0124 detail a threshold for pixel classification to identify points (image data)).

Claim 20: Atiya et al (US 2019/0388194) does not teach the following subject matter: The intraoral scanning system of claim 1, wherein the computing device is further to: select a second camera of the plurality of cameras that is associated with a second intraoral image of the first set of intraoral images that satisfies the one or more criteria; and generate a combined image based on the first intraoral image and the second intraoral image.
Minchenkov et al teach the following subject matter: The intraoral scanning system of claim 1, wherein the computing device is further to: select a second camera of the plurality of cameras that is associated with a second intraoral image of the first set of intraoral images that satisfies the one or more criteria (0013-0015 teach a plurality of cameras, where each camera (first, second, third, etc.) captures a plurality of images; 0018 details various cameras whose fields of view output images of tooth features such as curves, where a 3-D feature (regardless of size) is used to improve accuracy for stitching of the overlap between scans); generate a combined image based on the first intraoral image and the second intraoral image (paragraphs 0071-0072 and 0102 teach that multiple individual intraoral images generated sequentially during the intraoral scan are combined to form a blended image, outputting a color image as well); and output the combined image to the display (figure 28b teaches display of an image).

Atiya et al and Minchenkov et al are both in the field of image analysis, especially intraoral scanning with cameras for image data sets, such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill before the effective filing date to modify Atiya et al with Minchenkov et al where the particular blended scan allows distinguishing of different dental classes for good accuracy, as disclosed by Minchenkov et al in 0072.
Claim 21: Atiya et al (US 2019/0388194) does not teach the following subject matter: The intraoral scanning system of claim 1, wherein the computing device is further to: output a remainder of the first set of intraoral images to the display, wherein the first intraoral image is emphasized on the display.

Minchenkov et al teach the following subject matter: The intraoral scanning system of claim 1, wherein the computing device is further to: output a remainder of the first set of intraoral images to the display, wherein the first intraoral image is emphasized on the display (0047-0048 detail registering images to common reference frames, with a 3D model generated and displayed to be checked by a doctor or user; 0071-0072 teach color (emphasis) for different textures, as well as color information used for better quality and as an indicator of artifacts).

Atiya et al and Minchenkov et al are both in the field of image analysis, especially intraoral scanning with cameras for image data sets, such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill before the effective filing date to modify Atiya et al with Minchenkov et al where the particular blended scan allows distinguishing of different dental classes, further with color, for good accuracy, as disclosed by Minchenkov et al in 0072.
Claim 22: Atiya et al (US 2019/0388194) does not teach the following subject matter: The intraoral scanning system of claim 1, wherein the computing device is further to: determine a score for each image of the first set of intraoral images; determine that the first intraoral image associated with the first camera has a highest score; determine the score for a second intraoral image of the first set of intraoral images associated with a second camera that was selected for a previous set of intraoral images; determine a difference between the score for the first intraoral image and the score for the second intraoral image; and select the first camera associated with the first intraoral image responsive to determining that the difference exceeds a difference threshold.

Minchenkov et al teach the following subject matter: The intraoral scanning system of claim 1, wherein the computing device is further to: determine a score for each image of the first set of intraoral images; determine that the first intraoral image associated with the first camera has a highest score; determine the score for a second intraoral image of the first set of intraoral images associated with a second camera that was selected for a previous set of intraoral images; determine a difference between the score for the first intraoral image and the score for the second intraoral image; and select the first camera associated with the first intraoral image responsive to determining that the difference exceeds a difference threshold (the above teaches images and determined values (scores); figure 5A and 0114 teach improving quality with a threshold difference between blended-scan values (first and second sets of intraoral images)).

Atiya et al and Minchenkov et al are both in the field of image analysis, especially intraoral scanning with cameras for image data sets, such that the combined outcome is predictable.
Therefore it would have been obvious to one having ordinary skill before the effective filing date to modify Atiya et al with Minchenkov et al where the calculated threshold difference enables the machine learning model to use fewer computational resources in real time, as disclosed by Minchenkov et al in 0114.

Conclusion: The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Meyer et al (US 2021/0090272), METHOD, SYSTEM AND COMPUTER READABLE STORAGE MEDIA FOR REGISTERING INTRAORAL MEASUREMENTS, teaches use of a dental camera to scan teeth, where a trained deep neural network may automatically detect portions of the input images that can cause registration errors and reduce or eliminate the effect of these sources of registration errors (abstract).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TSUNG-YIN TSAI, whose telephone number is (571) 270-1671. The examiner can normally be reached 7am-4pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /TSUNG YIN TSAI/Primary Examiner, Art Unit 2656
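The switching rule of claim 22 (change the displayed camera only when the new best score beats the previously selected camera's score by more than a threshold) is a hysteresis that suppresses viewfinder flicker between cameras with similar scores. A minimal sketch follows; the function name and threshold value are illustrative assumptions, not anything disclosed in the application or the references.

```python
def pick_camera(scores, prev_cam, threshold=10.0):
    """Apply a claim-22 style difference threshold when switching cameras.

    scores: dict mapping a camera id to its score for the current image set.
    prev_cam: the camera selected for the previous set of images.
    Returns the highest-scoring camera only if it beats prev_cam's current
    score by more than threshold; otherwise stays on prev_cam.
    """
    best = max(scores, key=scores.get)
    if prev_cam in scores and scores[best] - scores[prev_cam] <= threshold:
        return prev_cam  # difference too small: keep the previous camera
    return best
```

So a 95-vs-90 split keeps the previous camera, while a 95-vs-80 split (a difference exceeding the threshold) switches to the new one.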

Prosecution Timeline

Dec 15, 2023
Application Filed
Jan 09, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597118: IMAGE INSPECTION APPARATUS, IMAGE INSPECTION METHOD, AND IMAGE INSPECTION PROGRAM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597237: INFERENCE LEARNING DEVICE AND INFERENCE LEARNING METHOD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12579797: VIDEO PROCESSING METHOD AND APPARATUS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12573029: IMAGE ANNOTATION USING ONE OR MORE NEURAL NETWORKS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12567235: Visual Explanation of Classification (granted Mar 03, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 93% (+10.9%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 984 resolved cases by this examiner. Grant probability derived from career allow rate.
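As a sanity check, the headline figures above are internally consistent: 82% is the rounded career allow rate (804 granted of 984 resolved), and 93% follows from adding the +10.9 percentage-point interview lift. Treating the lift as a simple additive adjustment is an assumption about how the page derives its numbers.

```python
# Career allow rate from the examiner's resolved cases.
allow_rate = 804 / 984               # ~0.817, displayed as 82%

# Interview-adjusted probability, assuming the +10.9pp lift adds directly.
with_interview = allow_rate + 0.109  # ~0.926, displayed as 93%
```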
