Prosecution Insights
Last updated: April 19, 2026
Application No. 18/258,747

SYSTEM AND METHOD FOR DENTAL IMAGE ACQUISITION AND RECOGNITION OF EARLY ENAMEL EROSIONS OF THE TEETH

Final Rejection: §101, §103
Filed: Jun 21, 2023
Examiner: BURLESON, MICHAEL L
Art Unit: 2681
Tech Center: 2600 (Communications)
Assignee: Haleon US Holdings LLC
OA Round: 2 (Final)
Grant Probability: 75% (Favorable)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 68%

Examiner Intelligence

Career Allow Rate: 75% (365 granted / 489 resolved; +12.6% vs TC avg; above average)
Interview Lift: -6.1% (minimal, among resolved cases with interview)
Avg Prosecution: 2y 10m (typical timeline)
Total Applications: 525 across all art units (36 currently pending)

Statute-Specific Performance

§101: 12.1% (-27.9% vs TC avg)
§103: 55.2% (+15.2% vs TC avg)
§102: 21.8% (-18.2% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 489 resolved cases.
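As a sanity check on the headline figures, the career allow rate and the implied Tech Center baseline follow directly from the counts above. This is a minimal arithmetic sketch; it assumes the "+12.6% vs TC avg" delta is in percentage points, and the TC average is back-computed from that delta rather than taken from independent data:

```python
# Sanity-check the examiner statistics quoted above. The granted/resolved
# counts come from the report; the implied Tech Center (TC) average is
# back-computed from the stated +12.6-point delta (an assumption).
granted, resolved = 365, 489
allow_rate_pct = 100 * granted / resolved   # 74.64...%
assert round(allow_rate_pct) == 75          # matches the 75% career allow rate

# Implied TC baseline if this examiner sits 12.6 points above it
tc_avg = allow_rate_pct - 12.6              # ~62.0%
print(f"Career allow rate: {allow_rate_pct:.1f}%; implied TC average: {tc_avg:.1f}%")
```

The same subtraction applied to the statute-specific rows (e.g., §101: 12.1% at -27.9 points implies a ~40% TC baseline for §101 outcomes) lets a reader cross-check any row of the table.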

Office Action

Rejections under §101 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 02/04/26 have been fully considered but they are not persuasive.

Regarding the 35 USC 101 rejection, Applicant states that training the neural network in a first stage using the first training set does not recite a judicial exception (Applicant's Remarks, pages 5-6). Examiner disagrees with Applicant. Although Applicant has amended the claims to further define the neural network model, Applicant has not provided any specific steps as to how the deep learning convolutional neural network model is trained, thus failing to provide a practical application. As amended, the claims still recite a mental process, since using a conventional neural network to analyze images (something a human could do mentally) amounts to applying utterly conventional elements of computing devices to perform the mental process (see MPEP 2106.05(h) and In re Berkheimer). The rejection is maintained.

Regarding claim 1, Applicant states that Kopelman does not teach training a neural network, or tagging digital images and then using the tagged images to train a deep learning neural network (Applicant's Remarks, pages 7-8). Examiner disagrees with Applicant. Kopelman is not relied on to teach training a neural network, but Kopelman does disclose that the AR system may analyze an image of a tooth, multiple teeth, or a dental arch using dental condition profiles generated using machine learning techniques (paragraph 0043). Swank et al. is relied on to teach training a neural network with tagged images. Kopelman teaches identifying areas of interest (AOIs) from reference data 138 (paragraph 0065).
The AOIs identified in the image data are read as tags in an image because they identify specific areas within an image; the AOIs in the image data are highlighted, distinguishing those areas from the rest of the image. The rejection is maintained.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/11/25 has been received. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 and 9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite analyzing images, tagging areas of erosion on each image, and detecting enamel erosion in images. The limitations of 1) receive from a digital device, a set of images; 2) tag one or more areas on each image of the set where there exists an indication of early enamel erosion; 3) provide the tagged image to a deep learning convolutional neural network model to train the deep learning convolutional neural network model to recognize enamel erosion based on the tagged dental image; and 4) detect enamel erosion from the trained deep learning convolutional neural network model, under the broadest reasonable interpretation, are a mental process (see MPEP 2106.04(a)(2)(III)). For example, a human can train themselves to look at or receive a set of images; pick or tag one or more areas on each image of the set where there exists an indication of early enamel erosion by looking at the images; and detect enamel erosion.
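The four claimed steps that the Office Action enumerates (receive, tag, train, detect) form a standard supervised-learning pipeline. A minimal sketch of that pipeline follows, with a toy nearest-centroid classifier standing in for the claimed deep learning convolutional neural network; the synthetic data and the `train`/`detect` helpers are invented for illustration and are not from the application:

```python
# Minimal sketch of the four claimed steps (receive, tag, train, detect),
# with a toy nearest-centroid classifier standing in for the claimed deep
# learning CNN. Purely illustrative; the "images" are synthetic 4-pixel patches.

# Step 1: receive a set of "images" (grayscale intensities in [0, 1])
images = [
    [0.9, 0.8, 0.9, 0.7],  # bright patch: healthy enamel (synthetic)
    [0.2, 0.3, 0.1, 0.2],  # dark patch: early erosion (synthetic)
    [0.8, 0.9, 0.8, 0.8],
    [0.1, 0.2, 0.3, 0.1],
]

# Step 2: tag each image (1 = area shows an indication of early erosion)
tags = [0, 1, 0, 1]

# Step 3: "train" by computing a per-class mean intensity (centroid)
def train(images, tags):
    centroids = {}
    for label in set(tags):
        members = [img for img, t in zip(images, tags) if t == label]
        centroids[label] = sum(sum(m) / len(m) for m in members) / len(members)
    return centroids

# Step 4: detect erosion in a new image via the nearest centroid
def detect(model, image):
    mean = sum(image) / len(image)
    return min(model, key=lambda label: abs(model[label] - mean))

model = train(images, tags)
print(detect(model, [0.15, 0.2, 0.2, 0.1]))  # → 1 (erosion-like patch)
```

The §101 dispute turns on exactly this structure: the examiner's position is that each step, abstracted this far, could be performed mentally, while the applicant argues the trained model itself takes the claims beyond a mental process.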
The use of a deep learning convolutional neural network model to train the neural network model to recognize enamel erosion based on the tagged dental image is merely a generic computer performing a generic function by collecting information using a generic computer process, which can be done in the human mind, thus reciting an abstract idea. This judicial exception is not integrated into a practical application because the idea does not improve the claimed "system". In particular, the system merely collects, analyzes, and reports data. In other words, it is just a claim to collecting and comparing known information, which are steps that can be practically performed in the human mind. The claim does not add a meaningful limitation to the method of image analysis. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform receiving a set of images, tagging areas of the images where erosion exists, and detecting enamel erosion from the images amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. These limitations of wherein the trained neural network model is a deep learning convolutional neural network model, wherein an object of recognition is enamel erosion, under the broadest reasonable interpretation, are a mental process (see MPEP 2106.04(a)(2)(III)), thus reciting an abstract idea. This judicial exception is not integrated into a practical application.
The addition of this limitation does not add a meaningful limitation to the method of image analysis. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a neural network for object detection amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim 3 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. These limitations of wherein the deep learning convolutional neural network model is trained by dental images of persons associated with corresponding early enamel erosion images, under the broadest reasonable interpretation, are a mental process (see MPEP 2106.04(a)(2)(III)), thus reciting an abstract idea. This judicial exception is not integrated into a practical application. The addition of this limitation does not add a meaningful limitation to the method of image analysis. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a neural network amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim 4 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
These limitations of wherein the deep learning convolutional neural network model is capable of receiving input data for the object of recognition, performing object recognition, and outputting the object recognition result, under the broadest reasonable interpretation, are a mental process (see MPEP 2106.04(a)(2)(III)), thus reciting an abstract idea. This judicial exception is not integrated into a practical application. The addition of this limitation does not add a meaningful limitation to the method of image analysis. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a neural network amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim 5 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. These limitations of comprising a server and a network, wherein the trained neural network model is stored on the server, under the broadest reasonable interpretation, are a mental process (see MPEP 2106.04(a)(2)(III)), thus reciting an abstract idea. This judicial exception is not integrated into a practical application. The addition of this limitation does not add a meaningful limitation to the method of image analysis. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a neural network amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. These limitations of a digital device, wherein the digital device is configured to capture the images, and wherein the digital device is electronically coupled to the network, under the broadest reasonable interpretation, are a mental process (see MPEP 2106.04(a)(2)(III)), thus reciting an abstract idea. This judicial exception is not integrated into a practical application. The addition of this limitation does not add a meaningful limitation to the method of image analysis. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a digital device amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim 7 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. These limitations of wherein the image processor is further configured to evaluate the images to determine the degree of enamel erosion, under the broadest reasonable interpretation, are a mental process (see MPEP 2106.04(a)(2)(III)), thus reciting an abstract idea.
This judicial exception is not integrated into a practical application. The addition of this limitation does not add a meaningful limitation to the method of image analysis. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim 8 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. These limitations of an electronic device to receive the detected enamel erosion and transmit the input from the electronic device to a smart phone, under the broadest reasonable interpretation, are a mental process (see MPEP 2106.04(a)(2)(III)), thus reciting an abstract idea. This judicial exception is not integrated into a practical application. The electronic device is a generic computer, and this limitation does not add a meaningful limitation to the method of image analysis. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using an electronic device amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim 10 is rejected under 35 U.S.C.
101 because the claimed invention is directed to an abstract idea without significantly more. These limitations of a light source, under the broadest reasonable interpretation, are a mental process (see MPEP 2106.04(a)(2)(III)), thus reciting an abstract idea. This judicial exception is not integrated into a practical application. The claim is directed to an abstract idea using a generic computer. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a light source amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim 11 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. These limitations of wherein the light source is configured to emit visible and near infrared light, under the broadest reasonable interpretation, are a mental process (see MPEP 2106.04(a)(2)(III)), thus reciting an abstract idea. This judicial exception is not integrated into a practical application. The claim is directed to a generic computer, which is an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a light source amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim 12 is rejected under 35 U.S.C.
101 because the claimed invention is directed to an abstract idea without significantly more. These limitations of wherein the image capturing device is sensitive to visible and near infrared light sources, under the broadest reasonable interpretation, are a mental process (see MPEP 2106.04(a)(2)(III)), thus reciting an abstract idea. This judicial exception is not integrated into a practical application. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using an image capturing device amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim 13 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. These limitations of wherein the image capture is based on a timer, under the broadest reasonable interpretation, are a mental process (see MPEP 2106.04(a)(2)(III)), thus reciting an abstract idea. This judicial exception is not integrated into a practical application. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a timer amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim 14 is rejected under 35 U.S.C.
101 because the claimed invention is directed to an abstract idea without significantly more. These limitations of wherein the image capture is based on a voice command, under the broadest reasonable interpretation, are a mental process (see MPEP 2106.04(a)(2)(III)), thus reciting an abstract idea. This judicial exception is not integrated into a practical application. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a voice command amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim 15 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. These limitations of wherein the enamel erosion detection uses a pre-defined set of anchors specific to recognizing early enamel erosions at ratios 1:1, 1:1.4 and 1.4:1 in the scales of 24, 46 and 64 during region proposal, under the broadest reasonable interpretation, are a mental process (see MPEP 2106.04(a)(2)(III)), thus reciting an abstract idea. This judicial exception is not integrated into a practical application. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using ratios amounts to no more than mere instructions to apply the exception using a generic computer component.
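The claim 15 limitation recites a pre-defined anchor set of the kind used in Faster R-CNN-style region proposal. As a hedged sketch of what such a set looks like, the nine anchors implied by three aspect ratios (1:1, 1:1.4, 1.4:1) and three scales (24, 46, 64) can be enumerated as below; the area-preserving construction is a common convention, not necessarily the applicant's actual method:

```python
# Illustrative sketch of a pre-defined anchor set for region proposal,
# using the ratios (1:1, 1:1.4, 1.4:1) and scales (24, 46, 64) recited in
# claim 15. Generic Faster R-CNN-style construction; purely an assumption
# about how the claimed anchors would be generated.

def make_anchors(ratios, scales):
    """Return (width, height) pairs: one anchor per (ratio, scale) pair.

    Each ratio r = w/h is applied at each scale s so that the anchor keeps
    roughly the area s*s while its sides follow the aspect ratio.
    """
    anchors = []
    for w_over_h in ratios:
        for s in scales:
            # Preserve area s*s with w/h = w_over_h: w = s*sqrt(r), h = s/sqrt(r)
            w = s * (w_over_h ** 0.5)
            h = s / (w_over_h ** 0.5)
            anchors.append((round(w, 1), round(h, 1)))
    return anchors

anchors = make_anchors(ratios=[1.0, 1 / 1.4, 1.4], scales=[24, 46, 64])
print(len(anchors), "anchors:", anchors)  # 9 anchors; first is (24.0, 24.0)
```

That the anchor ratios and scales are spelled out numerically is the applicant's most concrete hook against the "mental process" characterization, since sliding nine area-preserving anchor boxes over every feature-map position is not a step a human performs in the mind.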
Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim 16 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. These limitations of a method for training an image recognition algorithm for early enamel erosion detection using the system, under the broadest reasonable interpretation, are a mental process (see MPEP 2106.04(a)(2)(III)), thus reciting an abstract idea. This judicial exception is not integrated into a practical application. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a system amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-12 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Kopelman (US 20180168781) in view of Swank et al. (US 20190340760).

Regarding claim 1, Kopelman et al. teaches a system for training an image recognition algorithm for early enamel erosion detection (fig. 1a), the system comprising: an image processor connected to a network (fig. 1a; paragraph 0054), the image processor configured to: receive from a digital device, a set of images (intraoral scanner 180 may be used to perform an intraoral scan of a patient's oral cavity; intraoral scan application 109 running on computing device 105 may communicate with intraoral scanner 180 to effectuate the intraoral scan; a result of the intraoral scan may be a sequence (set) of intraoral images that have been discretely generated (paragraph 0075)); tag one or more areas on each image of the set where there exists an indication of early enamel erosion (AOI identifying modules 115 are responsible for identifying areas of interest (AOIs) (tags of areas on each image) from image data 135 received from image capture device 160; such areas of interest may include areas indicative of tooth wear and areas indicative of tooth decay (paragraph 0065)); and detect enamel erosion from the trained neural network model (AOI identifying modules 115 additionally include one or more dental condition identifiers 174.
Each dental condition identifier 174 may be responsible for identifying a particular dental condition in the oral cavity of the patient from the image data 162 (paragraph 0092). A dental condition profile 192 may be trained based on reference data 190. The dental condition identifier 174 may then provide an image or an extracted representation of a dentition feature to the dental condition profile 192 and receive an indication of potential AOIs (paragraph 0096). A dental condition profile 192 may be trained by extracting contents from a training data set and performing machine-learning analysis on the contents to generate a classification model and a feature set for the particular dental condition (paragraph 0097)).

Kopelman et al. fails to teach: provide the tagged image to a deep learning convolutional neural network model to train the neural network model to recognize enamel erosion based on the tagged dental image.

Swank et al. (US 20190340760) teaches providing the tagged image to a deep learning convolutional neural network model to train the neural network model to recognize enamel erosion based on the tagged dental image (image segmentation using neural networks, such as CNNs, can be employed to classify pixels in an image; various classifications can be implemented; for example, some classifications (tagged image) include classifying existence or non-existence of various oral health issues (e.g., tooth decay) (paragraph 0050); image segmentation can detect existence of tooth decay and the number of tooth decays (e.g., the oral cavity 12 may be detected to have 5 instances of tooth decay); an object detection process can be performed in combination with image segmentation to also identify the location of detected tooth decays (paragraph 0051); various collected oral health data (tagged images) is fed to the network as inputs 70.
Inputs can include image-based data, such as images 64, non-image-based data, such as PH sensor data 66, and non-sensor-based oral health data 68; the inputs 64, 66 and 68 may undergo some preprocessing 72 before proceeding into a neural network (NN) 62, which may be a CNN (paragraph 0052)).

Therefore, it would have been obvious to one of ordinary skill in the art to modify Kopelman et al. to include: provide the tagged image to a deep learning convolutional neural network model to train the neural network model to recognize enamel erosion based on the tagged dental image. The reason for doing so would be to identify troubled areas of a tooth.

Regarding claim 2, Kopelman et al. fails to teach wherein the trained neural network model is a deep learning convolutional neural network model, wherein an object of recognition is enamel erosion. Swank et al. teaches wherein the trained neural network model is a deep learning convolutional neural network model (a machine learning module may comprise image segmentation, neural networks, deep learning, a convolutional neural network (CNN), etc. (paragraphs 0012 and 0048)), wherein an object of recognition is enamel erosion (an object detection process can be performed in combination with image segmentation to also identify the location of detected tooth decays (paragraph 0051)). Therefore, it would have been obvious to one of ordinary skill in the art to modify Kopelman et al. to include: wherein the trained neural network model is a deep learning convolutional neural network model. The reason for doing so would be to more accurately identify troubled areas of a tooth.
Regarding claim 3, Kopelman et al. teaches wherein the deep learning convolutional neural network model is trained by dental images of persons associated with corresponding early enamel erosion images (AOI identifying modules 115 may also identify AOIs from reference data 138, which may include patient history, virtual 3D models generated from intraoral scan data, or other patient data; such areas of interest may include areas indicative of tooth wear, areas indicative of tooth decay, areas indicative of receding gums, a gum line, a patient bite, a margin line (paragraph 0065)).

Regarding claim 4, Kopelman et al. fails to teach wherein the deep learning convolutional neural network model is capable of receiving input data for the object of recognition, performing object recognition, and outputting the object recognition result. Swank et al. teaches this limitation (various collected oral health data (tagged images) is fed to the network as inputs 70; inputs can include image-based data, such as images 64, non-image-based data, such as PH sensor data 66, and non-sensor-based oral health data 68; the inputs 64, 66 and 68 may undergo some preprocessing 72 before proceeding into a neural network (NN) 62, which may be a CNN (paragraph 0052)). Therefore, it would have been obvious to one of ordinary skill in the art to modify Kopelman et al. to include: wherein the deep learning convolutional neural network model is capable of receiving input data for the object of recognition, performing object recognition, and outputting the object recognition result. The reason for doing so would be to more accurately identify troubled areas of a tooth.
Regarding claim 5, Kopelman et al. teaches comprising a server and a network, wherein the trained neural network model is stored on the server (the machine may operate in the capacity of a server (paragraph 0282 and fig. 30)).

Regarding claim 6, Kopelman et al. teaches a digital device (intraoral scanner 180 is an intraoral digital scanner (paragraph 0074)), wherein the digital device is configured to capture the images (a result of the intraoral scan may be a sequence (set) of intraoral images that have been discretely generated (paragraph 0075)), and wherein the digital device is electronically coupled to the network (computing device 105 may be a computing device connected to the intraoral scanner 180 (paragraph 0072)).

Regarding claim 7, although Kopelman et al. teaches a confidence determination (the dental condition identifier determines a confidence level for the determined classification; if the confidence value for the dental condition is 100%, then it is more likely that the decision that the dental condition is present (or not present) is accurate than if the confidence value is 50%, for example (paragraph 0098)), Kopelman et al. fails to teach wherein the image processor is further configured to evaluate the images to determine the degree of enamel erosion. Swank et al. teaches this limitation (a confidence measure module can be added to state a confidence measure regarding any identified oral health issue; the confidence measure can be used, for example, to decide whether visiting a dentist is warranted; for example, a 25% confidence in a tooth decay detection may not be enough for some users to schedule an appointment with their dentists (paragraph 0055)). Therefore, it would have been obvious to one of ordinary skill in the art to modify Kopelman et al. to include: wherein the image processor is further configured to evaluate the images to determine the degree of enamel erosion.
The reason for doing so would be to more accurately determine the extent of the troubled areas of a tooth.

Regarding claim 8, Kopelman et al. fails to teach an electronic device to receive the detected enamel erosion and transmit the input from the electronic device to a smart phone. Swank et al. teaches this limitation (the communication interface 44 can transmit the oral health data collected by the sensors 18 to a computing device 50; examples of computing device 50 can include a smart mobile phone (paragraph 0041)). Therefore, it would have been obvious to one of ordinary skill in the art to modify Kopelman et al. to include: an electronic device to receive the detected enamel erosion and transmit the input from the electronic device to a smart phone. The reason for doing so would be to collect information on a tooth to be analyzed.

Regarding claim 9, Kopelman et al. teaches an image acquisition system for early enamel erosion detection, the system comprising: an image capturing device (intraoral scanner 180 may be used to perform an intraoral scan of a patient's oral cavity; intraoral scan application 109 running on computing device 105 may communicate with intraoral scanner 180 to effectuate the intraoral scan.
A result of the intraoral scan may be a sequence (set) of intraoral images that have been discretely generated (paragraph 0075)); and a display device operatively connected to the image capturing device (computing device 105 may be separate from the AR display 150, but connected through either a wired or wireless connection to a processing device in the AR display 150 (paragraph 0054)); wherein the image acquisition system is configured to: capture an image of a user's exposed teeth (intraoral scanner 180 may be used to perform an intraoral scan of a patient's oral cavity (paragraph 0075 and Figs. 8a and 8b)); and receive and display the analyzed image on the display device (AR display module 118 is responsible for determining how to present and/or call out the identified areas of interest on the AR display 150. AR display module 118 may provide indications or indicators highlighting identified AOIs. In one embodiment, AR display module 118 includes a visual overlay generator 184 that is responsible for generating the visual overlay 164 that is superimposed over a real-world scene viewed by a dental practitioner. The visual overlay generator 184 may determine a visual overlay for an AOI identified by one or more of the AOI identifying modules 115 (paragraph 0119)). Kopelman et al. fails to teach transmitting the obtained image to a trained convolutional neural network (CNN) that analyzes the obtained image by detecting and labeling dental pathologies to yield an analyzed image. Swank et al. teaches this limitation (image segmentation using neural networks, such as CNNs, can be employed to classify pixels in an image. Various classifications can be implemented. For example, some classifications (tagged image) include classifying existence or non-existence of various oral health issues (e.g., tooth decay) (paragraph 0050)).
Image segmentation can detect the existence of tooth decay and the number of tooth decays (e.g., the oral cavity 12 may be detected to have 5 instances of tooth decay). An object detection process can be performed in combination with image segmentation to also identify the location of detected tooth decays (paragraph 0051). Various collected oral health data (tagged images) is fed (transmitted) to the network as inputs 70. Inputs can include image-based data, such as images 64, non-image-based data, such as pH sensor data 66, and non-sensor-based oral health data 68. The inputs 64, 66 and 68 may undergo some preprocessing 72 before proceeding into a neural network (NN) 62, which may be a CNN (paragraph 0052). Therefore, it would have been obvious to one of ordinary skill in the art to modify Kopelman et al. to include: transmit the obtained image to a trained CNN that analyzes the obtained image by detecting and labeling dental pathologies to yield an analyzed image. The reason for doing so would be to more accurately identify troubled areas of a tooth.

Regarding claim 10, Kopelman et al. teaches a light source (light sources that may be mounted to the AR display 150 (paragraph 0093)).

Regarding claim 11, Kopelman et al. teaches wherein the light source is configured to emit visible and near infrared light (the light sources may emit ultraviolet light, infrared radiation, or other wavelength radiation (paragraphs 0061 and 0093)).

Regarding claim 12, Kopelman et al. teaches wherein the image capturing device is sensitive to visible and near infrared light sources (image capture device 160 may include one or more light sources to illuminate a patient for capturing images (paragraph 0061)).
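The mechanism cited from Swank paragraphs 0050-0051 — pixel-level segmentation, then counting and localizing each decay instance — reduces, once a binary mask exists, to connected-component labeling. A stdlib-only sketch under that assumption (the mask would come from a CNN pixel classifier; function and variable names are illustrative, not from the reference):

```python
from collections import deque

def count_lesions(mask):
    """Count 4-connected regions in a binary segmentation mask and
    return each region's bounding box (min_row, min_col, max_row, max_col).
    `mask` is a plain list of 0/1 rows standing in for CNN output."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                q = deque([(r, c)])          # BFS over one region
                seen[r][c] = True
                lo_r = hi_r = r
                lo_c = hi_c = c
                while q:
                    y, x = q.popleft()
                    lo_r, hi_r = min(lo_r, y), max(hi_r, y)
                    lo_c, hi_c = min(lo_c, x), max(hi_c, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((lo_r, lo_c, hi_r, hi_c))
    return len(boxes), boxes

# Two separate "decay" regions -> a count of 2, with their locations.
mask = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
]
count, where = count_lesions(mask)
print(count, where)  # 2 [(0, 0, 1, 1), (2, 3, 3, 4)]
```

The bounding boxes play the role of the "object detection process" in paragraph 0051: the count says how many instances exist, the boxes say where.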
Regarding claim 16, Kopelman et al. teaches a method for training an image recognition algorithm for early enamel erosion detection using the system of claim 1 (AOI identifying module 115 may use one or more algorithms or detection rules to analyze the shape of a tooth, color of a tooth, position of a tooth, or other characteristics of a tooth to determine if there is any AOI that should be highlighted for a dental practitioner (paragraph 0065)).

Claims 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Kopelman (US 2018/0168781) in view of Swank et al. (US 2019/0340760), and further in view of Geng (US 2005/0088529).

Regarding claim 13, Kopelman et al. in view of Swank et al. teaches all of the limitations of claim 9 but fails to teach wherein the image capture is based on a timer. Geng teaches this limitation (operator may then trigger the handheld camera (2300) using any number of trigger activation methods including, but in no way limited to, finger operation (timer). Once triggered, the handheld camera (2300) may delay for a short period (timer) to eliminate hand movement due to pushing button, etc., and then begins to capture three frames of images (paragraph 0088)). Therefore, it would have been obvious to one of ordinary skill in the art to modify Kopelman et al. in view of Swank et al. to include: wherein the image capture is based on a timer. The reason for doing so would be to conveniently take an image of a tooth.

Regarding claim 14, Kopelman et al. in view of Swank et al. teaches all of the limitations of claim 9 but fails to teach wherein the image capture is based on a voice command. Geng teaches this limitation (operator may then trigger the handheld camera (2300) using any number of trigger activation methods including, but in no way limited to, voice command.
Once triggered, the handheld camera (2300) may delay for a short period (timer) to eliminate hand movement due to pushing button, etc., and then begins to capture three frames of images (paragraph 0088)). Therefore, it would have been obvious to one of ordinary skill in the art to modify Kopelman et al. in view of Swank et al. to include: wherein the image capture is based on a voice command. The reason for doing so would be to conveniently take an image of a tooth.

Allowable Subject Matter

Claim 15 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication should be directed to Michael Burleson, whose telephone number is (571) 272-7460 and fax number is (571) 273-7460. The examiner can normally be reached Monday through Friday from 8:00 a.m. to 4:30 p.m. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Akwasi Sarpong, can be reached at (571) 270-3438.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Michael Burleson
Patent Examiner, Art Unit 2683
July 26, 2025
/MICHAEL BURLESON/
/AKWASI M SARPONG/
SPE, Art Unit 2681
3/9/2026

Prosecution Timeline

Jun 21, 2023
Application Filed
Jul 31, 2025
Non-Final Rejection — §101, §103
Feb 04, 2026
Response Filed
Feb 27, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603965
PRINTING DEVICE SETTING EXPANDED REGION AND GENERATING PATCH CHART PRINT DATA BASED ON PIXELS IN EXPANDED REGION
2y 5m to grant Granted Apr 14, 2026
Patent 12585826
DOCUMENT AUTHENTICATION USING ELECTROMAGNETIC SOURCES AND SENSORS
2y 5m to grant Granted Mar 24, 2026
Patent 12566125
SEQUENCER FOCUS QUALITY METRICS AND FOCUS TRACKING FOR PERIODICALLY PATTERNED SURFACES
2y 5m to grant Granted Mar 03, 2026
Patent 12561548
SYSTEM SIMULATING A DECISIONAL PROCESS IN A MAMMAL BRAIN ABOUT MOTIONS OF A VISUALLY OBSERVED BODY
2y 5m to grant Granted Feb 24, 2026
Patent 12562549
LIGHT EMITTING ELEMENT, LIGHT SOURCE DEVICE, DISPLAY DEVICE, HEAD-MOUNTED DISPLAY, AND BIOLOGICAL INFORMATION ACQUISITION APPARATUS
2y 5m to grant Granted Feb 24, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
75%
Grant Probability
68%
With Interview (-6.1%)
2y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 489 resolved cases by this examiner. Grant probability derived from career allow rate.
