Prosecution Insights
Last updated: April 19, 2026
Application No. 18/000,987

COMPUTER-IMPLEMENTED DETECTION AND PROCESSING OF ORAL FEATURES

Status: Non-Final Office Action (§103), OA Round 3
Filed: Dec 07, 2022
Examiner: BUDISALICH, ANDREW STEVEN
Art Unit: 2662 (Tech Center 2600 — Communications)
Assignee: Oral Tech AI Pty Ltd.

Grant Probability: 78% (Favorable); 87% with interview
Expected OA Rounds: 3-4
Estimated Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 78% (36 granted / 46 resolved), above average (+16.3% vs TC avg)
Interview Lift: +8.9% across resolved cases with an interview (moderate)
Typical Timeline: 2y 9m average prosecution; 35 applications currently pending
Career History: 81 total applications across all art units

Statute-Specific Performance

§101: 14.5% (-25.5% vs TC avg)
§102: 5.2% (-34.8% vs TC avg)
§103: 65.6% (+25.6% vs TC avg)
§112: 13.0% (-27.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 46 resolved cases.
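For readers who want to trace the arithmetic, the headline figures above reduce to a few ratios over the examiner's 46 resolved cases. A minimal sketch; the helper name is ours, not from any analytics API:

```python
# Recomputing the dashboard's headline figures from the counts shown above.

def pct(n: int, d: int) -> float:
    """Share of d expressed as a percentage."""
    return 100.0 * n / d

career_allow = pct(36, 46)      # 36 granted / 46 resolved -> 78.26, shown as "78%"
interview_lift = 8.9            # percentage points, from the page above
with_interview = career_allow + interview_lift  # -> 87.16, shown as "87%"

print(f"Career allow rate: {career_allow:.0f}%")    # 78%
print(f"With interview:    {with_interview:.0f}%")  # 87%
```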

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/30/2025 has been entered.

Status of Claims

Claims 1-4, 6-12, 14-24, and 26-29 are pending. Claims 5, 13, 25, and 30-56 are canceled.

Response to Arguments

Applicant's arguments, see pp. 10-15, filed 09/30/2025, with respect to the rejections of Claims 1-4, 6-12, 14-24, and 26-29 under 35 U.S.C. 103 have been fully considered but are moot because Applicant's amendments have altered the scope of the claims and therefore necessitated new grounds of rejection, which are presented below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 3-4, 9-12, 14, and 26-28 are rejected under 35 U.S.C. 103 as being unpatentable over Yoon et al. (KR 20190092699 A) in view of Qian et al. (US 20190325200 A1), Oved (US 20200334897 A1), and Owen (US 20170372128 A1).
Regarding Claim 1, Yoon teaches "A computer-implemented method for assessing or at least partially reconstructing an image of one or more facial regions of a user, comprising: receiving, at a processor, an input image of at least a portion of a face"; (Yoon, Abstract, teaches a simulation system that receives a front picture of the patient and a picture of the patient wearing a dental opener to comprise the virtual image of the patient, i.e., a computer-implemented method for assessing an input image received of a patient's face); "receiving, at a display communicatively coupled to the processor, a user input selection of a template comprising a desired aesthetic for one or more of the classified regions of interest"; (Yoon, Pg. 10 Paras. 7-8 starting with "FIG. 34", teaches a screen which allows a user to select a tooth part for correction with a tooth template, i.e., user input on a display to select a template of a desired aesthetic for a region of interest); "applying one or more characteristics of the selected template to the input image"; (Yoon, Pg. 7 Para. 4 starting with "Then, for image replacement", teaches the size and shape of teeth being adjusted according to the selected tooth template, i.e., applying one or more characteristics of the template to the input image); "and outputting an output image comprising the desired aesthetic of the one or more regions based on the selected template and said applying"; (Yoon, FIG. 15 and Pg. 9 Para. 22 starting with "As shown in Fig. 25", teaches creating an image as a virtual function after treatment of patients wherein a tooth template is applied, i.e., outputting an output image comprising the desired correction of the region based on the selected template and application of characteristics).

However, Yoon does not explicitly teach "using one or more trained machine learning algorithms configured to: segment the input image into one or more regions, identify which of the one or more regions is a region of interest, and classify the regions of interest into one of: a mouth region, a lip region, a teeth region, or a gum region; using a shape predictor algorithm configured to identify a location of the one or more classified regions of interest in the input image, map the location to template coordinates of a template selected by the user from a plurality of template variations corresponding to the regions of interest, and determine whether a rotation of the input image is needed and a re-mapping of the location to template coordinates based on the rotation".

In an analogous field of endeavor, Qian teaches "using one or more trained machine learning algorithms configured to: segment the input image into one or more regions, identify which of the one or more regions is a region of interest, and classify the regions of interest into one of: a mouth region, a lip region, a teeth region, or a gum region"; (Qian, Paras. 4 and 44-45, teaches segmenting a face in an image into at least one organ image block and using neural networks to obtain key point information corresponding to at least one organ of the face, wherein the organ image blocks include a mouth image block and a lip image block, i.e., using machine learning to segment an image into regions and identify regions of interest using key point information wherein regions are classified as a mouth region or lip region).
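To make the claim-1 pipeline concrete: a segmentation model of the kind Qian is cited for would emit a per-pixel class mask, from which the classified regions of interest can be pulled. A minimal sketch with hypothetical class ids; nothing in it comes from the cited references:

```python
import numpy as np

# Hypothetical class ids for the four region types named in claim 1.
CLASSES = {1: "mouth", 2: "lip", 3: "teeth", 4: "gum"}

def regions_of_interest(mask: np.ndarray) -> dict:
    """Given a per-pixel class mask from a trained segmentation model,
    return a bounding box (x0, y0, x1, y1) for each region present."""
    rois = {}
    for cls_id, name in CLASSES.items():
        ys, xs = np.nonzero(mask == cls_id)
        if ys.size:  # region was detected somewhere in the image
            rois[name] = (int(xs.min()), int(ys.min()),
                          int(xs.max()), int(ys.max()))
    return rois
```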
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Yoon by including the machine learning neural network segmentation of an image into regions including the mouth and lips taught by Qian. One of ordinary skill in the art would be motivated to combine the references since it improves the semantic information of the face for a more accurate key point determination (Qian, Para. 2, teaches the motivation of combination to be to improve semantic information of the face through accurate key point determination).

However, the combination of references of Yoon in view of Qian does not explicitly teach "using a shape predictor algorithm configured to identify a location of the one or more classified regions of interest in the input image, map the location to template coordinates of a template selected by the user from a plurality of template variations corresponding to the regions of interest, and determine whether a rotation of the input image is needed and a re-mapping of the location to template coordinates based on the rotation".

In an analogous field of endeavor, Oved teaches "using a shape predictor algorithm configured to identify a location of the one or more classified regions of interest in the input image, map the location to template coordinates of a template selected by the user from a plurality of template variations corresponding to the regions of interest"; (Oved, Paras. 105, 108, and 112, teaches decomposing input 2D images into a set of features which hold the information required for the 3D object shape prediction and outputting the coordinates of the 3D anatomical structures detected in the 2D radiographs, i.e., using a shape predictor algorithm configured to identify a location of a classified region of interest in the input image, wherein one of the point clouds is selected as a template which is then registered with the other point clouds to compute a warped template that has the shape of the respective point clouds and retains the coordinate order of the template, and wherein the 2D input image is mapped into the 3D point cloud output, i.e., map the location identified to template coordinates wherein the template is selected from a plurality of potential template variations corresponding to the region of interest).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Yoon and Qian by including the determination of a position of a region of interest using shape and mapping the location to a template's coordinates taught by Oved. One of ordinary skill in the art would be motivated to combine the references since it improves predictive capabilities of the model (Oved, Para. 36, teaches the motivation of combination to be to improve neural networks by enabling the mapping of 2D images to 3D point clouds).

However, the combination of references of Yoon in view of Qian and Oved does not explicitly teach "and determine whether a rotation of the input image is needed and a re-mapping of the location to template coordinates based on the rotation".

In an analogous field of endeavor, Owen teaches "and determine whether a rotation of the input image is needed and a re-mapping of the location to template coordinates based on the rotation"; (Owen, Figs. 3A-3C and Paras. 12 and 34-37, teaches locating the face of the individual in each of the plurality of images of the individual, locating facial landmarks of the face of the individual, and mapping the face of the individual onto a previously determined face shape template such that the position of facial landmarks in each image coincides with corresponding facial landmarks in the previously determined face shape template, wherein affine transformations such as rotations of the image are used to correct image data if necessary for alignment of the image such that the detected face image is mapped onto a previously determined face shape template, and wherein the procedure for locating the facial landmarks within the image data of the face involves execution of an algorithm by the controller based upon an ensemble of regression trees on pixel intensity data, i.e., a shape predictor algorithm used to detect facial landmarks in image data to determine the shape of the face, determine whether a rotation is needed, and then re-map the location of the template coordinates or landmarks based on the rotation).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Yoon, Qian, and Oved by including the shape predictor algorithm also determining whether a rotation of an image is needed and then re-mapping the location of the coordinates based on the rotation taught by Owen. One of ordinary skill in the art would be motivated to combine the references since it corrects image data through alignment (Owen, Para. 36, teaches the motivation of combination to be to correct image data through simple alignment of the image). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Regarding Claim 3, the combination of references of Yoon in view of Qian, Oved, and Owen teaches "The method of claim 1, wherein the portion of the face comprises one or more of: a mouth, one or more teeth, a nose, one or both lips, or a combination thereof"; (Yoon, Pg. 7 Para. 4 starting with "Then, for image replacement", teaches the extraction of a patient's mouth area from the input image including the teeth, i.e., the portion of the face comprises at least a mouth and teeth).

Regarding Claim 4, the combination of references of Yoon in view of Qian, Oved, and Owen teaches "The method of claim 1, wherein outputting the output image having the desired aesthetic comprises outputting the output image having a desired smile appearance"; (Yoon, Fig. 15 and Pg. 7 Para. 4 starting with "Then, for image replacement", teaches the replacement of the teeth part of the image with the ready-made tooth template provided in the program, which creates an image of the patient's smile after treatment, i.e., outputting an image with the desired aesthetic corrections from the tooth template to give the patient the desired smile appearance of the teeth).

Regarding Claim 9, the combination of references of Yoon in view of Qian, Oved, and Owen teaches "The method of claim 1, wherein the one or more identified regions comprise: a lip region, individual teeth, a cuspid point, a right corner position of a mouth, a left corner position of the mouth, a mouth coordinate position, a mouth area, or a combination thereof"; (Yoon, Figs. 16, 18, and 33 and Pg. 7 Para. 4 starting with "Then, for image replacement", teaches the identified regions comprising the lips, individual teeth, cuspid points of the teeth, and corner positions of the mouth, and extracting a patient's mouth area from the input image including the teeth).

Regarding Claim 10, the combination of references of Yoon in view of Qian, Oved, and Owen teaches "The method of claim 9, wherein applying comprises using the cuspid point as an initial reference to apply the template to a mouth region in the image"; (Yoon, Figs. 16 and 32-33 and Pg. 10 Paras. 4-5 starting with "29 to 33", teaches that applying the tooth template to a patient uses the end points of the tooth, i.e., using a cuspid point as reference for applying the correction template to the mouth region).

Regarding Claim 11, the combination of references of Yoon in view of Qian, Oved, and Owen teaches "The method of claim 1, wherein applying further comprises: warping the template, resizing the template for a best fit to the portion of the face in the input image, adjusting one or both of a brightness or a contrast of the template to match with the input image, replacing the template in the portion of the face, or a combination thereof"; (Yoon, Fig. 15 and Pg. 7 Para. 4 starting with "Then, for image replacement", teaches image replacement of the patient's mouth area using the template and adjusting the size and shape of each tooth as well as being able to correct the color of the teeth, i.e., warping or resizing the template as well as adjusting the color of the template and replacing the portion of the face with the template).

Regarding Claim 12, the combination of references of Yoon in view of Qian, Oved, and Owen teaches "The method of claim 1, wherein the classified region is one or both of: a gum region such that the method further comprises identifying a gum color of the gum region in the input image and applying a desired gum color to the input image, or the mouth region, such that the method further comprises filling one or more corridors of the mouth region with nearest pixel values"; (Yoon, Figs. 21-23 and Pg. 9 Paras. 2-3 starting with "20 to 23", teaches changing the color of the tooth gum of the patient using a color filter, i.e., a classified region comprises a gum region wherein a desired gum color can be applied to the input image).

Regarding Claim 14, the combination of references of Yoon in view of Qian, Oved, and Owen teaches "The method of claim 1, further comprising displaying one or more guides for positioning of the at least a portion of the face in the input image"; (Yoon, Pg. 7 Para. 3 starting with "The virtual image simulation", teaches displaying leveling lines corresponding to the vertical and horizontal axes for accurate alignment, i.e., displaying one or more guides for positioning).

Regarding Claim 26, the combination of references of Yoon in view of Qian, Oved, and Owen teaches "A method for customizing a facial feature of an image, the method comprising: receiving, at a processor, an input image having a facial feature identified for customization, wherein the facial feature is one or both of: a lip region and a teeth region"; (Yoon, FIG. 15, teaches receiving an input image for correcting, i.e., customizing, a facial feature comprising lip and teeth regions); "identifying, at the processor, a plurality of facial landmark coordinates for the input image, wherein the plurality of facial landmark coordinates corresponds to one or more of: a lip region, a teeth region, cuspid points, a right corner position of a mouth, a left corner position of the mouth, a mouth coordinate position, a mouth area, a left eye, a right eye, or a combination thereof"; (Qian, FIGS. 3-5, teaches the coordinate key point information corresponding to image organ blocks, i.e., facial landmark coordinates, including a lip region, cuspid points, left and right mouth corners, mouth coordinates, a mouth area, a left eye, and a right eye); "and wherein the facial feature is located by a shape predictor algorithm that maps the plurality of facial landmark coordinates of the facial feature to template coordinates of a template selected by a user from a plurality of template variations corresponding to the facial feature"; (Oved, Paras. 105, 108, and 112, teaches decomposing input 2D images into a set of features which hold the information required for the 3D object shape prediction and outputting the coordinates of the 3D anatomical structures detected in the 2D radiographs, i.e., using a shape predictor algorithm configured to identify a location of a classified region of interest or feature in the input image, wherein one of the point clouds is selected as a template which is then registered with the other point clouds to compute a warped template that has the shape of the respective point clouds and retains the coordinate order of the template, and wherein the 2D input image is mapped into the 3D point cloud output, i.e., map the location identified to template coordinates wherein the template is selected from a plurality of potential template variations corresponding to the region of interest or feature); "and determines whether a rotation of the input image is needed and a re-mapping of the facial landmark coordinates to the template coordinates based on the rotation"; (Owen, Figs. 3A-3C and Paras. 12 and 34-37, teaches locating the face of the individual in each of the plurality of images of the individual, locating facial landmarks of the face of the individual, and mapping the face of the individual onto a previously determined face shape template such that the position of facial landmarks in each image coincides with corresponding facial landmarks in the previously determined face shape template, wherein affine transformations such as rotations of the image are used to correct image data if necessary for alignment of the image, and wherein the procedure for locating the facial landmarks within the image data of the face involves execution of an algorithm by the controller based upon an ensemble of regression trees on pixel intensity data, i.e., a shape predictor algorithm used to detect facial landmarks in image data to determine the shape of the face, determine whether a rotation is needed, and then re-map the location of the template coordinates or landmarks based on the rotation); "transmitting to the user a plurality of teeth style templates from the plurality of template variations"; (Yoon, Pg. 4 Para. 2 starting with "To this end", teaches a system operation API providing templates and template customization, i.e., the selected template variation is able to be selected from a plurality of template variations corresponding to the facial features being the teeth or gums); "receiving, at the processor, a selection of one of the teeth style templates, wherein the selected teeth style templates comprise a plurality of coordinates"; (Yoon, FIGS. 15, 21-22, and 36 and Pg. 7 Para. 4 starting with "Then, for image replacement" and Pg. 9 Para. 22 starting with "As shown in FIG. 25", teaches the selection of a ready-made tooth template or the tooth template creation function, i.e., selection of one of varying teeth templates, wherein the templates and teeth comprise coordinate information); "and altering the plurality of coordinates of the selected teeth style template to match the plurality of facial landmark coordinates of the input image"; (Yoon, FIGS. 15 and 21-22 and Pg. 7 Para. 4 starting with "Then, for image replacement" and Pg. 9 Para. 22 starting with "As shown in FIG. 25", teaches aligning a patient's image for the application of the tooth template wherein the size and shape of each tooth can be adjusted as well as changing the tooth brightness color and gum brightness color, i.e., altering the tooth template based on the coordinates by warping or adjusting the shape of the tooth template, resizing the teeth of the template, and adjusting the brightness color of the teeth of the template for a best fit of the facial feature for the image); "and replacing the teeth region of the input image with the altered teeth style template to provide a customized output image"; (Yoon, Fig. 15, teaches a display which presents the changed patient image after dental treatment, i.e., an output image of the customized image which replaces the teeth region with the template). The proposed combination as well as the motivation for combining the Yoon, Qian, Oved, and Owen references presented in the rejection of Claim 1 applies to claim 26. Thus, the method recited in claim 26 is met by Yoon in view of Qian, Oved, and Owen.

Regarding Claim 27, the combination of references of Yoon in view of Qian, Oved, and Owen teaches "The method of claim 26, wherein replacing the teeth region comprises mapping cuspid point coordinates of the selected teeth style template with the cuspid point coordinates of the input image"; (Qian, Figs. 3-5 and Para. 79, teaches mapping coordinate information of key point information of an organ image block to the coordinate information of the key points of the preprocessed image, wherein the coordinates mapped include cuspid points of the eyes and mouth, i.e., using a plurality of coordinates including cuspid points as reference for mapping a selected image sample or template). The proposed combination as well as the motivation for combining the Yoon, Qian, Oved, and Owen references presented in the rejection of Claim 1 applies to claim 27. Thus, the method recited in claim 27 is met by Yoon in view of Qian, Oved, and Owen.
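The rotation step Owen is cited for (claims 1 and 26 above) is, in practice, a small affine computation: estimate the in-plane tilt from two stable landmarks, rotate, and re-map every coordinate. A hedged sketch using the eye line as the reference, which is a common convention rather than something the record specifies:

```python
import numpy as np

def tilt_degrees(left_eye, right_eye, tol=1.0):
    """In-plane tilt of the face, from the inter-eye line. A value
    beyond tol means the input image needs rotating before the
    landmark-to-template mapping."""
    dx, dy = right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]
    angle = float(np.degrees(np.arctan2(dy, dx)))
    return angle if abs(angle) > tol else 0.0

def remap_landmarks(points, angle_deg, center):
    """Re-map landmark coordinates after the image is rotated by
    -angle_deg about `center` (a plain 2x2 rotation of the offsets)."""
    t = np.radians(-angle_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    pts = np.asarray(points, dtype=float) - center
    return pts @ rot.T + np.asarray(center, dtype=float)
```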
Regarding Claim 28, the combination of references of Yoon in view of Qian, Oved, and Owen teaches "The method of claim 26, wherein altering the selected teeth style template includes one or more of: warping the selected teeth style template for a best fit to the facial feature of the input image, resizing the selected teeth style template for a best fit to the facial feature of the input image, adjusting one or both of a brightness or a contrast of the selected teeth style template to match a brightness or a contrast of the input image, or a combination thereof"; (Yoon, FIGS. 15 and 21-22 and Pg. 7 Para. 4 starting with "Then, for image replacement" and Pg. 9 Para. 22 starting with "As shown in FIG. 25", teaches aligning a patient's image for the application of the tooth template wherein the size and shape of each tooth can be adjusted as well as changing the tooth brightness color and gum brightness color, i.e., altering the tooth template based on the coordinates by warping or adjusting the shape of the tooth template, resizing the teeth of the template, and adjusting the brightness color of the teeth of the template for a best fit of the facial feature for the image).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Yoon in view of Qian, Oved, Owen, and Ichikawa et al. (US 20040146194 A1).

Regarding Claim 2, the combination of references of Yoon in view of Qian, Oved, and Owen does not explicitly teach "The method of claim 1, further comprising aligning a midpoint of the selected template with a midpoint of the region of interest". In an analogous field of endeavor, Ichikawa teaches "The method of claim 1, further comprising aligning a midpoint of the selected template with a midpoint of the region of interest"; (Ichikawa, Para. 62, teaches the alignment of the center of a region image with the center of the template image). It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Yoon, Qian, Oved, and Owen by including the midpoint alignment of a template image to a region taught by Ichikawa. One of ordinary skill in the art would be motivated to combine the references since it enables the accurate detection of marks in the input image (Ichikawa, Para. 65, teaches the motivation of combination to be to accurately detect the mark in the input image regardless of condition). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Claims 6 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Yoon in view of Qian, Oved, Owen, and Han (KR 102240932 B1).

Regarding Claim 6, the combination of references of Yoon in view of Qian, Oved, and Owen does not explicitly teach "The method of claim 1, further comprising: receiving one or more user inputs related to hygiene of the one or more facial regions; ranking the one or more user inputs based on a health guideline; and generating an oral health score report based on said ranking". In an analogous field of endeavor, Han teaches "The method of claim 1, further comprising: receiving one or more user inputs related to hygiene of the one or more facial regions"; (Han, Abstract, teaches receiving oral survey data and oral image data, i.e., receiving user inputs related to hygiene of a facial region); "ranking the one or more user inputs based on a health guideline"; (Han, Abstract and Pg. 7 Para. 3 starting with "Through the above", teaches an oral rating of the account based on training data of an artificial intelligence using oral survey data and oral image data, wherein a dentist may modify or correct the output, i.e., ranking or rating the user inputs using a health guideline dictated by a doctor); "and generating an oral health score report based on said ranking"; (Han, Pg. 7 Para. 3 starting with "Through the above", teaches assigning individual oral health scores based on the oral rating and score items of the account, which are modified by a dentist, i.e., generating an oral health score based on the ranking).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Yoon, Qian, Oved, and Owen by including the rating of hygiene inputs by a doctor or health guideline in order to generate an oral health score taught by Han. One of ordinary skill in the art would be motivated to combine the references since it improves treatment efficiency (Han, Pg. 7 Para. 3 starting with "Through the above", teaches the motivation of combination to be to improve efficiency of treatment, consultation, and patient management). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Regarding Claim 29, the combination of references of Yoon in view of Qian, Oved, Owen, and Han teaches "The method of claim 26, further comprising: analyzing, at the processor, the teeth and gums of the input image, and calculating an oral health score based on one or more of: a presence or absence of dental caries, or gum disease; and transmitting the oral health score to the user"; (Han, Pg. 6 Para. 6 starting with "The oral health score", teaches calculating an oral health score through analysis of the teeth and gums of an input image, including the analysis of tooth caries and periodontal condition, i.e., providing an oral health score to the user based on a presence or absence of dental caries and gum disease). The proposed combination as well as the motivation for combining the Yoon, Qian, Oved, Owen, and Han references presented in the rejection of Claim 6 applies to claim 29. Thus, the method recited in claim 29 is met by Yoon in view of Qian, Oved, Owen, and Han.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Yoon in view of Qian, Oved, Owen, and Ning et al. (US 20200074678 A1).

Regarding Claim 7, the combination of references of Yoon in view of Qian, Oved, and Owen does not explicitly teach "The method of claim 1, wherein the one or more trained machine learning algorithms comprise a mask R-Convolutional Neural Network architecture using a Residual Network and Feature Pyramid Network backbone". In an analogous field of endeavor, Ning teaches "The method of claim 1, wherein the one or more trained machine learning algorithms comprise a mask R-Convolutional Neural Network architecture using a Residual Network and Feature Pyramid Network backbone"; (Ning, Paras. 67 and 72, teaches a region-based convolutional neural network with Feature Pyramid Network and Residual Net backbones wherein masks are included in the outputs, i.e., machine learning algorithms with mask R-CNNs, ResNets, and Feature Pyramid Networks).
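As an aside for claim 7: the architecture Ning is cited for (Mask R-CNN over a ResNet plus Feature Pyramid Network backbone) ships with torchvision, so the generic usage pattern is easy to show. This is a stock-model sketch, not Ning's or the applicant's network:

```python
import torch
import torchvision

# Stock Mask R-CNN with a ResNet-50 + FPN backbone (claim 7's architecture).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)   # placeholder RGB image, values in [0, 1]
with torch.no_grad():
    (pred,) = model([image])      # list of images in, list of dicts out

# Per-instance outputs: boxes (N, 4), labels (N,), scores (N,), masks (N, 1, H, W).
keep = pred["scores"] > 0.5
print(pred["boxes"][keep].shape, pred["masks"][keep].shape)
```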
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Yoon, Qian, Oved, and Owen by including the use of an R-CNN for masking, a Residual Network, and a Feature Pyramid Network backbone taught by Ning. One of ordinary skill in the art would be motivated to combine the references since it improves multi-object pose tracking (Ning, Paras. 8-9, teaches the motivation of combination to be to improve multi-object pose tracking). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Yoon in view of Qian, Oved, Owen, and Krishna et al. (US 10617294 B1).

Regarding Claim 8, the combination of references of Yoon in view of Qian, Oved, and Owen does not explicitly teach "The method of claim 1, wherein the shape predictor algorithm is a dlib shape predictor algorithm". In an analogous field of endeavor, Krishna teaches "The method of claim 1, wherein the shape predictor algorithm is a dlib shape predictor algorithm"; (Krishna, Col. 2 lines 49-52, teaches using Dlib digital object detection library techniques in the machine learning model, i.e., a dlib shape predictor algorithm). It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Yoon, Qian, Oved, and Owen by including the use of a dlib shape predictor algorithm taught by Krishna. One of ordinary skill in the art would be motivated to combine the references since it improves the model (Krishna, Col. 11 lines 52-67, teaches the motivation of combination to be to improve predictive performance of the model). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Yoon in view of Qian, Oved, Owen, and Mehta (US 20130251219 A1).

Regarding Claim 15, the combination of references of Yoon in view of Qian, Oved, and Owen does not explicitly teach "The method of claim 14, further comprising outputting an error message when the one or more classified regions of interest are out of a predetermined range". In an analogous field of endeavor, Mehta teaches "The method of claim 14, further comprising outputting an error message when the one or more classified regions of interest are out of a predetermined range"; (Mehta, Para. 13, teaches generating an error code which alerts a user to deficient image quality for an anatomical region of interest compared to a template reference image due to artifacts, banding, or poor luminance range, i.e., outputting an error message for a region of interest that is out of a predetermined range). It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Yoon, Qian, Oved, and Owen by including the output of an error message to a user when a region is out of a range of quality taught by Mehta. One of ordinary skill in the art would be motivated to combine the references since it improves user satisfaction and fine-tunes the image (Mehta, Para. 13, teaches the motivation of combination to be to fine-tune image processing and increase user satisfaction). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Claims 16 and 18-23 are rejected under 35 U.S.C. 103 as being unpatentable over Yoon in view of Krishna, Owen, and Qian.

Regarding Claim 16, the combination of references of Yoon in view of Krishna, Owen, and Qian teaches "A computer-implemented application system for assessing and customizing a facial feature of an image, the application comprising: a user interface having a plurality of interaction screens associated with a processor, the user interface configured to receive user interaction"; (Yoon, Pg. 9 Paras. 22-24 starting with "As shown in FIG. 25", teaches a dental patient consultation simulation system, i.e., a computer-implemented application system for assessing and customizing a facial feature of an image, which comprises a plurality of screens, such as a screen for adding patient information and a details screen, wherein the screens are configured for patient input, i.e., a user interface with screens associated with a processor for user interaction); "an image input screen configured to receive an input image of at least one facial feature"; (Yoon, Pg. 9 Para. 22 starting with "As shown in FIG. 25", teaches a screen for adding a pre-treatment photo image, i.e., an input screen that receives an input image of the facial features); "the at least one facial feature is located by a shape predictor algorithm that maps a plurality of input image coordinates along the at least one facial feature"; (Krishna, Col. 1 lines 58-67 and Col. 2 lines 1-22, teaches predicting a position of a plurality of facial landmarks on the facial image of the subject, including points on the eyelid and canthi, wherein the eye region is automatically segmented and identified based on the position of the facial landmarks and a position of the eyelid is automatically determined by estimating the eyelid shape curve in the segmented region, i.e., locating a facial feature by a shape predictor algorithm that maps input image coordinates along the facial feature); "and determines whether a rotation of the input image is needed and a re-mapping of the input image coordinates to template coordinates of template variations based on the rotation"; (Owen, Figs. 3A-3C and Paras. 12 and 34-37, teaches locating the face of the individual in each of the plurality of images of the individual, locating facial landmarks of the face of the individual, and mapping the face of the individual onto a previously determined face shape template such that the position of facial landmarks in each image coincides with corresponding facial landmarks in the previously determined face shape template, wherein affine transformations such as rotations of the image are used to correct image data if necessary for alignment of the image, and wherein the procedure for locating the facial landmarks within the image data of the face involves execution of an algorithm by the controller based upon an ensemble of regression trees on pixel intensity data, i.e., a shape predictor algorithm used to detect facial landmarks in image data to determine the shape of the face, determine whether a rotation is needed, and then re-map the location of the template coordinates or landmarks based on the rotation); "a selection interaction screen for presenting to the user a number of template variations corresponding to the at least one facial feature"; (Yoon, Pg. 9 Paras. 20-22 starting with "The model selection window", teaches a model selection window for selecting a tooth model and a tooth template creation function to modify and store tooth images for replacement with the template, i.e., a selection interaction screen for presenting templates to a user for selection corresponding to the teeth or facial feature); "each template variation having a plurality of template coordinates"; (Qian, Para. 93, teaches image samples having a plurality of coordinate information of key points in the image sample marked for the organ image blocks, wherein the image samples may be selected, i.e., template variations having a plurality of coordinates); "wherein the selection interaction screen is configured to receive a user selection of one or more of the template variations"; (Yoon, Pg. 9 Paras. 20-22 starting with "The model selection window", teaches a model selection window for selecting a tooth model and a tooth template creation function to modify and store tooth images for replacement with the template, i.e., a selection interaction screen for presenting templates to a user for selection corresponding to the teeth or facial feature); "the processor being configured to alter the at least one facial feature of the input image based on the one or more selected template variations, and provide a customized image"; (Yoon, Pg. 7 Paras. 3-4 starting with "The virtual image", teaches adjusting the size and shape of each tooth, i.e., altering the at least one facial feature of the input image, based on a ready-made tooth template in order to produce a changed patient image after dental treatment); "wherein the processor is configured to identify the plurality of input image coordinates to use as reference points for mapping to the plurality of template coordinates of the selected one or more template variations corresponding to the facial feature"; (Qian, Para. 79, teaches mapping coordinate information of key point information of the eyelid line in the eye image block to the coordinate information of the key points of the eyelid line of the preprocessed image, i.e., using a plurality of coordinates as reference for mapping a selected image sample or template which corresponds to the facial feature being the eyelid line); "and an output image interaction screen configured to present the customized image to the user"; (Yoon, Fig. 15, teaches a display which presents the changed patient image after dental treatment, i.e., an output image interaction screen to present the customized image). The proposed combination as well as the motivation for combining the Yoon, Krishna, Owen, and Qian references presented in the rejection of claims 1 and 8 applies to claim 16. Thus, the system recited in claim 16 is met by Yoon in view of Krishna, Owen, and Qian.

Regarding Claim 18, the combination of references of Yoon in view of Krishna, Owen, and Qian teaches "The computer-implemented application of claim 16, wherein the plurality of input image coordinates is facial landmark points corresponding to one or more of: a lip region, individual teeth, cuspid points, a right corner position of a mouth, a left corner position of the mouth, a mouth coordinate position, a mouth area, a left eye, a right eye, or a combination thereof"; (Qian, FIGS. 3-5, teaches the coordinate key point information corresponding to image organ blocks comprises facial landmark points including a lip region, cuspid points, left and right mouth corners, mouth coordinates, a mouth area, a left eye, and a right eye).
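Claim 8 names the dlib shape predictor outright, and the landmark points recited in claims 16 and 18 map naturally onto dlib's public 68-point model. A standard usage sketch; the .dat path is a placeholder for the publicly distributed model file:

```python
import dlib

detector = dlib.get_frontal_face_detector()
# Placeholder path to dlib's publicly distributed 68-point model file.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def facial_landmarks(rgb_image):
    """Return the 68 (x, y) landmark points for the first detected face
    in a uint8 RGB array. In the 68-point scheme, the mouth corners are
    points 48 and 54 and the lip region spans points 48-67."""
    faces = detector(rgb_image, 1)   # 1 = upsample the image once
    if not faces:
        return []
    shape = predictor(rgb_image, faces[0])
    return [(p.x, p.y) for p in shape.parts()]
```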
The proposed combination as well as the motivation for combining the Yoon, Krishna, Owen, and Qian references presented in the rejection of Claims 1 and 8 applies to claim 18. Thus, the system recited in claim 18 is met by Yoon in view of Krishna, Owen, and Qian.

Regarding Claim 19, the combination of references of Yoon in view of Krishna, Owen, and Qian teaches "The computer-implemented application of claim 16, wherein the input image is provided using one or both of: an input image sensor for taking and uploading the input image of the facial feature of the user or uploaded from an image library"; (Yoon, Pg. 7 Para. 3 starting with "The virtual image", teaches photographing a patient's image with a camera, i.e., an input image sensor for taking and uploading the input image of the facial feature).

Regarding Claim 20, the combination of references of Yoon in view of Krishna, Owen, and Qian teaches "The computer-implemented application of claim 16, wherein the at least one facial feature of the input image comprises one or more of: a mouth, one or more teeth, gums, one or both lips, or a combination thereof"; (Yoon, FIG. 15, teaches the facial feature of the input image comprising a mouth, teeth, gums, and lips).

Regarding Claim 21, the combination of references of Yoon in view of Krishna, Owen, and Qian teaches "The computer-implemented application of claim 16, wherein the number of template variations comprises one or more of: a number of varying gum shades or a number of varying teeth style templates"; (Yoon, FIGS. 15 and 21-23, teaches the template variations including varying gum shades and teeth styles).

Regarding Claim 22, the combination of references of Yoon in view of Krishna, Owen, and Qian teaches "The computer-implemented application of claim 21, wherein the selection interaction screen is configured to receive a user selection of one of the number of varying teeth style templates"; (Yoon, FIGS. 15 and 21-22 and Pg. 7 Para. 4 starting with "Then, for image replacement" and Pg. 9 Para. 22 starting with "As shown in FIG. 25", teaches the selection of a ready-made tooth template or the tooth template creation function, i.e., selection of one of varying teeth templates); "and wherein the processor is configured to alter the selected teeth style template based on the plurality of coordinates of the selected teeth style template and the corresponding identified plurality of input image coordinates including one or more of: warping the selected teeth style template for a best fit to the facial feature of the input image, resizing the selected teeth style template for a best fit to the facial feature of the input image, adjusting one or both of a brightness or a contrast of the selected teeth style template to match a brightness or a contrast of the input image, or a combination thereof"; (Yoon, FIGS. 15 and 21-22 and Pg. 7 Para. 4 starting with "Then, for image replacement" and Pg. 9 Para. 22 starting with "As shown in FIG. 25", teaches aligning a patient's image for the application of the tooth template wherein the size and shape of each tooth can be adjusted as well as changing the tooth brightness color and gum brightness color, i.e., altering the tooth template based on the coordinates by warping or adjusting the shape of the tooth template, resizing the teeth of the template, and adjusting the brightness color of the teeth of the template for a best fit of the facial feature for the image).
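The brightness/contrast matching recited in claims 22 and 28 can be read as a simple statistics transfer between the template pixels and the target mouth region. A minimal grayscale sketch; real systems would likely work per channel, and the formula is a common choice rather than anything the record specifies:

```python
import numpy as np

def match_brightness_contrast(template: np.ndarray,
                              target: np.ndarray) -> np.ndarray:
    """Shift and scale the template so its mean (brightness) and standard
    deviation (contrast) match the target region's statistics."""
    t_mean, t_std = template.mean(), template.std()
    g_mean, g_std = target.mean(), target.std()
    out = (template.astype(float) - t_mean) * (g_std / (t_std + 1e-8)) + g_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```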
Regarding Claim 23, the combination of references of Yoon in view of Krishna, Owen, and Qian teaches "The computer-implemented application of claim 22, wherein the altered selected teeth style template replaces the at least one facial feature of the input image"; Yoon, FIG.15 and Pg. 7 Para. 4 starting with "Then, for image replacement", teaches the created tooth template replaces the tooth, i.e., one facial feature of the input image). Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Yoon in view of Krishna, Owen, Qian, and Tiwari et al. (US 20200218924 A1). Regarding Claim 17, the combination of references of Yoon in view of Krishna, Owen, and Qian does not explicitly teach "The computer-implemented application of claim 16, wherein the processor is configured to identify the plurality of input image coordinates by segmenting the input image into at least one region of interest, identifying boundaries of objects in the input image, and annotating each pixel based on the identified boundary". In an analogous field of endeavor, Tiwari teaches "The computer-implemented application of claim 16, wherein the processor is configured to identify the plurality of input image coordinates by segmenting the input image into at least one region of interest, identifying boundaries of objects in the input image, and annotating each pixel based on the identified boundary"; (Tiwari, Para. 59, teaches segmenting an image into regions to represent border pixels of objects wherein the segmented image data can be represented as annotated boundary encodings, i.e., segmenting input image into at least one region of interest for identifying a boundary or border of objects wherein the boundary pixels are annotated). It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Yoon, Krishna, Owen, and Qian by including the identification of image coordinates through segmentation into regions for boundary annotation taught by Tiwari. One of ordinary skill in the art would be motivated to combine the references since it allows for edge detection processes (Tiwari, Para. 59, teaches the motivation of combination to be to enable the use of subsequent edge detection processes). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date. Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Yoon in view of Krishna, Owen, Qian, and Han. Regarding Claim 24, the combination of references of Yoon in view of Krishna, Owen, Qian, and Han teaches "The computer-implemented application of claim 20, wherein the processor is further configured to analyze the one or more teeth and gums of the input image, and calculate an oral health score based on one or more of: a presence or absence of dental caries, or gum disease; and provide the oral health score to the user"; (Han, Pg. 6 Para. 6 starting with "The oral health score", teaches calculating an oral health score through analysis of the teeth and gums of an input image including the analysis of tooth caries and periodontal condition, i.e., provide an oral health score to the user based on a presence or absence of dental caries and gum disease). The proposed combination as well as the motivation for combining the Yoon, Qian, Oved, Owen, and Han references presented in the rejection of Claim 6, applies to claim 24. Thus, the system recited in claim 24 is met by Yoon in view of Krishna, Owen, Qian, and Han. 
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW STEVEN BUDISALICH whose telephone number is (703)756-5568. The examiner can normally be reached Monday - Friday 8:30am-5:00pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached on (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW S BUDISALICH/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662

Prosecution Timeline

Dec 07, 2022: Application Filed
Mar 06, 2025: Non-Final Rejection — §103
May 21, 2025: Applicant Interview (Telephonic)
May 21, 2025: Examiner Interview Summary
May 30, 2025: Response Filed
Jun 25, 2025: Final Rejection — §103
Aug 27, 2025: Examiner Interview Summary
Aug 27, 2025: Applicant Interview (Telephonic)
Sep 30, 2025: Request for Continued Examination
Oct 11, 2025: Response after Non-Final Action
Nov 18, 2025: Non-Final Rejection — §103
Apr 14, 2026: Examiner Interview Summary
Apr 14, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602820: METHOD AND APPARATUS WITH ATTENTION-BASED OBJECT ANALYSIS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12597106: METHOD AND APPARATUS FOR IDENTIFYING DEFECT GRADE OF BAD PICTURE, AND STORAGE MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12592078: VIDEO MONITORING DEVICE, VIDEO MONITORING SYSTEM, VIDEO MONITORING METHOD, AND STORAGE MEDIUM STORING VIDEO MONITORING PROGRAM
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12586232: METHOD FOR OBJECT DETECTION USING CROPPED IMAGES
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12567151: Microscopy System and Method for Instance Segmentation
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78% (87% with interview, +8.9%)
Median Time to Grant: 2y 9m
PTA Risk: High

Based on 46 resolved cases by this examiner. Grant probability derived from career allow rate.
