Prosecution Insights
Last updated: April 19, 2026
Application No. 18/580,532

System and Method for Determining an Oral Health Condition

Status: Non-Final OA (§103)
Filed: Jan 18, 2024
Examiner: YANG, QIAN
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Colgate-Palmolive Company
OA Round: 1 (Non-Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74%, above average (709 granted / 963 resolved; +11.6% vs Tech Center average)
Interview Lift: +31.3% among resolved cases with an interview
Typical Timeline: 2y 7m average prosecution; 34 applications currently pending
Career History: 997 total applications across all art units

Statute-Specific Performance

§101: 15.3% (-24.7% vs TC avg)
§103: 48.3% (+8.3% vs TC avg)
§102: 21.2% (-18.8% vs TC avg)
§112: 11.1% (-28.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 963 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7, 9-18 and 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over Oyama (WIPO Patent Application Publication WO 2019/150515, IDS) in view of Ota et al. (Japanese Patent Application Publication JP 2020-179173, IDS), hereinafter referred to as Ota.

Regarding claim 1, Oyama discloses a system for determining an oral care condition within an oral cavity (Fig. 1), the system comprising: an intraoral device (Fig. 3, #20) comprising: a light source configured to emit light within the oral cavity (Fig. 3, #24b); and a camera configured to capture an image of at least one tooth and gums adjacent the at least one tooth within the oral cavity (Fig. 3, #24a); and one or more processors (Fig. 6, #223) configured to: receive the image of the at least one tooth and gums adjacent the at least one tooth (page 4, para. 2 from bottom); differentiate the at least one tooth and the gums adjacent the at least one tooth as a tooth segment and a gums segment (page 4, para. 2 from bottom: "The portable terminal 210 classifies the image received by the activated dedicated application software into various parts such as plaque, teeth, gums, and the like, and performs image processing"); and determine, via a computing algorithm, the oral care condition based on the gums segment input into the computing algorithm (page 4, para. 2 from bottom; page 9, paras. 2-3).

However, Oyama fails to explicitly disclose inputting the gums segment into a machine learning model and determining the oral care condition via the machine learning model. In a similar field of endeavor, Ota discloses an image processing system for detecting oral condition (abstract). In addition, Ota discloses inputting the gums segment into a machine learning model and determining the oral care condition via the machine learning model (pages 5-6). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Oyama to input the gums segment into a machine learning model and determine the oral care condition via the machine learning model. The motivation for doing this is to automate, speed up, and improve the quality of the determination.

Regarding claim 2 (depends on claim 1), Oyama discloses the system wherein the camera is a consumer grade camera (Fig. 3, #24a). However, Oyama fails to explicitly disclose the machine learning model determining the oral care condition via a deep learning technique. In a similar field of endeavor, Ota discloses an image processing system for detecting oral condition (abstract), and further discloses the machine learning model determining the oral care condition via a deep learning technique (page 5, deep learning). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Oyama so that the machine learning model determines the oral care condition via a deep learning technique, with the same motivation as above.

Regarding claim 3 (depends on claim 1), Oyama discloses the system wherein the light source of the intraoral device comprises a plurality of light emitting diode (LED) lights, at least one of the LED lights surrounding the camera (page 5, para. 5, LED).

Regarding claim 4 (depends on claim 1), Ota discloses the system wherein the one or more processors are configured to populate the machine learning model by inputting, into the machine learning model, a plurality of training images comprising the at least one tooth and gums adjacent the at least one tooth within the oral cavity (pages 5-6).

Regarding claim 5 (depends on claim 1), Oyama discloses the system wherein at least one of the one or more processors is located on a mobile device or a server (Figs. 5 and 6).

Regarding claim 6 (depends on claim 1), Ota discloses the system wherein the one or more processors are further configured to determine degrees of severity of the oral care condition (page 6, last para.).

Regarding claim 7 (depends on claim 1), Ota discloses the system wherein the one or more processors are configured to: manipulate the received image of the at least one tooth and gums adjacent the at least one tooth within the oral cavity, the manipulation comprising flipping or cropping the received image (page 6, para. 4); generate a new image of the at least one tooth and gums adjacent the at least one tooth within the oral cavity based on the flipping or cropping of the received image (page 6, para. 4); and input portions of the new image into the machine learning model (page 6, para. 4).

Regarding claim 9 (depends on claim 1), Oyama discloses the system wherein the intraoral device comprises: a body portion having a button configured to cause the camera to capture the image of the at least one tooth and gums adjacent the at least one tooth within the oral cavity; a neck portion housing the light source and the camera; and wherein the camera is located on a distal portion of the neck of the intraoral device (Fig. 3).

Regarding claim 10 (depends on claim 1), Ota discloses the system wherein the one or more processors are configured to cause the determined oral care condition to be displayed upon a display (page 10, para. 4).

Regarding claim 11 (depends on claim 1), Ota discloses the system wherein the oral care condition comprises at least one of gingivitis, plaque, receding gums, periodontitis, or tonsillitis (page 6, last para. to page 7, para. 3; page 8, para. 2).

Regarding claims 12-18 and 20-22, they correspond to claims 1-2, 4, 3, 5-7 and 9-11, respectively, and are therefore interpreted and rejected for the same reasons set forth for claims 1-2, 4, 3, 5-7 and 9-11.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to QIAN YANG, whose telephone number is (571) 270-7239. The examiner can normally be reached Monday-Thursday, 8am-6pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at 571-270-5183. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/QIAN YANG/
Primary Examiner, Art Unit 2677

Prosecution Timeline

Jan 18, 2024: Application Filed
Jan 26, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598273: "Camera Platform Incorporating Schedule and Stature" (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586560: "Electronic Apparatus, Terminal Apparatus and Controlling Method Thereof" (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586239: "Smart Image Processing Method and Device Using Same" (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579432: "Methods and Apparatus for Automated Specimen Characterization Using Diagnostic Analysis System with Continuous Performance Based Training" (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579686: "Mixed Depth Object Detection" (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 99% (+31.3%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 963 resolved cases by this examiner. Grant probability derived from career allow rate.
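The footnote's arithmetic can be sanity-checked with a short sketch. This is illustrative only: it assumes the displayed grant probability is the raw career allow rate, and that the Tech Center delta and interview lift are simple additive percentage-point figures (the tool's actual model is not public):

```python
# Sanity-check the headline examiner statistics shown above.
# Assumption: "grant probability" == career allow rate, and the
# TC delta / interview lift are additive percentage points.

granted, resolved = 709, 963

allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.1%}")    # 73.6%, displayed as 74%

tc_avg = allow_rate - 0.116                      # reported +11.6% vs TC avg
print(f"implied TC 2600 average: {tc_avg:.1%}")  # about 62.0%

# A naive additive +31.3% lift would exceed 100%, so the dashboard
# evidently applies some cap or conditional model; it displays 99%.
with_interview = min(allow_rate + 0.313, 0.99)
print(f"with interview: {with_interview:.0%}")   # 99%
```

Under this additive reading the numbers are internally consistent, except that the with-interview figure must be capped, which suggests the 99% comes from the with-interview subpopulation's own allow rate rather than from simple addition.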
