Prosecution Insights
Last updated: April 19, 2026
Application No. 18/262,620

SKIN STATE ESTIMATION METHOD, DEVICE, PROGRAM, SYSTEM, TRAINED MODEL GENERATION METHOD, AND TRAINED MODEL

Status: Final Rejection (§102)
Filed: Jul 24, 2023
Examiner: NATNITHITHADHA, NAVIN
Art Unit: 3791
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Shiseido Company Ltd.
OA Round: 2 (Final)

Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 4y 0m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 71% (685 granted / 963 resolved; +1.1% vs TC avg; above average)
Interview Lift: +30.9% among resolved cases with interview (strong)
Typical Timeline: 4y 0m average prosecution; 45 applications currently pending
Career History: 1008 total applications across all art units
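For readers who want to sanity-check these headline figures, here is a minimal sketch of the arithmetic, assuming the dashboard's apparent definitions: allow rate = granted / resolved, and interview lift = allow rate with interview minus allow rate without. Only the 685/963 totals come from this page; the with/without interview split below is a hypothetical illustration chosen so the lift lands on the reported +30.9%.

```python
# Sketch of the examiner-stats arithmetic. Only the 685 granted /
# 963 resolved totals come from this page; the interview split below
# is hypothetical, chosen to reproduce the reported +30.9% lift.
granted, resolved = 685, 963

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # -> 71.1%

# Hypothetical split of the same 963 resolved cases by interview history.
with_iv = {"granted": 175, "resolved": 182}     # ~96.2% allowed
without_iv = {"granted": 510, "resolved": 781}  # ~65.3% allowed

lift = (with_iv["granted"] / with_iv["resolved"]
        - without_iv["granted"] / without_iv["resolved"])
print(f"Interview lift: {lift:+.1%}")  # -> +30.9%
```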

Statute-Specific Performance

§101: 12.6% (-27.4% vs TC avg)
§103: 30.9% (-9.1% vs TC avg)
§102: 29.2% (-10.8% vs TC avg)
§112: 17.0% (-23.0% vs TC avg)

Deltas are measured against the Tech Center average estimate • Based on career data from 963 resolved cases
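Assuming each delta is simply the examiner's rate minus the Tech Center average, the implied TC averages can be recovered from the numbers above. A quick sketch; the 40.0% figures it prints are derived from the deltas, not stated anywhere on this page:

```python
# Recover the implied Tech Center averages from the statute-specific
# figures above, assuming delta = examiner rate - TC average.
examiner = {"§101": 12.6, "§103": 30.9, "§102": 29.2, "§112": 17.0}
delta_vs_tc = {"§101": -27.4, "§103": -9.1, "§102": -10.8, "§112": -23.0}

for statute, rate in examiner.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}%")
# Interestingly, every statute implies the same TC average: 40.0%.
```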

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

2. According to the Preliminary Amendment, filed 13 January 2026, the status of the claims is as follows: Claims 1, 11, and 12 are currently amended; Claims 2-10 are previously presented; Claims 16-18 are new; and Claims 13-15 are cancelled.

Response to Arguments

3. Applicant has amended limitations in claim 11 to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function). Therefore, the interpretation of these limitations under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph is withdrawn.

4. Applicant’s arguments, see Remarks, pp. 6-7, filed 13 January 2026, with respect to the rejection of claims 1-12 under 35 U.S.C. 102(a)(1) as being anticipated by Yamanashi et al., U.S. Patent Application Publication No. 2015/0351682 A1 (“Yamanashi”), have been fully considered, and are persuasive in view of the Amendment, filed 13 January 2026. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection, which was necessitated by amendment, is discussed below.

Claim Rejections - 35 USC § 102

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

6. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

7. Claims 1-12 and 16-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Yamanashi et al., U.S. Patent Application Publication No. 2015/0356344 A1 (“Yamanashi”).

As to Claim 1, Yamanashi teaches the following: A skin state estimation method (see “The present disclosure relates to a wrinkle detection apparatus and a wrinkle detection method for detecting a wrinkle area of skin included in an image.” in para. [0002]), comprising: identifying a nasal feature (“nose”) of a user (see “Facial component detection unit 130 detects, from the photographed image, positions of facial components in the photographed image. Each of the facial components refers to a section that constitutes a face, such as eyes, a nose, and cheeks, and can be defined, for example, by a position of a feature of the face, such as inner canthi.” in para. [0040]), the nasal feature including a nose shape of the user (see “For example, area estimation unit 210 estimates the gloss distribution based on the positions of the facial components, and divides the photographed image into the plurality of image areas in accordance with the gloss level. Area estimation unit 210 then determines the parameter values for each image area with reference to the previously stored table that associates the gloss level with the parameter values. When wrinkle detection apparatus 100 includes a three-dimensional shape obtaining unit for obtaining a three-dimensional shape of skin from the photographed image, area estimation unit 210 may estimate the gloss distribution based on the three-dimensional shape obtained by the three-dimensional shape obtaining unit. Area estimation unit 210 may previously store a three-dimensional shape model of a typical face, and may estimate the gloss distribution based on such a model.” in para. [0095]; and see “For example, area estimation unit 210 estimates that the gloss level is higher in a portion of a protruding shape, such as a tip of a nose or a cheek, than in other portions.” in para. [0096]); and estimating a skin state (“wrinkle”) of the user based on the nose shape of the nasal feature of the user (see “Area estimation unit 210 estimates image areas that are positions of the plurality of areas in the image, each of the areas having a different gloss level of skin, based on the facial component positional information that is input from facial component detection unit 130. Area estimation unit 210 then outputs the photographed image and area positional information that indicates the estimated respective image areas to wrinkle detection unit 220 and chloasma detection unit 230.” in para. [0044]; and see “With reference to parameter value table 310 (see FIG. 4), parameter determination unit 222 of FIG. 3 determines the one or more parameter values used for wrinkle area detection for each of the image areas indicated by the area positional information that is input from area estimation unit 210.” in para. [0064]).

As to Claim 2, Yamanashi teaches the following: obtaining an image including a nose of the user, wherein the nasal feature of the user is identified from image information of the image (see “Facial component detection unit 130 detects, from the photographed image, positions of facial components in the photographed image. Each of the facial components refers to a section that constitutes a face, such as eyes, a nose, and cheeks, and can be defined, for example, by a position of a feature of the face, such as inner canthi.” in para. [0040]; and see “For example, area estimation unit 210 estimates the gloss distribution based on the positions of the facial components, and divides the photographed image into the plurality of image areas in accordance with the gloss level. Area estimation unit 210 then determines the parameter values for each image area with reference to the previously stored table that associates the gloss level with the parameter values. When wrinkle detection apparatus 100 includes a three-dimensional shape obtaining unit for obtaining a three-dimensional shape of skin from the photographed image, area estimation unit 210 may estimate the gloss distribution based on the three-dimensional shape obtained by the three-dimensional shape obtaining unit. Area estimation unit 210 may previously store a three-dimensional shape model of a typical face, and may estimate the gloss distribution based on such a model.” in para. [0095]).

As to Claim 3, Yamanashi teaches the following: wherein the skin state of the user is a future skin state of the user (see “For example, chloasma detection unit 230 performs processing for extracting the pixel having the pixel value equal to or less than a threshold, for at least a detection area indicated by detection area information that is input, among the photographed image, by using signals of RGB channels, thereby performing such chloasma area detection. Chloasma detection unit 230 then outputs chloasma area information that indicates the detected chloasma area to image generation unit 150.” in para. [0066]).

As to Claim 4, Yamanashi teaches the following: wherein the skin state is a wrinkle (“wrinkle”), a spot, facial sagging, dark circles, a nasolabial fold, dullness of skin, elasticity, moisture, sebum, melanin, blood circulation, a blood vessel, blood properties, texture of skin, pore of skin, a skin color, or any combination thereof (see para. [0041]).

As to Claim 5, Yamanashi teaches the following: estimating a comprehensive indicator of skin from the skin state (see “Based on the area positional information and parameter information that are input from parameter determination unit 222, wrinkle detection processing unit 223 detects the wrinkle area from the photographed image that is input from parameter determination unit 222, through use of the one or more parameter values determined for each area. In the present exemplary embodiment, wrinkle detection processing unit 223 calculates the gradient value for each portion of the photographed image through use of the Gabor filter processing. Wrinkle detection processing unit 223 then detects the wrinkle area from the photographed image through comparison of the calculated gradient value with a threshold. That is, wrinkle detection processing unit 223 performs known edge detection processing. When the gradient value becomes higher as the degree of change in the pixel value becomes higher, an area where the gradient value is equal to or greater than a threshold is detected as the wrinkle area. Wrinkle detection processing unit 223 then outputs the wrinkle area information that indicates the detected wrinkle area to image generation unit 150 (see FIG. 2).” in para. [0065]).

As to Claim 6, Yamanashi teaches the following: wherein the skin state is a skin state in a part of a face (“an area from a lower eyelid of a left eye to a left cheek, and an area from a lower eyelid of a right eye to a right cheek”), a whole face, or a plurality of sites in a face (see “In the present exemplary embodiment, the plurality of areas, each of the areas having a different gloss level of skin, refer to an area of from a lower eyelid of a left eye to above a left cheek and an area of from a lower eyelid of a right eye to above a right cheek (hereinafter referred to as “areas below both eyes”), and facial areas other than these areas (hereinafter referred to as “an overall area”). In the following description, the image areas corresponding to the areas below both eyes are referred to as “image areas below both eyes.” The image area corresponding to the overall area is referred to as “an overall image area.” The overall area does not necessarily need to be an entire face, and may be, for example, an area portion that is a target of detection of a wrinkle, such as cheeks or a forehead.” in para. [0045]).

As to Claim 7, Yamanashi teaches the following: estimating a shape regarding a facial skeleton of the user based on the nasal feature of the user, wherein the estimation of the skin state of the user is based on the shape regarding the facial skeleton of the user (see “For example, area estimation unit 210 estimates the gloss distribution based on the positions of the facial components, and divides the photographed image into the plurality of image areas in accordance with the gloss level. Area estimation unit 210 then determines the parameter values for each image area with reference to the previously stored table that associates the gloss level with the parameter values. When wrinkle detection apparatus 100 includes a three-dimensional shape obtaining unit for obtaining a three-dimensional shape of skin from the photographed image, area estimation unit 210 may estimate the gloss distribution based on the three-dimensional shape obtained by the three-dimensional shape obtaining unit. Area estimation unit 210 may previously store a three-dimensional shape model of a typical face, and may estimate the gloss distribution based on such a model.” in para. [0095]; and see “For example, area estimation unit 210 estimates that the gloss level is higher in a portion of a protruding shape, such as a tip of a nose or a cheek, than in other portions.” in para. [0096]).

As to Claim 8, Yamanashi teaches the following: wherein the skin state of the user is attributed to the shape regarding the facial skeleton of the user (see “For example, area estimation unit 210 estimates the gloss distribution based on the positions of the facial components, and divides the photographed image into the plurality of image areas in accordance with the gloss level. Area estimation unit 210 then determines the parameter values for each image area with reference to the previously stored table that associates the gloss level with the parameter values. When wrinkle detection apparatus 100 includes a three-dimensional shape obtaining unit for obtaining a three-dimensional shape of skin from the photographed image, area estimation unit 210 may estimate the gloss distribution based on the three-dimensional shape obtained by the three-dimensional shape obtaining unit. Area estimation unit 210 may previously store a three-dimensional shape model of a typical face, and may estimate the gloss distribution based on such a model.” in para. [0095]).

As to Claim 9, Yamanashi teaches the following: wherein the nasal feature (“position of a feature of the face, such as inner canthi”) is a nasal root, a nasal bridge, a nasal tip, nasal wings, or any combination thereof (these features are within the scope of Yamanashi’s teaching, see “Each of the facial components refers to a section that constitutes a face, such as eyes, a nose, and cheeks, and can be defined, for example, by a position of a feature of the face, such as inner canthi.” in para. [0040], where “inner canthi” is merely an example).

As to Claim 10, Yamanashi teaches the following: wherein the skin state of the user is estimated using a trained model that outputs the skin state in response to an input of the nasal feature (see the determination method of the wrinkle area in para. [0044]-[0071], which operates as a trained model).

As to Claim 11, Yamanashi teaches the following: A skin state estimation device (see “The present disclosure relates to a wrinkle detection apparatus and a wrinkle detection method for detecting a wrinkle area of skin included in an image.” in para. [0002]), comprising: an identifier (“Facial component detection unit” 130) configured to identify a nasal feature (“nose”) of a user, the nasal feature including a nose shape of the user (see “Facial component detection unit 130 detects, from the photographed image, positions of facial components in the photographed image. Each of the facial components refers to a section that constitutes a face, such as eyes, a nose, and cheeks, and can be defined, for example, by a position of a feature of the face, such as inner canthi.” in para. [0040]; see “For example, area estimation unit 210 estimates the gloss distribution based on the positions of the facial components, and divides the photographed image into the plurality of image areas in accordance with the gloss level. Area estimation unit 210 then determines the parameter values for each image area with reference to the previously stored table that associates the gloss level with the parameter values. When wrinkle detection apparatus 100 includes a three-dimensional shape obtaining unit for obtaining a three-dimensional shape of skin from the photographed image, area estimation unit 210 may estimate the gloss distribution based on the three-dimensional shape obtained by the three-dimensional shape obtaining unit. Area estimation unit 210 may previously store a three-dimensional shape model of a typical face, and may estimate the gloss distribution based on such a model.” in para. [0095]; and see “For example, area estimation unit 210 estimates that the gloss level is higher in a portion of a protruding shape, such as a tip of a nose or a cheek, than in other portions.” in para. [0096]); and an estimator (“Wrinkle detection unit” 220) configured to estimate a skin state (“wrinkle”) of the user based on the nose shape of the nasal feature of the user (see “Area estimation unit 210 estimates image areas that are positions of the plurality of areas in the image, each of the areas having a different gloss level of skin, based on the facial component positional information that is input from facial component detection unit 130. Area estimation unit 210 then outputs the photographed image and area positional information that indicates the estimated respective image areas to wrinkle detection unit 220 and chloasma detection unit 230.” in para. [0044]; and see “With reference to parameter value table 310 (see FIG. 4), parameter determination unit 222 of FIG. 3 determines the one or more parameter values used for wrinkle area detection for each of the image areas indicated by the area positional information that is input from area estimation unit 210.” in para. [0064]).

As to Claim 12, Yamanashi teaches the following: A non-transitory computer-readable recording medium storing a program that causes a computer to execute a process (see “The present disclosure relates to a wrinkle detection apparatus and a wrinkle detection method for detecting a wrinkle area of skin included in an image.” in para. [0002]) comprising: identifying a nasal feature (“nose”) of a user, the nasal feature including a nose shape of the user (see “Facial component detection unit 130 detects, from the photographed image, positions of facial components in the photographed image. Each of the facial components refers to a section that constitutes a face, such as eyes, a nose, and cheeks, and can be defined, for example, by a position of a feature of the face, such as inner canthi.” in para. [0040]; see “For example, area estimation unit 210 estimates the gloss distribution based on the positions of the facial components, and divides the photographed image into the plurality of image areas in accordance with the gloss level. Area estimation unit 210 then determines the parameter values for each image area with reference to the previously stored table that associates the gloss level with the parameter values. When wrinkle detection apparatus 100 includes a three-dimensional shape obtaining unit for obtaining a three-dimensional shape of skin from the photographed image, area estimation unit 210 may estimate the gloss distribution based on the three-dimensional shape obtained by the three-dimensional shape obtaining unit. Area estimation unit 210 may previously store a three-dimensional shape model of a typical face, and may estimate the gloss distribution based on such a model.” in para. [0095]; and see “For example, area estimation unit 210 estimates that the gloss level is higher in a portion of a protruding shape, such as a tip of a nose or a cheek, than in other portions.” in para. [0096]); and estimating a skin state (“wrinkle”) of the user based on the nose shape of the nasal feature of the user (see “Area estimation unit 210 estimates image areas that are positions of the plurality of areas in the image, each of the areas having a different gloss level of skin, based on the facial component positional information that is input from facial component detection unit 130. Area estimation unit 210 then outputs the photographed image and area positional information that indicates the estimated respective image areas to wrinkle detection unit 220 and chloasma detection unit 230.” in para. [0044]; and see “With reference to parameter value table 310 (see FIG. 4), parameter determination unit 222 of FIG. 3 determines the one or more parameter values used for wrinkle area detection for each of the image areas indicated by the area positional information that is input from area estimation unit 210.” in para. [0064]).

As to Claims 16-18, Yamanashi teaches the following: wherein the nose shape of the nasal feature of the user includes at least one of a nasal root, a nasal bridge, a nasal tip, or a nasal wing (see “For example, area estimation unit 210 estimates that the gloss level is higher in a portion of a protruding shape, such as a tip of a nose or a cheek, than in other portions.” in para. [0096]).

Conclusion

8. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAVIN NATNITHITHADHA whose telephone number is (571) 272-4732. The examiner can normally be reached Monday - Friday, 8:00 am - 4:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason M Sims, can be reached at 571-272-7540. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NAVIN NATNITHITHADHA/
Primary Examiner, Art Unit 3791
02/11/2026
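The detection pipeline the examiner quotes from Yamanashi's para. [0065] is, as the reference itself says, known edge detection: Gabor-filter responses compared against a per-area threshold. Below is a minimal sketch of that general technique, assuming OpenCV is available; the kernel parameters and threshold are illustrative guesses, not values taken from the reference.

```python
import cv2
import numpy as np

def detect_wrinkle_area(gray: np.ndarray, threshold: float = 0.35) -> np.ndarray:
    """Gabor-filter edge detection of the kind Yamanashi para. [0065]
    describes: compute a gradient response per pixel, then keep pixels
    whose response meets a threshold. Parameter values here are
    illustrative assumptions, not taken from the reference."""
    img = gray.astype(np.float32) / 255.0
    responses = []
    # Bank of Gabor kernels at several orientations, since wrinkles
    # can run in any direction.
    for theta in np.arange(0, np.pi, np.pi / 8):
        kernel = cv2.getGaborKernel(
            ksize=(15, 15), sigma=3.0, theta=theta,
            lambd=8.0, gamma=0.5, psi=0.0, ktype=cv2.CV_32F)
        responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))
    # The strongest response across orientations stands in for the
    # "gradient value" the reference compares against a threshold.
    gradient = np.max(np.abs(np.stack(responses)), axis=0)
    gradient /= gradient.max() + 1e-9
    return (gradient >= threshold).astype(np.uint8)  # 1 = wrinkle area

# Usage: pass a grayscale face crop. Per the reference's per-area
# parameter-table idea, a lower threshold could be used for the
# low-gloss image areas below the eyes.
# mask = detect_wrinkle_area(cv2.imread("face.png", cv2.IMREAD_GRAYSCALE))
```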

Prosecution Timeline

Jul 24, 2023: Application Filed
Oct 31, 2025: Non-Final Rejection — §102
Jan 13, 2026: Response Filed
Feb 11, 2026: Final Rejection — §102 (current)

Precedent Cases

Applications involving similar technology granted by the same examiner

Patent 12569172: DEVICES, SYSTEMS, AND METHODS ASSOCIATED WITH ANALYTE MONITORING DEVICES AND DEVICES INCORPORATING THE SAME
Granted Mar 10, 2026 • 2y 5m to grant

Patent 12564329: Optical Device for Determining Pulse Rate
Granted Mar 03, 2026 • 2y 5m to grant

Patent 12562273: MEDICAL DEVICES AND METHODS
Granted Feb 24, 2026 • 2y 5m to grant

Patent 12555404: DISPLAY DEVICE HAVING BIOMETRIC FUNCTION AND OPERATION METHOD THEREOF
Granted Feb 17, 2026 • 2y 5m to grant

Patent 12543976: SYSTEM FOR MONITORING BODY CHEMISTRY
Granted Feb 10, 2026 • 2y 5m to grant
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71%
With Interview: 99% (+30.9%)
Median Time to Grant: 4y 0m
PTA Risk: Moderate

Based on 963 resolved cases by this examiner. Grant probability derived from career allow rate.
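These projections appear to follow directly from the career stats above. A minimal sketch under that assumption; the 99% cap is inferred from the displayed values, not a documented rule of the tool:

```python
# Sketch of how the projections above can be derived from the examiner's
# career stats, assuming: base grant probability = career allow rate, and
# the with-interview figure = base + observed interview lift, capped at 99%.
granted, resolved = 685, 963
interview_lift = 0.309  # from the examiner-intelligence panel

base = granted / resolved                          # ~0.711 -> "71%"
with_interview = min(base + interview_lift, 0.99)  # capped   -> "99%"

print(f"Grant probability:       {base:.0%}")
print(f"With interview (capped): {with_interview:.0%}")
```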
