Prosecution Insights
Last updated: April 19, 2026
Application No. 18/571,776

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM

Status: Final Rejection (§103)
Filed: Dec 19, 2023
Examiner: SHIN, SOO JUNG
Art Unit: 2667
Tech Center: 2600 — Communications
Assignee: Sony Group Corporation
OA Round: 2 (Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% — above average (527 granted / 604 resolved; +25.3% vs TC avg)
Interview Lift: +16.0% — strong (resolved cases with interview)
Avg Prosecution: 2y 4m (typical timeline)
Currently Pending: 28
Total Applications: 632 (career history, across all art units)
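The headline figures above are internally consistent. As a quick sanity check (assumed arithmetic only; the tool's actual formulas are not disclosed), the allow rate and Tech Center delta can be reconstructed from the raw counts:

```python
# Reconstruct the examiner stat card from the raw counts on the page.
# Assumption: "allow rate" = granted / resolved, and the "+25.3% vs TC avg"
# badge is a simple difference in percentage points.
granted, resolved = 527, 604
allow_rate = 100 * granted / resolved       # career allow rate, in %
tc_avg = allow_rate - 25.3                  # implied Tech Center average

print(f"allow rate: {allow_rate:.1f}%")     # ~87.3%, shown rounded as 87%
print(f"implied TC average: {tc_avg:.1f}%")
```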

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§103: 37.5% (-2.5% vs TC avg)
§102: 19.9% (-20.1% vs TC avg)
§112: 24.2% (-15.8% vs TC avg)
Tech Center average is an estimate • Based on career data from 604 resolved cases
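Each row above pairs the examiner's per-statute rate with its offset from the Tech Center average, so the baseline can be recovered by subtraction. A small sketch (assuming the offsets are plain percentage-point differences):

```python
# statute: (examiner rate %, offset vs Tech Center average %)
rows = {
    "§101": (7.6, -32.4),
    "§103": (37.5, -2.5),
    "§102": (19.9, -20.1),
    "§112": (24.2, -15.8),
}
for statute, (rate, offset) in rows.items():
    tc_avg = rate - offset  # implied Tech Center baseline
    print(f"{statute}: examiner {rate}% vs TC avg {tc_avg:.1f}%")
```

Notably, all four rows imply the same 40.0% baseline, consistent with a single Tech Center average line in the original chart.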

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Response to Amendment

The amendment filed on 17 February 2026 has been entered. The amendment of claims 1-14 has been acknowledged. In view of the amendment, the claim interpretation under 35 U.S.C. 112(f) and the rejection under 35 U.S.C. 101 have been withdrawn.

Response to Arguments

Applicant's arguments filed 17 February 2026, with respect to the pending claims, have been fully considered but they are not persuasive. Applicant’s Representative submits that the prior art of record does not teach using a template relating to a head. The examiner respectfully disagrees. The prior art (Chou, in view of Taoka) teaches detecting the features of a face, including the hairline, by analyzing face images (see Taoka Fig. 7, ¶0048, ¶0061, discussed in the previous Office action mailed on 17 November 2025, pg. 7). In addition, the prior art further teaches detecting the head of a person using a template matching technique (Taoka ¶0075: “The photography processor 201 may detect displacements among the post-photography images by using a known template matching technique”; also refer to Taoka Figs. 7-9 showing the use of the head templates to detect head motion).
[Three greyscale figures (media_image1.png through media_image3.png) omitted.]

In view of this reasonable interpretation of the claims and the prior art, the examiner respectfully submits that the rejections set forth below remain proper.

Claim Rejections - 35 USC § 103

Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Chou et al. (“Simulation of face/hairstyle swapping in photographs with skin texture synthesis,” Multimed Tools Appl (2013) 63:729–756, DOI 10.1007/s11042-011-0891-1), in view of Taoka et al. (US 2020/0167549 A1), hereinafter referred to as Chou and Taoka, respectively.

Regarding claim 1, Chou teaches an information processing apparatus (Chou pg. 745: “Our experiments are conducted on a machine with a Intel Core 2 1.86GHz CPU and 3GByte memory, running on MS Windows XP”), comprising: a central processing unit (CPU) (Chou pg. 745 discussed above) configured to: detect, based on a front face image of a user, hairline information, wherein the hairline information is regarding a hairline of the user (Chou pg. 735: “a human face could be divided vertically into three distinct thirds, which are the hairline to the eyebrows, the eyebrows to the base of nose and the base of nose to the bottom of the chin, respectively. We refer J as the hairline of a human face”; Chou Fig. 3(b)); and estimate, based on the hairline information of the user and a model relating to a head, an outline of the head of the user (Chou Fig. 2: “extracted hairstyle” & “face model”; Chou pg. 736: “we can then derive J’s vertical position … J’s horizontal position is determined as the midpoint between O and P”; Chou Fig. 5: “The entire facial contour detected by our system”; Chou Fig. 15 & pg. 745: “derive a bread head model, and the automatic hairstyle adjusting scheme”). However, Chou does not appear to explicitly teach using a template relating to a head.
Pertaining to the same field of endeavor, Taoka teaches using a template relating to a head (Taoka ¶0031: “Facial parts are characteristic parts in the face, and examples thereof include the contour of the face, the eyes, the nose, the mouth, the eyelids, and hairline”; Taoka ¶0048: “Face images of users 2 and results of skin analysis on the face images are associated with each other and are managed in the database 90”; Taoka ¶0061: “Since the past face images are displayed together with the during-photography-face image, the user 2 can adjust the position, the size, and the orientation of the during-photography-face image so that they match the position, the size, and the orientation of the past face images by moving the position of the face. Thus, a skin analysis result of the past face images and a skin analysis result of the post-photography face images can be compared with each other with higher accuracy”; Taoka Figs. 7-9 & ¶0075: “The photography processor 201 may detect displacements among the post-photography images by using a known template matching technique”). Chou and Taoka are considered to be analogous art because they are directed to image processing for detecting facial features. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the face/hairstyle swapping simulation in photographs with skin texture synthesis (as taught by Chou) to use a template (as taught by Taoka) because the combination provides higher accuracy (Taoka ¶0061).

Regarding claim 2, Chou, in view of Taoka, teaches the information processing apparatus according to claim 1, wherein the template includes a template of the outline of the head (Taoka Figs. 7-9 & ¶0061, ¶0075 discussed above), and the CPU is further configured to adjust the template based on a shape of the hairline of the user (Taoka Fig. 7 & ¶0061 discussed above – the template is updated with newer past face images, see 320; also see Taoka Figs. 8-9 & ¶0056: “The left auxiliary portion 12a is provided with a marker 40a for adjusting the position and the size of the front-view face of the user 2 when the front-view face is seen in the left auxiliary mirror 22a … when the user 2 turns to the left auxiliary portion 12a and adjusts the positions of both eyes and/or the contour of the face to the eye markers 41a and/or the contour marker, respectively, it is possible to reliably capture an image of the right-side-view face. The same also applies to eye markers 41b and a contour marker (not illustrated) on the right auxiliary portion 12b”).

Regarding claim 3, Chou, in view of Taoka, teaches the information processing apparatus according to claim 2, wherein the front face image includes a face of the user (Chou Figs. 1-7), and the CPU is further configured to detect, based on feature information relating to the face of the user, an area of skin of the user (Chou Abstract: “After hair removal, the facial skin of the revealed forehead needs to be recovered … Our proposed method yields a more desired facial skin patch by first interpolating a base skin patch, and followed by a non-stationary texture synthesis”; Chou Figs. 7-8).

Regarding claim 4, Chou, in view of Taoka, teaches the information processing apparatus according to claim 3, wherein the CPU is further configured to detect the hairline information based on an outline of the area of skin (Chou Figs. 5 & 7).

Regarding claim 5, Chou, in view of Taoka, teaches the information processing apparatus according to claim 3, wherein the CPU is further configured to select the template based on the feature information (Chou Fig. 4(b) & pg. 739: “we define the polygon to represent the facial shape by using the set of points from the 0th to the 14th derived from ASM”; Taoka ¶0031: “The skin analysis apparatus 10 performs facial-part recognition processing on the post-photography face images (S13). Facial parts are characteristic parts in the face, and examples thereof include the contour of the face, the eyes, the nose, the mouth, the eyelids, and hairline. The facial parts may be represented as facial portions, facial organs, facial feature parts, or the like”).

Regarding claim 6, Chou, in view of Taoka, teaches the information processing apparatus according to claim 5, wherein the feature information includes information regarding a feature point of one of the face of the user or parts of the face of the user (Chou Fig. 4(b) & pg. 739 and Taoka ¶0031 discussed above), and the CPU is further configured to select the template, based on the feature point and a feature point in the template (Chou Fig. 4(b), pg. 739 & Taoka ¶0031 discussed above).

Regarding claim 7, Chou, in view of Taoka, teaches the information processing apparatus according to claim 2, wherein the CPU is further configured to determine whether the detected hairline information is correct (Chou Fig. 5: “Refined upper facial contour” – refining requires correcting an incorrect result; also see Chou pg. 751: “we can hardly neglect the uncorrected hair pixels after wearing on the new hairstyle”; Taoka ¶0056: “may be provided with a marker for adjusting the position of the contour of the face (this marker is hereinafter referred to as a ‘contour marker’, not illustrated)”).

Regarding claim 8, Chou, in view of Taoka, teaches the information processing apparatus according to claim 7, wherein the CPU is further configured to: adjust a degree of change in the template (Taoka Figs. 7-9 & ¶0056, ¶0075 discussed above); and determine, based on the adjusted degree of change in the template being within a threshold value, that the detected hairline information is correct (Chou pg. 731: “we adopt the existing active shape model or ASM for short, to extract the facial contour from a given input photo. However, as ASM is not readily applicable for detecting the upper facial contour, we further extend its capability by fitting the mixing facial contour portion with curves in concord with ASM’s extracted part” – refer to [7] cited by Chou, which describes ASM in more detail; the ASM algorithm reiterates the contour fitting based on a predetermined threshold criterion; Chou pg. 735: “according to the ASM algorithm, normally these two pixels are vertically between the eyes and eyebrow. We next determine O and P … Note that the horizontal positions of I and K are the same as O and P, respectively. The vertical positions of I and J are set to be above the right eyebrow and the left eyebrow, through the help of ASM”; Taoka Figs. 7-9 & ¶0056, ¶0075 discussed above).

Regarding claim 9, Chou, in view of Taoka, teaches the information processing apparatus according to claim 8, wherein the CPU is further configured to estimate a hair part of the user, based on the determination that the detected hairline information is correct (Chou Fig. 5 & pg. 751 discussed above; Chou Fig. 1(b); Chou Fig. 2: “Extracted Hairstyle”; Chou Fig. 9(d)).

Regarding claim 10, Chou, in view of Taoka, teaches the information processing apparatus according to claim 9, wherein the CPU is further configured to estimate, based on a determination that the detected hairline information is not correct, one of the hairline of the user or a root of hair of the user (Chou pg. 735 discussed above).
Regarding claim 11, Chou, in view of Taoka, teaches the information processing apparatus according to claim 1, wherein the CPU is further configured to present, to the user, a user interface (UI) to take the front face image satisfying an imaging condition (Taoka Figs. 1, 4, 7-9 & ¶0058: “the photography guide UI 300 will be described with reference to FIGS. 7, 8, and 9. FIG. 7 is a view illustrating one example of the photography guide UI 300 when an image of the front-view face is captured. FIG. 8 is a view illustrating one example of the photography guide UI 300 when an image of the right-side-view face is captured. FIG. 9 is a view illustrating one example of the photography guide UI 300 when an image of the left-side-view face is captured”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the face/hairstyle swapping simulation in photographs with skin texture synthesis (as taught by Chou) to use a UI (as taught by Taoka) because the combination provides feedback to the user and guides the user to position correctly (Taoka ¶0058).

Regarding claim 12, Chou, in view of Taoka, teaches the information processing apparatus according to claim 11, wherein the imaging condition includes a condition to allow the detection of the hairline information regarding the hairline of the user (Taoka Figs. 7-9 & ¶0056 discussed above – the UI displays the contour of the skin to the user so that the hairline can be detected).

Regarding claim 13, Chou, in view of Taoka, teaches an information processing method comprising the steps described in claim 1. Therefore, claim 13 is rejected using the same rationale as applied to claim 1 discussed above.
Regarding claim 14, Chou, in view of Taoka, teaches a non-transitory computer-readable medium having stored thereon computer-executable instructions which, when executed by a computer, cause the computer to execute the steps described in claim 1 (Chou pg. 745: “Our experiments are conducted on a machine with a Intel Core 2 1.86GHz CPU and 3GByte memory, running on MS Windows XP. The involved programming language is Visual C++, and the Poisson equation is solved by MATLAB 7.0”). Therefore, claim 14 is rejected using the same rationale as applied to claim 1 discussed above.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SOO J SHIN, whose telephone number is (571) 272-9753. The examiner can normally be reached M-F, 10-6. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella, can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Soo Shin/
Primary Examiner, Art Unit 2667
571-272-9753
soo.shin@uspto.gov
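The §103 combination leans on Taoka's reference to “a known template matching technique” for detecting head displacement. For orientation only, here is a minimal normalized cross-correlation matcher (a textbook instance of template matching, not Taoka's disclosed implementation; the `match_template` function and toy image are purely illustrative):

```python
import numpy as np

def match_template(image, template):
    """Slide `template` over `image` and return the (row, col) offset
    with the highest normalized cross-correlation score."""
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            window = image[r:r + th, c:c + tw]
            w = window - window.mean()
            denom = np.sqrt((w * w).sum() * (t * t).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Toy check: a bright patch with dark surround, embedded at offset (2, 3).
img = np.zeros((8, 8))
img[3:5, 4:6] = 1.0
print(match_template(img, img[2:6, 3:7]))  # -> (2, 3)
```

Production systems typically use an optimized routine for this (e.g., FFT-based correlation) rather than the nested loops above, but the scoring is the same idea.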

Prosecution Timeline

Dec 19, 2023 — Application Filed
Nov 13, 2025 — Non-Final Rejection (§103)
Feb 17, 2026 — Response Filed
Mar 12, 2026 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602768
SURFACE DEFECT DETECTION MODEL TRAINING METHOD, AND SURFACE DEFECT DETECTION METHOD AND SYSTEM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12586411
TARGET IDENTIFICATION DEVICE, ELECTRONIC DEVICE, TARGET IDENTIFICATION METHOD, AND STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586204
Detecting Optical Discrepancies In Captured Images
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586216
METHOD OF DETERMINING A MOTION OF A HEART WALL
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12573021
ULTRASONIC DEFECT DETECTION AND CLASSIFICATION SYSTEM USING MACHINE LEARNING
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview: 99% (+16.0%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 604 resolved cases by this examiner. Grant probability derived from career allow rate.
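The with-interview figure appears to follow from simple arithmetic on the numbers above: the 87% base plus the +16.0-point lift, capped at 99%. A sketch of that assumed model (the cap and the additive combination are assumptions, not the tool's disclosed formula):

```python
# Assumed model: with-interview probability = base + lift, capped at 99%.
base = 0.87        # grant probability (career allow rate)
lift = 0.160       # interview lift, in probability points
with_interview = min(base + lift, 0.99)
print(f"{with_interview:.0%}")  # -> 99%
```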
