Prosecution Insights
Last updated: April 19, 2026
Application No. 17/795,393

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

Final Rejection — §102, §103
Filed
Jul 26, 2022
Examiner
VAZ, JANICE EZVI
Art Unit
2667
Tech Center
2600 — Communications
Assignee
Sony Group Corporation
OA Round
4 (Final)
77%
Grant Probability
Favorable
5-6
OA Rounds
3y 1m
To Grant
99%
With Interview

Examiner Intelligence

Grants 77% — above average
77%
Career Allow Rate
48 granted / 62 resolved
+15.4% vs TC avg
Strong +28% interview lift
+27.5%
Interview Lift
resolved cases with interview
Typical timeline
3y 1m
Avg Prosecution
21 currently pending
Career history
83
Total Applications
across all art units

Statute-Specific Performance

§101: 9.2% (−30.8% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§102: 36.5% (−3.5% vs TC avg)
§112: 8.5% (−31.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 62 resolved cases

Office Action

Grounds: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This is in response to Applicant's Arguments/Remarks filed on September 30th, 2025, which have been entered and made of record.

Response to Arguments

Rejections - 35 USC § 112

Applicant's arguments filed September 30th, 2025 regarding the rejection of claim(s) 1-11 under 35 USC § 112(b) have been fully considered and are persuasive. Therefore, the rejection under 35 USC § 112(b) is withdrawn.

Claim Rejections - 35 USC § 102/103

Applicant's arguments filed September 30th, 2025 have been fully considered, but they are not persuasive. With respect to the claims as amended, Applicant argues that:

"While the teachings of Yoo and Ito may share broad similarities with the previously recited subject matter, the cited combination of references simply cannot teach or suggest to generate a trimmed facial image for authentication by trimming an input facial image in a trimming facial range, along with the trimmed facial image for authentication being generated at a resolution determined based on an amount of calculation for the trimmed facial image" (Remarks, page 4).

The Examiner respectfully disagrees with Applicant's premise and conclusion. The claim language recites, "the trimmed facial image for authentication is generated at the determined resolution to have a constant amount of calculation for the trimmed facial image". In at least Table 2 and [0125], Yoo describes extracting specific patches of a face when an occluded region is present, including zoomed-in views of certain areas of the face, and extracting feature values of the patches for downstream facial verification. However, Yoo does not explicitly teach setting a specific resolution for these patches.
Ito teaches extracting local regions of the face for facial recognition where, from each local region, a feature vector is extracted and registered/compared to known feature vectors for authentication. Ito explicitly teaches that the local regions can have a predetermined size/resolution in at least [0072]. Having a predetermined size for the local region or patch would read on the limitation of generating an image at the determined resolution to have a "constant amount of calculation", as the feature vector extraction process downstream of the image patch/local region extraction would operate on a fixed size.

Without importing limitations from the specification into the claims, but for mere understanding, this appears analogous to Applicant's specification describing:

[0010] The resolution may be, for each of the combinations, set so that a facial image for authentication to be trimmed in the trimming facial range is generated with a constant amount of calculation.

[0011] The information processing apparatus may further include a feature amount extraction unit that extracts a feature amount of the facial image for authentication, in which the amount of calculation may be calculated using the total number of pixels of the trimming facial range or the number of multiply-accumulate operations at a time of feature amount extraction by the feature amount extraction unit.

[0012] a feature amount extraction unit that extracts a feature amount of the facial image for authentication; and a degree-of-similarity calculation unit that calculates, on the basis of the feature amount of the facial image for authentication that is extracted by the feature amount extraction unit and a feature amount of a facial image for registration that is prepared in advance, a degree of similarity of the facial image for authentication and the facial image for registration.
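The fixed-size rationale above — trimming an arbitrary facial range, then resampling it to a predetermined resolution so every patch costs the same downstream — can be illustrated with a minimal sketch. This is not code from the application or either reference; the function name, the (32, 32) target size, and the nearest-neighbor resampling are illustrative assumptions.

```python
import numpy as np

def trim_to_fixed_resolution(image: np.ndarray, box: tuple,
                             size: tuple = (32, 32)) -> np.ndarray:
    """Crop box = (top, left, height, width) from image, then resample
    to the predetermined `size` (nearest-neighbor), so every trimmed
    patch has the same pixel count for downstream feature extraction.
    Hypothetical helper for illustration only."""
    top, left, h, w = box
    patch = image[top:top + h, left:left + w]
    # Nearest-neighbor index maps from the fixed output grid back
    # into the variable-sized crop.
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return patch[np.ix_(rows, cols)]

face = np.arange(100 * 100).reshape(100, 100)
eye_patch  = trim_to_fixed_resolution(face, (10, 10, 20, 40))  # wide eye strip
nose_patch = trim_to_fixed_resolution(face, (40, 35, 30, 30))  # square nose area
# Different trimming ranges, identical output resolution:
assert eye_patch.shape == nose_patch.shape == (32, 32)
```

Because both crops land on the same output grid, any fixed projection applied afterward performs an identical number of operations regardless of which facial range was trimmed.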
Further, the amended limitation that the amount of calculation is calculated using a number of multiply-accumulate operations at a time of feature amount extraction is also taught by Ito in at least [0053]-[0054], describing a feature vector calculation for each local region, where the feature vector is calculated through a projection operation which may involve principal component analysis. As a result, there are insubstantial differences between the prior art elements/teachings and the corresponding elements/concepts disclosed in Applicant's filed Specification/PG Pub. Further, the breadth of the claims permits the teachings of the prior art references Yoo and Ito to continue to read upon the claim language as currently stated, because Applicant fails to explicitly define and describe subject matter in a way that prohibits the teachings of Yoo and Ito from reading upon the claim language. Thus, the prior art of record does meet the limitations of the claims as set forth in the rejection below.

Applicant's other arguments/remarks with respect to the current claims have been fully considered and given the appropriate weight, and are believed to have been fully addressed in the Examiner's response above; they are therefore not persuasive because they are fully met by the prior art of record as expressed in the rejection below.

Status of Claims

Claims 1-11 are pending. Claim(s) 1, 6, 9, and 10 were amended. No claims were canceled. No new claims were added. Claims 1-11 are considered below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-2 and 5-11 are rejected under 35 U.S.C. 103 as being unpatentable over Yoo (US-20180373924-A1) in view of Ito (US-20110091113-A1).

Regarding Claim 1, representative of Claims 9 and 10, Yoo teaches an information processing apparatus, comprising: circuitry configured to determine, based on an occluded region of a face in an input facial image for authentication, a trimming facial range from the input facial image for authentication and a resolution of a trimmed facial image for authentication ([0123]: FIG. 7 illustrates an example of configuring an image patch set based on whether an occlusion region is present. [0125]: When it is determined that the occlusion region is present in the face area 720, an image patch set 750 is configured based on a corresponding predefined set composition, predefined for such a situation of the determined full face with mask occlusion. Examiner interpreting the predefined set composition to describe an area of focus (i.e. eye, nose, mouth) as analogous to a trimming facial range, and to describe whether an area will be zoomed in (reduced resolution); see Table 2 including composition descriptions), and generate the trimmed facial image for authentication by trimming the input facial image in the trimming facial range ([0129]: each of the partial images 815 may be generated by cropping and normalizing a predetermined area in the image 810. [0125]: an image patch set 750 is configured based on a corresponding predefined set composition… For example, when it is determined that the mask 740 is present, the facial verification apparatus may generate the image patch set by generating a zoom-in image patch 752 of an eye area, a zoom-in image patch 754 of an eye and nose area, and an image patch 756 corresponding to a full face; see Table 2 describing compositions for image patch sets, including description of what facial area the image patch should consist of, i.e. nose, and whether it should be a zoom-in/reduced resolution), wherein the trimmed facial image for authentication is generated at the determined resolution ([0125] and Table 2, as cited above).

Yoo does not explicitly teach generating a trimmed facial image at the determined resolution to have a constant amount of calculation for the trimmed facial image, nor that the amount of calculation is calculated using a number of multiply-accumulate operations at a time of feature amount extraction with respect to a feature amount of each facial image for authentication.

Ito teaches generating a trimmed facial image at the determined resolution to have a constant amount of calculation for the trimmed facial range ([0010]: a face recognition method in which the position of a local region and the extraction size are set with reference to detected feature points. Extracted rectangular local regions are normalized to a normal size. [0072]: a local region is calculated based on a plurality of feature points in each face, and the size is set to a predetermined size. Examiner notes a predetermined size ensures constant calculation), and wherein the amount of calculation is calculated using a number of multiply-accumulate operations at a time of feature amount extraction with respect to a feature amount of each facial image for authentication (see Fig. 2, feature extraction downstream of local region extraction. [0053]: A feature vector calculation unit 207 receives an input of each local region image extracted by the local region extraction unit 204, lines up the pixels in a predetermined order for each region, and performs vectorization. Then, the feature vector calculation unit 207 calculates a feature vector by performing a projection operation through a matrix multiplication with the input vector. [0054]: A projection matrix is usually a dimensionally reduced projection matrix that projects an input vector onto a lower-dimensional sub-space…by using a technique such as principal component analysis (PCA). Examiner notes a local region that was set to a predetermined size would ensure constant calculation for feature vector extraction via projection through PCA, which involves multiply-accumulate operations).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to have modified Yoo to include the teachings of Ito by including a predetermined size/resolution determined to have a constant amount of calculation for a trimmed facial image. Doing so would improve the accuracy of feature extraction and comparison for authentication by ensuring the sizes of the images prior to feature extraction are the same.

Regarding Claim 2, the Yoo and Ito combination teaches the information processing apparatus according to claim 1. In addition, Yoo teaches wherein the circuitry determines the trimming facial range and the resolution to be used for generating the trimmed facial image for authentication from a plurality of combinations of trimming facial ranges to be trimmed and corresponding resolutions ([0014]: generating the plurality of image patches from among a select one, as the predefined first set composition, of a plurality of first set compositions, where the selected one of the plurality of the first set compositions may be selected from among the plurality of first set compositions dependent on a determination of an occlusion type, from among different occlusion types. [0125]: For example, when it is determined that the mask 740 is present, the facial verification apparatus may generate the image patch set by generating a zoom-in image patch 752 of an eye area, a zoom-in image patch 754 of an eye and nose area), the plurality of combinations being determined in advance ([0011]: generating the plurality of image patches based on a set composition, among set predefined compositions for image patch sets).
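The multiply-accumulate reasoning in the Claim 1 mapping — a flattened patch projected through a fixed matrix costs out_dim × (pixels) MACs, so a predetermined patch size yields a constant count — can be made concrete. This is an illustrative sketch, not code from Ito; the function name and the 32×32/64-dimension figures are assumptions.

```python
def projection_macs(patch_h: int, patch_w: int, out_dim: int) -> int:
    """Multiply-accumulate count for projecting a flattened patch of
    length patch_h * patch_w through an (out_dim x patch_h*patch_w)
    matrix, as in a PCA-style dimensionality-reducing projection."""
    return out_dim * patch_h * patch_w

# With a predetermined patch size, the MAC count is the same no matter
# which facial range was trimmed:
assert projection_macs(32, 32, 64) == 65536

# Without normalization to a fixed size, differently sized trimming
# ranges would cost different amounts of calculation:
assert projection_macs(20, 40, 64) == 51200   # wide eye strip
assert projection_macs(30, 30, 64) == 57600   # square nose area
```

This is the arithmetic behind the claim language: fixing the resolution fixes the input-vector length, which fixes the multiply-accumulate count of the downstream projection.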
Regarding Claim 5, the Yoo and Ito combination teaches the information processing apparatus according to claim 2. In addition, Yoo teaches wherein the resolution is, for each of the combinations, set so that each trimmed facial image for authentication to be trimmed in the trimming facial range is generated with the constant amount of calculation ([0084]: when the detection position of the face area corresponds to a predefined first area, the facial verification apparatus generates the image patches based on a first set composition corresponding to the predefined first area. [0125]: an image patch set 750 is configured based on a corresponding predefined set composition… For example, when it is determined that the mask 740 is present, the facial verification apparatus may generate the image patch set by generating a zoom-in image patch. Examiner interpreting that when a patch set is generated, there will be a constant calculation based on the predefined composition, the composition inclusive of a resolution change/zoom-in). In addition, Ito teaches the resolution is set so that each trimmed facial image for authentication to be trimmed in the trimming facial range is generated with the constant amount of calculation ([0010]: a face recognition method in which the position of a local region and the extraction size are set with reference to detected feature points. Extracted rectangular local regions are normalized to a normal size. [0072]: a local region is calculated based on a plurality of feature points in each face, and the size is set to a predetermined size. Examiner notes a predetermined size ensures constant calculation).

Regarding Claim 6, the Yoo and Ito combination teaches the information processing apparatus according to claim 5. In addition, Yoo teaches further comprising: wherein the circuitry is further configured to extract a feature amount of each facial image for authentication ([0129]: a feature extractor is used to extract a feature value from the image patches). However, Yoo does not explicitly teach the remaining limitations of Claim 6. Ito teaches wherein the amount of calculation is calculated further using a total number of pixels of the trimming facial range ([0010]: a face recognition method in which the position of a local region and the extraction size are set with reference to detected feature points. [0074]: a circumscribed rectangle of a region that will be a normal state local region. [0125]: the extraction size is set to the same size as the circumscribed rectangle).

Regarding Claim 7, the Yoo and Ito combination teaches the information processing apparatus according to claim 2. In addition, Yoo teaches further comprising: wherein the circuitry is further configured to extract a feature amount of the trimmed facial image for authentication ([0129]: a feature extractor is used to extract a feature value from the image patches); and calculate, based on the extracted feature amount of the trimmed facial image for authentication and a feature amount of a facial image for registration that is prepared in advance ([0130]: feature values may then be extracted based on each image patch set and corresponding reference image patch sets by a feature extractor), a degree of similarity of the trimmed facial image for authentication and the facial image for registration ([0138]: determines whether a facial verification is successful based on a comparison result between the extracted feature value and a registered feature value).

Regarding Claim 8, the Yoo and Ito combination teaches the information processing apparatus according to claim 7.
In addition, Yoo teaches wherein the feature amount of the facial image for registration is extracted in each of a plurality of facial images for registration generated in accordance with the plurality of combinations using an input facial image for registration that is prepared in advance ([0125]: When it is determined that the occlusion region is present in the face area 720, an image patch set 750 is configured based on a corresponding predefined set composition. [0130]: feature values may then be extracted based on each image patch set).

Regarding Claim 11, the Yoo and Ito combination teaches the information processing apparatus according to Claim 1. In addition, Ito teaches wherein the amount of calculation is calculated using a total number of pixels of the trimming facial range ([0074]: a circumscribed rectangle of a region that will be a normal state local region. [0125]: the extraction size is set to the same size as the circumscribed rectangle).

Claim(s) 3 is rejected under 35 U.S.C. 103 as being unpatentable over Yoo (US 20180373924 A1) and Ito (US 20110091113 A1) in view of Hayasaka (US 20160110586 A1).

Regarding Claim 3, the Yoo and Ito combination teaches the information processing apparatus according to claim 2. In addition, Yoo teaches further comprising: wherein the circuitry is further configured to detect a plurality of facial part points from the input facial image for authentication ([0124]: the facial verification apparatus detects for facial landmarks; a landmark 730 may be detected in the face area, and the facial verification apparatus may then determine whether an occlusion is present, i.e., that the mask 740 is present, based on a detection result of the facial landmarks); and detect ([0124]: determine whether an occlusion is present, i.e., that the mask 740 is present, based on a detection result of the facial landmarks), wherein an assessment value is associated with each of the plurality of combinations ([0073]: composition or configuration of the multiple image patches may be determined based on a determined condition, for example…whether an occlusion is present. Examiner interpreting the assessment value to be whether an occlusion is present or not), and wherein the circuitry is further configured to select, from the plurality of combinations, a combination the trimming facial range of which does not overlap the occlusion part points ([0125]: For example, when it is determined that the mask 740 is present, the facial verification apparatus may generate the image patch set by generating a zoom-in image patch 752 of an eye area, a zoom-in image patch 754 of an eye and nose area) and the assessment value of which is highest ([0030]: In the generating of the plurality of image patches based on the predefined first set composition, the predefined first set composition may be a selected one of a plurality of first set compositions, where the selected one…dependent on a determination of an occlusion type, from among different occlusion types, of the occlusion region. Examiner interpreting the combination with the highest assessment value to be the patch set based on the composition with the strongest association with the occlusion type detected when it is determined that an occlusion is present), and determine the selected combination as a combination to be used for generating the trimmed facial image for authentication ([0125]: When it is determined that the occlusion region is present in the face area 720, an image patch set 750 is configured based on a corresponding predefined set composition, predefined for such a situation).

Although Yoo teaches occlusion region detection, neither Yoo nor Ito explicitly teaches to calculate a reliability score for each of the facial part points, and detect, from the plurality of facial part points based on the reliability, occlusion part points associated with the occluded region. Hayasaka teaches to calculate a reliability score for each of the facial part points, and detect, from the plurality of facial part points based on the reliability, occlusion part points associated with the occluded region ([0052]: reliability calculation unit 24 may calculate the reliability on a basis of the comparison result for each of the small regions in the facial image. The reliability may represent the possibility of whether or not a certain small region is an occluded region).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to have modified the Yoo and Ito combination by substituting the teachings of Hayasaka. Both Yoo and Hayasaka are related to the field of facial authentication, and both involve steps of detecting an occluded region. Substituting Hayasaka's teachings involving calculating a reliability score would provide the predictable result of detecting an occluded region in a facial image.

Claim(s) 4 is rejected under 35 U.S.C. 103 as being unpatentable over Yoo (US 20180373924 A1) and Ito (US 20110091113 A1) in view of Araki (US 20190228249 A1).

Regarding Claim 4, the Yoo and Ito combination teaches the information processing apparatus according to claim 2. In addition, Yoo teaches further comprising: wherein the circuitry is further configured to detect ([0124]: determine whether an occlusion is present, i.e., that the mask 740 is present, based on a detection result of the facial landmarks), wherein an assessment value is associated with each of the plurality of combinations ([0073]: composition or configuration of the multiple image patches may be determined based on a determined condition, for example…whether an occlusion is present. Examiner interpreting the assessment value to be whether an occlusion is present or not), wherein the determination circuitry is further configured to select, from the plurality of combinations, a combination the trimming facial range of which does not overlap the occluded pixels ([0125]: For example, when it is determined that the mask 740 is present, the facial verification apparatus may generate the image patch set by generating a zoom-in image patch 752 of an eye area, a zoom-in image patch 754 of an eye and nose area) and the assessment value of which is highest ([0030]: In the generating of the plurality of image patches based on the predefined first set composition, the predefined first set composition may be a selected one of a plurality of first set compositions, where the selected one…dependent on a determination of an occlusion type, from among different occlusion types, of the occlusion region. Examiner interpreting the combination with the highest assessment value to be the patch set based on the composition with the strongest association with the occlusion type detected when it is determined that an occlusion is present), and determine the selected combination as a combination to be used for generating the trimmed facial image for authentication ([0125]: When it is determined that the occlusion region is present in the face area 720, an image patch set 750 is configured based on a corresponding predefined set composition, predefined for such a situation).

Although Yoo teaches occlusion region detection, neither Yoo nor Ito explicitly teaches circuitry configured to generate each facial image with a score indicating a degree of validity in facial identification, and detect, based on pixel information that constitutes each facial image with the score, occluded pixels associated with the occluded region. Araki teaches circuitry configured to generate each facial image with a score indicating a degree of validity in facial identification ([0060]: FIG. 7 illustrates one example of score data generated when a covered face (specifically, a face that is the same as the face for which the score data are computed in FIG. 6, with the left side of the face covered) is included in an image; see Fig. 7. [0105]: Some or all of the components of the information processing devices 110 and 120 or the image processing device 210 may be achieved by … a processor); and detect, based on pixel information that constitutes each facial image with the score, occluded pixels associated with the occluded region ([0061]: N region herein refers to a region composed of continuous pixels whose scores (after smoothing processing) are less than or equal to a first threshold (e.g. "−0.1"). In other words, the N region can also be said to be a region which does not have a feature likely to be of a face).
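The selection logic the Examiner maps across the references — score facial part points for reliability, treat low-scoring points as occluded, then pick the highest-assessed trimming combination whose range avoids them — can be sketched as follows. This is an illustrative reconstruction, not code from Yoo, Ito, Hayasaka, or Araki; the function name, the 0.5 threshold, and all sample values are assumptions.

```python
def select_combination(reliability, combinations, threshold=0.5):
    """Pick the highest-scoring trimming combination whose facial range
    contains no occluded part points.
    reliability[i] scores part point i; points below `threshold` are
    treated as occluded (hypothetical scheme, for illustration).
    Each combination is (assessment_value, frozenset_of_point_indices)."""
    occluded = {i for i, r in enumerate(reliability) if r < threshold}
    viable = [c for c in combinations if not (c[1] & occluded)]
    return max(viable, key=lambda c: c[0]) if viable else None

# Part points: 0 = left eye, 1 = right eye, 2 = nose, 3 = mouth.
rel = [0.9, 0.8, 0.7, 0.2]                 # mouth occluded, e.g. by a mask
combos = [
    (0.95, frozenset({0, 1, 2, 3})),       # full face: overlaps the mouth
    (0.80, frozenset({0, 1, 2})),          # eye-and-nose zoom-in
    (0.60, frozenset({0, 1})),             # eyes-only zoom-in
]
# The full-face range is disqualified; the best remaining range wins.
assert select_combination(rel, combos) == (0.80, frozenset({0, 1, 2}))
```

The sketch mirrors the mapping: reliability scoring stands in for Hayasaka's per-region reliability (or Araki's pixel score map), and the combination table stands in for Yoo's predefined set compositions.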
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to have modified the Yoo and Ito combination by substituting the teachings of Araki. Both Yoo and Araki teach detecting an occluded region of a face. Substituting Araki's teachings involving generating a facial image with a score indicating a degree of validity would provide the predictable result of detecting an occluded region in a facial image.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JANICE VAZ, whose telephone number is (703) 756-4685. The examiner can normally be reached Monday-Friday, 9:00 am-5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JANICE E. VAZ/
Examiner, Art Unit 2667

/MATTHEW C BELLA/
Supervisory Patent Examiner, Art Unit 2667

Prosecution Timeline

Jul 26, 2022
Application Filed
Nov 01, 2024
Non-Final Rejection — §102, §103
Jan 27, 2025
Response Filed
Apr 02, 2025
Final Rejection — §102, §103
May 08, 2025
Response after Non-Final Action
Jun 13, 2025
Request for Continued Examination
Jun 16, 2025
Response after Non-Final Action
Jul 25, 2025
Non-Final Rejection — §102, §103
Sep 30, 2025
Response Filed
Jan 08, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602831
METHOD AND SYSTEM FOR ENHANCING IMAGES USING MACHINE LEARNING
2y 5m to grant Granted Apr 14, 2026
Patent 12602811
IMAGE PROCESSING SYSTEM
2y 5m to grant Granted Apr 14, 2026
Patent 12602935
DRIVING ASSISTANCE DEVICE AND DRIVING ASSISTANCE METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12591847
SYSTEMS AND METHODS OF TRANSFORMING IMAGE DATA TO PRODUCT STORAGE FACILITY LOCATION INFORMATION
2y 5m to grant Granted Mar 31, 2026
Patent 12591977
AUTOMATICALLY AUTHENTICATING AND INPUTTING OBJECT INFORMATION
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+27.5%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 62 resolved cases by this examiner. Grant probability derived from career allow rate.
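The projection figures above follow from simple arithmetic on the examiner's career data. A minimal check, assuming (as the footnote states) that grant probability is the career allow rate, and assuming the +27.5% interview lift is applied as a relative multiplier (an interpretation, not stated on the page):

```python
granted, resolved = 48, 62          # from "48 granted / 62 resolved"
career_allow = granted / resolved
assert round(career_allow * 100) == 77        # "77% Grant Probability"

# Assumption: the +27.5% interview lift multiplies the base rate.
with_interview = career_allow * 1.275
assert round(with_interview * 100) == 99      # "99% With Interview"
```

Under this reading, 48/62 ≈ 77.4%, and 77.4% × 1.275 ≈ 98.7%, which rounds to the displayed 99%.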
