Prosecution Insights
Last updated: April 19, 2026
Application No. 16/976,389

POSE INVARIANT FACE RECOGNITION

Final Rejection §103
Filed: Aug 27, 2020
Examiner: DULANEY, KATHLEEN YUAN
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Carnegie Mellon University
OA Round: 8 (Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 9-10
Time to Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% — above average (504 granted / 653 resolved; +15.2% vs TC avg)
Interview Lift: +24.0% among resolved cases with interview (a strong lift)
Typical Timeline: 3y 2m average prosecution (32 currently pending)
Career History: 685 total applications across all art units
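The headline figures above are simple ratios over the examiner's career counts. A minimal sketch recomputing them from the raw numbers shown (504 granted of 653 resolved, +24.0 point interview lift); capping the with-interview figure at 99% is an assumption about how the page combines base rate and lift:

```python
# Recompute the examiner summary statistics from the counts shown above.
# Variable names are illustrative, not from any analytics API.
granted = 504
resolved = 653

# Career allow rate: granted / resolved, displayed as 77%.
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~77.2%

# Interview lift is quoted as +24.0 percentage points; adding it to the
# base rate and capping at 99% (as displayed) is an assumption.
interview_lift = 0.24
with_interview = min(allow_rate + interview_lift, 0.99)
print(f"With interview: {with_interview:.0%}")  # 99%
```

The cap matters here: 77.2% + 24.0 points would exceed 100%, so the displayed 99% only follows if some ceiling like this is applied.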

Statute-Specific Performance

§101: 21.2% (-18.8% vs TC avg)
§103: 33.1% (-6.9% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 26.4% (-13.6% vs TC avg)
Deltas are measured against a Tech Center average estimate, based on career data from 653 resolved cases.
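Because each statute row reports both the examiner's rate and its delta against the Tech Center average, the implied TC average can be backed out directly. A quick sketch (dictionary names are illustrative):

```python
# Back out the implied Tech Center average for each statute from the
# examiner rate and the quoted "vs TC avg" delta. Percentages are the
# values displayed above; delta = examiner_rate - tc_avg.
rates = {"101": 21.2, "103": 33.1, "102": 16.3, "112": 26.4}
deltas = {"101": -18.8, "103": -6.9, "102": -23.7, "112": -13.6}

for statute, rate in rates.items():
    tc_avg = rate - deltas[statute]
    print(f"§{statute}: examiner {rate}% vs TC avg {tc_avg:.1f}%")
```

Notably, every row implies the same 40.0% Tech Center average, suggesting a single TC-wide estimate rather than a per-statute baseline.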

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The response received on 12/10/2025 has been placed in the file and was considered by the examiner. An action on the merits follows.

Response to Amendment

The amendments filed on December 10, 2025 have been fully considered. A response to these amendments is provided below.

Summary of Amendments/Arguments and Examiner's Response: The applicant has amended the claims to state that the features that are compared to the one or more features are compared only from half of the face of the facial images. On pages 5 and 6 of the remarks, the applicant notes that the amendments claim that features extracted from only one side of the face are used for comparison with the features extracted from the generated frontal half-face image, as consistent with the examiner's commentary in the previous office action. It is noted that in the previous office action, the examiner stated "If the applicant wishes only to be extracted from half of the face, the applicant must explicitly state so in the claim" and "The broadest reasonable interpretation is the one or more features are to be extracted from one side of the face, but does not exclude features also extracted from the other side as well". The amendments do not claim what the applicant argues is consistent with the examiner's previous remarks because, currently, the broadest reasonable interpretation is that the features that are compared are extracted from one side of the face. There is no exclusion of features extracted from the other side of the face, only a requirement that the features that are compared are from one side of the face.

Though the previous reference Hua et al provides such a limitation, a new reference is provided that more clearly illustrates that corresponding features from corresponding portions of the face are compared, due to the applicant's amendment. The rejection follows below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 5 and 12-13 are rejected under 35 U.S.C. 103(a) as being unpatentable over U.S. Patent Application Publication No. 20160086017 (Rodriguez et al) in view of "3D Morphable Models as Spatial Transformer Networks" (Bas et al) and U.S. Patent Application Publication No. 20070036398 (Chen).

Regarding claim 1, Rodriguez et al discloses a method (fig. 1) for performing pose-invariant facial recognition comprising: receiving an off-angle facial image containing a 2D image of a face (fig. 1, "a", page 3, paragraphs 73-74), wherein the face is rotated off-angle from a directly frontal view (page 2, paragraph 71, fig. 3-5); generating a personalized 3D model of the face from the 2D image, i.e. the result of UV mapping "D" of fig. 1 (page 4, paragraph 87); adjusting the personalized 3D model to represent the face from a frontal viewpoint by rectifying the pose for the 2D projection (page 4, paragraph 89); and creating a frontal half-face image of the face from the 3D model, i.e. the pose-rectified 2D image (fig. 1, "E", page 4, paragraphs 89-90, fig. 9), the frontal half-face image comprising a half of the face visible in the off-angle facial image (fig. 9).

Rodriguez et al further discloses using a facial recognition model (page 1, paragraphs 5, 15, page 2, paragraph 20, page 3, paragraph 71, page 4, paragraph 93) by comparing a frontal half-face image (fig. 9, page 4, paragraph 93) generated from a right-facing off-angle facial image (fig. 8, subject facing right, page) with a gallery image (page 4, paragraph 93, page 3, paragraph 71, page 1, paragraph 3).

Rodriguez et al does not disclose expressly generating the personalized 3D model of the face from the 2D image using a 3D spatial transformer network, or, in facial recognition, comparing one or more features extracted from a test image corresponding to the left half of the face (fig. 9 of Rodriguez et al) with one or more features extracted only from a left half of facial images from a gallery, or comparing one or more features extracted only from a frontal half-face image generated from a left-facing off-angle facial image with one or more features extracted from a right half of facial images from a gallery.

Bas et al discloses that the personalized 3D model of the face is generated from the image using a 3D spatial transformer network, a 3DMM-STN (page 895, paragraph 2, page 899, paragraph 2, fig. 6, image 2). Rodriguez et al and Bas et al are combinable because they are from the same field of endeavor, i.e. facial modeling. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use a 3DMM-STN model. The suggestion/motivation for doing so would have been to provide a more robust normalization.

Rodriguez et al (as modified by Bas et al) does not disclose expressly, in facial recognition, comparing one or more features extracted from a test image corresponding to the left half of the face (fig. 9 of Rodriguez et al) with one or more features extracted from a left half of facial images from a gallery, or comparing one or more features extracted only from a frontal half-face image generated from a left-facing off-angle facial image with one or more features extracted only from a right half of facial images from a gallery.

Chen et al discloses, in facial recognition, comparing one or more features extracted from a test image corresponding to the left half of the face (fig. 8 of Rodriguez et al) with one or more features extracted only from a left half of facial images from a gallery (fig. 2, two images are compared with corresponding features from the same area of the face; features extracted only from the left side of the face of fig. 2, items 202, 204, for comparison with the corresponding part in fig. 6, item 618). Rodriguez et al (as modified by Bas et al) and Chen et al are combinable because they are from the same field of endeavor, i.e. facial recognition. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use features of corresponding regions of the face for recognition. The suggestion/motivation for doing so would have been to provide more accurate recognition by comparing like areas. Therefore, it would have been obvious to combine the method of Rodriguez et al with the transformer network of Bas et al and the corresponding features of Chen et al to obtain the invention as specified in claim 1.

Regarding claim 5, Rodriguez et al discloses that the frontal half-face image (fig. 9) is created using a left half of the off-angle facial image (fig. 9) for right-facing poses (fig. 8) and a right half of the off-angle facial image for left-facing poses, because when the face faces the left direction, the right half would be exposed and created into the image of fig. 9.

Regarding claim 12, Rodriguez et al discloses using frontalized half-face images for enrollment (page 3, paragraph 71, page 4, paragraph 93). Chen et al discloses that the facial recognition model is trained on enrolled images and whole face images (fig. 2, 5).

Regarding claim 13, Chen et al discloses that the facial recognition model is trained on whole face images (fig. 2, 5).

Claim 11 is rejected under 35 U.S.C. 103(a) as being unpatentable over Rodriguez et al in view of Bas et al and Chen et al, as applied to claim 1 above, and further in view of U.S. Patent Application Publication No. 20120293635 (Sharma et al).

Regarding claim 11, Rodriguez et al (as modified by Bas et al and Chen et al) discloses all of the claimed elements as set forth above, incorporated herein by reference. Rodriguez et al further discloses that the off-angle facial image is from the camera (fig. 1, item "a", page 3, paragraphs 73-74); estimating a pose of the face, i.e. the pose of the face in fig. 1, step c, or the alignment of the face to each of the datasets (page 2, paragraph 71); and masking non-visible regions of the facial image based on the pose estimate, since the non-visible regions are masked as seen in fig. 9 and carried out in step E of fig. 1.

Rodriguez et al (as modified by Bas et al and Chen et al) does not disclose expressly receiving an estimate of parameters of a camera generating the image, and estimating a pose of the face based on the parameters. Sharma discloses receiving an estimate of parameters of a camera, i.e. calibration parameters (page 1, paragraph 20), or the state variables of the camera (page 3, paragraphs 36-41), generating the image (fig. 1), and estimating a pose of the face based on the parameters, since the parameters define the camera position relative to world coordinates with respect to the face (page 3, paragraph 43), further define the pose based on correct calibration (page 1, paragraph 20), and are defined as aligning the datasets (page 1, paragraph 20).

Rodriguez et al (as modified by Bas et al and Chen et al) and Sharma et al are combinable because they are from the same field of endeavor, i.e. processing facial images. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use camera parameters for estimating pose. The suggestion/motivation for doing so would have been to provide a more robust method by allowing positioning to be understood in a global manner that would allow for the addition of data. Therefore, it would have been obvious to combine Rodriguez et al (as modified by Bas et al and Chen et al) with the camera data of Sharma et al to obtain the invention as specified in claim 11.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHLEEN YUAN DULANEY, whose telephone number is (571) 272-2902. The examiner can normally be reached M1: 9am-5pm, Th1: 9am-1pm, F1: 9am-3pm, M2: 9am-5pm, T2: 9am-5pm, Th2: 9am-5pm, F2: 9am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KATHLEEN Y DULANEY/
Primary Examiner, Art Unit 2666
1/5/2026
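The conclusion above sets a three-month shortened statutory period, extendable under 37 CFR 1.136(a) but never beyond six months from mailing. As a rough sketch of that date arithmetic (ignoring the advisory-action nuance and any weekend/holiday rollover, and using the Jan 11, 2026 date shown for the current final rejection), it might look like:

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Add whole calendar months, clamping to the target month's length."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    return date(year, month, min(d.day, calendar.monthrange(year, month)[1]))

mailed = date(2026, 1, 11)             # mailing date of this final action
shortened = add_months(mailed, 3)      # three-month shortened statutory period
statutory_cap = add_months(mailed, 6)  # absolute six-month statutory limit

print("Reply due without extensions:", shortened)    # 2026-04-11
print("Latest possible reply date:", statutory_cap)  # 2026-07-11
```

This is only an illustration of the stated periods, not docketing advice; actual due dates depend on the rules the conclusion cites.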

Prosecution Timeline

Aug 27, 2020: Application Filed
Jul 31, 2023: Non-Final Rejection (§103)
Oct 11, 2023: Response Filed
Oct 17, 2023: Final Rejection (§103)
Feb 20, 2024: Request for Continued Examination
Mar 05, 2024: Response after Non-Final Action
Mar 30, 2024: Non-Final Rejection (§103)
Jul 03, 2024: Response Filed
Aug 12, 2024: Final Rejection (§103)
Nov 15, 2024: Request for Continued Examination
Nov 19, 2024: Response after Non-Final Action
Dec 16, 2024: Non-Final Rejection (§103)
Mar 17, 2025: Response Filed
Mar 25, 2025: Final Rejection (§103)
Jun 30, 2025: Request for Continued Examination
Jul 01, 2025: Response after Non-Final Action
Sep 08, 2025: Non-Final Rejection (§103)
Dec 10, 2025: Response Filed
Jan 11, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602801: IMAGE PROCESSING CIRCUITRY AND IMAGE PROCESSING METHOD FOR DEPTH ESTIMATION IN A TIME-OF-FLIGHT SYSTEM (2y 5m to grant; granted Apr 14, 2026)
Patent 12602930: METHOD AND SYSTEM FOR CONTINUOUSLY TRACKING HUMANS IN AN AREA (2y 5m to grant; granted Apr 14, 2026)
Patent 12593019: INFORMATION PROCESSING APPARATUS USING PARALLAX IN IMAGES CAPTURED FROM A PLURALITY OF DIRECTIONS, METHOD AND STORAGE MEDIUM (2y 5m to grant; granted Mar 31, 2026)
Patent 12586242: METHOD, SYSTEM, AND COMPUTER PROGRAM FOR RECOGNIZING POSITION AND ATTITUDE OF OBJECT IMAGED BY CAMERA (2y 5m to grant; granted Mar 24, 2026)
Patent 12586165: APPARATUS AND METHOD FOR RECONSTRUCTING IMAGE USING MOTION DEBLURRING (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 9-10
Grant Probability: 77%
With Interview: 99% (+24.0%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 653 resolved cases by this examiner. Grant probability derived from career allow rate.
