Prosecution Insights
Last updated: April 19, 2026
Application No. 18/470,459

IMAGE PROCESSING APPARATUS, AUTHENTICATION SYSTEM, METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Status: Final Rejection (§103)
Filed: Sep 20, 2023
Examiner: LI, RUIPING
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: Canon Kabushiki Kaisha
OA Round: 2 (Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 77% (above average; 722 granted / 933 resolved; +15.4% vs TC avg)
Interview Lift: +18.0% (strong; with vs. without interview, across resolved cases with interview)
Typical Timeline: 2y 10m avg prosecution; 40 currently pending
Career History: 973 total applications across all art units

Statute-Specific Performance

§101: 13.0% (-27.0% vs TC avg)
§103: 41.2% (+1.2% vs TC avg)
§102: 25.9% (-14.1% vs TC avg)
§112: 13.7% (-26.3% vs TC avg)
Deltas are relative to the estimated Tech Center average • Based on career data from 933 resolved cases
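As a consistency check, the per-statute figures above all imply the same Tech Center baseline: subtracting each "vs TC avg" delta from the statute's displayed rate recovers roughly 40.0% in every row. A minimal sketch of that arithmetic (the dictionary layout is illustrative, not the dashboard's actual schema):

```python
# Values copied from the Statute-Specific Performance table:
# (displayed rate %, delta vs Tech Center average %)
statute_stats = {
    "101": (13.0, -27.0),
    "103": (41.2, +1.2),
    "102": (25.9, -14.1),
    "112": (13.7, -26.3),
}

# Implied TC average = displayed rate minus the displayed delta.
implied_tc_avg = {
    statute: round(rate - delta, 1)
    for statute, (rate, delta) in statute_stats.items()
}

print(implied_tc_avg)  # every statute implies the same ~40.0% baseline
```

The four rows agreeing on one baseline suggests the deltas are computed against a single Tech Center average rather than per-statute cohorts.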

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This is in response to the applicant response filed on 01/09/2026. In the applicant’s response, claims 1-4, 6-10, 12-15, 17, and 18 were amended. Accordingly, claims 1-18 are pending and being examined. Claims 1, 17, and 18 are in independent form.

Claim Rejections - 35 USC § 103

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claims 1-8, 11, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Sung et al. (US 2006/0291001, hereinafter “Sung”) in view of Lerdsudwichai et al. (“Tracking multiple people with recovery from partial and total occlusion”, Pattern Recognition, 2005, hereinafter “Lerdsudwichai”).
Regarding claim 1, Sung discloses an image processing apparatus (a method and apparatus for detecting an occluded face and discriminating an illegal user in images; see figs.8-9 and abstract) comprising at least one processor; and at least one memory coupled to the at least one processor, the memory storing instructions (these hardware-related features are inherent in the apparatus of Sung) that, when executed by the processor, cause the processor to: detect first face information of a first person from an image; obtain at least [...] (see “detect facial region 817” of fig.8A and para.62: “In operation 817, a facial region is detected from the k-th frame image [of a person].”; see “is facial region detected? 819” of fig.8A and para.62: “In operation 819, it is determined whether a facial region is detected [in the input frame image].”); and perform face authentication of the first person, when it is determined that the face of the first person is not occluded by part of the first person (if an occluded face is not detected and the thresholds are passed at steps 829, 833, and 837, the current user is authorized as a normal user; see 825->831->829->833->837->839 in figs.8A-8B; see para.61-para.65).

As can be seen, the sole difference between the claimed invention and the apparatus of Sung is that Sung does not explicitly disclose the face of the person in the image being blocked by another person different from that person; in Sung, the face is instead blocked by a hand, sunglasses, or a mask. However, in the same field of endeavor, Lerdsudwichai teaches this feature. Specifically, in the Abstract, Lerdsudwichai states, “we present an algorithm for tracking faces of multiple people even in cases of total occlusion”, demonstrating “the robustness of the algorithm, and its capability to correctly track multiple people even when faces are temporarily occluded by other faces or by other objects in the scene.” In Sec. 2.3.1, paragraph 1, Lerdsudwichai discloses “[a] face can become occluded by another tracked face or by another object. In the first case, where the face is occluded by another tracked face, the occlusion detection is achieved using an occlusion grid. The locations that the objects occupy in the image are recorded into the occlusion grid. This grid is used to determine the locations of the moving objects and their overlap.”

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Lerdsudwichai into the teachings of Sung and track/identify a face of a person occluded by another tracked face, as taught by Lerdsudwichai, in a surveillance video recorded at ATMs, as taught by Sung. Suggestion or motivation for doing so would have been to “correctly track multiple people even when faces are temporarily occluded by other faces or by other objects in the scene” and to “use the color distribution of the face as well as the color distribution of the clothes to identify the correct face after occlusion” as taught by Lerdsudwichai; cf. Abstract and Sec. 2.3.2, paragraph 1. Therefore, the claim is unpatentable over Sung in view of Lerdsudwichai.

Regarding claim 2, the combination of Sung and Lerdsudwichai discloses the image processing apparatus according to claim 1, wherein the instructions, when executed by the processor, further cause the processor to determine a state of the face of the first person, in a case where the face of the first person is occluded by part of the first person or part of the second person in an image captured before the image from which the first person is detected; and perform face authentication of the first person, using a model that is based on the state of the face of the first person (Sung, see 825->831->829->833->835 in figs.8A-8B, where the current user is an illegal user since the face of the user is occluded by the subject himself or herself).
Regarding claim 3, the combination of Sung and Lerdsudwichai discloses the image processing apparatus according to claim 1, wherein, when the processor determines that the face of the first person is occluded by part of the first person or part of the second person, the [...].

Regarding claim 4, the combination of Sung and Lerdsudwichai discloses the image processing apparatus according to claim 2, wherein the instructions, when executed by the processor, further cause the processor to [perform face authentication of the first] person, in a case where the face of the first person is not occluded by part of the first person or part of the second person in an image captured before the image from which the first person is detected (Sung, see 825->827->829->833->837 in figs.8A-8B, where the current user is a normal user since the face of the user is not occluded).

Regarding claim 5, the combination of Sung and Lerdsudwichai discloses the image processing apparatus according to claim 2, wherein the model is a model for extracting a feature of the face of the first person from the image from which the first person is detected (Sung, see “detect facial region 817” of fig.8A).

Regarding claim 6, the combination of Sung and Lerdsudwichai discloses the image processing apparatus according to claim 1, wherein the instructions, when executed by the processor, further cause the processor to associate face information and posture information of a same person in the image (Sung, see “detect facial region 817” and “detect occluded face 823” of fig.8A).
Regarding claim 7, the combination of Sung and Lerdsudwichai discloses the image processing apparatus according to claim 1, wherein the first posture information includes positions of joint points of the first person and the second posture information includes positions of joint points of the second person, and wherein the instructions, when executed by the processor, further cause the processor to determine whether the face of the first person is occluded by part of the first person or part of the second person, based on the first face information and a reliability of one of the joint point of the first person in the first posture information and the joint point of the second person in the second posture information (Sung, see para.6: “In this situation, the images of the eyes and the mouth are not accurately detected when a user blocks a portion of the face with a hand,”).

Regarding claim 8, the combination of Sung and Lerdsudwichai discloses the image processing apparatus according to claim 1, wherein the instructions, when executed by the processor, further cause the processor to determine whether the face of the first person is occluded by part of the first person or part of the second person, based on a distance between a position of the face of the first person and a position of one of a joint point of the first person in the first posture information and a joint point of the second person in the second posture information (Sung, see para.6: “In this situation, the images of the eyes and the mouth are not accurately detected when a user blocks a portion of the face with a hand,”).
Regarding claim 11, the combination of Sung and Lerdsudwichai discloses the image processing apparatus according to claim 2, wherein the state of the face of the first person is at least one of a state in which the face of the first person is wearing a face mask, a state in which the face of the first person is wearing sunglasses, and a state in which the face of the first person is wearing lipstick (Sung, see para.56: “the occluded facial image class includes facial images in which higher regions or lower regions are unidentifiable because the faces are partially occluded by sunglasses, masks, or scarves.”).

Regarding claims 16-18, each parallels claim 1 and is an inherent variation of claim 1; each is therefore interpreted and rejected for the reasons set forth in the rejection of claim 1.

6. Claims 9, 10, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Sung in view of Lerdsudwichai and further in view of Mostafa et al. (CN110555364, hereinafter “Mostafa”). A machine-translated English version (called CN110555364-Eng) of document CN110555364 was provided by the examiner with the previous office action.

Regarding claim 9, the combination of Sung and Lerdsudwichai does not explicitly disclose the claimed features. However, in the same field of endeavor, Mostafa teaches a comparing unit which is configured to compare a feature of the face of the first person with a feature of a face of a registered first person (see CN110555364-Eng, pg.2, lines 33-44: “[w]hen the user attempts to use the facial recognition authentication to obtain access to the device, it can compare the characteristic of one or more captured user images with each registered configuration file on the device. The user may obtain access to the device by having a matching score exceeding at least one of an unlock threshold value for the face recognition authentication process and a registration configuration file.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Mostafa into the teachings of the combination of Sung and Lerdsudwichai and calculate a matching score by comparing the characteristic of one or more captured user images with each registered configuration file on the device, as taught by Mostafa, for people identification and people tracking. Suggestion or motivation for doing so would have been to “adapt to the change of the facial feature of the authorized user with time” as taught by Mostafa; cf. page 2, lines 1-6. Therefore, the claim is unpatentable.

Regarding claim 10, the combination of Sung, Lerdsudwichai, and Mostafa discloses the image processing apparatus according to claim 9, further to compare whether the face of the first person is the same as the face of the registered first person, based on a similarity between the feature of the face of the first person and the feature of the face of the registered first person (Mostafa, ibid.).

Regarding claim 12, the combination of Sung, Lerdsudwichai, and Mostafa discloses the image processing apparatus according to claim 9, further to display a comparison result of the comparing unit with a method that depends on the comparison result (this feature is obvious and straightforward for one of ordinary skill in the art based on the teachings of the combination of Sung, Lerdsudwichai, and Mostafa).

7. Claims 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Sung in view of Lerdsudwichai and further in view of Xu et al. (WO 2020015477, hereinafter “Xu”). A machine-translated English version (called WO2020015477-Eng) of document WO2020015477 is provided by the examiner with this office action.
Regarding claim 13, the combination of Sung and Lerdsudwichai does not explicitly disclose setting the first person as a marked person, in a case where it is determined that a state in which the face of the first person is occluded by part of the first person or part of the second person is continuous. However, in the same field of endeavor, Xu teaches: “At 310, the user is prompted on the terminal device 1 to enter the user's mobile phone number. After the user inputs a mobile phone number on the terminal device 1, the terminal device 1 sends the user's mobile phone number to the identification terminal device.” See fig.3 of WO2020015477, and WO2020015477-Eng, pg.11, lines 16-19. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Xu into the teachings of the combination of Sung and Lerdsudwichai and set and notify a user, as taught by Xu, for user identification. Suggestion or motivation for doing so would have been to identify a user in a situation including “at least one of glasses detection, shielding detection and face quality evaluation detection” as taught by Xu; see Abstract. Therefore, the claim is unpatentable.

Regarding claim 14, the combination of Sung, Lerdsudwichai, and Xu discloses the image processing apparatus according to claim 13, further to notify a user terminal that the first person is set as a marked person (Xu, see 361, 371, 381, and 391 of fig.3; see WO2020015477-Eng, pg.11, line 41—pg.12, line 22).

Regarding claim 15, the combination of Sung, Lerdsudwichai, and Xu discloses the image processing apparatus according to claim 13, wherein the authentication unit changes a face authentication method of the first person set for the marked person (Xu, ibid.).

Response to Arguments

8. Applicant’s arguments with respect to claim 1, filed on 01/09/2026, have been fully considered but they are not persuasive.
On page 8 of applicant’s response, paragraph 3, applicant argues:

Lerdsudwichai discusses tracking multiple faces and detecting occlusion using an "occlusion grid," where overlap between tracked faces (or objects) indicates occlusion. (Lerdsudwichai, Sec. 2.3.1.) However, Lerdsudwichai's occlusion detection is not based on obtaining posture information of a person. Instead, Lerdsudwichai uses tracked face/object location overlap in an occlusion grid and similarity comparisons between faces across images to infer occlusion... Although Lerdsudwichai includes sample images in Section 3.1 where a face is occluded by a person's hand, those images are provided as verification examples of the occlusion-detection method. They do not disclose detecting posture information of the person (e.g., hand posture or skeletal pose) or using posture information to determine face occlusion. (Emphasis added by the examiner.)

First, the examiner respectfully points out that “the posture information of a person” recited in the claim, under its broadest reasonable interpretation (BRI), includes the facial information of a person. Second, as recognized by the applicant, Lerdsudwichai discloses “an algorithm for tracking/detecting faces of multiple people even in cases of total occlusion”, demonstrating “the robustness of the algorithm, and its capability to correctly track multiple people even when faces are temporarily occluded by other faces or by other objects in the scene”. In addition, Lerdsudwichai, Sec. 2.3.2, paragraph 1, discloses “our system uses the color distribution of the face as well as the color distribution of the clothes to identify the correct face after occlusion.” It is apparent that the face tracking algorithm of Lerdsudwichai can identify each face of the multiple people in an image even when faces are temporarily occluded by other faces or by other objects in the scene.
Therefore, the face tracking algorithm of Lerdsudwichai is based on the obtained facial information (or, the posture information) of a person. The argument is unpersuasive.

Conclusion

9. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RUIPING LI whose telephone number is (571)270-3376. The examiner can normally be reached 8:30am-5:30pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, HENOK SHIFERAW, can be reached at (571)272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. See https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center, and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /RUIPING LI/Primary Examiner, Ph.D., Art Unit 2676

Prosecution Timeline

Sep 20, 2023: Application Filed
Oct 11, 2025: Non-Final Rejection (§103)
Jan 09, 2026: Response Filed
Jan 29, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602754: DYNAMIC IMAGING AND MOTION ARTIFACT REDUCTION THROUGH DEEP LEARNING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597183: METHOD AND APPARATUS FOR PERFORMING PRIVACY MASKING BY REFLECTING CHARACTERISTIC INFORMATION OF OBJECTS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597289: IMAGE ACCUMULATION APPARATUS, METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586408: METHOD AND APPARATUS FOR CANCELLING ANONYMIZATION FOR AN AREA INCLUDING A TARGET (granted Mar 24, 2026; 2y 5m to grant)
Patent 12573239: SYSTEM AND METHOD FOR LIVENESS VERIFICATION (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 95% (+18.0%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 933 resolved cases by this examiner. Grant probability derived from career allow rate.
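The headline projections are consistent with a simple additive model: the 77% grant probability matches the examiner's career allow rate (722/933), and the 95% with-interview figure is that rate plus the +18.0-point interview lift. A minimal sketch of the apparent arithmetic (an assumption about the tool's method, not a documented formula):

```python
# Reproduce the dashboard's headline figures from the underlying counts.
granted, resolved = 722, 933

base_pct = round(100 * granted / resolved, 1)   # career allow rate in percent
with_interview_pct = round(base_pct + 18.0, 1)  # assumed additive interview lift

# The page displays whole percents: 77% and 95%.
print(round(base_pct), round(with_interview_pct))
```

The close agreement suggests the "With Interview" projection is simply the base rate shifted by the examiner's historical interview lift, rounded to a whole percent.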
