Prosecution Insights
Last updated: April 19, 2026
Application No. 18/029,796

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM

Status: Final Rejection (§103)
Filed: Mar 31, 2023
Examiner: GILLIARD, DELOMIA L
Art Unit: 2661
Tech Center: 2600 (Communications)
Assignee: NEC Corporation
OA Round: 2 (Final)

Grant Probability: 90% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 2m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% (976 granted / 1089 resolved), +27.6% vs Tech Center average. This examiner grants above average.
Interview Lift: +10.2% among resolved cases with interview (a moderate lift of roughly +10%).
Typical Timeline: 2y 2m average prosecution.
Career History: 1101 total applications across all art units; 12 currently pending.
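
The headline figures above can be reproduced from the raw counts. Below is a quick back-of-the-envelope check in Python; the counts and the +10.2% lift come from this report, while the multiplicative model behind the "with interview" figure is an assumption of this note, not a documented formula of the analytics provider.

# Sanity check of the examiner stats shown above.
granted, resolved = 976, 1089
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # -> 89.6%, displayed as 90%

# Assumed model: interview lift applied multiplicatively to the base rate.
interview_lift = 0.102  # +10.2% among resolved cases with interview
with_interview = min(allow_rate * (1 + interview_lift), 1.0)
print(f"With interview: {with_interview:.1%}")  # -> 98.8%, displayed as 99%

pending = 12
print(f"Total applications: {resolved + pending}")  # -> 1101, matches career history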

Statute-Specific Performance

Statute    Examiner Allow Rate    vs TC Average
§101       10.0%                  -30.0%
§102       15.5%                  -24.5%
§103       48.8%                  +8.8%
§112       11.3%                  -28.7%

Tech Center averages are estimates. Based on career data from 1089 resolved cases.
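
One sanity check worth noting: each statute's examiner rate minus its reported delta implies the same Tech Center baseline of 40.0%, which suggests all four deltas were computed against a single TC-wide average. A minimal sketch, using only the values in the table above (the 40% baseline is derived here, not reported by the tool):

# Values transcribed from the table above.
examiner_rate = {"§101": 10.0, "§102": 15.5, "§103": 48.8, "§112": 11.3}
delta_vs_tc = {"§101": -30.0, "§102": -24.5, "§103": 8.8, "§112": -28.7}

for statute, rate in examiner_rate.items():
    implied_tc_avg = rate - delta_vs_tc[statute]
    print(f"{statute}: implied TC average = {implied_tc_avg:.1f}%")  # 40.0% for all four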

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Claims 1-4 and 7-8 are currently amended. Claims 1-8 are pending.

Response to Arguments

Applicant's arguments with respect to claims 1, 7, and 8 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Examiner relies on US 2020/0211184 A1 to Fukuda et al., hereinafter "Fukuda". Accordingly, THIS ACTION IS MADE FINAL.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1 and 5-8 are rejected under 35 U.S.C. 103 as being unpatentable over US 2014/0247374 A1 to Murakami et al., hereinafter "Murakami", in view of US 2020/0211184 A1 to Fukuda et al., hereinafter "Fukuda".

Claim 1. An image processing device comprising: a memory configured to store instructions; and a processor configured to execute the instructions to:

detect a face region of a person appearing in an image;

Murakami [0052]: "The face detection section 32 performs a face detection process which detects the face region of a living body from the picked-up image. The face collation section 33 performs a face collation process which collates the face region of the current frame with a face region in the face region list. In addition, the face collation section 33 performs a face tracking process which tracks the face region of the tracking target, that is, a tracking face region. The body tracking section 34 tracks the body region set by the body region detection section 36. That is, the body tracking section 34 performs a body tracking process. The tracking target determination section 35 determines at least one of the face region and the body region as the tracking target. The body region detection section 36 sets a region having a prescribed position relation with the face region as the body region of the tracking target, that is, as a tracking body region. The feature amount extraction section 37 extracts a feature amount from the tracking body region."

detect a body region of the person appearing in the image;

Murakami [0052], quoted above.

perform face collation processing using image information of the face region;

Murakami [0052], quoted above: the face collation section 33 collates the face region of the current frame with a face region in the face region list.

identify a correspondence relationship between the image information of the face region and image information of the body region when the image information of the face region and the image information of the body region satisfy a predetermined correspondence relationship;

Murakami [0052], quoted above. Murakami [0005]: "The image processing system further includes a tracking determination unit to select at least one of the face and the partial region for tracking based on a predetermined condition (satisfy a predetermined correspondence relationship), and to track the selected at least one of the face and the partial region." Murakami [0018]: "FIG. 10 is an explanatory diagram which shows a state in which a position relation between a newly detected face region and a body region of a tracking target is judged." Murakami [0019]: "FIG. 11 is an explanatory diagram which shows a state in which a tracking target moves from a face region to a body region."

Murakami fails to explicitly teach: record the image information of the body region of the person identified as a result of the face collation processing when the image information of the body region satisfies a pre-stored person shape. However, Fukuda, in the field of collating and matching images of a person, teaches recording the image information of the body region of the person identified as a result of the face collation processing when the image information of the body region satisfies a pre-stored person shape.

Fukuda [0058-0060]: the image processing apparatus acquires the captured two-dimensional image. Fukuda [0075]: "The display information generation unit 16 extracts a plurality of candidates from a plurality of registrants based on the similarity degree obtained by matching of a two-dimensional image of a matching target with a registered image group including three-dimensional registered images of a plurality of registrants and generates display information used for displaying the extracted candidates in order in accordance with the similarity degree…" Examiner interprets the three-dimensional image to be a person shape. Fukuda [0076]: "The composite image generation unit 17 displays a person check window including the generated composite image on the display." Examiner interprets displaying the composite to be recording the result. Fukuda [FIG. 4A and 4B].

Fukuda teaches increasing the efficiency of a check operation performed by a user in image matching (collating) of a person. Thus, before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the teachings of Murakami with the teachings of Fukuda [0005] so as to increase the efficiency of a check operation performed by a user in image matching of a person.

Claim 5. Murakami further teaches wherein the processor is configured to execute the instructions to perform body collation processing using previously recorded image information of a body region of the person identified as the result of the face collation processing and the image information of the body region having the correspondence relationship with the image information of the face region used in the face collation processing, and wherein the processor is configured to execute the instructions to record the image information of the body region of the person identified as the result of the face collation processing when the image information of the body region having the correspondence relationship with the image information of the face region used in the face collation processing is determined in the body collation processing to be the image information of the body region of the person identified as the result of the face collation processing. Murakami [0052], quoted above.

Claim 6. Murakami further teaches wherein the processor is configured to execute the instructions to perform tracking processing using at least one of the image information of the face region or the image information of the body region. Murakami [0052], quoted above.

Claim 7. Analyzed and reviewed in the same way as claim 1. See the above analysis.

Claim 8. Analyzed and reviewed in the same way as claim 1. See the above analysis.

Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Murakami in view of Fukuda, and further in view of US 2015/0110364 A1 to Niinuma et al., hereinafter "Niinuma".

Claim 2. Murakami fails to explicitly teach wherein the recording condition is information indicating that a state of the image is a predetermined state. However, Niinuma, in the field of identifying a person in image data, teaches this limitation. Niinuma [0031]: "the determination unit 8 determines an image quality of the first image, for example, and outputs the image quality of the first image to the switching unit 10. When it is determined that the image quality of the first image does not satisfy the first threshold value…" Niinuma [0032]: "it is possible to switch imaging conditions of the imaging unit 3 so as to obtain an image quality suitable for the continuous authentication without stopping the continuous authentication." Niinuma teaches improving the image quality for authentication. Thus, before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the teachings of Murakami with the teachings of Niinuma [0016] to improve or maintain image quality during continuous authentication.

Claim 3. Murakami fails to explicitly teach wherein the recording condition is information indicating that a posture of the person whose body region has been detected is in a predetermined state. However, Niinuma, in the field of identifying a person in image data, teaches this limitation. Niinuma [0032]: "…the switching unit 10 replaces the first feature amount by the second feature amount as the feature amount for registration. At this time, the extraction unit 6 may extract a second extraction amount by selecting one arbitrary second image from the plurality of second images. In addition, the switching unit 10 may replace the first feature amount by the second feature amount as the feature amount for registration based on the stability in posture which is evaluated by the evaluation unit 9. In this manner, it is possible for the authentication unit 7 to use the feature amount for registration based on a posture which is suitable for continuous authentication. When the extraction unit 6 extracts the second feature amount, the obtaining unit 5 continuously obtains the fourth image, and the extraction unit 6 extracts a fourth feature amount which becomes a feature amount for reference. At the same time, the switching unit 10 switches comparison processing of a feature amount in the authentication unit 7 from the first feature amount and third feature amount to the second feature amount and fourth feature amount. Due to the above described switching processing in the switching unit 10, it is possible to switch imaging conditions of the imaging unit 3 so as to obtain an image quality suitable for the continuous authentication without stopping the continuous authentication. In this manner, it is possible to provide an image processing device in which availability is improved." Niinuma teaches improving the image quality for authentication. Thus, before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the teachings of Murakami with the teachings of Niinuma [0016] to improve or maintain image quality during continuous authentication.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Murakami in view of Fukuda, and further in view of US 2020/0193615 A1 to Goncharov et al., hereinafter "Goncharov".

Claim 4. Murakami fails to explicitly teach wherein the recording condition is information indicating that an attribute or an accessory of the person whose body region has been detected differs from image information of a body region recorded for the person identified as a result of the face collation processing. However, Goncharov, in the field of monitoring a person in image data (surveillance), teaches this limitation. Goncharov [0168-0181]: recognizing that the person has changed clothes (understood to be an attribute or accessory) and updating the registration information. Goncharov teaches there is a need for a compact and informative high-level interface for surveillance. Thus, before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the teachings of Murakami with the teachings of Goncharov [0006] because there is a need for a compact and informative high-level interface that would not require the user to carefully watch a video in order to understand what people do in an area under surveillance.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DELOMIA L GILLIARD, whose telephone number is (571) 272-1681. The examiner can normally be reached 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DELOMIA L GILLIARD/
Primary Examiner, Art Unit 2661
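
For readers tracking the technical dispute, the flow recited in claim 1, including the limitation the examiner says Murakami lacks, can be summarized in a short sketch. Everything below is a hypothetical illustration: the function names, the gallery structure, and the shape test are stand-ins invented for this note, not NEC's implementation and not code from Murakami or Fukuda.

from dataclasses import dataclass

@dataclass
class BodyRecord:
    person_id: str
    body_crop: object  # image patch; kept abstract in this sketch

gallery = []  # recorded body-region image information, per the claim

def process_frame(image, detect_face, detect_body, collate_face,
                  corresponds, matches_person_shape):
    # Hypothetical claim-1 flow; every callable is an assumed stand-in.
    face = detect_face(image)        # "detect a face region of a person"
    body = detect_body(image)        # "detect a body region of the person"
    if face is None or body is None:
        return None
    person_id = collate_face(face)   # "perform face collation processing"
    if person_id is None:
        return None
    if not corresponds(face, body):  # "predetermined correspondence relationship"
        return None
    # Disputed limitation: record body info only when it satisfies a
    # pre-stored person shape (examiner maps this to Fukuda's 3D images).
    if matches_person_shape(body):
        gallery.append(BodyRecord(person_id, body))
    return person_id

# Toy run with trivial stubs standing in for real detectors and matchers:
pid = process_frame(
    image=None,
    detect_face=lambda img: "face-crop",
    detect_body=lambda img: "body-crop",
    collate_face=lambda f: "person-42",
    corresponds=lambda f, b: True,
    matches_person_shape=lambda b: True,
)
print(pid, len(gallery))  # -> person-42 1

The contested piece is the final guard: recording the body-region information only when it satisfies a pre-stored person shape, which the rejection reads onto Fukuda's three-dimensional registered images rather than onto anything in Murakami.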

Prosecution Timeline

Mar 31, 2023 — Application Filed
May 31, 2025 — Non-Final Rejection (§103)
Sep 03, 2025 — Response Filed
Dec 13, 2025 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602805
DATA TRANSMISSION THROTTLING AND DATA QUALITY UPDATING FOR A SLAM DEVICE
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12602932
SYSTEMS AND METHODS FOR MONITORING USERS EXITING A VEHICLE
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12602796
SYSTEM, DEVICE, AND METHODS FOR DETECTING AND OBTAINING INFORMATION ON OBJECTS IN A VEHICLE
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12602952
IMAGE-BASED AUTOMATED ERGONOMIC RISK ROOT CAUSE AND SOLUTION IDENTIFICATION SYSTEM AND METHOD
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12602895
MACHINE LEARNING-BASED DOCUMENT SPLITTING AND LABELING IN AN ELECTRONIC DOCUMENT SYSTEM
Granted Apr 14, 2026 • 2y 5m to grant
Study what changed in these cases to get past this examiner; the list is based on the examiner's 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 90%
With Interview: 99% (+10.2%)
Median Time to Grant: 2y 2m
PTA Risk: Moderate

Based on 1089 resolved cases by this examiner. Grant probability is derived from the career allow rate.
