Prosecution Insights
Last updated: April 19, 2026
Application No. 18/272,522

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY STORAGE MEDIUM

Final Rejection — §101, §102, §103
Filed: Jul 14, 2023
Examiner: GORADIA, SHEFALI DINESH
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: NEC Corporation
OA Round: 2 (Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% — above average (534 granted / 595 resolved; +27.7% vs TC avg)
Interview Lift: +10.7% — moderate (among resolved cases with interview)
Avg Prosecution: 2y 7m (typical timeline)
Total Applications: 618 across all art units (23 currently pending)

Statute-Specific Performance

§101: 15.3% (-24.7% vs TC avg)
§103: 34.5% (-5.5% vs TC avg)
§102: 27.5% (-12.5% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 595 resolved cases
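Reading each row as examiner rate plus a signed delta against the Tech Center average, the implied TC average can be recovered as rate minus delta. A minimal sketch of that arithmetic, assuming the dashboard's delta convention is `examiner_rate - tc_avg` (an assumption; the page does not state its formula):

```python
# Hypothetical reconstruction of the Tech Center averages implied by the
# table above. Assumes: delta = examiner_rate - tc_avg (not stated on page).
stats = {                 # statute: (examiner rate %, delta vs TC avg %)
    "101": (15.3, -24.7),
    "103": (34.5, -5.5),
    "102": (27.5, -12.5),
    "112": (12.6, -27.4),
}

# tc_avg = examiner_rate - delta
tc_averages = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
```

Under that reading, all four rows imply the same TC-average estimate (40.0%), consistent with a single "black line" benchmark drawn across the chart.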

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment was filed on 12/30/2025. Claims 2-3 are canceled.

Response to Arguments

Applicant's arguments filed on 12/30/2025, see Remarks on page 13, regarding the rejections under 35 USC 102 and 103 have been fully considered but they are not persuasive. The applicant argues:

[Applicant's argument reproduced as an image in the original action.]

The Examiner respectfully disagrees. Even though the previous office action did not apply art to claim 3, claim 3 was not incorporated into claim 1 in exactly the same scope. Claim 3 recited slightly different limitations in "decide, ... a timing at which the image to be processed ... is earliest is generated" than what is now recited in amended claim 1. Amended claim 1 recites "determining ... a timing at an earliest image ... to be processed". Therefore, claim 1 is met by NTT; see the rejection below.

Applicant's arguments filed 12/30/2025 have been fully considered but they are not persuasive. Applicant argues on pages 8-11 regarding the rejections under 35 USC 101:

[Applicant's arguments reproduced as images in the original action.]

The Examiner respectfully disagrees. Claim 1, as amended, recites an abstract idea (a mental process) under 35 USC 101. The amended claim, at a high level, recites an apparatus that performs "determine", "detect", and "decide" steps. The 'determining' step simply looks at an image and picks an area where a user is present in the image. The claim further 'detects' whether that user has been switched with someone else by simply comparing the image area where the user is present with the stored image. It further 'decides' whether that person is the same as the person in the compared user area.
These are all considered data gathering and mental processes that include observation, evaluation, judgement and opinion. The limitation in which it is decided that the user of the terminal is switched when the same person is not identified is insignificant post-solution activity. And determining the timing of an early image of the images to be processed is also considered a mental process, as any generic sensor that captures the image is also able to provide a time stamp. The rejection below gives in-depth details for the rejection of the claims.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 10/30/2025 and 1/9/2026 have been considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea – a mental process) without significantly more. Claim 9 is used as an example. Claims 1 and 10 recite an apparatus with a processor/memory and a non-transitory storage medium, respectively.
The two-part test to identify claims that are directed to a judicial exception (Step 2A) and to then evaluate whether additional elements of the claim provide an inventive concept (Step 2B) is: (1) Are the claims directed to a process, machine, manufacture or composition of matter; (2A) Prong One: Are the claims directed to a judicially recognized exception, i.e., a law of nature, a natural phenomenon, or an abstract idea; Prong Two: If the claims are directed to a judicial exception under Prong One, is the judicial exception integrated into a practical application; (2B) If the claims are directed to a judicial exception and do not integrate the judicial exception, do the claims provide an inventive concept.

Claim 9. An image processing method comprising, by a computer:
(a) determining, from an image to be processed, a user area being an area where a user of an operation terminal is present;
(b) detecting that the user of the operation terminal is switched, based on a comparison result between feature data extracted from an image of the user area in the image to be processed and feature data extracted from the image of the user area in an image to be compared generated before the image to be processed;
(c) repeatedly deciding, by using a plurality of images in order as the image to be processed, based on a comparison result between the feature data extracted from the image to be processed and the feature data extracted from the image to be compared, whether a person included in the image of the user area in the image to be processed and a person included in the image of the user area in the image to be compared are a same person;
(d) when a same person is not identified in a continuous M images to be processed, determining that the user of the operation terminal is switched, wherein M is an integer greater than or equal to 2; and
(e) determining, as a timing of switching of the user of the operation terminal, a timing at an earliest image of the continuous M images to be processed. [emphasis added].

With regard to (1), the instant claims recite an apparatus, a method, and a non-transitory storage medium; therefore the answer is "yes".

With regard to (2A), Prong One: Yes. When viewed under the broadest reasonable interpretation, the instant claims are directed to a judicial exception – an abstract idea belonging to the group of mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgement, opinion). Steps (a) and (b) (emphasized in the claim above) are generically recited, and nothing in these steps precludes them from practically being performed by a human equipped with an appropriate apparatus. They can be interpreted as merely looking at the data and determining a location of a user in the image. There is nothing in the claim that requires more than an operation that a human, armed with the appropriate apparatus, pen and paper, could perform. The determining step, under its broadest reasonable interpretation, covers performance of the limitation in the mind. The claim encompasses the user looking at an operation terminal, like an ATM, for example. In step (c) the user is deciding whether the person of interest in an image matches the one in the image to be compared with. This is simply observing and judging repeatedly. When an image of this environment is obtained, one observes where a user is located and whether it is the correct user at the terminal. This way, essentially one can present/output information about the section of an image that represents that user area. Step (d) is simply a condition for identification of the person. This merely helps in determining whether the person is the same person or the person has been switched. Thus, these limitations are a mental process.

With regard to (2A), Prong Two: No.
The instant claims do not apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, and therefore do not integrate the judicial exception into a practical application. The claims use a system/memory/processor to 'process' an image (i.e., "data") at a high level of generality such that said "data" can be used in the operation of the recited judicial exception. There are no specifics on how the data/image is received. This can be interpreted as "visualization", even if this step is performed by a "processor" that may be, for example, a camera. A camera/sensor is well known in the field, and receiving data from a camera/sensor is also well known. Step (e) simply recites determining a timing for an image to be processed. This is something an operator can determine by simply looking at the timestamp of the person that was originally identified versus the timestamp of a switched person. This generic processor limitation is no more than mere instructions to apply the exception using a generic computer component. Accordingly, this does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. In conclusion, the claim as a whole does not provide for "integration" of the abstract idea into a practical application. The claim is directed to the abstract idea.

With regard to (2B), as discussed with respect to Step 2A Prong Two, there are no additional elements in the claim that amount to more than mere instructions to apply the exception using a generic computer component. The same analysis applies here, i.e., mere instructions to apply an exception using a generic computer component cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
The pending claims do not show more than what is routine in the art, i.e., the additional elements are nothing more than routine and well-known steps. There is no improvement to technology here. There are only steps (a), (b) and (c), and it has not been shown that the mental process allows the "technology" to do something that it previously was not able to do. Therefore, claims 1, 9, and 10 are ineligible.

With regard to dependent claims 4-8, similar analysis applies; these claims therefore do not integrate the judicial exception into a practical application and do not provide significantly more than the judicial exception. They are similarly rejected for the same reasons discussed in view of the steps recited in claim 9, not repeated herewith.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 9-10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by JP2018-92287 to NTT.

With regard to claim 1, NTT discloses an image processing apparatus (Fig. 1 with a person image acquisition unit 1, a face authentication unit 2, a tracking unit 3, a motion estimation unit 4, and a person identification unit 5; page 3) comprising:

at least one memory (memory 1002, top of page 5) configured to store one or more instructions; and at least one processor (processor 1001, top of page 5) configured to execute the one or more instructions to:

determine, from an image to be processed, a user area being an area where a user of an operation terminal is present (acquires images of a person, step S1, page 3; face authentication unit 2);

detect that the user of the operation terminal is switched, based on a comparison result between feature data extracted from an image of the user area in the image to be processed and feature data extracted from the image of the user area in an image to be compared generated before the image to be processed (tracking unit 3 and motion estimation unit 4 on pages 3-4; person identification unit 5 on page 4, determining whether the identification target person has been replaced with another person; page 2: comparing the feature amount of a face image registered in advance with the feature amount of the face image of the person to be authenticated at the time of authentication to confirm the authenticity of the person to be authenticated; middle of page 3);

repeatedly decide, by using a plurality of images in order as the image to be processed, based on a comparison result between the feature data extracted from the image to be processed and the feature data extracted from the image to be compared, whether a person included in the image of the user area in the image to be processed and a person included in the image of the user area in the image to be compared are a same person (page 2, last paragraph, where it is decided whether the persons are the same; face authentication unit on page 3);

when a same person is not identified in a continuous M images to be processed, determine that the user of the operation terminal is switched, wherein M is an integer greater than or equal to 2 (bottom of page 2 to top of page 3, the images being time-series images equal to or more than 2 continuous images); and

determine, as a timing of switching of the user of the operation terminal, a timing at an earliest image of the continuous M images to be processed (person identification unit on page 4, where "the time that the person can be identified becomes longer than before" when determining whether the person has been replaced or not).

With regard to claims 9-10, claims 9-10 are rejected the same as claim 1; the arguments presented above for claim 1 are equally applicable to claims 9-10, and all of the other limitations similar to claim 1 are not repeated herein but are incorporated by reference.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over JP2018-92287 to NTT in combination with WO 2015/025249 to Givon et al. (hereafter, "Givon").

With regard to claim 4, NTT teaches the apparatus of claim 1. However, NTT does not expressly teach determining, based on a first detection result in which a person area is detected from the image to be processed and a second detection result in which a keypoint of a skeleton of a person is detected from the image to be processed, the user area from the image to be processed. Givon teaches this limitation (paragraphs [0046-0047, 0070], Figs. 14, 20). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the NTT reference to include the keypoint of a skeleton of a person of Givon's reference. The suggestion/motivation for doing so would have been to extrapolate dimensions and relations of body elements, as suggested by Givon at paragraphs [0031-0032].
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Givon with NTT to obtain the invention as specified in claim 4.

With regard to claim 5, NTT in combination with Givon teaches determining, when the keypoint of the skeleton of the person is not detected from the image to be processed, the user area from the image to be processed, based on the keypoint of the skeleton of the person detected from an image to be referred to generated before the image to be processed (Givon: paragraph [0044], where, if a match is not found, the IBBE system may record the individual's biometric profile as a new profile).

Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over JP2018-92287 to NTT in combination with JP2010-218392 to Shogo et al. (hereafter, "Shogo").

With regard to claim 6, NTT teaches the apparatus of claim 1. However, NTT does not expressly teach computing, based on the image of the user area, a degree of certainty of a predetermined pose being taken by the user of the operation terminal. Shogo teaches this limitation (page 8, referring to Fig. 8 and steps S806, S807; degree C). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the NTT reference to include the degree of certainty of Shogo's reference. The suggestion/motivation for doing so would have been to flag a state as either in-use or unused in determining whether the operator is talking on the mobile phone, as suggested by Shogo on page 8.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Shogo with NTT to obtain the invention as specified in claim 6.

With regard to claim 7, NTT in combination with Shogo teaches detecting, for each of a plurality of the images to be processed, the predetermined pose from the image to be processed, and determining the degree of certainty according to a number of continuous detections of the predetermined pose (Shogo: pages 7-8; steps S504, S601-S605; Figures 4-7).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHEFALI D. GORADIA, whose telephone number is (571) 272-8958. The examiner can normally be reached Monday-Thursday 8AM-6PM, Friday 8AM-12PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henok Shiferaw, can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHEFALI D GORADIA/
Primary Patent Examiner, Art Unit 2676
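The claim 9 logic contested above (steps (a)-(e)) amounts to a sliding comparison with a consecutive-mismatch counter: compare each frame's user-area features against a reference, and after M consecutive mismatches report a switch timed at the earliest frame of that run. A minimal illustrative sketch only, not taken from the application or the cited art; the names and the `similarity` stand-in are hypothetical:

```python
def detect_user_switch(frames, reference_features, m=3, threshold=0.8):
    """Return the index of the earliest frame of the first run of m
    consecutive mismatches (step (e)'s timing), or None if no switch."""
    run_start = None      # index where the current mismatch run began
    run_length = 0
    for i, features in enumerate(frames):
        same_person = similarity(features, reference_features) >= threshold
        if same_person:
            run_start, run_length = None, 0   # run broken; reset (step (c))
        else:
            if run_start is None:
                run_start = i                 # earliest image of the run
            run_length += 1
            if run_length >= m:               # step (d): M consecutive misses
                return run_start              # step (e): earliest timing
    return None

def similarity(a, b):
    # Hypothetical stand-in for the claimed feature comparison
    # (e.g., cosine similarity over extracted feature vectors).
    return 1.0 if a == b else 0.0
```

For example, with frames `["A", "A", "B", "B", "B"]`, reference `"A"`, and `m=3`, the function reports index 2, the earliest of the three consecutive mismatching frames.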

Prosecution Timeline

Jul 14, 2023 — Application Filed
Sep 26, 2025 — Non-Final Rejection — §101, §102, §103
Nov 06, 2025 — Applicant Interview (Telephonic)
Nov 06, 2025 — Examiner Interview Summary
Dec 30, 2025 — Response Filed
Mar 11, 2026 — Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592059 — GLOBAL EMBEDDING LEARNING FROM DIFFERENT MODALITIES
2y 5m to grant • Granted Mar 31, 2026
Patent 12579843 — METHOD AND SYSTEM OF IMAGE PROCESSING FOR ACTION CLASSIFICATION
2y 5m to grant • Granted Mar 17, 2026
Patent 12581054 — IMAGE BASED LIDAR-CAMERA SYNCHRONIZATION
2y 5m to grant • Granted Mar 17, 2026
Patent 12564911 — TOOL STATE LEARNING DEVICE, TOOL STATE ESTIMATION DEVICE, CONTROL DEVICE, TOOL STATE LEARNING METHOD, AND TOOL STATE ESTIMATION METHOD
2y 5m to grant • Granted Mar 03, 2026
Patent 12561798 — CONCURRENT DISPLAY OF HEMODYNAMIC PARAMETERS AND DAMAGED BRAIN TISSUE
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 90%
With Interview: 99% (+10.7%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate
Based on 595 resolved cases by this examiner. Grant probability derived from career allow rate.
