Prosecution Insights
Last updated: April 19, 2026
Application No. 18/028,980

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM

Non-Final OA (§103, §112)
Filed
Mar 28, 2023
Examiner
WHITE, JOSHUA RAYMOND
Art Unit
2438
Tech Center
2400 — Computer Networks
Assignee
NEC Corporation
OA Round
3 (Non-Final)
Grant Probability: 76% (Favorable)
OA Rounds: 3-4
To Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76%, above average (88 granted / 115 resolved; +18.5% vs TC avg)
Interview Lift: +35.9% (strong), among resolved cases with interview
Typical Timeline: 2y 8m avg prosecution; 12 currently pending
Career History: 127 total applications across all art units
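The headline allow rate is direct arithmetic from the grant counts above; a minimal check (the dashboard rounds to a whole percent):

```python
# Career allow rate = granted / resolved, from the counts shown above.
granted, resolved = 88, 115
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 76.5%, displayed rounded as 76%
```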

Statute-Specific Performance

§101: 6.8% (-33.2% vs TC avg)
§103: 55.0% (+15.0% vs TC avg)
§102: 15.3% (-24.7% vs TC avg)
§112: 17.8% (-22.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 115 resolved cases.

Office Action

§103 §112
DETAILED ACTION

This non-final Office action is in response to claims 1, 4-12, and 15-16, filed on 01/14/2026 for examination. Claims 1, 4-12, and 15-16 are pending and are being examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/14/2026 has been entered.

Response to Amendment

The amendment filed January 14, 2026 has been entered. Claims 1, 4-12, and 15 remain pending in the application. Claim 16 is new. The claims have been amended. Applicant's arguments and amendments to the claims are directed to the 35 U.S.C. 103 rejection previously set forth in the Final Office Action mailed October 14, 2025. The amendments to claims 1, 12, and 15-16 have necessitated a new ground of rejection in this Office action. Further, applicant's arguments regarding claims 1, 4-12, and 15-16 have been fully considered but are not persuasive to differentiate over the prior art. Particularly:

Applicant opines that the combination of Brundage et al. (US20050067487) in view of Singal et al. (US20200334431) and Wilder (US20170186760) does not disclose "wherein the one or more processors acquire the face image of the user by the camera while the user maintains the direction of the face in a state that the position of the cursor and the position of the target point on the display coinciding with each other." Remarks, pgs. 9-10. Examiner disagrees. Examiner directs applicant to Wilder: particularly, wherein the user guides their face <and by extension, the virtual pointer> at a target for authentication. See, e.g., Wilder at [0028-030], [0013-014], and [0020]. To be verified, the user may be required to maintain their face/pointer on the target for a specific duration of time <i.e., maintaining the direction of a face in the state>. Id. The image of the user is taken while their face is aligned with the target <e.g., their face is captured/displayed with the pointer aligning with the target>. Id. Accordingly, applicant's associated remarks are unpersuasive. In view of the foregoing, as well as hereinbelow with regard to 35 U.S.C. 103, applicant's arguments regarding claims 1, 4-12, and 15-16 have been fully considered but are not persuasive to differentiate over the prior art.

Claim Objections

Claims 1, 12, and 16 are objected to because of the following informality: Claim 1 recites "the position of the cursor and the position of the target point on the display coinciding with each other". For grammatical consistency, examiner suggests amending to, e.g., "the position of the cursor and the position of the target point on the display coincide with each other" or similar, if intended. Claims 12 and 16 recite a similar deficiency, and are objected to under like rationale. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 4-12, and 15-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Particularly:

Claim 1 recites the limitation "the direction" in lines 14 and 21. There is insufficient antecedent basis for this limitation in the claim. Claims 12 and 16 recite a similar deficiency, and are rejected under like rationale. Claims 4-11 and 15 incorporate the deficiency of their parent claim, and are rejected under like rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-7, 10-12, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Brundage et al. (US20050067487, hereinafter "Brundage") in view of Singal et al. (US20200334431, hereinafter "Singal") and Wilder (US20170185760, hereinafter "Wilder").

Regarding claim 1, Brundage teaches an information processing device comprising: a memory configured to store instructions ([0056-057] – the system comprises instructions stored in memory and executed via a processor); and one or more processors configured to execute the instructions ([0056-057] – the system comprises instructions stored in memory and executed via a processor) to: acquire a certificate and generate a certificate image ([0032], [0059-062], and [0081] – An identification document <i.e., certificate> is presented <i.e., acquired>. An image is generated based on the identification document <i.e., certificate image generated>); acquire a first hash value from the certificate (Fig. 6, [0081], [0054], and [0061-062] – A watermark is read from the certificate. The watermark comprises a hash value <i.e., first hash value>. When the hash <i.e., first hash value> of the watermark matches a hash calculated based on the certificate image, a match is determined; [0045-046] – the compared watermark can be based on a hash of, e.g., the photograph, birth date, name, and card number located on an identity document); read character information and a face image from the certificate image (Fig. 6, [0054], [0061-062], and [0081] – An image is captured of an identification document. Data from the image may be read via, e.g., OCR <i.e., character information from certificate image> and facial recognition software <i.e., face image read from the certificate image>. A hash is calculated based on the OCR and facial recognition software.
When the calculated hash matches the hash of a watermark stored on the identity document, a match is determined; [0046] – the compared watermark can be based on a hash of, e.g., the photograph, birth date, name, and card number located on an identity document); calculate a second hash value using the character information and the face image read from the certificate image (Fig. 6, [0054], [0061-062], and [0081] – An image is captured of an identification document. Data from the image may be read via, e.g., OCR <i.e., character information from certificate image> and facial recognition software <i.e., face image from certificate image>. A hash <i.e., second hash> is calculated based on the OCR and facial recognition software. When the calculated hash matches the hash of a watermark stored on the identity document, a match is determined; [0046] – the compared watermark can be based on a hash of, e.g., the photograph, birth date, name, and card number located on an identity document); compare the second hash value with the first hash value (Fig. 6, [0054], [0061-062], and [0081] – An image is captured of an identification document. Data from the image may be read via, e.g., OCR <i.e., character information from certificate image> and facial recognition software <i.e., face image from certificate image>. A hash <i.e., second hash> is calculated based on the OCR and facial recognition software. When the calculated hash matches the hash of a watermark stored on the identity document, a match is determined; [0046] – the compared watermark can be based on a hash of, e.g., the photograph, birth date, name, and card number located on an identity document); and store information related to the certificate based on (Fig. 
6, [0054], [0061-062], and [0081] –When the calculated hash coincides with the hash of a watermark stored on the identity document, a match is determined <i.e., information related to the identity document is stored based on the compared hashes>; [0046] – the compared watermark can be based on a hash of, e.g., the photograph, birth date, name, and card number located on an identity document) [[the position of the cursor and the position of the target point on the display coinciding with each other and]] the first hash value and the second hash value coinciding with each other [[and the acquired face image of the user and the face image read from the certificate image coinciding with each other]] (Fig. 6, [0054], [0061-062], and [0081] –When the calculated hash coincides with the hash of a watermark stored on the identity document, a match is determined <i.e., information related to the identity document is stored based on the compared hashes>; [0046] – the compared watermark can be based on a hash of, e.g., the photograph, birth date, name, and card number located on an identity document). 
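As a technical aside, the dual-hash scheme the examiner maps to Brundage, a first hash carried in the document's watermark and a second hash recomputed from the OCR'd character information and the face image read from the certificate image, can be sketched as follows. This is an illustrative sketch only: the helper names and the choice of SHA-256 are assumptions, since neither the claims nor Brundage as cited specify a hash algorithm.

```python
import hashlib

def compute_second_hash(character_info: str, face_image: bytes) -> str:
    # Hash the data string built from the OCR'd text and the face image
    # read from the certificate image (the "second hash value").
    data = character_info.encode("utf-8") + face_image
    return hashlib.sha256(data).hexdigest()

def verify_certificate(first_hash: str, character_info: str, face_image: bytes) -> bool:
    # Compare the watermark-borne first hash with the recomputed second hash;
    # a match suggests the printed fields and photo were not altered.
    return compute_second_hash(character_info, face_image) == first_hash

# Illustrative use: a document whose watermark hash matches verifies,
# while any tampered field breaks the comparison.
info, photo = "DOE, JANE 1990-01-01", b"\x89PNG-face-bytes"
first_hash = compute_second_hash(info, photo)  # as embedded at issuance
assert verify_certificate(first_hash, info, photo)
assert not verify_certificate(first_hash, "DOE, JANE 1990-01-02", photo)
```

Any change to the hashed fields (a swapped photo, an edited birth date) produces a different digest, which is the tamper-evidence property the comparison relies on.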
Yet, Brundage appears to fail to specifically disclose the system configured to acquire a face image of a user by a camera; compare the acquired face image of the user with the face image read from the certificate image; display the acquired face image of the user and a target point on a display; display, on the display, a cursor that moves in accordance with the direction of a user's face in the acquired face image of the user; and store information related to the certificate based on the position of the cursor and the position of the target point on the display coinciding with each other and the acquired face image of the user and the face image read from the certificate image coinciding with each other, wherein the one or more processors acquire the face image of the user by the camera while the user maintains the direction of the face in a state that the position of the cursor and the position of the target point on the display coinciding with each other.

However, Singal teaches a similar system for verifying identification when presented an identification document (see abstract), comprising acquiring a face image of a user by a camera ([0057-061] and [0089] – the user takes a selfie using the client device <i.e., acquires a face image by a camera>); comparing the acquired face image of the user with the face image read from the certificate image ([0057-061] and [0089] – the user takes a selfie using the client device <i.e., acquires a face image by a camera>.
The face image is compared against the photo visible on the ID document <i.e., the face image read from the certificate image>); store information related to the certificate based on ([0057-061] and [0089] – The face image is compared against the photo visible on the ID document <i.e., the face image read from the certificate image>, and an approval or rejection is stored) the acquired face image of the user and the face image read from the certificate image coinciding with each other ([0057-061] and [0089] – The face image is compared against the photo visible on the ID document <i.e., the face image read from the certificate image>, and an approval or rejection is stored). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Brundage with the teachings of Singal, configured to acquire a face image of a user by a camera; compare the acquired face image of the user with the face image read from the certificate image; and store information related to the certificate based on the first hash value and the second hash value coinciding with each other and the acquired face image of the user and the face image read from the certificate image coinciding with each other, to ensure that the user presenting the ID document is the user represented by the ID document (see, e.g., Singal at [0057-061] and [0089]). 
Yet, the combination of Brundage and Singal appears to fail to specifically disclose the system configured to display the acquired face image of the user and a target point on a display; display, on the display, a cursor that moves in accordance with the direction of a user's face in the acquired face image of the user; store information related to the certificate based on the position of the cursor and the position of the target point on the display coinciding with each other; and wherein the one or more processors acquire the face image of the user by the camera while the user maintains the direction of the face in a state that the position of the cursor and the position of the target point on the display coinciding with each other. However, Wilder teaches a similar system for capturing an authenticating selfie (see, e.g., Wilder at abstract, [0004]), configured to display the acquired face image of the user and a target point on a display ([0004], [0013-014], and [0020] – a user’s face, a virtual pointer, and a target <i.e., target point> are displayed for facial verification. The user must point their face <and by extension, the virtual pointer> at the target for authentication); display, on the display, a cursor that moves in accordance with the direction of a user's face in the acquired face image of the user (Fig. 1 and [0004], [0013-014], and [0020] – a virtual pointer <i.e., cursor> is displayed on the user’s screen that moves in accordance with the direction of the user’s face image); and store information related to the certificate based on the position of the cursor and the position of the target point on the display coinciding with each other ([0004], [0013-014], and [0020] – a user’s face, a virtual pointer, and a target <i.e., target point> are displayed for facial verification. The user must point their face <and by extension, the virtual pointer> at the target for authentication. 
When the user successfully aligns their face/virtual pointer <i.e., cursor> with the target, the user may be determined as a valid live user <i.e., information stored>), wherein the one or more processors acquire the face image of the user by the camera while the user maintains the direction of the face in a state that the position of the cursor and the position of the target point on the display coinciding with each other ([0028-030], [0013-014], and [0020] – a user’s face, a virtual pointer, and a target <i.e., target point> are displayed for facial verification. The user’s face is being captured by a camera on the user device, and displayed via a display on the user device. The user must point their face <and by extension, the virtual pointer> at the target for authentication. The user may be required to align their face/pointer/target for a specific duration of time <i.e., maintaining face in the state>. An image is taken of the user while their face is aligned with the target <e.g., their face is captured/displayed with the pointer aligning with the target>. When the user successfully aligns their face/virtual pointer <i.e., cursor> with the target according to the rules, the user may be determined as a valid live user <i.e., information stored>). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the ID authentication system of Brundage and Singal with the teachings of Wilder to further utilize the liveness check when capturing the face image. 
Particularly configured to: display the acquired face image of the user and a target point on a display; display, on the display, a cursor that moves in accordance with the direction of a user's face in the acquired face image of the user; and store information related to the certificate based on the position of the cursor and the position of the target point on the display coinciding with each other and the first hash value and the second hash value coinciding with each other and the acquired face image of the user and the face image read from the certificate image coinciding with each other, wherein the one or more processors acquire the face image of the user by the camera while the user maintains the direction of the face in a state that the position of the cursor and the position of the target point on the display coinciding with each other, to align the user and to ensure a live, valid user is attempting to be authenticated (see, e.g., Wilder at [0004], [0013-0015], and [0020]; with Singal at [0057-061] and [0089]). Regarding claim 4, the combination of Brundage, Singal, and Wilder teach the information processing device according to claim 1, wherein the first hash value is described on the certificate as code information, and wherein the one or more processors acquire the first hash value by reading the code information from the certificate image (Brundage at Fig. 6, [0054], [0061-062], and [0081] – An image is captured of an identification document. Data from the image may be read via, e.g., OCR and facial recognition software. A hash <i.e., second hash> is calculated based on the OCR and facial recognition software. When the calculated hash matches the hash <i.e., first hash> of a watermark stored on the identity document <i.e., a watermark is code information describing the first hash>, a match is determined; [0046] – the compared watermark can be based on a hash of, e.g., the photograph, birth date, name, and card number located on an identity document). 
Regarding claim 5, the combination of Brundage, Singal, and Wilder teach the information processing device according to claim 1, wherein the first hash value is provided in the certificate as an electronic watermark, and wherein the one or more processors acquire the first hash value by reading the electronic watermark from the certificate image (Brundage at Fig. 6, [0054], [0061-062], and [0081] – An image is captured of an identification document. Data from the image may be read via, e.g., OCR and facial recognition software. A hash <i.e., second hash> is calculated based on the OCR and facial recognition software. When the calculated hash matches the hash <i.e., first hash> of a watermark stored on the identity document <i.e., a watermark is code information describing the first hash>, a match is determined; [0046] – the compared watermark can be based on a hash of, e.g., the photograph, birth date, name, and card number located on an identity document; [0038] – a QR code may also be used).

Regarding claim 6, the combination of Brundage, Singal, and Wilder teach the information processing device according to claim 1, wherein the certificate includes a storage (Singal at [0005-0006] – the identification document can include an NFC chip/watermark <i.e., storage>; also Brundage at Fig. 6, [0054], [0061-062], and [0081] – When the calculated hash of the watermark on the identity document matches a comparison hash <i.e., first hash>, a match is determined; [0046] – the compared hash can be of, e.g., the photograph, birth date, name, and card number located on an identity document; [0038] – a QR code may also be used), wherein the first hash value is stored in the storage (Singal at [0071], [0077], and [0035] – the comparison hash value <i.e., first hash> can be stored on the NFC chip/watermark <i.e., storage>. The hash is read by a scanner to perform an authentication comparison; also Brundage at Fig.
6, [0054], [0061-062], and [0081] – When the calculated hash of the watermark on the identity document matches a comparison hash <i.e., first hash>, a match is determined; [0046] – the compared hash can be of, e.g., the photograph, birth date, name, and card number located on an identity document; [0038] – a QR code may also be used), and wherein the one or more processors read the first hash value from the storage of the certificate (Singal at [0071], [0077], and [0035] – the comparison hash value <i.e., first hash> can be stored on the NFC chip <i.e., storage>. The hash is read by a scanner to perform an authentication comparison; with Brundage at Fig. 6, [0054], [0061-062], and [0081] – When the calculated hash of the watermark on the identity document matches a comparison hash <i.e., first hash>, a match is determined; [0046] – the compared hash can be of, e.g., the photograph, birth date, name, and card number located on an identity document; [0038] – a QR code may also be used). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the combination of Brundage, Singal, and Wilder with the teachings of Singal, wherein the certificate includes a storage, wherein the first hash value is stored in the storage, and wherein the one or more processors read the first hash value from the storage of the certificate, to securely store the hash on the identification document to retrieve it and ensure the document has not been tampered with (see, e.g., Singal at [0035], [0071], and [0077]; with Brundage at [0054], [0038], and [0061-62]).

Regarding claim 7, the combination of Brundage, Singal, and Wilder teach the information processing device according to claim 1, wherein the one or more processors generate the second hash value from a data string including the character information and the face image read from the certificate image (Brundage at Fig.
6, [0054], [0061-062], and [0081] – An image is captured of an identification document. Data from the image may be read via, e.g., OCR <i.e., character information from certificate image> and facial recognition software <i.e., face image from certificate image>. A hash <i.e., second hash> is calculated based on the OCR and facial recognition software <i.e., combination/string of both>. When the calculated hash matches the hash of a watermark stored on the identity document, a match is determined; [0046] – the compared watermark can be based on a hash of, e.g., the photograph, birth date, name, and card number located on an identity document). Regarding claim 10, the combination of Brundage, Singal, and Wilder teach the information processing device according to claim 1, wherein the one or more processors store the certificate image in a database (Singal at [0024], [0034], and [0039] – wherein a server is provided the identification document and stores it <i.e., a database> in memory for subsequent analysis). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the combination of Brundage, Singal, and Wilder with the teachings of Singal, wherein the one or more processors store the certificate image in a database, to efficiently and securely perform larger and/or more computationally intensive operations by a trusted party (see, e.g., Singal at [0024], [0034], and [0071]). Regarding claim 11, the combination of Brundage, Singal, and Wilder teach the information processing device according to claim 1, wherein the one or more processors store the character information included in the certificate image and the face image read from the certificate image in a database (Singal at [0024], [0034], [0058-059], and [0039] – wherein a server is provided the identification document and stores it <i.e., a database> in memory for subsequent analysis. 
OCR’d and face information may be extracted and analyzed). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the combination of Brundage, Singal, and Wilder with the teachings of Singal, wherein the one or more processors store the character information included in the certificate image and the face image read from the certificate image in a database, to efficiently and securely perform larger and/or more computationally intensive operations by a trusted party (see, e.g., Singal at [0024], [0034], and [0071]). Regarding claim 12, Brundage teaches an information processing method comprising: acquiring a certificate and generating a certificate image ([0032], [0059-062], and [0081] – An identification document <i.e., certificate> is presented <i.e., acquired>. An image is generated based on the identification document <i.e., certificate image generated>); acquiring a first hash value from the certificate (Fig. 6, [0081], [0054], and [0061-062]– A watermark is read from the certificate. The watermark comprises a hash value <i.e., first hash value>. When the hash <i.e., first hash value> of the watermark matches a hash calculated based on the certificate image, a match is determined; [0045-046] – the compared watermark can be based on a hash of, e.g., the photograph, birth date, name, and card number located on an identity document); reading character information and a face image from the certificate image (Fig. 6, [0054], [0061-062], and [0081] – An image is captured of an identification document. Data from the image may be read via, e.g., OCR <i.e., character information from certificate image> and facial recognition software <i.e., face image from certificate image>. A hash is calculated based on the OCR and facial recognition software. 
When the calculated hash matches the hash of a watermark stored on the identity document, a match is determined; [0046] – the compared watermark can be based on a hash of, e.g., the photograph, birth date, name, and card number located on an identity document); calculating a second hash value using the character information and the face image read from the certificate image (Fig. 6, [0054], [0061-062], and [0081] – An image is captured of an identification document. Data from the image may be read via, e.g., OCR <i.e., character information from certificate image> and facial recognition software <i.e., face image from certificate image>. A hash <i.e., second hash> is calculated based on the OCR and facial recognition software. When the calculated hash matches the hash of a watermark stored on the identity document, a match is determined; [0046] – the compared watermark can be based on a hash of, e.g., the photograph, birth date, name, and card number located on an identity document); comparing the second hash value with the first hash value (Fig. 6, [0054], [0061-062], and [0081] – An image is captured of an identification document. Data from the image may be read via, e.g., OCR <i.e., character information from certificate image> and facial recognition software <i.e., face image from certificate image>. A hash <i.e., second hash> is calculated based on the OCR and facial recognition software. When the calculated hash matches the hash of a watermark stored on the identity document, a match is determined; [0046] – the compared watermark can be based on a hash of, e.g., the photograph, birth date, name, and card number located on an identity document); and storing information related to the certificate based on (Fig. 
6, [0054], [0061-062], and [0081] –When the calculated hash coincides with the hash of a watermark stored on the identity document, a match is determined <i.e., information related to the identity document is stored based on the compared hashes>; [0046] – the compared watermark can be based on a hash of, e.g., the photograph, birth date, name, and card number located on an identity document) [[the position of the cursor and the position of the target point on the display coinciding with each other and]] the first hash value and the second hash value coinciding with each other [[and the acquired face image of the user and the face image read from the certificate image coinciding with each other]] (Fig. 6, [0054], [0061-062], and [0081] –When the calculated hash coincides with the hash of a watermark stored on the identity document, a match is determined <i.e., information related to the identity document is stored based on the compared hashes>; [0046] – the compared watermark can be based on a hash of, e.g., the photograph, birth date, name, and card number located on an identity document). 
Yet, Brundage appears to fail to specifically disclose the system configured to acquire a face image of a user by a camera; compare the acquired face image of the user with the face image read from the certificate image; display the acquired face image of the user and a target point on a display; display, on the display, a cursor that moves in accordance with the direction of a user's face in the acquired face image of the user; and store information related to the certificate based on the position of the cursor and the position of the target point on the display coinciding with each other and the acquired face image of the user and the face image read from the certificate image coinciding with each other, wherein the one or more processors acquire the face image of the user by the camera while the user maintains the direction of the face in a state that the position of the cursor and the position of the target point on the display coinciding with each other.

However, Singal teaches a similar system for verifying identification when presented an identification document (see abstract), comprising acquiring a face image of a user by a camera ([0057-061] and [0089] – the user takes a selfie using the client device <i.e., acquires a face image by a camera>); comparing the acquired face image of the user with the face image read from the certificate image ([0057-061] and [0089] – the user takes a selfie using the client device <i.e., acquires a face image by a camera>.
The face image is compared against the photo visible on the ID document <i.e., the face image read from the certificate image>); store information related to the certificate based on ([0057-061] and [0089] – The face image is compared against the photo visible on the ID document <i.e., the face image read from the certificate image>, and an approval or rejection is stored) the acquired face image of the user and the face image read from the certificate image coinciding with each other ([0057-061] and [0089] – The face image is compared against the photo visible on the ID document <i.e., the face image read from the certificate image>, and an approval or rejection is stored). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Brundage with the teachings of Singal, configured to acquire a face image of a user by a camera; compare the acquired face image of the user with the face image read from the certificate image; and store information related to the certificate based on the first hash value and the second hash value coinciding with each other and the acquired face image of the user and the face image read from the certificate image coinciding with each other, to ensure that the user presenting the ID document is the user represented by the ID document (see, e.g., Singal at [0057-061] and [0089]). 
Yet, Singal appears to fail to specifically disclose the system configured to display the acquired face image of the user and a target point on a display; display, on the display, a cursor that moves in accordance with the direction of a user's face in the acquired face image of the user; store information related to the certificate based on the position of the cursor and the position of the target point on the display coinciding with each other; and wherein the one or more processors acquire the face image of the user by the camera while the user maintains the direction of the face in a state that the position of the cursor and the position of the target point on the display coinciding with each other. However, Wilder teaches a similar system for capturing an authenticating selfie (see, e.g., Wilder at abstract, [0004]), configured to display the acquired face image of the user and a target point on a display ([0004], [0013-014], and [0020] – a user’s face, a virtual pointer, and a target <i.e., target point> are displayed for facial verification. The user must point their face <and by extension, the virtual pointer> at the target for authentication); display, on the display, a cursor that moves in accordance with the direction of a user's face in the acquired face image of the user (Fig. 1 and [0004], [0013-014], and [0020] – a virtual pointer <i.e., cursor> is displayed on the user’s screen that moves in accordance with the direction of the user’s face image); and store information related to the certificate based on the position of the cursor and the position of the target point on the display coinciding with each other ([0004], [0013-014], and [0020] – a user’s face, a virtual pointer, and a target <i.e., target point> are displayed for facial verification. The user must point their face <and by extension, the virtual pointer> at the target for authentication. 
When the user successfully aligns their face/virtual pointer <i.e., cursor> with the target, the user may be determined as a valid live user <i.e., information stored>), wherein the one or more processors acquire the face image of the user by the camera while the user maintains the direction of the face in a state that the position of the cursor and the position of the target point on the display coinciding with each other ([0028-030], [0013-014], and [0020] – a user’s face, a virtual pointer, and a target <i.e., target point> are displayed for facial verification. The user’s face is being captured by a camera on the user device, and displayed via a display on the user device. The user must point their face <and by extension, the virtual pointer> at the target for authentication. The user may be required to align their face/pointer/target for a specific duration of time <i.e., maintaining face in the state>. An image is taken of the user while their face is aligned with the target <e.g., their face is captured/displayed with the pointer aligning with the target>. When the user successfully aligns their face/virtual pointer <i.e., cursor> with the target according to the rules, the user may be determined as a valid live user <i.e., information stored>). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the ID authentication system of Brundage and Singal with the teachings of Wilder to further utilize the liveness check when capturing the face image. 
Particularly configured to: display the acquired face image of the user and a target point on a display; display, on the display, a cursor that moves in accordance with the direction of a user's face in the acquired face image of the user; and store information related to the certificate based on the position of the cursor and the position of the target point on the display coinciding with each other and the first hash value and the second hash value coinciding with each other and the acquired face image of the user and the face image read from the certificate image coinciding with each other, wherein the one or more processors acquire the face image of the user by the camera while the user maintains the direction of the face in a state that the position of the cursor and the position of the target point on the display coinciding with each other, to align the user and to ensure a live, valid user is attempting to be authenticated (see, e.g., Wilder at [0004], [0013-0015], and [0020]; with Singal at [0057-061] and [0089]). Regarding claim 16, Brundage teaches an information processing device comprising: a memory configured to store instructions ([0056-057] – system comprises instructions stored in memory and being executed via a processor); and one or more processors configured to execute the instructions ([0056-057] – system comprises instructions stored in memory and being executed via a processor) to: acquire a certificate and generate a certificate image ([0032], [0059-062], and [0081] – An identification document <i.e., certificate> is presented <i.e., acquired>. An image is generated based on the identification document <i.e., certificate image generated>); read a face image from the certificate image (Fig. 6, [0054], [0061-062], and [0081] – An image is captured of an identification document. 
Data from the image may be read via, e.g., OCR <i.e., character information from certificate image> and facial recognition software <i.e., face image read from the certificate image>. A hash is calculated based on the OCR and facial recognition software. When the calculated hash matches the hash of a watermark stored on the identity document, a match is determined; [0046] – the compared watermark can be based on a hash of, e.g., the photograph, birth date, name, and card number located on an identity document). Yet, Brundage appears to fail to specifically disclose the system configured to acquire a face image of a user by a camera; compare the acquired face image of the user with the face image read from the certificate image; display the acquired face image of the user and a target point on a display; display, on the display, a cursor that moves in accordance with the direction of a user's face in the acquired face image of the user; and store information related to the certificate based on the position of the cursor and the position of the target point on the display coinciding with each other and the acquired face image of the user and the face image read from the certificate image coinciding with each other, wherein the one or more processors acquire the face image of the user by the camera while the user maintains the direction of the face in a state that the position of the cursor and the position of the target point on the display coinciding with each other. 
However, Singal teaches a similar system for verifying identification when presented an identification document (see abstract), configured to acquire a face image of a user by a camera ([0057-061] and [0089] – the user takes a selfie using the client device <i.e., acquires a face image by a camera>); compare the acquired face image of the user with the face image read from the certificate image ([0057-061] and [0089] – the user takes a selfie using the client device <i.e., acquires a face image by a camera>. The face image is compared against the photo visible on the ID document <i.e., the face image read from the certificate image>); store information related to the certificate based on ([0057-061] and [0089] – The face image is compared against the photo visible on the ID document <i.e., the face image read from the certificate image>, and an approval or rejection is stored) the acquired face image of the user and the face image read from the certificate image coinciding with each other ([0057-061] and [0089] – The face image is compared against the photo visible on the ID document <i.e., the face image read from the certificate image>, and an approval or rejection is stored). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Brundage with the teachings of Singal, configured to acquire a face image of a user by a camera; compare the acquired face image of the user with the face image read from the certificate image; and store information related to the certificate based on the acquired face image of the user and the face image read from the certificate image coinciding with each other, to ensure that the user presenting the ID document is the user represented by the ID document (see, e.g., Singal at [0057-061] and [0089]). 
Yet, the combination of Brundage and Singal appears to fail to specifically disclose the system configured to display the acquired face image of the user and a target point on a display; display, on the display, a cursor that moves in accordance with the direction of a user's face in the acquired face image of the user; store information related to the certificate based on the position of the cursor and the position of the target point on the display coinciding with each other; and wherein the one or more processors acquire the face image of the user by the camera while the user maintains the direction of the face in a state that the position of the cursor and the position of the target point on the display coinciding with each other. However, Wilder teaches a similar system for capturing an authenticating selfie (see, e.g., Wilder at abstract, [0004]), configured to display the acquired face image of the user and a target point on a display ([0004], [0013-014], and [0020] – a user’s face, a virtual pointer, and a target <i.e., target point> are displayed for facial verification. The user must point their face <and by extension, the virtual pointer> at the target for authentication); display, on the display, a cursor that moves in accordance with the direction of a user's face in the acquired face image of the user (Fig. 1 and [0004], [0013-014], and [0020] – a virtual pointer <i.e., cursor> is displayed on the user’s screen that moves in accordance with the direction of the user’s face image); and store information related to the certificate based on the position of the cursor and the position of the target point on the display coinciding with each other ([0004], [0013-014], and [0020] – a user’s face, a virtual pointer, and a target <i.e., target point> are displayed for facial verification. The user must point their face <and by extension, the virtual pointer> at the target for authentication. 
When the user successfully aligns their face/virtual pointer <i.e., cursor> with the target, the user may be determined as a valid live user <i.e., information stored>), wherein the one or more processors acquire the face image of the user by the camera while the user maintains the direction of the face in a state that the position of the cursor and the position of the target point on the display coinciding with each other ([0028-030], [0013-014], and [0020] – a user’s face, a virtual pointer, and a target <i.e., target point> are displayed for facial verification. The user’s face is being captured by a camera on the user device, and displayed via a display on the user device. The user must point their face <and by extension, the virtual pointer> at the target for authentication. The user may be required to align their face/pointer/target for a specific duration of time <i.e., maintaining face in the state>. An image is taken of the user while their face is aligned with the target <e.g., their face is captured/displayed with the pointer aligning with the target>. When the user successfully aligns their face/virtual pointer <i.e., cursor> with the target according to the rules, the user may be determined as a valid live user <i.e., information stored>). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the ID authentication system of Brundage and Singal with the teachings of Wilder to further utilize the liveness check when capturing the face image. 
Particularly configured to: display the acquired face image of the user and a target point on a display; display, on the display, a cursor that moves in accordance with the direction of a user's face in the acquired face image of the user; and store information related to the certificate based on the position of the cursor and the position of the target point on the display coinciding with each other and the acquired face image of the user and the face image read from the certificate image coinciding with each other, wherein the one or more processors acquire the face image of the user by the camera while the user maintains the direction of the face in a state that the position of the cursor and the position of the target point on the display coinciding with each other, to align the user and to ensure a live, valid user is attempting to be authenticated (see, e.g., Wilder at [0004], [0013-0015], and [0020]; with Singal at [0057-061] and [0089]). Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Brundage, Singal, and Wilder, further in view of Shingo et al. (JP2017021603, hereinafter “Shingo”) and Bradley et al. (US20030223584, hereinafter “Bradley”). Regarding claim 8, the combination of Brundage, Singal, and Wilder teaches the information processing device according to claim 1, wherein the one or more processors generate the second hash value by combining a “character value” generated from the character information and a “media value” generated from the face image read from the certificate image (Brundage at Fig. 6, [0054], [0061-062], and [0081] – An image is captured of an identification document. Data from the image may be read via, e.g., OCR <i.e., character information from certificate image> and facial recognition software <i.e., face image from certificate image>. A hash <i.e., second hash> is calculated based on the OCR and facial recognition software <i.e., combination/string of both>. 
When the calculated hash matches the hash of a watermark stored on the identity document, a match is determined; [0046] – the compared watermark can be based on a hash of, e.g., the photograph, birth date, name, and card number located on an identity document). Yet, the combination of Brundage, Singal, and Wilder appears to fail to specifically disclose wherein the one or more processors generate the second hash value by combining a third hash value generated from the character information and a fourth hash value generated from the face image read from the certificate image. However, Shingo teaches a similar system for verifying authenticity of a media signature read from a face image (see, e.g., [0026-032], [0045], [0054-057]), wherein a media signature may be a hash generated from the face image ([0026-032], [0045], [0054-057] – a photo ID is scanned and feature information is extracted from the photo ID image. A hash is generated from the extracted feature information. The hash may be used in authenticating the photo ID). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the media value of the combination of Brundage, Singal, and Wilder with the teachings of Shingo, wherein the one or more processors generate the combined value by combining a character value generated from the character information and a hash value generated from the face image, to provide secure authentication while reducing the combination calculations, and so that the face may be individually authenticated (see, e.g., Brundage at [0058-062], [0081], and [0038]; with Shingo at [0026-032], [0045], [0054-057]). 
Yet, the combination of Brundage, Singal, Wilder, and Shingo appears to fail to specifically disclose wherein the one or more processors generate the second hash value by combining a third hash value generated from the character information and a fourth hash value generated from the face image read from the certificate image. However, Bradley teaches a similar system for verifying authenticity of a media signature read from a certificate image (see, e.g., Bradley at [0009-010], [0021], [0038-039]), wherein a media signature may be a hash generated from the character information ([0009-010], [0021], [0038-039] – an identification document is scanned and character information is extracted from the certificate image. A hash is generated from the extracted character information. The hash may be used in authenticating the identification document). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the character value of the combination of Brundage, Singal, Wilder, and Shingo with the teachings of Bradley, wherein the one or more processors generate the combined value by combining a third hash value generated from the character information and a hash value generated from the face image, to provide secure authentication while reducing the combination calculations, and so that both the face and the character information may be individually authenticated (see, e.g., Brundage at [0058-062], [0081], and [0038]; with Bradley at [0009-010], [0021], [0038-039]). 
Regarding claim 9, the combination of Brundage, Singal, Wilder, Shingo, and Bradley teaches the information processing device according to claim 8, wherein the one or more processors extract feature information from the face image read from the certificate image and generate the fourth hash value using the feature information (Shingo at [0026-032], [0045], [0054-057] – a photo ID is scanned and feature information is extracted from the photo ID image. A hash is generated from the extracted feature information. The hash may be used in authenticating the photo ID). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Brundage, Singal, Wilder, Shingo, and Bradley with the teachings of Shingo, wherein the one or more processors extract feature information from the face image and generate the hash value using the feature information, to provide secure authentication while reducing the bit-size to be authenticated/embedded into the watermark (see, e.g., Bradley at [0009-011] and [0028-034]; with Shingo at [0026-032], [0045], [0054-057]). Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Brundage, Singal, and Wilder, further in view of Takeshi et al. (JP2020064541, hereinafter “Takeshi”). Regarding claim 15, the combination of Brundage, Singal, and Wilder teaches the information processing device according to claim 1. 
Yet the combination of Brundage, Singal, and Wilder appears to fail to specifically disclose wherein the one or more processors are further configured to execute the instructions to: display a first button indicating an authentication by passcode and a second button indicating an authentication by face; perform the authentication by passcode in a case where the first button is selected by a user and perform the authentication by face based on the second button being selected by the user; and display a registered certificate image based on the display authentication being successful. However, Takeshi teaches a similar system comprising reading information of a photo ID via a scanning system, and authenticating the photo ID (see, e.g., Takeshi at [0050-057]), wherein the one or more processors are further configured to execute the instructions to: display a first button indicating an authentication by passcode and a second button indicating an authentication by face ([0111-0115] – a first button is presented to the user for authenticating via PIN <i.e., passcode> authentication. A successful verification may be completed when the correct PIN is entered; [0177] and [0131-142] – a second button is presented to the user for authentication via face); perform the authentication by passcode in a case where the first button is selected by a user and perform the authentication by face based on the second button being selected by the user ([0111-0115] – a button is presented to the user for authenticating via PIN <i.e., passcode> authentication. When selected, a successful verification may be completed when the correct PIN is entered; [0177] and [0131-142] – a button is presented to the user for authentication via face. 
When selected, a successful authentication may be performed when the bearer of the photo ID’s face matches the face of the photo ID); and display a registered certificate image based on the display authentication being successful ([0175], [0111-0115], and [0131-142] – a photo ID is verified based on the user/user’s selections. The verification may be performed, e.g., when opening an account <i.e., registering> with a financial institution. A result of the registration is displayed to the user when an account is successfully opened). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Brundage, Singal, and Wilder with the teachings of Takeshi, wherein the one or more processors are further configured to execute the instructions to: display a first button indicating an authentication by passcode and a second button indicating an authentication by face; perform the authentication by passcode in a case where the first button is selected by a user and perform the authentication by face based on the second button being selected by the user; and display a registered certificate image based on the display authentication being successful, to provide identification flexibility to users when registering with financial institutions (see, e.g., Takeshi at [0175], [0111-0115], and [0131-142]). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Yeom (KR100455311) teaches a system for generating a combined photo ID hash based on character information and image data, and verifying the photo ID based on the combined hash (see, e.g., abstract, [0025-026]). Rhoads et al. (US20100029380) teaches a system for verifying authenticity of a printed object by embedding watermarks into the printed object, and verifying the printed object using the embedded hash (see, e.g., abstract, [0080], and [0164]). 
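For orientation on the claim 8 scheme discussed above (a second hash value formed by combining a third hash of the character information with a fourth hash of the face-image features, then checked against the hash recovered from the document's watermark), a minimal sketch follows. It is illustrative only: SHA-256, concatenation as the combining step, and every name here are assumptions, not details taken from Brundage, Shingo, or Bradley.

```python
import hashlib

def field_hash(data: bytes) -> bytes:
    # Illustrative "third"/"fourth" hash values (SHA-256 is an assumed choice).
    return hashlib.sha256(data).digest()

def second_hash(character_info: str, face_features: bytes) -> bytes:
    # Claim 8 as characterized: a third hash from the character information
    # and a fourth hash from the face-image feature data, combined (here by
    # concatenation, an assumption) into the second hash value.
    third = field_hash(character_info.encode("utf-8"))
    fourth = field_hash(face_features)
    return field_hash(third + fourth)

def verify(watermark_hash: bytes, character_info: str, face_features: bytes) -> bool:
    # Brundage-style check: the hash recomputed from the scanned document
    # must match the hash recovered from the embedded watermark.
    return second_hash(character_info, face_features) == watermark_hash

# Made-up enrollment data for illustration:
wm = second_hash("DOE, JANE 1990-01-01 D1234567", b"\x01\x02\x03")
assert verify(wm, "DOE, JANE 1990-01-01 D1234567", b"\x01\x02\x03")
assert not verify(wm, "DOE, JOHN 1990-01-01 D1234567", b"\x01\x02\x03")
```

Hashing each field before combining avoids the ambiguity of raw concatenation (e.g., "a" + "bc" and "ab" + "c" concatenate identically) and lets the face and character components be checked individually, which is the rationale the Action attributes to Shingo and Bradley.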
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA RAYMOND WHITE whose telephone number is (571) 272-4365. The examiner can normally be reached Monday-Thursday and alternate Fridays. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Taghi Arani, can be reached at 571-272-3787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /J.R.W./ Examiner, Art Unit 2438 /TAGHI T ARANI/ Supervisory Patent Examiner, Art Unit 2438
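As a plain-code illustration of the head-direction cursor and dwell requirement that the rejection maps to Wilder, the sketch below converts face yaw/pitch into a screen cursor and passes only when the cursor stays on the target point for a run of consecutive frames. The gain, radius, dwell length, and all names are assumptions chosen for illustration, not details from Wilder.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    yaw: float    # degrees, positive = facing right
    pitch: float  # degrees, positive = facing up

def cursor_position(pose: Pose, width: int, height: int, gain: float = 10.0) -> tuple[float, float]:
    # Move a cursor away from screen center in accordance with the face direction.
    x = width / 2 + gain * pose.yaw
    y = height / 2 - gain * pose.pitch
    return x, y

def on_target(cursor: tuple[float, float], target: tuple[float, float], radius: float = 20.0) -> bool:
    # The cursor "coincides" with the target point when within a small radius.
    dx, dy = cursor[0] - target[0], cursor[1] - target[1]
    return (dx * dx + dy * dy) ** 0.5 <= radius

def liveness_pass(poses, target, width=640, height=480, dwell_frames=15) -> bool:
    # The capture succeeds only if the user maintains the face direction so the
    # cursor stays on the target for dwell_frames consecutive frames.
    run = 0
    for pose in poses:
        run = run + 1 if on_target(cursor_position(pose, width, height), target) else 0
        if run >= dwell_frames:
            return True
    return False
```

With these defaults, a face held at yaw 16 and pitch 8 degrees maps the cursor onto a target at (480, 160) on a 640x480 display, so fifteen consecutive such frames pass the check, while any break in alignment resets the dwell counter.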

Prosecution Timeline

Mar 28, 2023: Application Filed
May 14, 2025: Non-Final Rejection — §103, §112
Aug 20, 2025: Response Filed
Oct 08, 2025: Final Rejection — §103, §112
Jan 14, 2026: Request for Continued Examination
Jan 25, 2026: Response after Non-Final Action
Feb 06, 2026: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587363
METHOD AND APPARATUS FOR IMPROVED VIDEO INFORMATION SECURITY AGAINST UNAUTHORIZED ACCESS
2y 5m to grant · Granted Mar 24, 2026
Patent 12526156
MORE EFFICIENT POST-QUANTUM SIGNATURES
2y 5m to grant · Granted Jan 13, 2026
Patent 12519616
NOISY TRANSACTION FOR PROTECTION OF DATA
2y 5m to grant · Granted Jan 06, 2026
Patent 12506655
PROVISIONING CONTROL APPARATUS AND METHOD FOR PROVISIONING ELECTRONIC COMPONENTS OR DEVICES
2y 5m to grant · Granted Dec 23, 2025
Patent 12506627
COMPUTATION OFFLOADING APPROACH IN BLOCKCHAIN-ENABLED MCS SYSTEMS
2y 5m to grant · Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview (+35.9%): 99%
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 115 resolved cases by this examiner. Grant probability derived from career allow rate.
