DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This is in response to the applicant’s response filed on 12/01/2025. In the applicant’s response, claims 1, 3-4, 6-7, 9, and 11-13 were amended, and claims 14-18 were newly added. Accordingly, claims 1-18 are pending and under examination. Claims 1, 12, and 13 are in independent form.
Claim Rejections - 35 USC § 102
3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
4. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
5. Claims 1-8 and 11-18 are rejected under 35 U.S.C. 102(a)(1)/102(a)(2) as being anticipated by Tussy et al. (US 20200042685, hereinafter “Tussy”).
Regarding claim 1, Tussy discloses an image acquisition apparatus (the imaging display of a facial recognition authentication system; see fig.1, figs.12A-B) comprising:
at least one memory configured to store instructions; and at least one processor (these hardware-related features are inherent in the facial recognition authentication system shown in fig.1, such as mobile phone 112, server 120, camera 114, database 124, and the like) configured to execute the instructions to:
cause an output unit to output a content; acquire an image generated by an image capture unit positioned such that a face of a person can be captured in a case where the person faces to the output unit, the image including the person, wherein the image is configured for authentication (see the prompted message: “place your face within the oval” in fig.13A; see para.202, lines 3-9: “once enrollment or authentication is begun as described previously, the system causes the user's mobile device 1310 to display a small oval 1320 on the screen 1315 while the mobile device 1310 is imaging the user. Instructions 1325 displayed on the screen 1315 instruct the user to hold the mobile device 1310 so that his or her face or head appears within in the oval 1320.”); and
continue to cause the output unit to output the content until a first condition for authentication is satisfied (until the liveness of the user is imaged and validated at step 1218 of fig.12B; at step 1205 of fig.12B, “the various features are tracked through successive images [i.e., the loop of 1205→A→1205 or the loop of 1205→B→1205 in fig.12B] to obtain two-dimensional vectors characterizing the flow or movement of the features. The movement of the features in this example is caused as the user moves the device to fit his/her face within the oval shown in the exemplary screen displays of FIGS. 13A and 13B.” See para.183, lines 1-7), wherein the content includes at least one of a still image or a moving image of a relative of the person (wherein the oval 1320 is an image for guiding the user’s movement to fit his/her face in; para.183 and para.202).
Regarding claims 2 and 15, Tussy discloses wherein the first condition is a condition related to the face of the person in the image (see para.202, lines 7-9: “Instructions 1325 displayed on the screen 1315 instruct the user to hold the mobile device 1310 so that his or her face or head appears within in the oval 1320.”).
Regarding claims 3 and 16, Tussy discloses wherein the first condition relates to
Regarding claims 4 and 17, Tussy discloses wherein the first condition is satisfied in a case where the authentication has been successfully completed or in a case where processing related to the authentication has ended (see 1218 of fig.12B, and para.188: “the liveness or three-dimensionality of the user being imaged and authenticated is validated based on the various checks described above.”).
Regarding claims 5 and 18, Tussy discloses wherein the at least one processor is further configured to execute the instructions to change the content according to a predetermined criterion while continuing to cause the output unit to output the content (see para.205: “the device or authentication server generates a series of differently sized ovals within which the user must place his or her face by moving the mobile device held in the user's hand.”).
Regarding claim 6, Tussy discloses the image acquisition apparatus according to claim 5, wherein the at least one processor is further configured to execute the instructions to generate, by processing the image, state information indicating a state of a person included in the image, and the predetermined criterion relates to the state information (see para.203: “The display 1315 may also show corresponding instructions 1335 directing the user to “zoom in” on his or her face to fill the oval 1330 with his or her face. The user does this by bringing the mobile device 1310 closer to his or her face in a generally straight line to the user's face (such as shown in FIGS. 7A and 7B) until the user's face fills the oval 1330 or exceeds the oval.”).
Regarding claim 7, Tussy discloses the image acquisition apparatus according to claim 1, wherein the at least one processor is further configured to: acquire identification information that is information different from the image and is for identifying the person, wherein cause the output unit to output the content selected by using the identification information (see fig.14 and para.228: “In one embodiment, as shown in FIG. 14, a touchscreen 1410 [for fingers] may be divided up into predetermined regions 1420. For example, there may be nine equal, circular, square, or other shaped regions 1420 on the touchscreen 1410 of the mobile device. During enrollment, the user selects one of the regions 1420 of the screen 1410 to touch to initiate authentication.”).
Regarding claim 8, Tussy discloses the image acquisition apparatus according to claim 1, wherein the at least one processor is further configured to perform authentication processing on the person by using the image (see 1218 of fig.12B, and para.188: “the liveness or three-dimensionality of the user being imaged and authenticated is validated based on the various checks described above.”).
Regarding claim 11, Tussy discloses the image acquisition apparatus according to claim 1, wherein the first condition is set based on at least one of time or frequency at which the authentication for the person has been performed (see para.130: “the required match between pre-stored authentication data (enrollment information) and presently received authentication data (authentication information) is elastic in that the required percentage match between path parameters or images my change depending on various factors, such as time of day, location, frequency of login attempt, date, or any other factor.”).
Regarding claims 12 and 13, each is an inherent variation of claim 1 and is therefore interpreted and rejected for the reasons set forth in the rejection of claim 1.
Regarding claim 14, Tussy discloses the image acquisition apparatus according to claim 1, wherein the at least one processor is further configured to execute instructions to output decision-making support information for assisting an operator in determining the content to be output, the decision-making support information indicating a degree of likely interest of the person for each of a plurality of contents (see para.202, lines 7-9: “Instructions 1325 displayed on the screen 1315 instruct the user to hold the mobile device 1310 so that his or her face or head appears within in the oval 1320.” See para.183, lines 1-7; until the liveness of the user is imaged and validated at step 1218 of fig.12B, at step 1205 of fig.12B, “the various features are tracked through successive images [i.e., the loop of 1205→A→1205 or the loop of 1205→B→1205 in fig.12B] to obtain two-dimensional vectors characterizing the flow or movement of the features. The movement of the features in this example is caused as the user moves the device to fit his/her face within the oval shown in the exemplary screen displays of FIGS. 13A and 13B.”).
Claim Rejections - 35 USC § 103
6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
7. Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Tussy.
Regarding claims 9 and 10, Tussy does not explicitly disclose using a machine learning model. However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to recognize that the facial recognition processing in the method of Tussy is based on machine learning. Machine learning is a well-known technique that is widely used in the field of object recognition in images. In paragraph [0081], Tussy clearly teaches that “[f]acial recognition processing is known in the art (or is an established process) and as a result, it is not described in detail herein”. Therefore, the claims are unpatentable over Tussy.
Response to Arguments
8. Applicant’s arguments with respect to claim 1, filed on 12/01/2025, have been fully considered but they are not persuasive.
On pages 6-7 of applicant’s response, applicant argues that Tussy does not disclose “wherein the content includes at least one of a still image or a moving image of a relative of the person”.
The examiner respectfully disagrees with the applicant’s argument because Tussy clearly discloses displaying the prompted message “place your face within the oval” and the oval image 1320 for instructing the user to hold the mobile device 1310 so that his or her face or head appears within the oval image 1320. See fig.13A and para.202. Therefore, the argument is unpersuasive and the examiner maintains the rejections.
Conclusion
9. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RUIPING LI whose telephone number is (571)270-3376. The examiner can normally be reached 8:30am-5:30pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, HENOK SHIFERAW can be reached on (571)272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RUIPING LI/Primary Examiner, Ph.D., Art Unit 2676