DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The IDSs filed on 4/05/2024 and 3/06/2025 have been received and considered.
Claims 1, 3-7, 10-11, 14-15, and 17-21 were preliminarily amended in the amendment filed on 04/05/2024.
Claims 2, 9, and 16 have been cancelled.
Claims 1, 3-8, 10-15, and 17-21, all of the remaining claims pending in this application, have been rejected.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-8, 10-15, and 17-21 are rejected under 35 U.S.C. 103 as being unpatentable over JP Publication No. 2019/201360 A to YASUSHI et al. (hereinafter YASUSHI) in view of JP Patent No. 6541140 B1 to Nakajima.
Examiner notes the use of Translated Document 1 in reference to YASUSHI.
Examiner notes the use of Translated Document 2 in reference to Nakajima.
Claim 1
Regarding claim 1, an independent apparatus claim, YASUSHI teaches an avatar generation apparatus comprising: at least one memory configured to store instructions; and at least one processor configured to execute the instructions to perform operations comprising: generating an avatar of the target person by using the first image when the first authentication processing is successful ("In order to prevent impersonation, the authentication unit 102 determines whether or not the person shown in the imaging unit 13 is a valid user based on the frame image obtained by the image data acquisition unit 101. For example, the authentication unit 102 includes a feature amount obtained from the face region extracted by the region extraction unit 41 and a feature amount stored in advance in the storage unit 11 as a feature amount of a legitimate user who uses the terminal device 1. It is determined whether or not it is legitimate by checking. When the authentication unit 102 determines that the user is an unauthorized user, the function as the image processing unit 104 may be stopped. The selection receiving unit 103 sets the original image of the person image generated by the image creation unit as the original human subject region itself (live image) extracted by the region extraction unit 41 or stores it in the storage unit 11 in advance. Selection of whether to be a certain user image or an avatar image is accepted.", Translated Document 1, lines 423-434).
YASUSHI does not explicitly teach an authentication unit that performs, by using a first image including a target person and master information of the target person, first authentication processing of the target person included in the first image. Rather, YASUSHI stores an image or avatar in advance at lines 113-117.
However, Nakajima teaches performing, by using a first image including a target person and master information of the target person ("The imaging device 200E is, for example, a camera including an individual imaging element such as a CCD (charge coupled device) image sensor or a CMOS (complementary MOS) image sensor. The user U captures a moving image of the personal identification document (hereinafter, also referred to as first imaging data) and a self-shooting moving image of the user U (hereinafter, also referred to as second imaging data) using the imaging device 200E. In addition, it is possible to confirm that the personal identification document is not an image or the like of the personal identification document acquired from the Internet or the like by imaging the moving image, and it is possible to improve the reliability of the personal identification.", Translated Document 2, lines 81-89; "The first determination unit 304 is based on the first image data and the second image data (still image data of a face photograph of the personal verification document and still image data of the user U) extracted by the still image data extraction unit 303. It is determined whether or not the person of the face photograph of the personal identification document and the user U are the same person (see FIG. 3B (a)).", Translated Document 2, lines 193-197). Paraphrasing, Nakajima teaches that it is desirable to capture an image of a person's identification document and match it against a captured image of that person. For clarity of the record, this is the "first authentication."
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of YASUSHI to incorporate comparing a captured photo of a user with a captured image of that user's personal identification document (e.g., an ID card) as a means of authentication, as disclosed by Nakajima. This would result in capturing the user's ID when the image or avatar of YASUSHI is stored, in order to ensure that the modified images of the user (YASUSHI is primarily directed to synthesizing images so that the user appears to be looking at the camera) correspond with the actual user. The suggestion/motivation for doing so would have been to provide a more secure way of talking to another party, with both parties being at ease that the person they are talking to is indeed that individual and not an impersonator/scammer.
Claim 3
Regarding claim 3, dependent on claim 1, YASUSHI, in view of Nakajima, teaches the invention as claimed in claim 1.
YASUSHI further teaches the avatar generation apparatus according to claim 1, wherein the operations further comprise causing information indicating that the first authentication processing is successful to be displayed on a display along with the avatar ("When a terminal device in a video call system exchanges image capturing devices with each other as they are, the user's line of sight when paying attention to one place on the display unit on which the image of the other party or information about the other party is displayed And a deviation from the imaging direction of the imaging unit that images the user.", Translated Document 1, lines 154-158), where the party information being displayed is confirmation of authentication.
Claim 4
Regarding claim 4, dependent on claim 1, YASUSHI, in view of Nakajima, teaches the invention as claimed in claim 1.
YASUSHI further teaches the avatar generation apparatus according to claim 1, wherein the operations further comprise performing second authentication processing of the target person by using the avatar after the avatar is generated ("The processing unit 10 receives connection information such as address information of the call destination by the operation unit 15 (step S103), and then receives selection of the original image of the person image created by the image creation unit 44 by the selection receiving unit 103, The selected contents are stored (step S104). At this time, the processing unit 10 may also accept selection of whether the posture / line of sight of the human image is linked to the movement of the user imaged by the imaging unit 13 or the fixed angle is maintained while facing directly. During this time, the processing unit 10 performs authentication by the authentication unit 102 in the background, and when the authentication fails, the function of the image processing unit 104 is stopped and a message is displayed without performing the subsequent processing.", Translated Document 1, lines 456-465), where this authentication is done on the image selected by the user, which includes the created image (avatar). For clarity of the record, this authentication (stopping processing when a mismatch is detected) is the second authentication.
Claim 5
Regarding claim 5, dependent on claim 4, YASUSHI, in view of Nakajima, teaches the invention as claimed in claim 4.
YASUSHI further teaches wherein the operations further comprise, in the second authentication processing, causing the avatar to perform a first action to be performed by the target person and recognizing, by processing a second image in which the target person is captured, an action performed by the target person after the avatar performs the first action and performing the second authentication processing by using the action and the first action (Rejected as applied to claim 4).
Claim 6
Regarding claim 6, dependent on claim 4, YASUSHI, in view of Nakajima, teaches the invention as claimed in claim 4.
YASUSHI further teaches wherein the operations further comprise acquiring a second image in which the target person is captured for the second authentication processing and, in the second authentication processing, causing information indicating a first action to be performed by the target person to be displayed on a display and causing the second image to be displayed on the display in a state of the target person captured in the second image being replaced by the avatar (Rejected as applied to claim 4).
Claim 7
Regarding claim 7, dependent on claim 1, YASUSHI, in view of Nakajima, teaches the invention as claimed in claim 1.
YASUSHI further teaches the avatar generation apparatus according to claim 1, wherein the operations further comprise, in the first authentication processing, acquiring an image of the target person included in a personal identification document of the target person and generating the master information by using the image (Rejected as applied to claim 1).
Claim 8, an independent method claim, is rejected for the same reasons as applied to claim 1.
Claims 10-14 are rejected for the same reasons as applied to the above claims.
Claim 15, an independent non-transitory computer-readable storage medium claim, is rejected for the same reasons as applied to claim 1.
Claims 17-21 are rejected for the same reasons as applied to the above claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ronde Miller, whose telephone number is (703) 756-5686. The examiner can normally be reached Monday-Friday, 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor Gregory Morse can be reached on (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RONDE LEE MILLER/Examiner, Art Unit 2663
/GREGORY A MORSE/Supervisory Patent Examiner, Art Unit 2698