Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The claim amendments filed on November 13, 2025 have been entered.
In view of the amendments to claim 10, the rejection under 35 U.S.C. 101 has been withdrawn.
Response to Arguments
Applicant's arguments filed November 13, 2025 have been fully considered, but they are not persuasive.
In response to Applicant’s argument that Rafferty does not disclose or suggest at least “select the environment information corresponding to the obtained imaging angle information,” and that Rafferty describes selecting an office background image as an unlock condition where said selection is performed by the user and not performed using imaging angle information, the Examiner respectfully disagrees. It is noted that the specification of the instant application describes the imaging angle information as “information indicating an angle when the image is captured” ([0005]). Every image that is captured has an inherent angle associated with it. Additionally, the system of amended claim 1 does not preclude a user from making the selection; furthermore, Rafferty teaches that the selection is user-defined, which is not necessarily a manual process.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3, 5-7, and 9-11 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Rafferty et al. (US 10,795,984 B1, hereinafter “Rafferty”).
As to claim 1, Rafferty teaches an authentication system (systems disclosed herein describe using machine learning to lock and unlock a device; Abstract) comprising: at least one memory that is configured to store instructions (memory 215; Fig. 2) and at least one processor that is configured to execute the instructions (processor 203; Fig. 2) to obtain face information (the unlock condition may verify the user using facial recognition; Fig. 7 and Col. 11, lines 34-36) and environment information of a target (the location may be verified using a background image; Col. 11, lines 39-40), from an image including the target (the user may have to provide an image that includes the user's face and office background; Col. 11, lines 62-63); obtain imaging angle information that is information indicating an angle when the image is captured (each input image inherently has an angle associated with the user’s face, which is interpreted to be the imaging angle information); and perform an authentication process on the target, on the basis of the obtained face information, the obtained environment information, and the obtained imaging angle information (Fig. 8 shows a flow chart of a process for unlocking the device according to one or more aspects of the disclosure; Col. 12, lines 9-15 and Fig. 8); store the face information (trained to recognize one or more features, which may include user features, e.g., facial recognition; Col. 1, lines 45-47), the environment information (trained to recognize one or more environmental features; Col. 1, lines 45-48; training necessarily requires multiple inputs and therefore would be considered to be performed a plurality of times), and the imaging angle information (each input image inherently has an angle associated with the user’s face, which is interpreted to be the imaging angle information) that are obtained by performing a plurality of times of imaging by changing an image angle (each image captured will have at least a slight variation associated with the user’s face and therefore different imaging angle information), in advance as registered information (training the machine learning system to perform facial and environmental recognition (Col. 1, lines 45-48); to train the machine learning system, the user device may obtain samples of the first feature from the user until the user device is able to recognize the first feature (Col. 8, lines 20-23)); store a plurality of pieces of environment information and a plurality of pieces of imaging angle information in association with each other; select the environment information corresponding to the obtained imaging angle information (selecting the background image as office; Fig. 7), from the registered information; and determine that the authentication process is successful when the obtained environment information matches the selected environment information and the obtained face information matches registered face information (using the trained machine learning model to authenticate a user; Fig. 8).
As to claim 3, Rafferty teaches the authentication system according to claim 1, wherein the at least one processor is configured to execute the instructions to obtain a plurality of pieces of face information from a first imaging unit having a first imaging range (define and train a first model to recognize a first feature of the plurality of first features, where the plurality of first features may include user features, such as a biometric identifier (e.g., facial recognition); 310, 320 Fig. 3, Col. 7, lines 35-37), obtain a plurality of pieces of environment information from a second imaging unit having a second imaging range that is different from the first imaging range, obtain the imaging angle information when the first imaging unit and the second imaging unit capture the image (obtain a plurality of secondary features and train a second model to recognize a second feature of the plurality of secondary features, where the forward-facing camera may obtain one or more features of the user's background; 330, 340 Fig. 3, Col. 8, lines 48-50), and perform the authentication process, on the basis of the obtained face information, the obtained environment information, and the obtained imaging angle information (using the trained machine learning model to authenticate a user; Fig. 8).
As to claim 5, Rafferty teaches the authentication system according to claim 1, wherein the at least one processor is configured to execute the instructions to obtain the face information and the environment information from a plurality of time series images captured in a time series, obtain the imaging angle information when each of the time series images is captured (the user device may obtain samples of the first feature from the user until the user device is able to recognize the first feature (Col. 8, lines 20-23)), and perform the authentication process, on the basis of the face information, a change in the environment information in the time series, and a change in the imaging angle information in the time series (the authentication process must be performed in a time series sequence, since time is linear and each input image is captured at a different time).
As to claim 6, Rafferty teaches the authentication system according to claim 2, wherein the at least one processor is configured to execute the instructions to obtain position information when the image is captured, store the positional information as the registered information, in addition to the face information, the environment information, and the imaging angle information, and perform the authentication process, on the basis of the obtained face information, the obtained environment information, the obtained imaging angle information, the obtained positional information, and the registered information (geographic location of the computing device may be determined and used at the time of unlock event to determine whether it satisfies the unlock condition; Col. 6, lines 22-35).
As to claim 7, Rafferty teaches the authentication system according to claim 1, wherein the at least one processor is configured to execute the instructions to notify the target to change a condition of capturing the image, when the obtained face information matches registered face information, and the obtained environment information or the obtained imaging angle information does not match registered environment information or registered imaging angle information in the authentication process (Fig. 9B shows an example of a failed unlock attempt according to one or more aspects of the disclosure).
As to claim 10, it recites the method performed by the system of claim 1 and is addressed in the same manner. Please see claim 1 for the detailed mapping.
As to claim 11, it differs from claim 1 in that it recites a non-transitory recording medium on which a computer program that allows a computer to execute an authentication method is recorded. Rafferty teaches RAM and ROM within the computing device (Fig. 2). Please see claim 1 for the detailed mapping of the remaining limitations.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Rafferty in view of Hamami et al. (US 2022/0139109 A1, with a provisional application (62/811,978) filing date of Feb. 28, 2019, hereinafter “Hamami”).
As to claim 4, Rafferty teaches the authentication system according to claim 1, wherein the at least one processor is configured to execute the instructions to obtain the face information and the environment information from a first image captured at a first timing (enrollment process; Fig. 3) and a second image captured at a second timing (image captured during an unlock event; Fig. 8), obtain the imaging angle information when the first image and the second image are captured (each input image inherently has an angle associated with the user’s face, which is interpreted to be the imaging angle information), and perform the authentication process, on the basis of the face information and the environment information obtained from the first image and the second image (authenticate the user based on first and second authentication parameters, which were previously addressed in this action as being the face and environment information). Rafferty does not explicitly teach a difference between the imaging angle information for the first image and the imaging angle information for the second image being used in the authentication process. Hamami teaches using facial recognition to authenticate a user of a computing device while the computing device is in a locked state ([0001]), wherein a plurality of images of a face of a known user are captured in a variety of different poses ([0002]), and wherein each of the one or more images of the face of the known user is included in at least one pose bucket from a plurality of pose buckets, each pose bucket being associated with a respective range of pitch angles and a respective range of yaw angles of the face of the known user ([0005]). Therefore, Hamami teaches using a facial angle difference to authenticate a user.
It would have been obvious to one of ordinary skill in the art to combine Hamami with Rafferty in order to improve user authentication to a locked device, since the user may hold the computing device below the level of his or her face, may rotate the computing device, or may tilt or turn his or her head relative to the computing device when the computing device captures the image of the unknown user. In such situations, the computing device may have difficulty determining whether the unknown user is a known user (Hamami [0001]).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Rafferty in view of Lau et al. (US 9,082,235 B2, hereinafter “Lau”).
As to claim 8, Rafferty teaches the authentication system according to claim 7, but does not explicitly teach wherein the at least one processor is configured to execute the instructions to determine whether or not a face of the target is stereoscopic, by using the face information obtained before the target is notified and the face information obtained after the target is notified. Lau teaches using facial data for device authentication (Title), wherein detection is performed by using a depth camera on the mobile device that can determine whether the face is a three-dimensional object (Col. 22, lines 63-65) (stereoscopic), and wherein one or more different head movements can be requested of the user and the resulting images (after the target is notified) can be compared to corresponding enrolled images (before the target is notified). It would have been obvious to one of ordinary skill in the art to combine Lau with Rafferty in order to determine whether the face is that of a real human (Lau, Col. 22, lines 57-60).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CLAIRE X WANG whose telephone number is (571)270-1051. The examiner can normally be reached M-F 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia Mallari can be reached at (571) 272-4729. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
CLAIRE X. WANG
Supervisory Patent Examiner
Art Unit 1774