DETAILED ACTION
1. Claims 1, 3-12, 14-15, and 17-23 are pending in this application.
Notice of Pre-AIA or AIA Status
2.1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2.2. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Continued Examination Under 37 CFR 1.114
3. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission has been entered.
Response to Arguments
4. Applicant's arguments have been considered but are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 103
5.1. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
5.2. Claims 1, 3-12, 14-15, and 17-23 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. 20240378274 to Tanski et al. (“Tanski”) in view of US Patent Application Publication No. 20200364721 to Pickering et al. (“Pickering”), further in view of US Patent Application Publication No. 20180189550 to McCombe et al. (“McCombe”).
As per claim 1, Tanski discloses a method, comprising: receiving, by a device and from a user device, a custom pose of a user of an environment; generating, by the device, pose information (Tanski, [0018]-[0019]: “As shown in FIG. 1A, and by reference number 110, the user device 102 may collect a set of reference biometric measurements. For example, the user device 102 may initiate an enrollment procedure or a programming mode, in which a set of reference biometric measurements are captured of a user to enable generation or training of a multi-modal artificial intelligence model. In some implementations, the user device 102 may collect the set of reference biometric measurements based on receiving an instruction. For example, the user device 102 may detect a user interaction with an input element of the user device 102, such as a user interface element, and may interpret the user interaction with the input element as a command to initiate the enrollment procedure. Additionally, or alternatively, the user device 102 may receive a command from the authentication system 104. For example, the authentication system 104 may determine to initiate enrollment of the user for biometric authentication and may transmit a command to the user device 102 to cause the user device to capture the set of reference biometric measurements.” See also Figs. 1A-1C and associated text, [0015], [0029], and [0059]-[0060]);
receiving, by the device and from the user device, a custom signature of the user in a three-dimensional space; generating, by the device, signature information (Tanski, [0015]-[0019]: “Kinetic biometric data may include biometric data that is associated with movement or other changes. For example, an authentication system may receive a set of biometric measurements from a virtual reality device, which includes accelerometers and/or three-dimensional (3D) sensors to determine a gesture, a hand motion, a gait, a posture, a pupil dilation, an eye movement, or a facial movement, which may be compared against a reference to determine whether, for example, a measured facial movement matches a reference facial movement.” See also [0058]);
authenticating, by the device, the user to access the environment based on at least the pose identifier (Tanski, [0025]-[0026]; see also Figs. 1A-1C and associated text, [0034], and [0071]).
Tanski does not explicitly disclose the following; however, in the same field of endeavor, Pickering discloses: a pose/signature identifier; and authenticating, by the device, the user to perform an action in an environment, wherein the action includes initiating a transaction within the environment based on authenticating the user using the pose identifier and the signature identifier (Pickering, [0028]: “In one aspect, the present embodiment may be implemented in a payment authentication environment in which a user may be prompted to provide a movement-based signature (i.e., a signature move) for authentication. In response to the prompt, the user may make a signature move. A signature move may comprise a plurality of user poses, or a sequence of continuous user movements (such as, e.g., a dance movement). Using one or more sensors, a motion capture system may capture the user movements in two-dimensional (2D) and/or three-dimensional (3D) images. User movement patterns may be identified from the images and may be compared to a unique electronic signature representing expected movement patterns. The expected movement patterns constituting the unique electronic signature may be kept a secret. However, even if a rogue party knows of the expected patterns and closely mimics the expected movement patterns for authentication, a successful replication may be impossible due to varying physical dimensions between individuals. The movement-based authentication may be used in conjunction with other types of biometric authentication methods, such as face recognition, fingerprint recognition, etc., to facilitate a multifactor authentication in one, seamless process. The combination of biometric authentication and movement-based authentication creates a robust authentication system suitable for a wide range of use cases.” See also [0037], [0042]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Tanski with the teaching of Pickering by including the signature feature, in order to provide Tanski's system with movement-based signature authentication. One such method comprises determining one or more features associated with a user based on one or more two-dimensional images and determining one or more body points associated with the user based on one or more three-dimensional images. A movement pattern of each body point of the user is determined based on the one or more three-dimensional images. The one or more determined features are compared to corresponding one or more stored features associated with the user. If the one or more detected features match the one or more stored features, the one or more determined movement patterns are compared to a unique electronic signature associated with the user. Upon determining that the one or more determined movement patterns match the unique electronic signature, the user is authenticated for an electronic transaction.
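For illustration only, the two-stage check Pickering describes (feature matching from 2D images, followed by per-body-point movement-pattern matching against a unique electronic signature) can be sketched in Python. The function name, the dictionary data layout, and the exact-equality and per-point-tolerance matching rules are illustrative assumptions, not taken from Pickering:

```python
def authenticate_transaction(detected_features, movement_patterns,
                             stored_features, electronic_signature,
                             pattern_tolerance=0.1):
    """Two-stage sketch: features first, then movement patterns.

    detected_features / stored_features: dicts of named biometric values.
    movement_patterns / electronic_signature: dicts mapping each body point
    to a sequence of observed / expected position values.
    The matching rules here (exact feature equality, fixed per-point
    tolerance) are illustrative assumptions only.
    """
    # Stage 1: biometric features (e.g., facial features) must match.
    if detected_features != stored_features:
        return False
    # Stage 2: every body point's movement must track the stored signature.
    for point, expected in electronic_signature.items():
        observed = movement_patterns.get(point)
        if observed is None or len(observed) != len(expected):
            return False
        if any(abs(o - e) > pattern_tolerance
               for o, e in zip(observed, expected)):
            return False
    return True  # both stages passed: user authenticated for the transaction
```

Only when both stages pass is the user authenticated, mirroring the sequential "if the detected features match ... then compare movement patterns" structure of the summary above.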
Tanski and Pickering do not explicitly disclose the following; however, in the same field of endeavor, McCombe discloses: provide a representation of a three-dimensional space for display to a user device; receive, from the user device, a custom pose of a user within the provided representation of the three-dimensional space (McCombe, Abstract, [0142], [0146]-[0149]; see also [0734]: “Still another aspect of the invention includes prompting the user to present multiple distinct facial poses or head positions, and utilizing a depth detection system to scan the multiple facial poses or head positions across a series of image frames, so as to increase protection against forgery of the facial signature. [0147] In another aspect, generating a unique facial signature further includes executing an enrollment phase, which includes prompting the user to present to the cameras a plurality of selected head movements or positions, or a series of selected facial poses, and collecting image frames from a plurality of head positions or facial poses for use in generating the unique facial signature representative of the user. [0149] The enrollment phase can include generating an enrolled facial signature containing data corresponding to multiple image scans of a user's face, the multiple image scans corresponding to a plurality of the user's head positions or facial poses; and the matching phase can include requiring at least a minimum number of captured image frames corresponding to different facial or head positions matching the multiple scans within the enrolled signature.”);
receive, from the user device, a custom signature of the user within the provided representation of the three-dimensional space ([0136] Another aspect of the invention includes generating a facial signature, based on images of a human user's or subject's face, for enabling accurate, reliable identification or authentication of a human subject or user of a system or resource, in a secure, difficult to forge manner. This aspect of the invention relates to methods, systems and computer software/program code products that enable generating a facial signature for use in identifying a given human user).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Tanski with the teachings of Pickering and McCombe by including the three-dimensional-space feature, in order to provide Tanski's system with generation of a facial signature, based on images of a human user's or subject's face, for enabling accurate, reliable identification or authentication of a human user or subject, in a secure, difficult-to-forge manner. Methods, systems and computer program products (“software”) enable a virtual three-dimensional visual experience (referred to herein as “V3D”) in videoconferencing and other applications; the capturing, processing and displaying of images and image streams; and generation of a facial signature based on images of a given human user's or subject's face, or face and head, for accurate, reliable identification or authentication of a human user or subject, in a secure, difficult to forge manner (McCombe, Abstract).
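For illustration only, McCombe's matching-phase requirement (at least a minimum number of captured image frames, from different head positions or facial poses, matching the scans within the enrolled signature) can be sketched as follows. The function name, the data representation, and the default equality comparison are illustrative assumptions standing in for a real depth-based face comparison:

```python
def match_facial_signature(captured_frames, enrolled_scans,
                           min_matches=3, match_fn=None):
    """Return True when at least min_matches captured frames each match a
    distinct scan in the enrolled facial signature.

    captured_frames / enrolled_scans are opaque objects; match_fn decides
    whether a frame matches a scan (defaulting to equality, an illustrative
    stand-in for a depth-detection comparison).
    """
    if match_fn is None:
        match_fn = lambda frame, scan: frame == scan
    matched_scans = set()
    for frame in captured_frames:
        for i, scan in enumerate(enrolled_scans):
            # Each enrolled scan may be "consumed" only once, so repeated
            # frames of the same pose cannot satisfy the minimum alone.
            if i not in matched_scans and match_fn(frame, scan):
                matched_scans.add(i)
                break
    return len(matched_scans) >= min_matches
```

Requiring distinct matched scans reflects the anti-forgery rationale quoted above: a single replayed pose cannot satisfy a signature enrolled from multiple head positions.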
As per claim 3, the combination of Tanski, Pickering and McCombe discloses the method of claim 1, wherein authenticating the user to access the environment based on at least the pose identifier comprises: prompting the user to perform a pose during a login process to the environment; and authenticating the user to access the environment based on the pose substantially matching the pose identifier (Tanski, [0046], [0022], [0034]-[0035]).
As per claim 4, the combination of Tanski, Pickering and McCombe discloses the method of claim 1, further comprising: providing the three-dimensional space for display to the user device, wherein the user provides the custom signature to the user device via the three-dimensional space (Tanski, [0015]).
As per claim 5, the combination of Tanski, Pickering and McCombe discloses the method of claim 1, wherein the custom signature is enhanced by a sensor that measures an intensity of gestures of the user when the custom signature is generated (Tanski, [0020]).
As per claim 6, the combination of Tanski, Pickering and McCombe discloses the method of claim 1, wherein the custom pose of the user is captured by the user device over a time period (Tanski, [0023], [0034]).
As per claim 7, the combination of Tanski, Pickering and McCombe discloses the method of claim 1, wherein authenticating the user to access the environment based on at least the pose identifier comprises: utilizing a pose detection model to analyze and validate a pose of the user compared to the pose identifier (Tanski, [0025]-[0027]).
Claim 8 is rejected for similar reasons as stated above regarding claim 1.
As per claim 9, the combination of Tanski, Pickering and McCombe discloses the device of claim 8, wherein the pose identifier includes a sequence of poses of the user within a predetermined time frame (Tanski, [0034]).
As per claim 10, the combination of Tanski, Pickering and McCombe discloses the device of claim 8, wherein the signature identifier includes a three-dimensional matrix generated based on hand movement tracking and depth sensing (Pickering, [0035]). The motivation set forth in the rejection of claim 1 also applies to claim 10.
As per claim 11, the combination of Tanski, Pickering and McCombe discloses the device of claim 8, wherein the one or more processors are further configured to: prompt the user to perform a pose during a login process to the immersive environment; and authenticate the user to access the immersive environment based on the pose substantially matching the pose identifier (Tanski, [0046], [0022], [0034]-[0035]).
As per claim 12, the combination of Tanski, Pickering and McCombe discloses the device of claim 8, wherein the one or more processors are further configured to: prompt the user to provide a signature for performing the action in the immersive environment; and authenticate the user to perform the action in the immersive environment based on the signature substantially matching the signature identifier (Tanski, [0022], [0034]-[0035], [0045]).
As per claim 14, the combination of Tanski, Pickering and McCombe discloses the device of claim 8, wherein the action includes enabling execution of an agreement within the immersive environment (Tanski, [0058]-[0059]).
Claim 15 is rejected for similar reasons as stated above regarding claim 1.
As per claim 17, the combination of Tanski, Pickering and McCombe discloses the non-transitory computer-readable medium of claim 15, wherein the one or more instructions further cause the device to: prompt the user to provide a signature for executing an action in the immersive environment; and authenticate the user to execute the action based on the signature substantially matching the signature identifier, for similar reasons as stated above regarding claim 12.
As per claim 18, the combination of Tanski, Pickering and McCombe discloses the non-transitory computer-readable medium of claim 15, wherein the one or more instructions further cause the device to: store the pose identifier and the signature identifier in a data structure (Tanski, [0035]-[0039]).
As per claim 19, the combination of Tanski, Pickering and McCombe discloses the non-transitory computer-readable medium of claim 15, wherein the one or more instructions, that cause the device to authenticate the user to access the immersive environment based on at least the pose identifier, cause the device to: utilize a pose detection model to analyze and validate a pose of the user compared to the pose identifier (Tanski, [0025]-[0027]).
As per claim 20, the combination of Tanski, Pickering and McCombe discloses the non-transitory computer-readable medium of claim 15, wherein the immersive environment includes one of a virtual reality environment, an augmented reality environment, or a mixed reality environment (Tanski, [0059]).
As per claim 21, the combination of Tanski, Pickering and McCombe discloses the method of claim 1, wherein the pose identifier includes a sequence of poses of the user within a predetermined time frame (Tanski, [0023], [0034]).
As per claim 22, the combination of Tanski, Pickering and McCombe discloses the method of claim 1, wherein the signature identifier includes a three-dimensional matrix generated based on hand movement tracking and depth sensing (Pickering, [0035]; see also [0038], [0040]). The motivation set forth in the rejection of claim 1 also applies to claim 22.
As per claim 23, the combination of Tanski, Pickering and McCombe discloses the method of claim 1, wherein the action includes enabling execution of an agreement within the environment (Tanski, [0058]-[0059]).
6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure as the prior art discloses many of the claim features (See PTO-form 892).
6.2. a) US Patent No. 9355236 to Kratz et al. discloses a method that authenticates users. During user enrollment, a computing device records 3D gesture samples captured by a depth sensor as performed by a first user. Each recorded gesture sample includes a temporal sequence of locations for multiple specified body parts. The device computes an average gesture and selects an error tolerance. These are stored as a gesture template for the first user. A second user performs a gesture for authentication. The depth sensor captures a 3D gesture from the second user, where the captured 3D gesture includes a temporal sequence of locations for the multiple body parts. The device computes the distance between the captured 3D gesture and the average gesture. When the distance is less than the error tolerance, the second user is authenticated as the first user, and the device grants access to some secured features. Otherwise, the second user is not authenticated.
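For illustration only, the enrollment-and-matching procedure Kratz describes can be sketched in Python. The flat coordinate-list representation, the Euclidean metric, and the 1.5x tolerance margin are illustrative assumptions (the patent leaves the tolerance-selection rule open):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length flat coordinate lists."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def enroll(gesture_samples):
    """Enrollment: average the recorded gesture samples into a template.

    Each sample is a flat list of 3D coordinates: the temporal sequence of
    locations for the tracked body parts, concatenated. Returns the average
    gesture and an error tolerance derived from the samples' spread (the
    1.5x margin is an assumption, not from the patent).
    """
    n = len(gesture_samples)
    average = [sum(vals) / n for vals in zip(*gesture_samples)]
    tolerance = 1.5 * max(euclidean(s, average) for s in gesture_samples)
    return average, tolerance

def authenticate(captured, average, tolerance):
    """Matching: accept when the captured gesture's distance to the average
    gesture is less than the enrolled error tolerance."""
    return euclidean(captured, average) < tolerance
```

The accept/reject rule is exactly the patent's stated test: distance to the average gesture below the stored error tolerance authenticates the second user as the first.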
b) US Patent Application Publication No. 20160267265 to Waltermann et al. discloses: “[0061] Thus, in one aspect, present principles leverage the differences in the signals from different people to make and/or create a unique identifier for the person being authenticated and/or to be authenticated after an initial calibration based on their particular signal. In some embodiments, the identifier may also be combined with a unique pattern of muscle movement (e.g. a gesture in free space) that a person can choose to also perform as part of the authentication. Also in some embodiments, the EMG authentication may be combined with other methods of authentication such as e.g. password entry using a keyboard, using eye tracking, etc.”
Conclusion
7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HARUNUR RASHID, whose telephone number is (571) 270-7195. The examiner can normally be reached 9 AM to 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Eleni A. Shiferaw can be reached at (571) 272-3867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
HARUNUR RASHID
Primary Examiner
Art Unit 2497
/HARUNUR RASHID/Primary Examiner, Art Unit 2497