Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
Response to Arguments
Applicant’s arguments filed January 19, 2026 have been fully considered. After further consideration, new grounds of rejection are presented due to Applicant’s amended claim language.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claims 1, 4 – 8 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over NIST (NIST: Zero Trust Architecture, 2020) in view of Malton (US Pub. No. 2021/0012020), further in view of Jung (US Pub. No. 2021/0006558), and further in view of Atick (US Patent No. 6111517).
Per claim 1, NIST (NIST: Zero Trust Architecture, 2020) is relied upon to teach a security system for controlling server access and command execution (reads on all resource authentication and authorization to resources is determined by dynamic policy, see NIST Section 2.1) through facial recognition of a server user (reads on a subject/client, see NIST Figure 2 and Section 2.2), the security system being equipped with a security proxy server (reads on the Policy Enforcement Point, Policy Engine and Policy Administrator, gateway combination, see NIST Figures 1 – 3, Section 3.1.2 and Section 3.2.1) that relays and secures data communication between (reads on all communication is secure regardless of network location, see NIST Section 2.1) a computer terminal (reads on the terminal of NIST Figure 1) and a security target server (reads on the Data Resource, see NIST Figure 1 and Figure 3), the security system comprising: a secure access agent including (reads on a software device agent, see NIST Figure 1 and Section 3.2.1) a face recognition module configured to repeatedly collect and transmit facial information of a user who is permitted to access (reads on continually reevaluating trust via biometric attributes and possible reauthentication of the observable state of client identity, as defined by enforced policy, before access is allowed, see NIST Section 2.1, Section 2.2 and Section 6.2) the security target server (reads on the Data Resource, see NIST Figure 1 and Figure 3) and is accessing the security target server (reads on the Data Resource, see NIST Figure 1 and Figure 3) at a designated time point or in a designated situation (reads on when the subject wishes to connect to the Data Resource, see NIST Section 3.2.1), and a notification module configured to output a situation of data communication with the security target server, and installed on the terminal and configured to be executed based on an operating system (OS) of the terminal (reads on the implied if
not necessary teaching of NIST that the Agent resides on the subject device, see NIST Figures 1 – 3); and the security proxy server (reads on the Policy Enforcement Point, Policy Engine and Policy Administrator, gateway combination, see NIST Figures 1 – 3, Section 3.1.2 and Section 3.2.1) including a user information storage module configured to store user information (reads on one of several data sources that provide input and policy rules used by the policy engine when making access decisions, see NIST Section 3 and Section 3.3 Subsection Subject Database), a security policy storage module configured to store security policies for each user (reads on policy database, see NIST Section 3.3), a relay module configured to relay data communication between the secure access agent and the security target server (reads on the PEP that operates as a relay/gateway passing data between the agent on the user device and the resource/server/service if policy permits, see NIST Section 3 and 3.2.1), and a security processing module configured to check whether a facial image of the facial information received from the face recognition module matches a facial image of the user information through a comparison between them (reads on using biometric attributes as authentication factors, see NIST Section 6.2) and to control the relay module to collectively block access to (reads on based on the outcome of authentication and policy checks blocking all or partial access based on policy, see NIST Section 2.1, 3, 3.2 and 6.2) the security target server or block only designated data communication according to security policies corresponding to the user information (reads on all resource authentication and authorization are dynamic and strictly enforced by policy before access is allowed, see NIST Section 2.1), wherein the user information storage module, the security policy storage module, the relay module, and the security processing module are installed to be executed based on a server
OS (reads on the implied if not necessary teaching of NIST that the Agent resides on the subject device, see NIST Figures 1 – 3). The prior art of record is silent on explicitly stating facial recognition of a server user; a face recognition module configured to repeatedly collect and transmit facial information of a user; a notification module configured to output a situation of data communication with the security target server; a usage state detection module configured to detect at least one change in a state of the user selected from among a change in a posture of the user within a photographing range of a photographing means; security processing module configured to check whether a facial image of the facial information received from the face recognition module matches a facial image of the user information through a comparison between them; control the relay module to collectively block access to the security target server or block only designated data communication according to security policies corresponding to the user information; wherein the security processing module updates the facial image of the user information, stored in the user information storage module, to the facial image of the facial information when it is determined that the facial image of the facial information received from the face recognition module matches the facial image of the user information.
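The NIST SP 800-207 roles mapped above (Policy Engine, Policy Administrator, and Policy Enforcement Point evaluating dynamic policy per request) can be sketched as follows. All class and attribute names here are hypothetical, chosen only to mirror the mapping in the rejection; NIST does not prescribe an implementation.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    subject_id: str
    biometric_ok: bool      # e.g., outcome of a facial-recognition check
    resource: str

class PolicyEngine:
    """Grants or denies access per request, per dynamic policy (NIST Section 2.1)."""
    def __init__(self, policies):
        # policies: subject_id -> set of resources that subject may reach
        self.policies = policies

    def decide(self, req: AccessRequest) -> bool:
        # Authentication (here, a biometric attribute) and authorization
        # are both evaluated before any access is allowed.
        if not req.biometric_ok:
            return False
        return req.resource in self.policies.get(req.subject_id, set())

class PolicyEnforcementPoint:
    """Relays traffic to the Data Resource only when the engine permits it."""
    def __init__(self, engine: PolicyEngine):
        self.engine = engine

    def relay(self, req: AccessRequest, payload: bytes):
        if not self.engine.decide(req):
            return None                 # block: no session to the resource
        return payload                  # pass-through to the Data Resource
```

Under this reading, the PEP corresponds to the claimed relay module and the engine's per-request decision to the claimed security processing, with all access strictly gated by policy.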
Malton (US Pub. No. 20210012020) is relied upon to teach
controlling command execution (reads on the ability to perform an exemplary request to read, delete, change or access a document, see Malton para 0061) through facial recognition of (reads on a biometric re-authentication, see Malton para 0013, 0061 and 0064) a server user (reads on the user of Malton Figure 3 block 301); a notification module configured to output a situation of data communication (reads on access determination status, see Malton para 0014 and 0064 – 0065) with the security target server (reads on document management server, see Malton para 0014, 0017 – 0018, 0052, 0059 and 0065), security processing module configured to (reads on the context-based access control system, see Malton Figure 3 and para 0060 – 0065) check whether a facial image of the facial information received from the face recognition module matches a facial image of the user information through a comparison between them (reads on determine if biometric data is authenticated, see Malton Figure 3 block 320); control the relay module to collectively block access to the security target server or block only designated data communication according to security policies corresponding to the user information (reads on if the biometric authentication is not valid, the access request is rejected, see Malton Figure 3 block 320 and para 0064).
[0002] Users may access documents in almost any context with the availability of wireless communication devices. Access control to document management systems is typically based on a role-based access control model in which access to documents is controlled based on permissions assigned to particular documents and users. Role-based access control (RBAC) systems are typically based on pre-authentication of the user with authentication being performed by an independent system, such as an authentication system of the operating system of the host device. A drawback to this approach is that it exposes the document management systems to unauthorized access if a user or system is able to bypass or breach the authentication system. For this and other reasons, there remains a need for document management systems having improved access control.
[0013] The document management system and related methods of the present disclosure may leverage machine learning to identify hidden patterns that may exist within a document management system to make intelligent inferences by analyzing user behavior to flag potentially malicious and suspicious events. Insider threats are a reality in large industries, and these threats have defining properties that may be clustered into suspicious behavior categories for predictive and in-depth features vulnerability analysis. With large amounts of daily user interactions in a document management system, patterns may emerge when analyzing when actions are performed, on which documents, in collaboration with how many other users, and the amount of performed changes within a specific time period, and so forth. These interactions may be used for continuous authentication purposes and may be combined with biometrics to increase security in performing access control to a document management system. There is a trade-off between user work efficiency and user access security in access control systems. For example, although highly robust security measures are desirable, the access control system should not unduly interfere with user behavior. The use of machine learning techniques increases user access security without unduly interfering with user behavior and user work efficiency.
[0052] The document management server 110 may be connected to the document database 114, either directly or through the communications network 112. The document database 114 stores a plurality of documents, and may be physically located either locally or remotely from the document management server 110. The document database 114 may be a module of the document management server 110. The document management server 110 provides administrative control and management capabilities over the documents stored within the document database 114.
[0060] Referring to FIG. 3, an example embodiment of a context-based access control system 300 of the present disclosure will be described. The context-based access control system 300 comprises context-information 301, a smart enterprise access control (SEAC) system 303 and an RBAC system 305. The RBAC 305 is connected to the document management server 110 and document database 114.
[0061] At 312, a user submits a document access request to perform an action in relation to a document stored in the document database 114. An action may be, for example, a request to delete the document, a request to change the contents of a document, a request to access the document, and so forth. In some example embodiments, available actions are categorized as read (R), write (W), delete (X), set (S) (an action to set a permission designation), and clear (C) (an action to clear a permission designation), collectively referred to as R/W/X/S/C actions.
[0064] The determination of whether the document access request matches access criteria for granting the document access request (e.g., whether the document access request is suspicious) may be performed using machine learning techniques that analyze contexts which caused certain actions to either be accepted or rejected. In response to a determination that the document access request does not match access criteria for granting the document access request (e.g., the document access request is determined to be suspicious), an authentication request is generated and presented as a prompt on a communication device 201 of the respective user at 316. FIG. 8 shows an example prompt user interface screen (or window) 800 for requesting authentication. The authentication request is typically a re-authentication request performed during a session. The prompt may request a resubmission of (i) authentication information previously submitted at the start of or recommencement of a session (for example, after a security timeout or lockout) such as a username and password or PIN, (ii) additional or different authentication information or provide multifactor authentication such as a biometric information (e.g., fingerprint, iris scan, etc.), or (iii) a combination of previously submitted authentication information and additional or different authentication information. In response to a determination that the document access request matches access criteria for granting the document access request (e.g., the document access request is determined to be unsuspicious), the document access request is forwarded to the RBAC system 305 at 318.
[0065] At 320, input received from the communication device 201 of the respective user in response to the authentication prompt is evaluated. In response to a failed authentication of the user, the document access request is rejected at 320. This typically comprises logging the failed document access request and its context by the SEAC system 303, and generating and presenting a notification of the failed authentication and/or failed document access request on communication device 201 of the respective user at 316. The failed document access request may also be logged by the RBAC system 305. In response to an authentication of the user, the document access request is forwarded to the RBAC system 305 at 322.
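The access-control flow Malton describes at Figure 3 and paragraphs 0061 – 0065 can be sketched as follows: a document access request is screened, suspicious requests trigger re-authentication (which may be biometric), and only authenticated requests reach the RBAC system. All function names are hypothetical, chosen only to illustrate this reading of the cited steps.

```python
def handle_access_request(request, is_suspicious, reauthenticate, rbac_check):
    """Return 'granted' or 'rejected' for a document access request.

    is_suspicious(request)   -> bool  (the SEAC/ML screening, para 0064)
    reauthenticate(request)  -> bool  (prompt + evaluation, blocks 316/320)
    rbac_check(request)      -> bool  (role-based permission check, 318/322)
    """
    if is_suspicious(request):
        # Para 0064: a suspicious request prompts re-authentication,
        # e.g., resubmitted credentials or biometric information.
        if not reauthenticate(request):
            return "rejected"          # para 0065: failed auth -> reject
    # Unsuspicious (or successfully re-authenticated) requests are
    # forwarded to the RBAC system for the normal permission decision.
    return "granted" if rbac_check(request) else "rejected"
```

This captures the mapping relied on above: the re-authentication gate sits in front of the conventional RBAC decision, so a failed biometric check blocks the request before any role-based grant.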
Before the effective filing date of the invention it would have been obvious to one of ordinary skill in the art to modify the access control teachings of the prior art of record (see NIST Figures 1 – 3) by integrating the access control teachings of Malton (see Malton para 0013 – 0018 and Figure 3 and associated text) to realize the instant limitation. One or more of the underpinning rationale(s), as discussed in KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 U.S.P.Q.2d 1385 (2007); see also MPEP § 2141(III), are used to support this conclusion of obviousness. Accordingly, one of ordinary skill in the art would have recognized that applying the known teachings of biometric re-authentication when certain commands are requested (see Malton para 0013, 0061, 0064 and Figure 3) would have yielded predictable results and resulted in an improved system. It would have been recognized that implementing the ability to determine if a command request warranted reauthentication within the system of the prior art of record would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such reauthentication features into similar systems, resulting in an improved system that uses all available known in the art techniques to improve the ability to respond to unwanted users and events. The motivation to combine the references is applied to all claims below this heading.
Jung (US Pub. No. 2021/0006558) is relied upon to teach
a secure access agent (reads on a dedicated application in order to perform face authentication, see Jung para 0101 and 0104) including a face recognition module configured to (reads on the Jung Figure 6 block 620 module that may generate face information by capturing image of user, see Jung para 0101 and 0123) repeatedly collect (reads on the application is executed to collect face information when a specific event occurs, see Jung para 0105, 0123, 0227 – 0228 and Figure 6 block 620) and transmit (reads on the arrow of Jung Figure 6 block 630) facial information of (reads on authentication request information which includes the captured face information, see Jung Figure 6 block 620 and para 0123 and 0133) a user who is permitted to access (reads on the user, see Jung para 0010) the security target server (reads on the combination of the target device and management server of Jung that together form a complete server system with both protected resource and management functionality, see Jung para 0039, 0044 and 0171), and a usage state detection module configured to detect a change in a state of the user (reads on mobile terminal may receive an application execution signal when the mobile terminal enters a specific authentication area, see Jung para 0116 – 0118); a security processing module configured to check whether (reads on the face authentication server may perform face authentication for the user by comparing the face information included in the authentication request with the feature points of the face used in the registration information, see Jung para 0136 – 0138) a facial image of the facial information received from (reads on the face information included in the authentication request, see Jung para 0136 – 0138) the face recognition module (reads on the Jung Figure 6 block 620 module that may generate face information by capturing image of user, see Jung para 0101 and 0123) matches a facial image of the user information (reads on the feature points of 
the face of the user acquired from the registration information pertaining to the user, see Jung para 0136 - 0138) through a comparison between them (reads on through comparison, see Jung para 0136), wherein the security processing module updates the facial image of the user information, stored in the user information storage module, to the facial image of the facial information when it is determined that the facial image of the facial information received from the face recognition module matches the facial image of the user information (reads on when the user’s face authentication is successful then generating a thumbnail image for the user using the authentication request information and replacing the current registration information pertaining to the user with the thumbnail image for the next face authentication information, see Jung para 0136, 0321 and 0324).
[0010] In an aspect, there is provided an authentication method in which the mobile terminal of a user performs authentication for the user by operating in conjunction with a face authentication and control system. The authentication method may include generating face information for the user by capturing an image of the user, transmitting authentication request information including the face information to a face authentication server, and receiving face authentication result information from the face authentication server, the face authentication result information representing the result of face authentication performed for the user.
[0035] FIG. 1 illustrates a face authentication and control system according to an embodiment.
[0036] In FIG. 1, the components of the face authentication and control system 100 are illustrated, and the relationship therebetween is illustrated using an arrow. The arrow represents communication between the components and exchange or sharing of information therebetween.
[0037] Hereinbelow, the face authentication and control system 100 may be simply referred to as the system 100.
[0038] In FIG. 1, a hub center and a computer center are illustrated. The hub center may indicate a place, an area, a building, or the like in which the user of the system 100 or the like is located. The computer center may indicate a place, an area, a building, or the like in which servers for providing the face authentication and control service of the system 100 are located.
[0039] The system 100 may include a mobile terminal 110, a face authentication server 120, and a management server 130.
[0040] The mobile terminal 110 may be a device used by a user, other than the system 100. That is, the mobile terminal 110 may be regarded as a separate device that is not included in the system 100.
[0041] The face authentication server 120 may perform authentication for the user of the mobile terminal 110.
[0042] In order to provide information to the face authentication server 120 and to store information used in the face authentication server 120, the system 100 may further include at least some of a biometric authentication database (DB) and a personnel DB. The face authentication server 120 may perform face authentication for a user using the biometric authentication DB and the personnel DB. The personnel DB may provide basic information about users (for example, employees).
[0043] The management server 130 may perform user management using the result of face authentication. For example, the management server 130 may perform time and attendance management for users using the result of face authentication.
[0044] Also, the management server 130 may control other devices of the system 100 using the result of face authentication. As such devices controlled by the management server 130, the system 100 may further include at least some of an access control device 140, a printer 150, a speaker 160, and another user device 170.
[0101] It may be required to install a dedicated application in the mobile terminal 110 in order to perform face authentication.
[0114] FIG. 6 is a flowchart of a method for performing face authentication for a user according to an embodiment.
[0115] Step 420, described above with reference to FIG. 4, may include the following steps 610, 615, 620, 625, 630, 640, 645, 650, 655, 660, 665, and 670.
[0116] At step 610, the mobile terminal 110 may receive an application execution signal.
[0117] For example, the application execution signal may be a beacon signal output from a beacon 190.
[0118] For example, the user of the mobile terminal 110 may move to a specific place in which the beacon 190 is installed. In the corresponding place, the beacon 190 may transmit a beacon signal. When the mobile terminal 110 enters a specific authentication area in which it is possible to receive a beacon signal, the mobile terminal 110 may receive a beacon signal from the beacon 190.
[0119] At step 615, upon receiving the application execution signal, the mobile terminal 110 may execute an application.
[0120] Upon receiving the application execution signal, the mobile terminal 110 may transmit an application execution message to the application.
[0121] For example, the application execution message may be a push message that is pushed to the application.
[0122] Through the execution message of the application, the application may be executed, and the application execution message may be transmitted to the application.
[0123] At step 620, the application may generate face information by capturing an image of the user.
[0124] The application may recognize the face of the user using the capture unit of the mobile terminal 110, and may generate an image or video by capturing the image of the face of the user using the capture unit.
[0133] The authentication request information may include the captured image or video.
[0134] At step 630, the mobile terminal 110 may transmit the authentication request information to the face authentication server 120. The face authentication server 120 may receive the authentication request information from the mobile terminal 110.
[0135] At step 640, the face authentication server 120 may perform face authentication for the user using the authentication request information.
[0136] For example, the face authentication server 120 may perform face authentication for the user by comparing the face information included in the authentication request information with the registration information pertaining to the user. The face authentication server 120 may verify whether the face information included in the authentication request information represents the face of the user through comparison with the registration information pertaining to the user.
[0142] The face authentication result information may include indication information. The indication information may be information to be output to the mobile terminal 110 of the user depending on whether face authentication for the user succeeds or fails.
[0156] For example, the processing request information may be information for requesting to process specific management for the user when face authentication for the user has succeeded.
[0157] For example, the processing request information may be information for requesting a specific device of the system 100, which is controlled by the management server 130, to perform a specific operation when face authentication for the user has succeeded. Here, the processing request information may indicate the specific device and the specific operation.
[0171] For example, the management server 130 may control a specific device of the system 100 so as to perform a specific operation using the processing request information.
[0172] For example, the management server 130 may control the speaker 160 of the system 100 so as to output a message using the processing request information. For example, the message may be a message for requesting the user to take a specific action (for example, to enter or exit through the access control device 140), or may be a guidance announcement about an organization.
[0224] Execution of Application Based on Entry into Authentication Area
[0227] As described above with reference to FIG. 4, the mobile terminal 110 may receive an application execution signal from a beacon 190 when it enters a specific authentication area.
[0228] In another embodiment, the application execution signal may be a GPS signal output from a Global Positioning System (GPS).
[0229] The mobile terminal 110 may determine whether the mobile terminal 110 is located within a specific authentication area using the received GPS signal. When it is determined based on the GPS signal that the mobile terminal 110 is located within the specific authentication area, the mobile terminal 110 may execute the application.
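The authentication sequence Jung describes at Figure 6 (steps 610 – 640) and paragraphs 0116 – 0138, including the registration update relied upon above (paras 0321 and 0324), can be sketched as follows. The feature-vector representation, distance metric, and tolerance value are all hypothetical; Jung does not specify them.

```python
def _distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class FaceAuthServer:
    MATCH_TOLERANCE = 0.5  # illustrative threshold, not from Jung

    def __init__(self, registrations):
        # registrations: user_id -> registered face feature vector
        self.registrations = registrations

    def authenticate(self, user_id, face_features):
        """Compare submitted face info with registration info (para 0136)."""
        registered = self.registrations.get(user_id)
        if registered is None:
            return False
        if _distance(registered, face_features) <= self.MATCH_TOLERANCE:
            # Paras 0321 and 0324: on a successful match, the stored
            # registration is replaced with the newly captured image so
            # the next authentication uses the fresh information.
            self.registrations[user_id] = face_features
            return True
        return False

def on_execution_signal(user_id, capture_face, server):
    """Steps 610-640: signal -> app runs -> capture -> request -> result."""
    face_features = capture_face()           # step 620: capture the face
    return server.authenticate(user_id, face_features)  # steps 630/640
```

Under this reading, the beacon or GPS signal (steps 610/615, paras 0227 – 0229) plays the role of the claimed "designated situation" that triggers collection, and the replace-on-success step corresponds to the claimed updating of the stored facial image.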
Before the effective filing date of the invention it would have been obvious to one of ordinary skill in the art to modify the authentication teachings of the prior art of record (see Malton Figure 3 and NIST Figures 1 – 3) by integrating the authentication teachings of Jung (see Jung para 0101, 0104, 0105, 0123, 0133, 0227 – 0228) to realize the instant limitation. One or more of the underpinning rationale(s), as discussed in KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 U.S.P.Q.2d 1385 (2007); see also MPEP § 2141(III), are used to support this conclusion of obviousness. Accordingly, one of ordinary skill in the art would have recognized that applying the known authentication teachings of Jung would have yielded predictable results and resulted in an improved system. It would have been recognized that implementing the face authentication teachings within the biometric authentication system of the prior art of record would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such face authentication features into similar systems, resulting in an improved system that uses all available known in the art techniques to authenticate via facial features. The motivation to combine the references is applied to all claims below this heading.
Atick (US Patent No. 6111517) suggests a usage state detection module configured to detect at least one change in a state of the user selected from among a change in a posture of the user within a photographing range of a photographing means (reads on the system continuously tracks the authorized individual while the individual remains within the field of view of video camera and determines if the individual looks down or away from the video camera, see Atick col. 6 lines 32 – 37 and col. 7 lines 17 – 37. The Examiner asserts determining the individual looks down or away from the video camera is the same as determining a change in a posture of the user within a photographing range of photographing means); security processing module configured to check whether a facial image of the facial information received from the face recognition module matches a facial image of the user information through a comparison between them (see Atick col. 5 line 54 – col. 6 line 25); control the relay module to collectively block access to the security target server or block only designated data communication according to security policies corresponding to the user information (reads on grant access tailored to the authorization level of the individual, see Atick col. 6 lines 13 – 25); wherein the security processing module updates the facial image of the user information (reads on the enrollment program periodically adds an updated image of the authorized individual to image memory, see Atick col. 12 lines 53 – 65), stored in the user information storage module (reads on the image memory, see Atick col. 12 lines 53 – 65), to the facial image of the facial information when it is determined that the facial image of the facial information received from the face recognition module matches the facial image of the user information (reads on periodically adds an updated image of the authorized individual to image memory, see Atick col. 12 lines 53 – 65).
[col. 5 lines 54 - 67]
If a face is detected in step 310 (e.g., if an individual sits down to work at terminal 130 and thus enters the field of view of video camera 150), decision step 315 succeeds and the system proceeds to sub-mode two of the not-tracking mode. In sub-mode two, the system constructs a face template of the detected face. Thus, in step 325 the system extracts the detected face from the video signal provided by video camera 150. This step corresponds to alignment step 220 and normalization step 230 described above in connection with FIG. 2. After alignment and normalization have been performed, the system proceeds to step 330 where it converts the facial image into a facial representation or template as described above in connection with representation step 240 of FIG. 2.
[col. 6 lines 1 - 12]
At this point, the system enters sub-mode three of the not-tracking mode which comprises matching the acquired facial representation against the stored facial representations of individuals authorized to use computer system 100. As shown in FIG. 3, steps 335-350 comprise a loop which successively compares the acquired representation with each of the stored representations of authorized individuals until a match is found or until all of the stored representations have been examined. As noted above, the stored representations are generated from the images of authorized individuals stored in image memory 170 and are maintained in face templates memory 140.
[col. 6 lines 13 – 25]
Continuing with FIG. 3, if no match is found in steps 335-350, the system returns to step 310 of sub-mode one. If, on the other hand, a match is found, decision step 340 succeeds and the individual in the field of view of video camera 150 is granted access to computer system 100 as indicated in step 355. In one preferred embodiment, this grant of access consists simply of enabling both the keyboard and screen of terminal 130. In a second preferred embodiment, the grant of access may be tailored to the authorization level of the individual. For example, a person with a particular authorization level might be granted access to only certain data stored in memory 120 or might be permitted to run only certain application programs.
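The matching loop Atick describes in steps 335 – 350 (successively compare the acquired facial representation against each stored representation until a match is found or all have been examined) can be sketched as follows. This is an illustrative sketch only; the function names, template encoding, and similarity threshold are the Examiner-added assumptions of this sketch, not disclosures of the reference:

```python
# Illustrative sketch of Atick's not-tracking-mode matching loop
# (steps 335-350): compare the acquired facial representation against
# each stored representation of an authorized individual in turn.

def similarity(a, b):
    # Placeholder similarity measure between two templates (assumed here
    # to be a normalized dot product over equal-length feature vectors).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def match_against_templates(acquired, stored_templates, threshold=0.9):
    """Return the ID of the first stored template that matches, else None."""
    for user_id, template in stored_templates.items():
        if similarity(acquired, template) >= threshold:
            return user_id   # decision step 340 succeeds: grant access
    return None              # no match: return to sub-mode one (step 310)

templates = {"alice": [0.9, 0.1, 0.4], "bob": [0.1, 0.8, 0.5]}
print(match_against_templates([0.88, 0.12, 0.41], templates))  # → alice
```

A match here would correspond to step 355 (enabling the keyboard and screen, or a tailored grant of access); a miss corresponds to looping back to sub-mode one.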
[col. 6 lines 32 – 37]
Once an individual has been granted access to computer system 100, the system enters the tracking mode. In this mode, the system continuously tracks the authorized individual and continues to permit access to computer system 100 only while the individual remains within the field of view of video camera 150.
[col. 6 lines 38 – 49]
In particular, once an individual is granted access to computer system 100 in step 355 of FIG. 3, the system immediately proceeds to step 410 of FIG. 4 where it registers the authorized individual's current head position, shape, size, color and facial representation and stores this in memory 120 as a new tracking path. The data stored in the tracking path can be generated using commercially available software such as the FaceIt Developer Kit, a copy of which may be found in microfiche appendix A of this application. This tracking path is used in subsequent searches to determine whether the authorized user remains in the field of view of video camera 150.
[col. 6 lines 50 – 65]
Specifically, in step 415 the system retrieves the current head location of the authorized user from memory 120 and searches for a face in the vicinity of that location. If a face is found, the system converts the newly acquired facial image to a facial representation and compares that representation to the one stored in the tracking path. As is well known in the art, this comparison may be performed through template matching using a normalized correlator. A match is declared to exist if the normalized correlator is larger than a preset threshold value. In a preferred embodiment, computer system 100 may also compare the newly acquired representation to the facial representations stored in the face template database. As described below, when these comparisons sufficiently confirm the continued presence of the authorized individual, continued access to computer system 100 is provided.
[col. 6 line 66 – col. 7 line 7]
Thus, in decision step 420, the system determines whether the acquired representation matches the facial representation stored in the tracking path. If decision step 420 succeeds, then access to computer system 100 is continued, and the system proceeds to step 425 where the information stored in the tracking path is updated in accordance with the latest acquired representation. From step 425, the system loops back to step 415, and a new search is begun. In this way, the identity of the authorized user is repeatedly confirmed.
[col. 7 lines 18 – 37]
At times, however, decision step 420 may fail even when the authorized individual continues to sit before terminal 130. This may happen, for example, if the individual looks down or away from the screen of terminal 130 (and thus is not facing video camera 150) or if his facial features are temporarily partially blocked. Therefore, as described below, when the system is unable to identify the facial features of an individual in the field of view, it proceeds to a second order identification scheme to confirm the continuing presence of the authorized individual.
Specifically, if decision step 420 fails, the system proceeds to decision step 430 where the system attempts to confirm the continuing presence of the authorized individual on the basis of other recorded features such as head location, shape, color, and size which are stored as part of the tracking path. In a preferred embodiment, step 430 may be composed of two sub-steps. In the first sub-step, the system retrieves the most recent head-location of the authorized individual from the tracking path and determines whether the field of view of video camera 150 now contains a head-shaped object in or near that location. If a head-shaped object is identified, the system proceeds to sub-step two and determines whether other features of the detected head-shaped object such as its shape, size and color, match the features stored as part of the tracking path. A score is assigned to the results of this matching process, and if the score is above a predetermined threshold, decision step 430 succeeds and access to computer system 100 is continued. In that event, the stored tracking path is updated in step 425, and the system returns to step 415 to repeat the tracking.
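The two-stage check Atick describes (a template match against the tracking path at decision step 420, then a second-order check on head location, shape, size, and color at decision step 430 when the face match fails) might be sketched as below. The data layout, the simplified matching functions, the feature names, and the score threshold are illustrative assumptions of this sketch, not teachings of the reference:

```python
# Illustrative sketch of Atick's tracking-mode loop (FIG. 4): a template
# match against the tracking path, with a second-order fallback on head
# features when the face match fails (e.g. the user looks down or away).

def template_match(acquired, tracked):
    # Stand-in for the normalized-correlator comparison (assumption:
    # simplified to exact equality for this sketch).
    return acquired == tracked

def second_order_score(observed, tracking_path):
    """Score head-feature agreement with the stored tracking path."""
    features = ("location", "shape", "size", "color")
    matches = sum(observed.get(f) == tracking_path.get(f) for f in features)
    return matches / len(features)

def confirm_presence(acquired_face, observed_head, tracking_path,
                     score_threshold=0.75):
    # Decision step 420: face template comparison against the tracking path.
    if acquired_face is not None and template_match(acquired_face,
                                                    tracking_path["face"]):
        tracking_path["face"] = acquired_face  # step 425: update the path
        return True
    # Decision step 430: second-order identification on head features.
    return second_order_score(observed_head, tracking_path) >= score_threshold

path = {"face": "tmpl-A", "location": (120, 80), "shape": "oval",
        "size": 42, "color": "brown"}
# User looks away: no usable face template, but head features still match.
print(confirm_presence(None, {"location": (120, 80), "shape": "oval",
                              "size": 42, "color": "brown"}, path))  # → True
```

In the sketch, a successful second-order check corresponds to continuing access and returning to step 415 to repeat the tracking, as the excerpt describes.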
[col. 12 lines 53 – 65]
In this way, image memory 170 is periodically updated to reflect changes in the appearance of the authorized individual. As a result, the system can continue to recognize the authorized individual even as his appearance changes over time.
Preferably, the initial images stored at the time of enrollment are never erased, but the additional images added periodically may be replaced during subsequent periodic updates.
It should be recognized that this preferred adaptive enrollment embodiment conveniently updates the stored images of an authorized individual without requiring the individual to participate in a new enrollment procedure.
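The adaptive-enrollment behavior Atick describes (initial enrollment images are never erased, while periodically added images may be replaced during subsequent updates) could be modeled as a permanent set plus a bounded pool. The class name, capacity limit, and storage layout are illustrative assumptions of this sketch:

```python
# Illustrative sketch of Atick's adaptive enrollment: initial enrollment
# images are kept permanently, while periodic additions form a bounded
# pool whose oldest entries are replaced by newer verified updates.

from collections import deque

class ImageMemory:
    def __init__(self, initial_images, periodic_capacity=3):
        self.initial = list(initial_images)  # never erased
        # Bounded pool: appending beyond capacity drops the oldest update.
        self.periodic = deque(maxlen=periodic_capacity)

    def periodic_update(self, image):
        """Add a verified, more recent image of the authorized individual."""
        self.periodic.append(image)

    def all_images(self):
        return self.initial + list(self.periodic)

mem = ImageMemory(["enroll-1", "enroll-2"], periodic_capacity=2)
for img in ["update-1", "update-2", "update-3"]:
    mem.periodic_update(img)
# Enrollment images survive; the oldest periodic update is replaced.
print(mem.all_images())  # → ['enroll-1', 'enroll-2', 'update-2', 'update-3']
```

No new enrollment procedure is needed in this model: only the periodic pool changes as the individual's appearance changes over time.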
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify the authentication and face collection teachings of the prior art of record (see Malton para 0013, Figure 3; NIST Figures 1 – 3 and Section 2.2 bullet 3; and Jung para 0101, 0104, 0105, 0123, 0133, 0227 – 0228) by integrating the face collection teachings of Atick (see Atick col. 6 line 32 – col. 7 line 37 and col. 12 lines 53 – 65) to realize the instant limitation. One or more of the underpinning rationales discussed in KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007) (see also MPEP § 2141(III)) are used to support this conclusion of obviousness. Accordingly, one of ordinary skill in the art would have recognized that applying the face collection teachings of Atick would have yielded predictable results and resulted in an improved system. It would have been recognized that implementing continuous facial monitoring and image updating into the NIST zero trust architecture authentication system of the prior art of record would have yielded predictable results, because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such facial updating features into similar systems, resulting in an improved system that uses all techniques known in the art to authenticate via facial features. The motivation to combine the references applies to all claims below this heading.
Per claim 4, the prior art of record further suggests the security proxy server relays data communication between a specific application installed on the terminal (reads on the face authentication server may receive authentication request information from the mobile terminal and the face authentication server may transmit the face authentication result information to the mobile terminal and/or the management server, see Jung para 0134 and 0137) and the security target server and executes a security process (reads on perform face authentication for the user by comparing the face information included in the authentication request information with the registration information, see Jung para 0126 and Malton Figure 3 and associated text); and the notification module of the secure access agent outputs a situation of data communication of the specific application (reads on the application may output the indication information based on the result of face authentication to the output unit of the mobile terminal using the face authentication result information, see Jung para 0162 and Malton Figure 3 and associated text).
Per claim 5, the prior art of record further suggests the security processing module transmits a notification signal when access to the security target server is collectively blocked or when only designated data communication is blocked (reads on the combination of Malton and Jung: the face authentication result may include system control information for controlling a specific device or performing a specific action, and, in response to a determination that the access request does not match access criteria, an authentication request is generated and presented as a prompt, see Malton para 0064 and Jung para 0147 – 0148); and the notification module outputs guide data related to recollection of facial information in response to the notification signal (reads on the application may output, via the output unit, a message instructing the user to make a specific motion, see Jung para 0272).
Per claim 6, the prior art of record further suggests the usage state detection module checks and transmits task traffic information associated with (reads on the application may generate authentication request information including the face information, and the authentication request information may include the identifier of the user, see Jung para 0130 – 0132) the security target server (reads on the combination of the target device and management server of Jung, which together form a complete server system with both protected resource and management functionality, see Jung para 0039, 0044 and 0171) generated during an operation of the terminal after access to the security target server (reads on performing a re-authentication procedure when predetermined actions are identified, see Malton para 0064, Figure 3 and associated text and Jung para 0130 – 0132); and the security processing module searches the security policy storage module for a command identified through an analysis of the task traffic information (reads on when it is determined that a specific action/command being attempted does not match access criteria, reauthentication is prompted, see Malton para 0064), and, when it is identified as a command out of authority, collectively blocks access to the security target server, blocks only designated data communication, or blocks an execution of the command out of authority according to a security policy for the command out of authority (reads on when it is determined the access request does not match access criteria for granting the access request, access is not provided and reauthentication is prompted, see Malton para 0064 and Jung para 0062).
Per claim 7, the prior art of record further suggests the usage state detection module identifies a command by analyzing task traffic information associated with the security target server generated during an operation of the terminal after access to the security target server, and transmits the command (reads on when it is determined during an authenticated session that a specific action/command being attempted does not match access criteria, reauthentication is prompted, see Malton para 0064); and the security processing module searches the security policy storage module for the command, and, when it is identified as a command out of authority, collectively blocks access to the security target server, blocks only designated data communication, or blocks an execution of the command out of authority according to a security policy for the command out of authority (reads on when it is determined during an authenticated session that a specific action/command being attempted does not match access criteria, reauthentication is prompted, see Malton para 0064).
Per claim 8, the prior art of record further suggests wherein the security processing module first checks whether the command is a command out of authority before the comparison of the facial information of the user, and blocks data communication with the security target server or an execution of the command out of authority (reads on when it is determined during an authenticated session that a specific action/command being attempted does not match access criteria, reauthentication is prompted, see Malton para 0064).
Per claim 10, the prior art of record further suggests wherein the security processing module first checks whether the command is a command out of authority before the comparison of the facial information of the user, and blocks data communication with the security target server or an execution of the command out of authority (reads on when it is determined during an authenticated session that a specific action/command being attempted does not match access criteria, reauthentication is prompted, see Malton para 0064).
Conclusion
Applicant’s amendment necessitates the new ground(s) of rejection presented in this action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Brian Shaw whose telephone number is (571)270-5191. The examiner can normally be reached on Mon-Thurs from 6:00 AM-3:30 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeff Nickerson can be reached on (469) 295-9235. The fax phone number for the organization where this application or proceeding is assigned is 703-872-9306.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRIAN F SHAW/
Primary Examiner, Art Unit 2432