Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 120 as follows:
The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).
The disclosure of the prior-filed applications, Application No. 14/675764, US 61/973855, Application No. 14/566723 (US Patent 9071969), Application No. 13/922271 (US Patent 8938787), Application No. 13/877676 (US Patent 9069942), International Application No. PCT/IL2011/000907, US 61/417479, Application No. 14/320653 (US Patent 9275337), US 61/843915, Application No. 14/320656 (US Patent 9665703), Application No. 14/325393 (US Patent 9531733), Application No. 14/325394 (US Patent 9547766), Application No. 14/325395 (US Patent 9621567), Application No. 14/325396, Application No. 14/325397 (US Patent 9450971), Application No. 14/325398 (US Patent 9477826), fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph for one or more claims of this application. For example, claims 1 and 20 refer to “increasing an attack-relatedness score of said usage session”, “performing a modification to the attack-relatedness score if the typing rhythm is constant”, “performing a different modification to the attack-relatedness score if the typing rhythm is non-constant”, “if the attack-relatedness score of said usage session is greater than a particular threshold value…”, where none of these limitations appear to predate the earliest filing date of Application No. 15/885819, and thus the filing date of 2/1/2018 is being applied for purposes of prior art rejection.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/10/2025 has been entered.
Response to Amendment
This is in response to the amendments filed on 12/10/2025. Claims 1, 8, and 20 have been amended. Claims 1, 8, 19, and 20 are currently pending and have been considered below.
Response to Arguments
Applicant's arguments filed on 12/10/2025 have been fully considered but they are not persuasive. On page 6 of Remarks, Applicant contends that Mehta fails to disclose or suggest, “any ability to distinguish between a First Human User (attacker) and a Second Human User (legitimate user)”. The examiner respectfully disagrees.
As was previously cited, the examiner relied upon Col. 5, lines 31-41 of Mehta to disclose an implementation where a server security component could utilize a threshold number of attributes in a web service request to distinguish whether a mobile application is being operated by a legitimate human user or not. The examiner then cited Col. 1, lines 42-45 to disclose automation techniques being exploited by malicious users to launch various attacks on the web service, and Col. 2, lines 41-48 to further disclose automation toolkits and software being exploited by malicious users to launch attacks on the web service (emphasis added). Col. 4, lines 28-33, further provides an example where, “an automation may navigate through desired pages in a predetermined way … as the mobile application views are previously learned by the malicious user running the automation” (emphasis added). In other words, Mehta clearly teaches two different types of users: a legitimate user and a malicious user (that controls automation software).
Applicant appears to instead focus on the assertion that Mehta “only tries to detect that an Automated Code is operating the smartphone, and not a human user” (see page 6 of Remarks) in order to rationalize why Mehta cannot teach the claimed amendment; however, the claimed amendment does not necessarily require its “human attacker” to be actively operating an electronic device. Instead, the claim merely requires that the system distinguish between a “human attacker” and a “legitimate human user” (based on a number of deletion operations that a “user” performed), where neither human need be directly operating the electronic device. Thus, Mehta’s disclosure of detecting an automation toolkit or software controlled by a malicious user is sufficient to read on the “human attacker” limitation, as the claim is silent as to how the “human attacker” directly interacts with the electronic device itself. In other words, the examiner is interpreting the automation toolkit or software as a direct extension of Mehta’s malicious user: if Mehta detects the automation toolkit or software, then Mehta also detects the malicious user, thereby “distinguishing” between a human attacker and a legitimate user.
Therefore, the examiner asserts that Mehta’s disclosure of distinguishing between automation software controlled by a malicious user and a legitimate user of a mobile application fully teaches and suggests the ability to “distinguish between a First Human User (attacker) and a Second Human User (legitimate user)”, and thus the rejection is sustained as below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 8, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over “Singh” (US 2016/0188862) in view of “Wardman” (US 2019/0207975) in further view of “Mehta” (US 10554677).
Regarding Claim 1:
Singh teaches:
A system (Fig. 1(a)) comprising:
one or more processors (¶0030), that are configured to execute code;
wherein the one or more processors are operably associated with one or more memory units that are configured to store code (¶0030);
wherein the one or more processors are configured to perform a process comprising:
(a) monitoring input-unit interactions of a user (¶0031, “The sensing module 102 of the system 100 is adapted to continuously sense at least one biometric input of a user of the device with the help of at least one biometric sensor associated with the device. The input may include but not limit to an image or any biometric information of a person by which presence of a security threat or unauthorized access may be determined”), who utilizes during a usage session (¶0031, “The disclosure encompasses that the sensing may occur periodically and continuously, where the period limit of sensing may be fixed by either the system 100 or the user at the time of configuration or is dynamically updated at any time in future”) one or more input units of an electronic device (¶0031, “The sensing module 102 of the system 100 is adapted to continuously sense at least one biometric input of a user of the device with the help of at least one biometric sensor associated with the device”) …
(b) detecting a particular average typing speed of said user in said usage session (¶0023, “Behavioral biometric information is related to the pattern of behavior of a person, including but not limited to typing … speed”); and if said particular average typing speed matches one or more average typing speeds that are pre-defined as average typing speeds of attackers (¶0039, … “say the threshold value for typing speed of the authorized user is 180-200 characters per minute”; i.e., establish a threshold value that represents typing speeds over 180-200 characters per minute would be deemed unauthorized), then increasing an attack-relatedness score of said usage session (¶0039, “However, when the authorized user is unwell or the user is an unauthorized individual, a higher threat value will cause the trigger to be generated so as to initiate an authentication procedure”);
(c) checking whether a typing rhythm exhibited by said user in said usage session (¶0023, “Behavioral biometric information is related to the pattern of behavior of a person, including but not limited to typing rhythm…”) …
(d) if the attack-relatedness score of said usage session is greater than a particular threshold value (Fig. 4, step 412), then: determining that said input-unit interactions are part of an attack (Fig. 4, step 414), and initiating one or more mitigation operations (Fig. 4, step 416; ¶0056, “On the contrary, where the threat value generated is higher than the threshold value, as depicted in step 414 of the figure, a trigger is generated by the trigger generating unit of the engine, thereby initiating authentication of user owing to the plausible threat to device security”);
Singh does not disclose:
one or more input units of an electronic device to fill-out data in a fillable form of a computerized service;
…
(c) checking whether a typing rhythm exhibited by said user in said usage session is constant or non-constant; performing a modification to the attack-relatedness score if the typing rhythm is constant; performing a different modification to the attack-relatedness score if the typing rhythm is non-constant;
…
wherein the monitoring of step (a) further comprises: tracking, via a data-entry deletion tracker, deletions of characters during data entry across one or more fields; and analyzing a number of deletion operations that the user performed during data entry; and comparing said number of deletion operations that the user performed to one or more threshold values to distinguish between a human attacker and a legitimate human user;
wherein the determining of step (d), of whether said input-unit interactions are part of an attack, takes into account said tracking and said analyzing and said comparing of deletion operations that the user performed.
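For reference, the combination of limitations recited above may be paraphrased as the following sketch. All function names and numeric thresholds are hypothetical, chosen only to illustrate the claimed steps (b)-(d) and the deletion-tracking limitation; they are not drawn from the record or from any cited reference:

```python
# Hypothetical illustration of the claimed scoring process (claim 1).
# All names and numeric values below are illustrative assumptions.

ATTACKER_TYPING_SPEEDS = {200, 220, 240}   # pre-defined attacker speeds (chars/min)
DELETION_THRESHOLD = 5                     # deletion-count threshold
ATTACK_SCORE_THRESHOLD = 2                 # step (d) threshold value

def assess_usage_session(avg_typing_speed, rhythm_is_constant, num_deletions):
    score = 0
    # Step (b): match the session's average speed against pre-defined attacker speeds.
    if avg_typing_speed in ATTACKER_TYPING_SPEEDS:
        score += 1
    # Step (c): one modification for a constant rhythm, a different one otherwise.
    if rhythm_is_constant:
        score += 2        # constant rhythm suggests scripted input
    else:
        score -= 1        # natural variation suggests a human
    # Deletion tracking: few corrections suggests flawless, automated data entry.
    attacker_like_deletions = num_deletions < DELETION_THRESHOLD
    # Step (d): the attack determination takes both factors into account.
    is_attack = score > ATTACK_SCORE_THRESHOLD and attacker_like_deletions
    return score, is_attack
```

For example, a session with an attacker-typical speed, a perfectly constant rhythm, and zero deletions would exceed the threshold and be flagged, whereas a variable-rhythm session with many corrections would not.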
Wardman teaches:
one or more input units of an electronic device to fill-out data in a fillable form of a computerized service (¶0068, “In some examples, communication security manager 132 may compare an elapsed time and rhythm of a user typing their email address in an online message based on previous observations and/or averages stored in the user's communication profile”);
…
(c) checking whether a typing rhythm exhibited by said user in said usage session is constant or non-constant (¶0042, “In some examples, communication security manager 132 collects and stores user typing statistics associated with keystroke patterns and typing rhythms of a user… Communication security manager 132 also may track a typing rhythm or cadence within, between, and/or throughout one or more particular words, phrases, and/or sentences”; ¶0067, “In various examples, communication security manager 132 analyzes incoming online messages associated with a user, for example, to determine if typing speed, typing rhythm, capitalization, grammar, sentence structure, and/or particular regional language previously seen in prior communications of the user are consistent or within acceptable statistical variations”); performing a modification to the attack-relatedness score if the typing rhythm is constant; performing a different modification to the attack-relatedness score if the typing rhythm is non-constant (¶0056, “On the other hand, aspects of the new online messages that are inconsistent with previously collected data and statistics associated with the user (e.g., outside of acceptable statistical variation) generally may be given no weight and/or used to reduce an identity trust score”; i.e., increase a trust score if the typing rhythm is constant with the user and decrease the trust score if the typing rhythm is non-constant with the user);
Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to modify Singh’s biometric authentication of smart devices by enhancing Singh’s threat value to include calculations based on whether a user’s typing rhythm is consistent with the user’s previous typing rhythms, as taught by Wardman, in order to detect increasingly sophisticated bots.
The motivation is to incorporate additional typing metrics, such as typing rhythm, into the calculation of a trust score for a message in order to better profile authorized users versus unauthorized users and/or bots (Wardman, ¶0004).
Singh in view of Wardman does not disclose:
wherein the monitoring of step (a) further comprises: tracking, via a data-entry deletion tracker, deletions of characters during data entry across one or more fields; and analyzing a number of deletion operations that the user performed during data entry; and comparing said number of deletion operations that the user performed to one or more threshold values to distinguish between a human attacker and a legitimate human user;
wherein the determining of step (d), of whether said input-unit interactions are part of an attack, takes into account said tracking and said analyzing and said comparing of deletion operations that the user performed.
Mehta teaches:
wherein the monitoring of step (a) further comprises: tracking, via a data-entry deletion tracker, deletions of characters during data entry across one or more fields (Col. 4, lines 55-57, “For example, the client security component of the mobile application could track the use of the delete or backspace keys”; Col. 3, lines 44-48, “The web service request is typically generated by the application in response to some user input, such as a user … entering data into a form field on the application”); and analyzing a number of deletion operations that the user performed during data entry (Col. 4, lines 50-55, “The client security component of the mobile application could also monitor for erroneous input provided by the user. An automation typically interacts with an application in a predetermined and flawless manner due to its algorithmic programming, whereas a human operator is prone to occasionally make mistakes”; i.e., analyze for greater than zero erroneous inputs provided by a user via tracking of the deletions made); and comparing said number of deletion operations that the user performed to one or more threshold values to distinguish between a human attacker and a legitimate human user (Col. 5, lines 31-41, “In some implementations, a threshold number of user behavior attributes could be used to determine when enough of the attributes exist to determine that the mobile application is being used by a real, human operator. For example, the server security component could compare a total number of the user behavior attributes received in the web service request to a threshold number of attributes to determine that the mobile application is being operated by a human user when the total number of the user behavior attributes exceeds the threshold number”; Col. 1, lines 42-45, “Unfortunately, these automation techniques may be exploited by malicious users to launch various security attacks on the web service” & Col. 2, lines 41-48, “Various kinds of toolkits and software may be used to automate user interactions on mobile devices, but unfortunately these automation techniques may be exploited by malicious users to launch various security attacks using mobile applications. Therefore, determining that a mobile application is being operated by a real human user is a good indicator that the mobile application is being used for legitimate purposes”; Col. 4, lines 28-33, further provides an example where, “an automation may navigate through desired pages in a predetermined way … as the mobile application views are previously learned by the malicious user running the automation”; i.e., Mehta discloses distinguishing between automation toolkits or software operated by malicious users or legitimate users of a mobile application. The examiner notes that the claim does not specify how the “human attacker” is interacting with the system, and thus a malicious user operating/controlling an automation toolkit or software is deemed sufficient in disclosing the “human attacker”);
wherein the determining of step (d), of whether said input-unit interactions are part of an attack, takes into account said tracking and said analyzing and said comparing of deletion operations that the user performed (Col. 6, lines 10-16, “The user behavior attributes included in the web service request enables the server security component of the web service to ensure that only genuine native applications with legitimate user behavior are allowed to use the web service, and any possible exploitation of the mobile API to perform malicious actions can be blocked”; i.e., determine whether the user is a malicious entity or a legitimate user via the previous tracking, analyzing, and comparing steps).
Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to modify Singh in view of Wardman’s biometric authentication of smart devices by enhancing Singh in view of Wardman’s method of determining an authorized user or an unauthorized user to track, analyze, and compare deletion operations made during data entry into a form, as taught by Mehta, in order to efficiently distinguish usage indicative of a legitimate user from usage indicative of an automated attacker.
The motivation is to efficiently determine whether a malicious entity is operating on a device by monitoring and analyzing inputs to a form-field for mistakes that require a deletion operation to be performed. Such mistakes are atypical of an automated entity, making it quick to distinguish a legitimate user from a malicious entity (Mehta, Col. 4, lines 50-56, “The client security component of the mobile application could also monitor for erroneous input provided by the user. An automation typically interacts with an application in a predetermined and flawless manner due to its algorithmic programming, whereas a human operator is prone to occasionally make mistakes”; Col. 6, lines 27-32, “In this manner, the web service has a higher degree of confidence in the legitimacy of the web service request if the web service determines that the mobile application is being operated by a human user, and possible exploitation of the mobile API to perform malicious actions can be blocked in the alternative”).
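The deletion-monitoring rationale discussed above may be illustrated by the following sketch: automation tends to produce flawless input, while a human operator occasionally mistypes and corrects. All names, key labels, and the threshold value are hypothetical illustrations, not taken from Mehta:

```python
# Illustrative sketch of the erroneous-input heuristic: corrections
# (delete/backspace uses) during form-field data entry suggest a human,
# while flawless input suggests automation. All names are hypothetical.

def count_deletions(key_events):
    """Count delete/backspace key uses observed during data entry."""
    return sum(1 for k in key_events if k in ("Backspace", "Delete"))

def likely_human_operator(key_events, min_deletions=1):
    # Greater-than-zero erroneous inputs (i.e., corrections) suggest a human.
    return count_deletions(key_events) >= min_deletions

# A human session contains a correction; an automated session does not.
human_session = ["h", "e", "l", "p", "Backspace", "l", "o"]
automation_session = ["h", "e", "l", "l", "o"]
```

Under this sketch, the session containing a backspace event would be classified as human-operated, while the flawless session would not.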
Regarding Claim 8:
The system of claim 1, wherein Singh in view of Wardman in further view of Mehta further teaches that the process further comprises:
generating a determination that either (I) analyzed input-unit interactions indicate that the user exhibits Data Familiarity, relative to data that he is entering, at a first level that is equal to or greater than a pre-defined data-familiarity threshold value (Wardman, ¶0084, “In other examples, an online security challenge may ask a user to type and/or retype one or more familiar words or strings of text (e.g., one or more subcomponents of an email address, one or more sub components of an address, a city, a state, etc.)”; ¶0085, “In various examples, online identity verification system 130 adjusts an identity trust score based on analyzing results of one or more security challenges. For example, online identity verification system 130 may increase or decrease an identity trust score and/or any one or more sub-components of an identity trusts score based on one or more evaluated aspects of a response … Online identity verification system 130 then may continue generating and issuing one or more security challenges until identity trust score and/or any associated sub-component meet or exceed acceptable levels as defined by one or more thresholds”); or (II) analyzed input-unit interactions indicate that the user exhibits Data Familiarity, relative to data that he is entering, at a second level that is smaller than said pre-defined data-familiarity threshold value;
based on said determination, distinguishing between the legitimate human user and human attackers (Wardman, ¶0086, “In addition, provided scores may be displayed and updated in real-time or near-real time and/or associated with one or more descriptive labels and colors (e.g. trusted—green, suspicious—yellow, bot—orange, criminal—red, etc.)”).
The motivation to reject claim 8 by applying Wardman to Singh is the same motivation recited within the rejection of claim 1 above.
Regarding Claim 19:
The system of claim 1, wherein Singh in view of Wardman in further view of Mehta further teaches the input-unit interactions of said user comprise at least one of:
user interactions via a computer mouse,
user interactions via a touch-screen, user interactions via a touch-pad (Singh, ¶0025),
user interactions via a physical keyboard (Wardman, ¶0044),
user interactions via an on-screen keyboard.
Regarding Claim 20:
Method claim 20 corresponds to system claim 1 and contains no further limitations. Therefore, claim 20 is rejected by applying the same rationale used to reject claim 1 above.
Conclusion
All claims are identical to or patentably indistinct from, or have unity of invention with claims in the application prior to the entry of the submission under 37 CFR 1.114 (that is, restriction (including a lack of unity of invention) would not be proper) and all claims could have been finally rejected on the grounds and art of record in the next Office action if they had been entered in the application prior to entry under 37 CFR 1.114. Accordingly, THIS ACTION IS MADE FINAL even though it is a first action after the filing of a request for continued examination and the submission under 37 CFR 1.114. See MPEP § 706.07(b). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL B POTRATZ whose telephone number is (571)270-5329. The examiner can normally be reached on M-F 10 A.M. - 6 P.M. CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Korzuch can be reached on 571-272-7589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL B POTRATZ/Primary Examiner, Art Unit 2491