Prosecution Insights
Last updated: April 19, 2026
Application No. 17/173,186

Integrated Quality Assessment for a Passive Authentication System

Non-Final OA (§103, §112)
Filed: Feb 10, 2021
Examiner: FARAMARZI, GITA
Art Unit: 2496
Tech Center: 2400 — Computer Networks
Assignee: TruU, Inc.
OA Round: 5 (Non-Final)
Grant Probability: 53% (Moderate)
OA Rounds: 5-6
To Grant: 3y 4m
With Interview: 75%

Examiner Intelligence

Career Allow Rate: 53% (grants 53% of resolved cases; 40 granted / 75 resolved; -4.7% vs TC avg)
Interview Lift: +21.5% (strong; measured across resolved cases with interview)
Avg Prosecution: 3y 4m (typical timeline; 33 currently pending)
Total Applications: 108 across all art units (career history)

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§103: 56.6% (+16.6% vs TC avg)
§102: 5.0% (-35.0% vs TC avg)
§112: 29.4% (-10.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 75 resolved cases

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 08/11/2025 has been entered.

Status of Claims

Claims 1-21 are pending, of which claims 1, 8, and 15 are in independent form.

Response to Arguments

Applicant’s arguments filed August 11, 2025 have been fully considered, but they are not persuasive. On pages 15-17 of the remarks, Applicant argues that “Abdelaziz does not disclose instructions that cause a processor to "input characteristic data collected for the user to an identity confidence model to generate an identity confidence value... corresponding to a likelihood that characteristic data collected by the sensor was collected for the user" where the characteristic data comprises "motion data collected for each user of the plurality of users prior to requesting access to the operational context" as recited in claim 1.” The examiner disagrees with Applicant and has a different view of the prior art teachings and claim interpretation. The examiner is relying on Sheller to teach the cited limitations. In Sheller, the sensors continuously gather motion data before any explicit access request for passive authentication; see paragraph [0022].
Moreover, Sheller teaches that a classifier may generate a “true” or “false” output indicative of whether the computing device 102 is in a pocket of the user, in a container, or other enclosure (paragraph [0034]), and that the authentication confidence is determined based on a False Accept Rate (FAR) and a False Reject Rate (FRR) of authentication factors (e.g., classification data based on sensor data input; paragraph [0016]). Further, in paragraph [0060] Sheller discloses a computing device for user authentication…determine a plurality of authentication factors based on the sensor data, authenticate, by use of a fused function, a user of the computing device based on the authentication factors, wherein the fused function is to generate an authentication result as a function of the plurality of authentication factors. Therefore, Sheller teaches a machine learning identity algorithm, trained on the user’s motions, that generates an identity confidence score reflecting the likelihood that the motion data captured by the sensors belongs to a specific user. As to the dependent claims 2-7, 9-14, and 16-21, these claims remain rejected by virtue of their dependency on their independent claims. Therefore, the examiner maintains the rejection under 35 USC § 103.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) ELEMENT IN CLAIM FOR A COMBINATION. — An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C.
112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as "configured to" or "so that"; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “an identity computation module”, “a user-specific sensor module” and “an optimization module” in claims 15-21. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. 
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention. The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention. Claims 1-21 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Claim 1 recites “the characteristic data comprising motion data collected for each user of the plurality of users prior to requesting accessing to the operational context;” (i.e., prior to communicating an identity confidence value to the identity combination module 240, the identity computation module 230 communicates a single identity confidence value determined for a particular identity block directly to the confidence evaluation module 250. If the confidence evaluation module 250 determines the identity confidence is above an operational security threshold, the confidence evaluation module 250 confirms the identity of the target user and provides instructions for the target user to be granted access to the operational context.
Alternatively, if the identity confidence value is below the operational security threshold, the confidence evaluation module 250 does not confirm the identity of the target user and, instead, communicates a request to the secondary authentication module 260 to implement a secondary authentication mechanism, see paragraph [0055]). The full scope of the claim covers motion data collected for each user of the plurality of users prior to requesting accessing to the operational context. However, the specification does not provide enough description as to when the motion data was collected for each user, whether prior to requesting access to the operational context or after requesting access to the operational context. The level of detail required to satisfy the written description requirement varies depending on the nature and scope of the claims and on the complexity and predictability of the relevant technology. Ariad, 598 F.3d at 1351, 94 USPQ2d at 1172; Capon v. Eshhar, 418 F.3d 1349, 1357-58, 76 USPQ2d 1078, 1083-84 (Fed. Cir. 2005). Computer-implemented inventions are often disclosed and claimed in terms of their functionality. For computer-implemented inventions, the determination of the sufficiency of disclosure will require an inquiry into the sufficiency of both the disclosed hardware and the disclosed software due to the interrelationship and interdependence of computer hardware and software. The critical inquiry is whether the disclosure of the application relied upon reasonably conveys to those skilled in the art that the inventor had possession of the claimed subject matter as of the filing date. Vasudevan Software, Inc. v. MicroStrategy, Inc., 782 F.3d 671, 682, 114 USPQ2d 1349, 1356 (citing Ariad Pharm., Inc. v. Eli Lilly & Co., 598 F.3d 1336, 1351, 94 USPQ2d 1161, 1172 (Fed. Cir. 2010) in the context of determining possession of a claimed means of accessing disparate databases). The same reasoning applies to independent claims 8 and 15.
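The confidence-evaluation flow cited from the specification (paragraph [0055]) can be sketched as follows; the function name, return values, and the strict threshold comparison are illustrative assumptions for this editorial sketch, not the application's disclosed implementation:

```python
# Hypothetical sketch of the confidence-evaluation flow described in
# specification paragraph [0055]. Names and decision strings are
# illustrative, not taken from the application itself.

def evaluate_identity(confidence: float, security_threshold: float) -> str:
    """Return the access decision for one identity confidence value."""
    if confidence > security_threshold:
        # Confidence is above the operational security threshold:
        # the target user's identity is confirmed and access is granted.
        return "grant_access"
    # Otherwise the identity is not confirmed, and a request is sent
    # to a secondary authentication mechanism instead.
    return "secondary_authentication"
```

Under these assumptions, a confidence of 0.9 against a 0.8 threshold grants access, while 0.5 falls back to secondary authentication.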
As to the dependent claims 2-7, 9-14, and 16-21, these claims remain rejected by virtue of their dependency on their independent claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-2, 6, 8-9, 13, 15-16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Deore et al. (US 2021/0240808 A1), hereinafter Deore, and further in view of Abdelaziz et al. (US 2020/0089848 A1), hereinafter Abdelaziz, and further in view of Sheller et al. (US 2015/0363582 A1), hereinafter Sheller. In regards to claim 1, Deore discloses a non-transitory computer-readable medium comprising stored computer-readable instructions that, when executed by a processor, cause the processor to (Deore, Para. 0045, the processor 1010 is capable of processing instructions stored in the memory): and for each instance where a user of the plurality of users requests access to an operational context (Deore, Para. 0029, at a user validation phase, a previously registered user may use computing system 110 or another computing system connected to network 130 to login. The login process may be for the purpose of accessing content or service provided via computing system 110), classify the authentication result as one of (Deore, Para.
0024): a true positive authentication, a false positive authentication, a true negative authentication, or a false negative authentication (Deore, Para. 0024, false acceptance rate (FAR) (i.e., the percentage of identification instances in which unauthorized persons are incorrectly accepted) and false rejection rate (FRR) (i.e., the percentage of identification instances in which authorized persons are incorrectly rejected) may be adjusted for the machine learning model to ensure a robust operation and effective authentication); for the identity confidence model, update a false acceptance rate and a false rejection rate for the identity confidence model based on the classification of the authentication result (Deore, Para. 0024, False acceptance rate (FAR) (i.e., the percentage of identification instances in which unauthorized persons are incorrectly accepted) and false rejection rate (FRR) (i.e., the percentage of identification instances in which authorized persons are incorrectly rejected) may be adjusted for the machine learning model to ensure a robust operation and effective authentication), wherein the false acceptance rate describes a frequency with which the authentication result incorrectly granted users access to the operational context (Deore, Para. 0024, False acceptance rate (FAR) (i.e., the percentage of identification instances in which unauthorized persons are incorrectly accepted)) and the false rejection rate describes a frequency with which the authentication result incorrectly denied users access to the operational context (Deore, Para.
0024, false rejection rate (FRR) (i.e., the percentage of identification instances in which authorized persons are incorrectly rejected)); and Deore does not explicitly teach access characteristic data collected by a sensor for a plurality of users, wherein the plurality of users have requested access to an operational context during a period of time; re-train the identity confidence model based on the updated false acceptance rate and the updated false rejection rate, wherein retraining the model adjusts one or more parameters of the identity confidence model to increase accuracy of the identity confidence model. However, Abdelaziz teaches access characteristic data collected by a sensor for a plurality of users (Abdelaziz, Para. 0091, The detectors may also comprise any combination of hardware and software sensors) and (Para. 0032, it is possible to utilize the disclosed embodiments to dynamically respond to detected user/sign-in data for a first user and to reach back in time to reapply/re-evaluate the impact of previously stored user/sign-in data for that first user and to generate and/or modify a user identity risk score for that user, as well as to trigger the generation and/or modification of a different user identity risk score for a different user), wherein the plurality of users have requested access to an operational context during a period of time (Abdelaziz, Fig. 3, Item. 320); re-train the identity confidence model based on the updated false acceptance rate and the updated false rejection rate (Abdelaziz, Para. 0083, the computing system generates a label data file that contains a listing of the corresponding user risk profiles and that specifies each corresponding user risk profile in the listing as either a false positive, false negative, true positive or true negative, based on the one or more user risk reports and the user input (act 660). 
The machine learning engine is then improved/modified by causing the machine learning tools utilized by the machine learning engine to learn/apply the labeled data in the label data file to the machine learning engine (act 670) and in such a way that the machine learning engine/tools are enabled to generate or modify a risk profiles report that is used by a risk assessment engine to generate user risk scores (act 680), wherein retraining the model adjusts one or more parameters of the identity confidence model to increase accuracy of the identity confidence model (Abdelaziz, Para. 0083, the computing system generates a label data file that contains a listing of the corresponding user risk profiles and that specifies each corresponding user risk profile in the listing as either a false positive, false negative, true positive or true negative, based on the one or more user risk reports and the user input (act 660). The machine learning engine is then improved/modified by causing the machine learning tools utilized by the machine learning engine to learn/apply the labeled data in the label data file to the machine learning engine (act 670) and in such a way that the machine learning engine/tools are enabled to generate or modify a risk profiles report that is used by a risk assessment engine to generate user risk scores (act 680). These user risk scores are then used to generate or update the one or more user risk reports automatically and dynamically in real-time and/or on demand). Deore and Abdelaziz are both considered to be analogous to the claim invention because they are in the same field of identifying and authorizing a user based on the classification of characteristic data collected for the same user. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deore to incorporate the teachings of Abdelaziz to include access characteristic data collected by a sensor for a plurality of users (Abdelaziz, Para. 0091), wherein the plurality of users have requested access to an operational context during a period of time (Abdelaziz, Fig. 3, Item. 320); re-train the identity confidence model based on the updated false acceptance rate and the updated false rejection rate (Abdelaziz, Para. 0083), wherein retraining the model adjusts one or more parameters of the identity confidence model to increase accuracy of the identity confidence model (Abdelaziz, Para. 0083). Doing so would help improve both the overall precision and recall effectiveness of computer security systems that are based on the use of user risk scores. This is accomplished, in some instances, by iteratively applying multiple machine learning tiers to stored user/sign-in data to generate and/or modify the user identity risk scores (Abdelaziz, Para. 0031).
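The FAR/FRR bookkeeping that the rejection maps to Deore and Abdelaziz reduces to simple ratios over the four classified authentication outcomes. A minimal sketch, with illustrative function and argument names:

```python
# Illustrative computation of false acceptance rate (FAR) and false
# rejection rate (FRR) from authentication results classified as
# true/false positives and negatives, in the sense used by the claims.

def far_frr(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """Compute (FAR, FRR) from classified authentication outcomes."""
    # FAR: fraction of impostor attempts (fp + tn) incorrectly accepted.
    far = fp / (fp + tn) if (fp + tn) else 0.0
    # FRR: fraction of genuine attempts (tp + fn) incorrectly rejected.
    frr = fn / (fn + tp) if (fn + tp) else 0.0
    return far, frr
```

For example, 90 true positives, 2 false positives, 98 true negatives, and 10 false negatives yield a FAR of 0.02 and an FRR of 0.10; in the claimed system these updated rates would then drive retraining of the identity confidence model.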
Deore and Abdelaziz do not explicitly disclose the characteristic data comprising motion data collected for each user of the plurality of users prior to requesting accessing to the operational context; input characteristic data collected for the user to an identity confidence model to generate an identity confidence value, wherein the identity confidence model is trained to process characteristic data collected by the sensor and the identity confidence value corresponding to a likelihood that characteristic data collected by the sensor was collected for the user; determine an authentication result based on the identity confidence value, the authentication result indicative of whether the user was granted access to the operation context; However, Sheller teaches the characteristic data comprising motion data collected for each user of the plurality of users prior to requesting accessing to the operational context (Sheller, Para. 0022, the motion sensors 122 may be embodied as any type of sensor(s) capable of generating data indicative of a motion of the computing device 102 including, but not limited to, gyroscope sensor, an accelerometer, an inertial motion unit, a combination of motion sensors, and/or other motion sensors. In use, the motion sensors 122 may be configured to generate sensor data indicative of how the user interacts with the computing device 102 while performing certain tasks or function on the device. For example, the generated motion data may provide an indication of whether the user holds the phone more horizontally or more vertically when dialing a number, whether the user tends to hold the phone in landscape or portrait orientation when taking a picture, whether the user tends to set the phone down during a call, and so forth.
Of course, any type of passive motion sensor data may be used to determine user authentication in other embodiments); input characteristic data collected for the user to an identity confidence model to generate an identity confidence value (Sheller, Para. 0016, the determined authentication confidence may be embodied as a confidence value or score indicative of the probability that a given user is currently at the computing device 102 (e.g., operating the computing device 102). The authentication confidence is determined based on a False Accept Rate (FAR) and a False Reject Rate (FRR) of authentication factors (e.g., classification data based on sensor data input)) and (Sheller, Para. 0034, an associated training set of data. Each classifier may generate binary or n-ary data classification output indicative of a particular context, condition, or action. For example, a classifier may generate a “true” or “false” output indicative of whether the computing device 102 is in a pocket of the user, in a container, or other enclosure. To do so, such classifier may receive sensor data from, for example, a light sensor and sensor data from a proximity sensor and determine the classification condition based on such data), wherein the identity confidence model is trained to process characteristic data collected by the sensor and the identity confidence value corresponding to a likelihood that characteristic data collected by the sensor was collected for the user (Sheller, Para.
0060, includes a computing device for user authentication, the computing device comprising a plurality of authentication sensors; a sensor aggregation module to receive sensor data from the plurality of authentication sensors; and an authentication module to (i) determine a plurality of authentication factors based on the sensor data, (ii) authenticate, by use of a fused function, a user of the computing device based on the authentication factors, wherein the fused function is to generate an authentication result as a function of the plurality of authentication factors, (iii) determine a false accept rate and a false reject rate for the authentication of the user, and (iv) determine an authentication confidence for the authentication of the user based on the determined false accept rate and false reject rate); determine an authentication result based on the identity confidence value (Sheller, Para. 0079, determining a false accept rate and a false reject rate for the authentication of the user; and determining an authentication confidence for the authentication of the user based on the determined false accept rate and false reject rate), the authentication result indicative of whether the user was granted access to the operation context (Sheller, Para. 0057, If the user is determined to be authenticated in block 524 (either passively using the fusing function or actively in block 528), the method 500 advances to block 526 in which the computing device determines whether the authentication confidence is less than a threshold amount. If so, the method 500 advances to block 528 to again perform a security action, such as an active user authentication action. 
In this way, even though the user has been successfully authenticated, the computing device 102 may perform a security action, such as locking the user from the computing device 102, locking an application, requiring the user to actively authenticate, etc.); Deore, Abdelaziz and Sheller are all considered to be analogous to the claimed invention because they are in the same field of identifying and authorizing a user based on the classification of characteristic data collected for the same user. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deore and Abdelaziz to incorporate the teachings of Sheller to include the characteristic data comprising motion data collected for each user of the plurality of users prior to requesting accessing to the operational context (Sheller, Para. 0022); input characteristic data collected for the user to an identity confidence model to generate an identity confidence value (Sheller, Para. 0016), wherein the identity confidence model is trained to process characteristic data collected by the sensor and the identity confidence value corresponding to a likelihood that characteristic data collected by the sensor was collected for the user (Sheller, Para. 0060); determine an authentication result based on the identity confidence value (Sheller, Para. 0079), the authentication result indicative of whether the user was granted access to the operation context (Sheller, Para. 0057). Doing so would assist in the creation of behavior classifiers that may be used to train the passive authentication factor algorithms, and would provide user-settable controls to further fine-tune and personalize tolerances (Sheller, Para. 0016).
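As a rough illustration of a "fused function" in the sense attributed to Sheller, several per-factor confidence scores can be combined into a single pass/fail authentication result. The equal-weight averaging and the fixed 0.5 decision threshold below are assumptions made for illustration only, not Sheller's disclosed method:

```python
# Hypothetical sketch of fusing multiple authentication-factor scores
# into one authentication result. Weighting scheme and threshold are
# illustrative assumptions, not taken from Sheller.

def fuse_factors(scores: list[float], weights: list[float]) -> bool:
    """Fuse per-factor confidence scores into a pass/fail result."""
    # Weighted average of the factor scores (each score in [0, 1]).
    fused = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
    # Fixed decision threshold, chosen here purely for illustration.
    return fused >= 0.5
```

With equal weights, factor scores of 0.9 and 0.8 fuse to 0.85 and pass, while 0.2 and 0.3 fuse to 0.25 and fail, which would trigger a security action such as active re-authentication.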
In regards to claim 2, the combination of Deore and Abdelaziz in view of Sheller teaches the non-transitory computer readable medium of claim 1, wherein the sensor is one of: a sensor that continuously collects characteristic data for a single user of the plurality of users (Sheller, Para. 0040, such authentication may occur periodically (e.g., based on the sensor sample rate of block 502), continually, or continuously. In the illustrative embodiment, the authentication module 204 of the computing device 102 is configured to authenticate a user using multiple authentication factors (e.g., multiple classifiers and/or sensor data)); or a sensor that continuously collects characteristic data simultaneously for the plurality of users (Sheller, Para. 0030, the authentication module 204 is configured to authenticate the user of the computing device 102 based on the sample sensor data received from the sensor aggregation module 202 and one or more fused authentication templates stored in the fused template database 220. As discussed in more detail below, the particular fused authentication template used to continuously, continually, and/or periodically authenticate the user may be based on a determined authentication confidence associated with each fused authentication template). Deore, Abdelaziz and Sheller are all considered to be analogous to the claimed invention because they are in the same field of identifying and authorizing a user based on the classification of characteristic data collected for the same user. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deore and Abdelaziz to incorporate the teachings of Sheller to include wherein the sensor is one of: a sensor that continuously collects characteristic data for a single user of the plurality of users (Sheller, Para.
0040); or a sensor that continuously collects characteristic data simultaneously for the plurality of users (Sheller, Para. 0030). Doing so would assist in the creation of behavior classifiers that may be used to train the passive authentication factor algorithms, and would provide user-settable controls to further fine-tune and personalize tolerances (Sheller, Para. 0016). In regards to claim 6, the combination of Deore and Abdelaziz in view of Sheller teaches the non-transitory computer readable medium of claim 5, wherein the instructions to generate the authentication result for the individual user further comprises instructions that when executed cause the processor to: determine a sensor-assigned probability value for the characteristic data collected for the individual user (Sheller, Para. 0016, a confidence value or score indicative of the probability that a given user is currently at the computing device 102 (e.g., operating the computing device 102)), wherein the sensor-assigned probability value describes a likelihood that the characteristic data was collected for the individual user (Sheller, Para. 0016, collection may use a statistically significant population of individuals who participate in data collection by annotating sensor data with characteristic behaviors. Characteristic behavior data helps during data analysis to recognize patterns indicative of the expected behavior. Pattern data assist in creation of behavior classifiers that may be used to train the passive authentication factor algorithms); and update the false acceptance rate and the false rejection rate specific to the individual user based on the authentication result and the sensor-assigned probability value (Sheller, Para. 0016, update a base false accept rate and a false reject rate associated with the fusion function based on the authentication result).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deore and Abdelaziz to incorporate the teachings of Sheller to include: determine a sensor-assigned probability value for the characteristic data collected for the individual user (Sheller, Para. 0016), wherein the sensor-assigned probability value describes a likelihood that the characteristic data was collected for the individual user (Sheller, Para. 0016); and update the false acceptance rate and the false rejection rate specific to the individual user based on the authentication result and the sensor-assigned probability value (Sheller, Para. 0016). Doing so would provide pattern data that assists in the creation of behavior classifiers that may be used to train the passive authentication factor algorithms, and would additionally provide user-settable controls to further fine-tune and personalize tolerances (Sheller, Para. 0016).

System claim 8 includes the same or similar claim limitations as non-transitory medium claim 1 and is similarly rejected. System claim 9 includes the same or similar claim limitations as non-transitory medium claim 2 and is similarly rejected. System claim 13 includes the same or similar claim limitations as non-transitory medium claim 6 and is similarly rejected.

Regarding claim 15, Deore discloses a system comprising: for each instance where a user of the plurality of users requests access to an operational context (Deore, Para. 0029, at a user validation phase, a previously registered user may use computing system 110 or another computing system connected to network 130 to login. The login process may be for the purpose of accessing content or service provided via computing system 110), a user-specific sensor module configured to (Deore, Para.
0007): classify the authentication result as one of a true positive authentication, a false positive authentication, a true negative authentication, or a false negative authentication (Deore, Para. 0024, false acceptance rate (FAR) (i.e., the percentage of identification instances in which unauthorized persons are incorrectly accepted) and false rejection rate (FRR) (i.e., the percentage of identification instances in which authorized persons are incorrectly rejected) may be adjusted for the machine learning model to ensure a robust operation and effective authentication); for the identity confidence model, update a false acceptance rate and a false rejection rate for the identity confidence model based on the classification of the authentication result (Deore, Para. 0024, false acceptance rate (FAR) (i.e., the percentage of identification instances in which unauthorized persons are incorrectly accepted) and false rejection rate (FRR) (i.e., the percentage of identification instances in which authorized persons are incorrectly rejected) may be adjusted for the machine learning model to ensure a robust operation and effective authentication), wherein the false acceptance rate describes a frequency with which the authentication result incorrectly granted users access to the operational context (Deore, Para. 0024, false acceptance rate (FAR) (i.e., the percentage of identification instances in which unauthorized persons are incorrectly accepted)) and the false rejection rate describes a frequency with which the authentication result incorrectly denied users access to the operational context (Deore, Para.
0024, false rejection rate (FRR) (i.e., the percentage of identification instances in which authorized persons are incorrectly rejected)); Deore fails to disclose a sensor configured to collect characteristic data for a plurality of users, wherein the plurality of users requested access to an operational context during a period of time, and an optimization module configured to: re-train the identity confidence model based on the updated false acceptance rate and the updated false rejection rate, wherein retraining the model adjusts one or more parameters of the identity confidence model based on a performance of the identity confidence model characterized by the false acceptance rate and the false rejection rate. However, Abdelaziz teaches a sensor configured to collect characteristic data for a plurality of users (Abdelaziz, Para. 0091, The detectors may also comprise any combination of hardware and software sensors) and (Para. 0032, it is possible to utilize the disclosed embodiments to dynamically respond to detected user/sign-in data for a first user and to reach back in time to reapply/re-evaluate the impact of previously stored user/sign-in data for that first user and to generate and/or modify a user identity risk score for that user, as well as to trigger the generation and/or modification of a different user identity risk score for a different user), wherein the plurality of users requested access to an operational context during a period of time (Abdelaziz, Fig. 3, Item 320), the characteristic data comprising motion data collected for each user of the plurality of users prior to requesting access to the operational context (Abdelaziz, Para.
0059, this machine learning engine may apply a tiered approach to generate/modify risk scores by quantifying the relative risks associated with a user and/or sign-in event based on the crowdsourcing feedback and/or other third-party data and/or other supplemental user behavior analysis data (e.g., two factor authentication data, requested user feedback to challenge questions, subsequent actions or lack of actions from a user, etc.)); an identity computation module configured to: access characteristic data collected by the sensor; input characteristic data collected for the user to an identity confidence model to generate an identity confidence value (Abdelaziz, Para. 0050, the risk assessment engine 122 to identify attributes/patterns in a risk profiles report 138 to determine an appropriate user risk score and/or sign-in event risk score (140) to associate with the corresponding user and/or sign-in event(s). This may require the generating of a new score or modifying existing scores 140, by the risk assessment engine 122, in real-time (dynamically based on detecting new user/sign-in data as it is received), or on demand, and by comparing the detected user/sign-in data 116 to the profiles/telemetry in the risk profiles report 138), wherein the identity confidence model is trained to process characteristic data collected by the sensor (Abdelaziz, Para. 0091, The detectors may also comprise any combination of hardware and software sensors) and (Abdelaziz, Para.
0032, it is possible to utilize the disclosed embodiments to dynamically respond to detected user/sign-in data for a first user and to reach back in time to reapply/re-evaluate the impact of previously stored user/sign-in data for that first user and to generate and/or modify a user identity risk score for that user, as well as to trigger the generation and/or modification of a different user identity risk score for a different user), and the identity confidence value corresponding to a likelihood that characteristic data collected by the sensor was collected for the user (Abdelaziz, Para. 0060, when a particular user/sign-in data element is labeled with a risky label/value and user input is subsequently received to indicate that the label is wrong (by challenging, changing or dismissing the risk label/value), a false positive indicator is generated and associated with that user/event in the label data contained in the one or more corresponding label data files (132). When fed into the machine learning engine, this will decrease the likelihood that similar user/event patterns will generate a risk level of the same magnitude as before for the same user/sign-in events, as well as for other users/sign-in events having the same or similar user/sign-in data); determine an authentication result based on the identity confidence value (Abdelaziz, Para. 0058, generating label data comprising indicators of false positives, true positives, false negatives and true negatives for the different user accounts and sign-in events.
This label data is used as crowdsourcing feedback by one or more machine learning models/algorithms to further train the machine learning and to generate a new or updated risk profiles report that maps the various user patterns and sign-in patterns to different risk valuations), the authentication result indicative of whether the user was granted access to the operational context (Abdelaziz, Para. 0150, their risk scores 140 are created and modified, these user accounts are tracked in a full user report 142. Likewise, the sign-in attempts (which may comprise any sign-in event, whether successful or not), may be monitored and logged in a full sign-in report 144 (also called a full sign on report)); an optimization module configured to: re-train the identity confidence model based on the updated false acceptance rate and the updated false rejection rate (Abdelaziz, Para. 0083, the computing system generates a label data file that contains a listing of the corresponding user risk profiles and that specifies each corresponding user risk profile in the listing as either a false positive, false negative, true positive or true negative, based on the one or more user risk reports and the user input (act 660). The machine learning engine is then improved/modified by causing the machine learning tools utilized by the machine learning engine to learn/apply the labeled data in the label data file to the machine learning engine (act 670) and in such a way that the machine learning engine/tools are enabled to generate or modify a risk profiles report that is used by a risk assessment engine to generate user risk scores (act 680)), wherein retraining the model adjusts one or more parameters of the identity confidence model based on a performance of the identity confidence model characterized by the false acceptance rate and the false rejection rate (Abdelaziz, Para.
0083, the computing system generates a label data file that contains a listing of the corresponding user risk profiles and that specifies each corresponding user risk profile in the listing as either a false positive, false negative, true positive or true negative, based on the one or more user risk reports and the user input (act 660). The machine learning engine is then improved/modified by causing the machine learning tools utilized by the machine learning engine to learn/apply the labeled data in the label data file to the machine learning engine (act 670) and in such a way that the machine learning engine/tools are enabled to generate or modify a risk profiles report that is used by a risk assessment engine to generate user risk scores (act 680). These user risk scores are then used to generate or update the one or more user risk reports automatically and dynamically in real-time and/or on demand). Deore and Abdelaziz are both considered analogous to the claimed invention because they are in the same field of identifying and authorizing a user based on the classification of characteristic data collected for the same user. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deore to incorporate the teachings of Abdelaziz to include a sensor configured to collect characteristic data for a plurality of users (Abdelaziz, Para. 0091), wherein the plurality of users requested access to an operational context during a period of time (Abdelaziz, Fig. 3, Item 320), the characteristic data comprising motion data collected for each user of the plurality of users prior to requesting access to the operational context (Abdelaziz, Para.
0059); an identity computation module configured to: access characteristic data collected by the sensor; input characteristic data collected for the user to an identity confidence model to generate an identity confidence value (Abdelaziz, Para. 0050), wherein the identity confidence model is trained to process characteristic data collected by the sensor (Abdelaziz, Para. 0091), and the identity confidence value corresponding to a likelihood that characteristic data collected by the sensor was collected for the user (Abdelaziz, Para. 0060); determine an authentication result based on the identity confidence value (Abdelaziz, Para. 0058), the authentication result indicative of whether the user was granted access to the operational context (Abdelaziz, Para. 0150); and an optimization module configured to: re-train the identity confidence model based on the updated false acceptance rate and the updated false rejection rate (Abdelaziz, Para. 0083), wherein retraining the model adjusts one or more parameters of the identity confidence model based on a performance of the identity confidence model characterized by the false acceptance rate and the false rejection rate (Abdelaziz, Para. 0083). Doing so would improve both the overall precision and recall effectiveness of computer security systems that are based on the use of user risk scores. This is accomplished, in some instances, by iteratively applying multiple machine learning tiers to stored user/sign-in data to generate and/or modify the user identity risk scores (Abdelaziz, Para. 0031).
Deore and Abdelaziz do not explicitly disclose the characteristic data comprising motion data collected for each user of the plurality of users prior to requesting access to the operational context; an identity computation module configured to: access characteristic data collected by the sensor; input characteristic data collected for the user to an identity confidence model to generate an identity confidence value, wherein the identity confidence model is trained to process characteristic data collected by the sensor and the identity confidence value corresponding to a likelihood that characteristic data collected by the sensor was collected for the user; or determine an authentication result based on the identity confidence value, the authentication result indicative of whether the user was granted access to the operational context. However, Sheller teaches the characteristic data comprising motion data collected for each user of the plurality of users prior to requesting access to the operational context (Sheller, Para. 0022, the motion sensors 122 may be embodied as any type of sensor(s) capable of generating data indicative of a motion of the computing device 102 including, but not limited to, a gyroscope sensor, an accelerometer, an inertial motion unit, a combination of motion sensors, and/or other motion sensors. In use, the motion sensors 122 may be configured to generate sensor data indicative of how the user interacts with the computing device 102 while performing certain tasks or functions on the device. For example, the generated motion data may provide an indication of whether the user holds the phone more horizontally or more vertically when dialing a number, whether the user tends to hold the phone in landscape or portrait orientation when taking a picture, whether the user tends to set the phone down during a call, and so forth.
Of course, any type of passive motion sensor data may be used to determine user authentication in other embodiments); and an identity computation module configured to: access characteristic data collected by the sensor; input characteristic data collected for the user to an identity confidence model to generate an identity confidence value (Sheller, Para. 0016, the determined authentication confidence may be embodied as a confidence value or score indicative of the probability that a given user is currently at the computing device 102 (e.g., operating the computing device 102). The authentication confidence is determined based on a False Accept Rate (FAR) and a False Reject Rate (FRR) of authentication factors (e.g., classification data based on sensor data input)) and (Sheller, Para. 0034, an associated training set of data. Each classifier may generate binary or n-ary data classification output indicative of a particular context, condition, or action. For example, a classifier may generate a “true” or “false” output indicative of whether the computing device 102 is in a pocket of the user, in a container, or other enclosure. To do so, such classifier may receive sensor data from, for example, a light sensor and sensor data from a proximity sensor and determine the classification condition based on such data), wherein the identity confidence model is trained to process characteristic data collected by the sensor and the identity confidence value corresponding to a likelihood that characteristic data collected by the sensor was collected for the user (Sheller, Para.
0060, includes a computing device for user authentication, the computing device comprising a plurality of authentication sensors; a sensor aggregation module to receive sensor data from the plurality of authentication sensors; and an authentication module to (i) determine a plurality of authentication factors based on the sensor data, (ii) authenticate, by use of a fused function, a user of the computing device based on the authentication factors, wherein the fused function is to generate an authentication result as a function of the plurality of authentication factors, (iii) determine a false accept rate and a false rejec
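The rejection repeatedly maps an "optimization module" that re-trains the identity confidence model based on the updated FAR and FRR. As a greatly reduced sketch of that feedback loop (a single decision-threshold adjustment standing in for full model retraining; the function name, step size, and update rule are assumptions, not anything from the record):

```python
def adjust_threshold(threshold: float, far: float, frr: float,
                     step: float = 0.05) -> float:
    """Nudge an identity-confidence decision threshold based on observed
    error rates -- a stand-in for adjusting model parameters on retraining.

    High FAR (impostors accepted) -> raise the threshold (stricter).
    High FRR (genuine users rejected) -> lower it (more permissive).
    """
    if far > frr:
        threshold += step
    elif frr > far:
        threshold -= step
    # Keep the threshold inside a valid confidence range.
    return min(max(threshold, 0.0), 1.0)

# Errors currently skew toward false accepts, so the threshold tightens.
new_threshold = adjust_threshold(0.70, far=0.10, frr=0.02)
```

In a real system the retraining step would update model weights against labeled TP/FP/TN/FN outcomes (as in Abdelaziz's Para. 0083 label data file), not just a threshold; the sketch only shows the direction of the feedback.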

Prosecution Timeline

Feb 10, 2021
Application Filed
May 19, 2023
Non-Final Rejection — §103, §112
Sep 26, 2023
Response Filed
Dec 01, 2023
Final Rejection — §103, §112
Apr 08, 2024
Request for Continued Examination
Apr 09, 2024
Response after Non-Final Action
Jun 26, 2024
Non-Final Rejection — §103, §112
Jan 02, 2025
Response Filed
Jan 28, 2025
Final Rejection — §103, §112
Aug 11, 2025
Request for Continued Examination
Aug 14, 2025
Response after Non-Final Action
Oct 31, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12339997
ENTITY FOCUSED NATURAL LANGUAGE GENERATION
2y 5m to grant Granted Jun 24, 2025
Patent 12316648
Data value classifier
2y 5m to grant Granted May 27, 2025
Patent 12301564
VIRTUAL SESSION ACCESS MANAGEMENT
2y 5m to grant Granted May 13, 2025
Patent 12256022
BLOCKCHAIN TRANSACTION COMPRISING RUNNABLE CODE FOR HASH-BASED VERIFICATION
2y 5m to grant Granted Mar 18, 2025
Patent 12242613
AUTOMATED EVALUATION OF MACHINE LEARNING MODELS
2y 5m to grant Granted Mar 04, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
53%
Grant Probability
75%
With Interview (+21.5%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 75 resolved cases by this examiner. Grant probability derived from career allow rate.
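The "With Interview" figure appears consistent with simply adding the observed interview lift to the base grant probability; a one-line reconstruction under that assumption (the additive formula is inferred from the displayed numbers, not documented by the tool):

```python
base_grant = 0.53       # examiner career allow rate (40 granted / 75 resolved)
interview_lift = 0.215  # observed lift among resolved cases with interview

# Assumed additive model, capped at 100%: 53% + 21.5% = 74.5%,
# which matches the displayed "75% With Interview" after rounding.
with_interview = min(base_grant + interview_lift, 1.0)
print(f"{with_interview:.1%}")  # 74.5%
```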
