Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This is a final rejection. Claims 1-3, 6-16, and 19-24 are pending.
Information Disclosure Statement (IDS)
The information disclosure statement(s) filed on 09/13/2023 comply with the provisions of 37 CFR 1.97, 1.98, and MPEP 609 and have been considered by the Examiner.
Status of Claims
Applicant’s response dated 02/12/2026 amends claims 1, 10, and 20; cancels claims 5 and 17; and adds new claims 23-24.
Response to Amendment
The previously pending rejection under 35 USC 101 is maintained. The 101 rejection is updated in light of the amendments.
With regard to the rejection under 35 USC 103: no art rejection has been put forth for the reasons given in the “Allowable Subject Matter” section below. See also Applicant’s remarks, pages 8-9, filed 07/23/2025.
Response to Arguments
Applicant's arguments filed 02/12/2026 have been fully considered but they are not persuasive.
Response to Arguments under 35 USC 101:
Applicant argues (Page 2 of the remarks):
Applicant notes that similar features were previously recited in now-cancelled claims 5 and 17, except that claims 1, 10, and 20 have further been amended to recite that the re-training of the machine learning model is done "automatically."
With respect to the features "assess[ing] a data source used to train the machine learning model to determine whether a threshold associated with data drift is satisfied; and in response to the threshold being satisfied, ... re-train[ing] the machine learning model with the data source to reduce an amount of data drift", the Examiner refers to these as merely "embellishments" (pages 15-16 of the Office Action) and asserts that they "are nonetheless directed towards fundamentally ... abstract ideas." While Applicant respectfully disagrees with the Examiner's remarks, and maintains that the claims are not directed to abstract idea under Step 2A Prong 1, for the sake of brevity, Applicant's instant remarks will be directed to the eligibility of the amended claims at Step 2A Prong 2.
Examiner respectfully disagrees:
With regard to an abstract idea, the independent claims, when “taken as a whole,” are directed to the abstract idea and substantially recite the following limitations:
A device for assessing actions of authenticated entities within an enterprise system, the device comprising:
a processor;
a memory coupled to the processor, the memory storing computer executable instructions that when executed by the processor cause the device to:
receive a request from a user to perform an action with an enterprise computing resource, wherein the user providing the request is authenticated according to one or more authentication criteria;
process the request with an anomaly detector to generate a risk assessment, the anomaly detector generating the risk assessment using a machine learning model trained to predict a likelihood that an adverse event will result from completion of actions by authenticated users, the machine learning model having been trained with a curated set of examples comprising accurate risk assessment examples and inaccurate risk assessment examples, wherein those of the inaccurate risk assessment examples requiring unwarranted remediation actions are given an elevated importance in the curated set;
assess the generated risk assessment with a remediation tool to determine whether to serve any remediation actions to evaluate the request; and
responsive to the remediation tool determining from the risk assessment that the request is sufficiently risky to require at least one remediation action, have the at least one remediation action automatically executed,
wherein the at least one remediation action comprises a further authentication or an actionable event from a user other than the user associated with the request;
assess a data source used to train the machine learning model to determine whether a threshold associated with data drift is satisfied; and
in response to the threshold being satisfied, automatically re-train the machine learning model with the data source to reduce an amount of data drift.
The Applicant's Specification is titled “System and Method for Assessing Actions of Authenticated Entities Within an Enterprise System.” In summary, the present disclosure relates to methods and systems for generating a risk assessment based on a request from a user and determining remediation actions for the user (Spec. fig. 7).
As the bolded claim limitations above demonstrate, independent claims 1, 10 and 20 recite the abstract idea of generating a risk assessment based on a request from a user and determining remediation actions for the user, which is considered certain methods of organizing human activity because the bolded claim limitations pertain to (i) fundamental economic principles or practices and (ii) commercial or legal interactions. See MPEP §2106.04(a)(2)(II).
Applicant's claims as recited above provide a business solution of generating a risk assessment based on a request from a user and determining remediation actions for the user. Applicant's claimed invention pertains to fundamental economic principles or practices because the limitations recite generating a risk assessment based on a request received from a user, which pertains to “hedging, insurance, mitigating risk,” expressly categorized under fundamental economic principles or practices. See MPEP §2106.04(a)(2)(II).
Applicant's claims as recited above provide a business solution of generating a risk assessment based on a request from a user and determining remediation actions for the user. Applicant's claimed invention pertains to commercial or legal interactions because the limitations recite determining remediation actions for the user, which pertains to “agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations,” expressly categorized under commercial or legal interactions. See MPEP §2106.04(a)(2)(II).
Applicant argues (Pages 2-3 of the remarks):
From a computing perspective, having the anomaly detector operable on a user device and provide the generated risk assessment to the remediation tool hosted on the enterprise platform improves the functionality of the associated system in general. For example, as noted in paragraph [0076] of the instant application, with reference to the configuration in FIG. 3B, "In this way, the security platform 20 latency may be improved, as the distributed computing power of the devices 12 is employed as compared to a centralized anomaly detector 38 within the platform 20. In addition to potentially increased speed, the use of local instances of the anomaly detector 38 can reduce the amount of data trafficked between the proxy server 36 and the platform 20 (e.g., the server 36 may only provide the risk assessment to the platform 20), thereby making the communication more robust by disclosing less potentially sensitive information, decreasing latency, etc." (emphasis added). Applicant respectfully submits that the
features recited in claims 13 and 21 clearly amount to a practical application of any alleged abstract idea, since the claimed configuration distributes computing power to increase speed and decrease latency. For at least this reason, Applicant respectfully submits that claims 13 and 21 are eligible at Step 2A - Prong 2.
Examiner respectfully disagrees:
In prong two of step 2A, an evaluation is made whether a claim recites any additional element, or combination of additional elements, that integrates the exception into a practical application of that exception. An “additional element” is an element that is recited in the claim in addition to (beyond) the judicial exception (i.e., an element/limitation that sets forth an abstract idea is not an additional element). The phrase “integration into a practical application” is defined as requiring an additional element or a combination of additional elements in the claim to apply, rely on, or use the judicial exception, such that it is more than a drafting effort designed to monopolize the exception.
The claims recite the additional elements of a device, a system, a processor, a memory, a computing resource, a non-transitory computer readable medium and a machine learning model, which are recited at a high level of generality and as performing generic computer functions routinely used in computer applications. This amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 134 S. Ct. at 2360, 110 USPQ2d at 1984. See MPEP 2106.05(f).
Regarding the additional element of a “machine learning model”: this language merely requires execution of an algorithm that can be performed by a generic computer component and provides no detail regarding the operation of that algorithm. As such, the claim requirement amounts to mere instructions to implement the abstract idea on a computer and, therefore, is not sufficient to make the claim patent eligible. See Alice, 573 U.S. at 226 (determining that the claim limitations “data processing system,” “communications controller,” and “data storage unit” were generic computer components that amounted to mere instructions to implement the abstract idea on a computer); October 2019 Guidance Update at 11–12 (recitation of generic computer limitations for implementing the abstract idea “would not be sufficient to demonstrate integration of a judicial exception into a practical application”). Such a generic recitation of “machine learning model” is insufficient to show a practical application of the recited abstract idea.
The Examiner has therefore determined that the additional elements, or combination of additional elements, do not integrate the abstract idea into a practical application. Accordingly, the claim(s) is/are directed to an abstract idea (step 2A-prong two: NO).
Applicant argues (Page 2 of the remarks):
As outlined above, Applicant has amended claims 1, 10, and 20 to further capture
aspects of the inventive concept. Claims 1, 10, and 20 as amended include " ... assess[ing] a data source used to train the machine learning model to determine whether a threshold associated with data drift is satisfied; and in response to the threshold being satisfied, automatically re-train[ing] the machine learning model with the data source to reduce an amount of data drift." Applicant respectfully submits that these features cannot be considered "merely well-understood, routine, and conventional activit(ies)" as implied on page 24 of the Office Action. Applicant contends that claims 1, 10, and 20 as amended (as well as claims dependent therefrom) do indeed amount to significantly more than any alleged abstract idea.
In view of the above remarks, Applicant respectfully submits that the claims comply with 35 U.S.C. 101 and withdrawal of the rejection under 35 U.S.C. 101 is therefore requested.
Examiner respectfully disagrees:
First, referring back to Applicant's specification with regard to the threshold and re-training: ¶[0094] recites “a threshold (e.g., a drift parameter 30),” and ¶[0062] recites that “drift parameter(s) 30 … include, but is not limited to, a change.” Here, based on Applicant's specification, the re-training step could be based on any change received.
Under the Alice framework, step 2B (part 2 of Mayo) determines whether the claim is sufficient to ensure that it amounts to “significantly more” than the abstract idea itself.
The independent claims do not include any limitations amounting to significantly more than the abstract idea. The claims include various elements that are not directed to the abstract idea; these elements include a device, a system, a processor, a memory, a computing resource, a non-transitory computer readable medium and a machine learning model.
Examiner asserts that a device, a system, a processor, a memory, a computing resource, a non-transitory computer readable medium and a machine learning model are generic computing elements performing generic computing functions. (See MPEP 2106.05(f))
Further, with regard to mining (i.e., searching over a network), receiving, processing, storing data, and parsing (i.e., extracting and transforming data), the courts have recognized such computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (e.g., “receiving, processing, transmitting, storing data”). (MPEP 2106.05(d))
Therefore, the claims at issue do not require any nonconventional computer, network, or display components, or even a “non-conventional and non-generic arrangement of known, conventional pieces,” but merely call for performance of the claimed functions on a set of generic computer components and display devices.
In addition, figure 1 of the specification details a generic computer system programmed to perform the method. Generically recited computer elements do not add a meaningful limitation to the abstract idea because the Alice decision noted that generic structures that merely apply abstract ideas are not significantly more than the abstract ideas themselves.
Claim Rejections 35 USC §101
35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3, 6-16, and 19-24 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter, specifically an abstract idea without a practical application or significantly more than the abstract idea.
Under the 35 U.S.C. §101 subject matter eligibility two-part analysis, Step 1 addresses whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. See MPEP §2106.03. If the claim does fall within one of the statutory categories, it must then be determined in Step 2A [prong 1] whether the claim is directed to a judicial exception (i.e., law of nature, natural phenomenon, and abstract idea). See MPEP §2106.04. If the claim is directed toward a judicial exception, it must then be determined in Step 2A [prong 2] whether the judicial exception is integrated into a practical application. See MPEP §2106.04(d). Finally, if the judicial exception is not integrated into a practical application, it must additionally be determined in Step 2B whether the claim recites "significantly more" than the abstract idea. See MPEP §2106.05.
Examiner note: The Office's 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG) is currently found in the Ninth Edition, Revision 10.2019 (revised June 2020) of the Manual of Patent Examining Procedure (MPEP), specifically incorporated in MPEP §2106.03 through MPEP §2106.07(c).
Regarding Step 1
Claims 1-3, 6-9, and 21-23 are directed toward a device (machine). Claims 10-16, 19, and 24 are directed to a method (process), and claim 20 is directed to a non-transitory computer readable medium (manufacture). Thus, all claims fall within one of the four statutory categories as required by Step 1.
Regarding Step 2A [prong 1]
Claims 1-3, 6-16, and 19-24 are directed toward the judicial exception of an abstract idea. Independent claims 10 and 20 recite essentially the same abstract features as claim 1 and thus are abstract for the same reasons.
Regarding independent claim 1, the bolded limitations emphasized below correspond to the abstract ideas of the claimed invention:
Claim 1. A device for assessing actions of authenticated entities within an enterprise system, the device comprising:
a processor;
a memory coupled to the processor, the memory storing computer executable instructions that when executed by the processor cause the device to:
receive a request from a user to perform an action with an enterprise computing resource, wherein the user providing the request is authenticated according to one or more authentication criteria;
process the request with an anomaly detector to generate a risk assessment, the anomaly detector generating the risk assessment using a machine learning model trained to predict a likelihood that an adverse event will result from completion of actions by authenticated users, the machine learning model having been trained with a curated set of examples comprising accurate risk assessment examples and inaccurate risk assessment examples, wherein those of the inaccurate risk assessment examples requiring unwarranted remediation actions are given an elevated importance in the curated set;
assess the generated risk assessment with a remediation tool to determine whether to serve any remediation actions to evaluate the request; and
responsive to the remediation tool determining from the risk assessment that the request is sufficiently risky to require at least one remediation action, have the at least one remediation action automatically executed,
wherein the at least one remediation action comprises a further authentication or an actionable event from a user other than the user associated with the request;
assess a data source used to train the machine learning model to determine whether a threshold associated with data drift is satisfied; and
in response to the threshold being satisfied, automatically re-train the machine learning model with the data source to reduce an amount of data drift.
The Applicant's Specification is titled “System and Method for Assessing Actions of Authenticated Entities Within an Enterprise System.” In summary, the present disclosure relates to methods and systems for generating a risk assessment based on a request from a user and determining remediation actions for the user (Spec. fig. 7).
As the bolded claim limitations above demonstrate, independent claims 1, 10 and 20 recite the abstract idea of generating a risk assessment based on a request from a user and determining remediation actions for the user, which is considered certain methods of organizing human activity because the bolded claim limitations pertain to (i) fundamental economic principles or practices and (ii) commercial or legal interactions. See MPEP §2106.04(a)(2)(II).
Applicant's claims as recited above provide a business solution of generating a risk assessment based on a request from a user and determining remediation actions for the user. Applicant's claimed invention pertains to fundamental economic principles or practices because the limitations recite generating a risk assessment based on a request received from a user, which pertains to “hedging, insurance, mitigating risk,” expressly categorized under fundamental economic principles or practices. See MPEP §2106.04(a)(2)(II).
Applicant's claims as recited above provide a business solution of generating a risk assessment based on a request from a user and determining remediation actions for the user. Applicant's claimed invention pertains to commercial or legal interactions because the limitations recite determining remediation actions for the user, which pertains to “agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations,” expressly categorized under commercial or legal interactions. See MPEP §2106.04(a)(2)(II).
Dependent claims 2-3, 6-9, 11-16, 19, and 21-24 further reiterate the same abstract ideas with further embellishments, such as
claim 2 wherein the device is a proxy server positioned between a user device associated with the user and an enterprise platform hosting the enterprise computing resource, and wherein the anomaly detector processes the request after the request is provided to the proxy server.
claim 3 in response to the remediation action being successfully completed, enable completion of the action.
claim 4 Canceled
claim 5 Cancelled
claim 6 retrieve the machine learning model from a container image registry, the container image registry comprising a plurality of machine learning models in a form of container packages.
claim 7 wherein the enterprise resources are cloud computing resources.
claim 8 wherein the remediation tools assess the generated risk assessment based on at least one of risk acceptability parameters and intrusiveness parameters.
claim 9 wherein the remediation tools assess the generated risk assessment based on at least one of functionality associated with the action, a role associated with the user, the requested action, and the enterprise computing resource.
claim 11 wherein the request is received by a proxy server positioned between a user device associated with the user and the enterprise platform hosting the enterprise computing resource, and wherein the anomaly detector processes the request after the request is provided to the proxy server.
claim 12 in response to the remediation action being successfully completed, enabling access via the proxy server to the enterprise computing resource to complete the action.
claim 13 wherein the anomaly detector is operable on a user device associated with the user, and the anomaly detector provides the generated risk assessment to the remediation tool hosted on the enterprise platform.
claim 14 generating, within the enterprise platform, an updated machine learning model;
packaging the updated machine learning model into a container image; and
updating the machine learning model of the user device with the updated machine learning model in the container image.
claim 15 wherein the remediation action is served to the user device via the proxy server.
claim 16 wherein the remediation action is served to a user other than the user associated with the request, the remediation action being served via a channel separate from the proxy server.
claim 17 Cancelled
claim 18 Canceled
claim 19 wherein the remediation tool assesses the generated risk assessment based on at least one of risk acceptability parameters, intrusiveness parameters, and a baseline acceptable risk.
claim 21 wherein the anomaly detector is operable on a user device associated with the user, and the anomaly detector provides the generated risk assessment to the remediation tool hosted on the enterprise platform.
claim 22 generate, within the enterprise platform, an updated machine learning model; package the updated machine learning model into a container image; and update the machine learning model of the user device with the updated machine learning model in the container image.
Claim 23 wherein the computer executable instructions further cause the device to: package and store the retrained machine learning model in a container for deployment.
Claim 24 further comprising: packaging and storing the retrained machine learning model in a container for deployment.
which are nonetheless directed towards fundamentally the same abstract ideas as indicated for independent claims 1, 10 and 20.
Regarding Step 2A [prong 2]
Claims 1-3, 6-16, and 19-24 fail to integrate the abstract idea into a practical application. Independent claims 1, 10, and 20 include the following additional elements which do not amount to a practical application:
Claim 1.
A device, the device comprising:
a processor;
a memory coupled to the processor, the memory storing computer executable instructions that when executed by the processor cause the processor to:
computing resource
a machine learning model train/trained
Claim 10. an enterprise system, a machine learning model train/trained, a computing resource
Claim 20. A non-transitory computer readable medium for assessing actions of authenticated entities within an enterprise system, the computer readable medium comprising computer executable instructions; a machine learning model trained; a computing resource
The bolded limitations recited above in independent claims 1, 10 and 20 pertain to additional elements which merely provide an abstract-idea-based solution implemented with computer hardware and software components, including the additional elements of a device, a system, a processor, a memory, a computing resource, a non-transitory computer readable medium and a machine learning model, which fail to integrate the abstract idea into a practical application because (1) there are no actual improvements to the functioning of a computer, (2) nor to any other technology or technical field, (3) nor do the claims apply the judicial exception with, or by use of, a particular machine, (4) nor do the claims provide a transformation or reduction of a particular article to a different state or thing, (5) nor do the claims provide other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment, in view of MPEP §2106.04(d)(1) and §2106.05(a-c & e-h), (6) nor do the claims apply the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, in view of MPEP §2106.04(d)(2). The Specification describes the claimed additional elements at a high level of generality, without sufficient detail or specific implementation structure so as to limit the abstract idea, for instance (fig. 1). Nothing in the Specification describes the specific operations recited in claim 1 (and similarly claims 10 and 20) as particularly invoking any inventive programming, or requiring any specialized computer hardware or other inventive computer components, i.e., a particular machine, or that the claimed invention is somehow implemented using any specialized element other than all-purpose computer components to perform the recited computer functions. The claimed invention is merely directed to utilizing computer technology as a tool for solving a business problem of data analytics.
Nowhere in the Specification does the Applicant emphasize additional hardware and/or software elements which provide an actual improvement in computer functionality, or to a technology or technical field, other than using these elements as a computational tool to automate and perform the abstract idea. See MPEP §2106.05(a & e).
Regarding the additional element of a “machine learning model”: this language merely requires execution of an algorithm that can be performed by a generic computer component and provides no detail regarding the operation of that algorithm. As such, the claim requirement amounts to mere instructions to implement the abstract idea on a computer and, therefore, is not sufficient to make the claim patent eligible. See Alice, 573 U.S. at 226 (determining that the claim limitations “data processing system,” “communications controller,” and “data storage unit” were generic computer components that amounted to mere instructions to implement the abstract idea on a computer); October 2019 Guidance Update at 11–12 (recitation of generic computer limitations for implementing the abstract idea “would not be sufficient to demonstrate integration of a judicial exception into a practical application”). Such a generic recitation of “machine learning model” is insufficient to show a practical application of the recited abstract idea.
The relevant question under Step 2A [prong 2] is not whether the claimed invention itself is a practical application; instead, the question is whether the claimed invention includes additional elements beyond the judicial exception that integrate the judicial exception into a practical application by imposing a meaningful limit on the judicial exception. This is not the case with Applicant's claimed invention, which merely pertains to steps for generating a risk assessment based on a request from a user and determining remediation actions for the user, with the additional computer elements used as a tool to perform the abstract idea and merely linking the use of the abstract idea to a particular technological environment. See MPEP §2106.04 and §2106.05(f-h). Alternatively, the Office has long considered data gathering, analysis and data output to be insignificant extra-solution activity, and these additional elements do not impose any meaningful limits on practicing the abstract idea. See MPEP §2106.04 and §2106.05(g). Thus, the additional elements recited above fail to provide an actual improvement in computer functionality, or to a technology or technical field. See MPEP §2106.04(d)(1) and §2106.05(a & e).
Instead, the recited additional elements merely limit the invention to a technological environment in which the abstract concept identified above is implemented utilizing the computational tools provided by the additional elements to automate and perform the abstract idea, which is insufficient to provide a practical application since the additional elements do no more than generally link the use of the abstract idea to a particular technological environment. See MPEP §2106.04. Automating the recited claim features as a combination of computer instructions implemented by computer hardware and/or software elements as recited above does not qualify an otherwise unpatentable abstract idea as patent eligible. Alternatively, the Office has long considered data gathering, data processing and data output to be insignificant extra-solution activity, and these additional elements are insignificant extra-solution limitations that do not impose any meaningful limits on practicing the abstract idea. See MPEP §2106.05(g). The current invention generates a risk assessment based on a request from a user and determines remediation actions for the user. When considered in combination, the claims do not amount to improvements of the functioning of a computer, or to any technology or technical field. Applicant's limitations as recited above do nothing more than supplement the abstract idea using additional hardware/software computer components as a tool to perform the abstract idea and generally link the use of the abstract idea to a technological environment, which is not sufficient to integrate the judicial exception into a practical application since they do not impose any meaningful limits.
Dependent claims 2-3, 6-9, 11-16, 19, and 21-24 merely incorporate the additional elements recited above, along with further embellishments of the abstract idea of independent claims 1, 10 and 20 respectively. For example, claims 2, 7, 11, 13, and 21-22 recite a “proxy server, cloud computing resources, a user device, and machine learning,” but these features only serve to further limit the abstract idea of independent claims 1, 10 and 20. Furthermore, merely using/applying the abstract idea in a computer environment, such as merely using the computer as a tool to apply the instructions of the abstract idea, does nothing more than provide insignificant extra-solution activity, since these steps amount to data gathering, analysis and outputting. Furthermore, they do not pertain to a technological problem being solved in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, and the limitations fail to achieve an actual improvement in computer functionality or in a specific technology other than using the computer as a tool to perform the abstract idea.
Therefore, the additional elements recited in the claimed invention individually, and in combination fail to integrate the recited judicial exception into any practical application.
Regarding Step 2B
Claims 1-3, 6-16, and 19-24 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As described above with respect to Step 2A Prong 2, the additional elements of claims 1, 10 and 20 include a device, a system, a processor, a memory, a computing resource, a non-transitory computer readable medium and a machine learning model. These elements merely amount to a general purpose computer used to apply the abstract idea(s) (MPEP 2106.05(f)) and/or perform insignificant extra-solution activity, e.g., data retrieval and storage, as described above (MPEP 2106.05(g)), which are further merely well-understood, routine, and conventional activities as evidenced by MPEP 2106.05(d)(II) (describing conventional activities that include transmitting and receiving data over a network, electronic recordkeeping, storing and retrieving information from memory, electronically scanning or extracting data from a physical document, and a web browser’s back and forward button functionality). Therefore, the combination and arrangement of the above-identified additional elements, when analyzed under Step 2B, likewise fails to support a conclusion that the claims amount to significantly more than the abstract idea of generating a risk assessment based on a request from a user and determining remediation actions for the user.
Claims 1-3, 6-16, and 19-24 are accordingly rejected under 35 USC 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Allowable Subject Matter
Claims 1-3, 6-16, and 19-24 are allowable over the prior art; however, these claims remain rejected under 35 USC 101.
The closest prior art to the invention includes Venkataraman et al. (US 2023/0334145), disclosing a secure modular machine learning platform; Jeffords et al. (US 2022/0345457), disclosing anomaly-based mitigation of access request risk; Zimmermann et al. (US 2020/0137097), disclosing a system and method for securing an enterprise computing environment; and Lawrence et al. (US 2022/0405694), disclosing a method, apparatus, and computer-readable medium for managing workforces with rotating shifts. None of the prior art of record, taken individually or in combination, teaches, inter alia, the claimed invention as detailed in the independent claims: "process the request with an anomaly detector to generate a risk assessment, the anomaly detector generating the risk assessment using a machine learning model trained to predict a likelihood that an adverse event will result from completion of actions by authenticated users, the machine learning model having been trained with a curated set of examples comprising accurate risk assessment examples and inaccurate risk assessment examples, wherein those of the inaccurate risk assessment examples requiring unwarranted remediation actions are given an elevated importance in the curated set." The reason for withdrawing the 35 USC 102/103 rejection of claims 1-3, 6-16, and 19-24 in the instant application is that the prior art of record fails to teach the overall combination as claimed. Therefore, it would not have been obvious to one of ordinary skill in the art to modify the prior art to meet the combination above without unequivocal hindsight, and one of ordinary skill would have had no reason to do so. Upon further searching, the examiner could not identify any prior art teaching these limitations. The prior art of record, alone or in combination, neither anticipates, reasonably teaches, nor renders obvious the Applicant's claimed invention.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Singh US 2024/0273211: Threat mitigation system and method.
Austraat US 2024/0232765: Audio signal processing and dynamic natural language understanding.
Gandhi et al. US 2021/0218744: Extending secondary authentication for fast roaming between service provider and enterprise network.
Swirsky WO 2023/095053: Enforcement of enterprise browser use.
Pronski et al. US 2022/0222737: System and method for optimized transfer of digital assets.
Vlahovic et al. US 2021/0352097: Third-party application risk assessment in an authorization service.
Butler US 2021/0203668: Systems and methods for malicious client detection through property analysis.
Abdelaziz et al. US 2020/0089848: Supervised learning system for identity compromise risk computation.
Wang et al. US 2019/0116193: Risk assessment for network access control through data analytics.
Duchin et al. US 9,690,937: Recommending a set of malicious activity detection rules in an automated, data-driven manner.
Wang, Ding, et al. "The request for better measurement: A comparative evaluation of two-factor authentication schemes." Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security, 2016.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAMZEH OBAID whose telephone number is (313)446-4941. The examiner can normally be reached M-F 8 am-5 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia Munson can be reached at (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAMZEH OBAID/Primary Examiner, Art Unit 3624