DETAILED ACTION
Claims 1-20 are pending and have been examined.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 1 is objected to because of the following informalities: the claim fails to define the acronym “ML”. Appropriate correction is required. For examination purposes, the Examiner will assume the acronym represents machine learning, as seen in claim 12.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims are directed to an abstract idea without significantly more.
Here, under Step 1 of the Alice analysis, system claims 1-11 and 20 are directed to a plurality of engines, and computer program product claims 12-19 are directed to executable instructions. Thus, the claims are directed to a machine and a manufacture, respectively.
Under Step 2A Prong One of the analysis, the claimed invention is directed to an abstract idea without significantly more. The claims recite risk reduction, including monitoring, receiving, generating, and reducing steps.
The limitations of monitoring, receiving, generating, and reducing constitute a process that, under its broadest reasonable interpretation, covers concepts of organizing human activity, but for the recitation of generic computer components.
Specifically, the claim elements recite:
monitoring one or more authorized communications channels and detecting references in communications that use the one or more authorized communications channels to uses of unauthorized communications channels by the employees to communicate with the one or more other parties;
receiving electronic alerts to detections of references by employees in communications over authorized communications channels to uses of one or more of unauthorized communications channels by the employees to communicate with the one or more other parties;
generating, for each of a plurality of the communications that includes one of the detected references: a first risk score based on one or more first risk-related factors that are related to the respective one of the communications and are identified by the scoring engine for risk scoring of unauthorized communications channels usage; and a second risk score for the respective one of the communications that is based on one or more second risk-related factors that relate to employee specific information that is specific to a respective one of the employees who engaged in the respective one of the communications; wherein the authorized communications channels comprise a first group of communications channels that the entity allows the employees to use for entity-related communications, and the unauthorized communications channels comprise a second group of communications channels that the entity does not authorize the employees to use for entity-related communications; and
reducing risk when the first risk score or the second risk score is at a predetermined risk level or is within a predetermined risk score range.
That is, other than reciting a plurality of engines and a processor on a first computer system, the claim limitations merely cover managing personal behavior or relationships and interactions between people, thus falling within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
Under Step 2A Prong Two, the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This judicial exception is not integrated into a practical application. The claims include a plurality of engines and a processor on a first computer system. The plurality of engines and the processor on a first computer system are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. As a result, the claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of a plurality of engines and a processor on a first computer system amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
None of the dependent claims recite additional limitations that are sufficient to amount to significantly more than the abstract idea. Claims 2 and 3 further describe generating the first risk score or the second risk score. Claims 4-6 further describe the one or more first risk-related factors on which the first risk score is based, one or both of the first risk score and the second risk score, and the one or more second risk-related factors that relate to the employee specific information. Claims 7 and 8 further describe the first or second risk score, and one or more of the unauthorized communications channels. Claims 9-11 further describe the operation to reduce the risk, the entity and the risk, and the first risk score or the second risk score. Similarly, dependent claims 13-19 recite additional details that further restrict/define the abstract idea. A more detailed abstract idea remains an abstract idea.
Under Step 2B of the analysis, the claims include, inter alia, a plurality of engines and a processor on a first computer system.
As discussed with respect to Step 2A Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in 2B, i.e., mere instructions to apply an exception on a generic computer cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
There is no improvement to another technology or technical field, or to the functioning of the computer itself. Moreover, considered individually, there are no meaningful limitations beyond generally linking the abstract idea to a particular technological environment, i.e., implementation via a computer system. Further, taken as a combination, the limitations add nothing more than what is present when the limitations are considered individually. There is no indication that the combination provides any effect regarding the functioning of the computer or any improvement to another technology.
In addition, as discussed in paragraphs 0054 and 0055 of the specification, “FIG. 1 shows an illustrative block diagram of system 100 that includes computer 101. Computer 101 may alternatively be referred to herein as an "engine," "server" or a "computing device." Computer 101 may be any computing device described herein, such as the computing devices running on a computer, smart phones, smart cars, smart cards, and any other mobile device described herein. Elements of system 100, including computer 101, may be used to implement various aspects of the systems and methods disclosed herein. Computer 101 may have a processor 103 for controlling the operation of the device and its associated components, and may include RAM 105, ROM 107, input/output circuit 109, and a non-transitory or non-volatile memory 115. Machine-readable memory may be configured to store information in machine-readable data structures. Other components commonly used for computers, such as EEPROM or Flash memory or any other suitable components, may also be part of the computer 101.”
As such, this disclosure supports the finding that no more than a general purpose computer, performing generic computer functions, is required by the claims.
Viewed as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent eligible application of the abstract idea such that the claim(s) amounts to significantly more than the abstract idea itself. Therefore, the claim(s) are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. See Alice Corporation Pty. Ltd. v. CLS Bank Int’l et al., No. 13-298 (U.S. June 19, 2014).
Claims 1-11 and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Independent claims 1 and 20 recite a system defined merely by one or more engines, which are deemed software, with no accompanying hardware components (e.g., a physical system including, inter alia, a processor, server, GUI, etc.). Dependent claims 2-11 are rejected based upon the same rationale.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Miyake et al (US 20240121242 A1).
As per claim 1, Miyake et al disclose an artificial intelligence (AI) communications risk reduction system for reducing risk to an entity caused by employees of the entity communicating with one or more other parties using unauthorized communications channels (i.e., system 202 which is configured with insider risk management software 302 to provide functionality 210, ¶ 0046), wherein the system comprises:
a monitoring engine that is configured to monitor one or more authorized communications channels and detect references in communications that use the one or more authorized communications channels to uses of unauthorized communications channels by the employees to communicate with the one or more other parties (i.e., an impact risk 214 of an authorized user 104 of the managed computing system, and adjusting 504 a cybersecurity characteristic 304 of the managed computing system based on at least the impact risk. In a variation, the digital memory is not external to the managed computing system. In some embodiments, the impact risk includes a digital value which represents an impact 212 of unauthorized activity 410 of the authorized user or future unauthorized activity 410 of the authorized user or both, ¶ 0055. Tools 122 include software apps on mobile devices 102 or workstations 102 or servers 102, as well as APIs, browsers, or webpages and the corresponding software for protocols, ¶ 0035, wherein 122 tools and applications, e.g., version control systems, cybersecurity tools, software development tools, office productivity tools, social media tools, diagnostics, browsers, games, email and other communication tools, ¶ 0296); and
a scoring engine that is configured to use AI/ML (i.e., Some embodiments include a machine learning model 526 or a statistical model, or both. In some, machine learning model features or statistical model features or both include a count of unique files accessed by the users, ¶ 0099) to generate, for each of a plurality of the communications that includes one of the detected references:
a first risk score based on one or more first risk-related factors that are related to the respective one of the communications and are identified by the scoring engine for risk scoring of unauthorized communications channels usage (i.e., the impact risk includes a digital value which represents an impact 212 of unauthorized activity 410 of the authorized user or future unauthorized activity 410 of the authorized user or both, ¶ 0055); and
a second risk score for the respective one of the communications that is based on one or more second risk-related factors that relate to employee specific information that is specific to a respective one of the employees who engaged in the respective one of the communications (i.e., the impact risk is computed 502 based on at least an authorized user influence pillar value 318, 310 and an authorized user access pillar value 308, 310. In some, the authorized user influence pillar value 318 (also referred to as the influence pillar) represents an extent of influence 316 of the authorized user within the managed computing system 216 or within an organization 424 which utilizes the managed computing system, ¶ 0056);
wherein the authorized communications channels comprise a first group of communications channels that the entity allows the employees to use for entity-related communications, and the unauthorized communications channels comprise a second group of communications channels that the entity does not authorize the employees to use for entity-related communications (i.e., social media influence data is Boolean (e.g., user is or is not authorized to post on behalf of the organization), and in some it is more fine-grained (e.g., user has N thousand followers, user has posted on behalf of the organization an average of K times in the past 30 days, user has sent or received N communications in the past week, user mentioned the organization by name N times in the past month in public postings not necessarily speaking on behalf of the organization, user mentioned the organization or an organization officer or an organization product by name N times in the past six months, etc.), ¶ 0115); and
wherein the system is configured to cause an operation to be performed to reduce risk when the first risk score or the second risk score is at a predetermined risk level or is within a predetermined risk score range (i.e., marking 1004 the authorized user with a potential high impact user designation 408 based on the impact risk 214 exceeding a specified threshold 430, and persisting 1016 the designation after the impact risk is below the specified threshold, ¶ 0078).
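For illustration only, and not as part of the claim mapping above, the two-score, threshold-triggered flow recited in claim 1 can be sketched as follows. All names (assess_communication, should_reduce_risk, ALERT_THRESHOLD), the scoring function, and the threshold value are hypothetical assumptions, not taken from the application or from Miyake:

```python
# Illustrative sketch of claim 1's scoring-and-threshold flow.
# All identifiers and the threshold value are hypothetical.

ALERT_THRESHOLD = 70  # stands in for the claimed "predetermined risk level"

def assess_communication(channel_factors, employee_factors, score_fn):
    """Generate the two claimed risk scores for one communication.

    channel_factors  -- factors related to the communication itself
                        (the claimed first risk-related factors)
    employee_factors -- employee-specific factors
                        (the claimed second risk-related factors)
    score_fn         -- any scoring function mapping factors to a score
    """
    first_score = score_fn(channel_factors)
    second_score = score_fn(employee_factors)
    return first_score, second_score

def should_reduce_risk(first_score, second_score, threshold=ALERT_THRESHOLD):
    """Trigger a risk-reduction operation when either score reaches the level."""
    return first_score >= threshold or second_score >= threshold
```

Under this reading, either score independently crossing the predetermined level suffices to trigger the risk-reduction operation (alert, block, reminder, etc.).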
As per claim 2, Miyake et al disclose the scoring engine is configured to further use behavioral analytics to generate the first risk score or the second risk score (i.e., anomaly detection algorithms automatically determine 518 concerning behavior despite whatever normal looks like for each tenant, ¶ 0184).
As per claim 3, Miyake et al disclose the scoring engine is configured to further use sentiment analysis to generate the first risk score or the second risk score (i.e., The approach in some embodiments is also different from approaches that look for rapid changes in user behavior, e.g., a detector that alerts because on the last day of work a user copies 5000 files to USB drive might not detect 518 exfiltration activity of two or three files every day or two over different channels (email, USB, shared drive) that occurs for several months before termination of the user's account, ¶ 0184).
As per claim 4, Miyake et al disclose the one or more first risk-related factors on which the first risk score is based comprise one or more of a type of one of the unauthorized communications channels that has been referenced, a first risk history at the entity for the type of the unauthorized communications channels, a previous escalation for the type of the unauthorized communications channels, a participant count for the one or more other parties to the respective one of the communications, or information regarding the one or more other parties (i.e., Historical attack information about the user, e.g., whether the user was a target of phishing or other attacks. In some embodiments, this attack information is Boolean (has or has not been a target), and in some it is more fine-grained (e.g., has been a target N times in past 6 months, or has been a target N times since the most recent job role change), ¶ 0114).
As per claim 5, Miyake et al disclose one or both of the first risk score and the second risk score that are generated are further based on one or more of a time of day, a day of a week, month or year, or a reporting cycle for the entity (i.e., some reuse sensitivity label and sensitivity type aggregates, and re-aggregate them over a period, e.g., the past 30 days. In a particular example, an embodiment filters 1010 and reads only activities that have sensitivity label or sensitivity type information for all users in the last 30 days, ¶ 0133).
As per claim 6, Miyake et al disclose the one or more second risk-related factors that relate to the employee specific information comprises one or more of a job title, a job description, job responsibilities, a job location, years of service, seniority, a regulated status of the respective employee at the entity, employee access to confidential information, or a history of interactions by the respective employee with the one or more other parties (i.e., the influence signal 320 represents at least one of the following: a position 1218 of the authorized user within a hierarchy of an organization; a title or a role 1218 of the authorized user within an organization, ¶ 0067).
As per claim 7, Miyake et al disclose the first risk score is further based on the employee specific information or the second risk score is further based on the first risk score (i.e., identify users in high-risk roles who can cause more harm to an organization due to the nature of their role and access. More generally, some embodiments drive detection and prioritization of the riskiest activity in an organization by enhancing context around users, ¶ 0028).
As per claim 8, Miyake et al disclose one or more of the unauthorized communications channels comprises one of an unauthorized email account, a personal employee telephone, a text, a chat service, an instant messaging service, or social media (i.e., social media influence data is Boolean (e.g., user is or is not authorized to post on behalf of the organization), and in some it is more fine-grained (e.g., user has N thousand followers, user has posted on behalf of the organization an average of K times in the past 30 days, user has sent or received N communications in the past week, user mentioned the organization by name N times in the past month in public postings not necessarily speaking on behalf of the organization, user mentioned the organization or an organization officer or an organization product by name N times in the past six months, etc.), ¶ 0115).
As per claim 9, Miyake et al disclose the operation comprises one or more of sending an alert to the respective employee or to a manager of the employee, halting a trade that was arranged by the respective employee, blocking further communications by the respective employee, or sending a reminder to the respective employee about a policy of the entity regarding use of unauthorized communications channels (i.e., automatically adjusting 504 a cybersecurity characteristic based on at least the impact risk by doing at least one of the following: automatically boosting 536, 508 a risk score 426 in a cybersecurity tool 122 which has alerting functionality; automatically disabling 528, automatically suspending 528, or automatically deleting 528 an account 532 in a computing environment, ¶ 0073).
As per claim 10, Miyake et al disclose the entity is a financial institution, and the risk comprises a regulatory risk (i.e., Within a given organization, there are users that have roles that provide them with more authorized access to highly sensitive information or more powerful privileges than an average user. Some examples of these types of roles include members of highly confidential projects (e.g., tented projects), users who have access to pre-release financial information that is highly regulated, ¶¶ 0023-0024. Here, as elsewhere herein, it is presumed that appropriate privacy and regulatory compliance mechanisms are in place and properly utilized, ¶ 0118).
As per claim 11, Miyake et al disclose the first risk score or the second risk score is generated as follows:

Risk Score = SUM(Wi × fi) for i = 0 to n

where fi is the value of the ith risk factor that represents the risk determined for the risk factor, Wi is the weight given to the risk factor, Wifi is the value of the weight Wi multiplied by the value fi, and SUM Wifi for i = 0 to n is the value of the respective risk score being generated (i.e., calculating 512 a weighted combination by calculating a mean risk score of each algorithm (signal or pillar or both) by a tenant or organization or other entity, for each user's score finding a distance from the mean, normalizing the distance from the mean, for each risk score against a user assigning a weight, calculating a weighted risk score, computing an average of weighted risk scores, ¶ 0076, wherein comparison results from multiple peer groups are combined by summation, by weighted combination, by individual comparison to a threshold, or otherwise, in a given embodiment, ¶ 0146).
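Purely for illustration, and not as part of the record, the weighted-sum formula of claim 11 can be sketched in Python. The factor values and weights in the example are hypothetical:

```python
# Minimal sketch of the weighted-sum risk score of claim 11:
# Risk Score = SUM(Wi * fi) for i = 0 to n.
# The example factor values and weights below are hypothetical.

def weighted_risk_score(factors, weights):
    """Compute SUM(Wi * fi) over all risk factors."""
    if len(factors) != len(weights):
        raise ValueError("each risk factor needs exactly one weight")
    return sum(w * f for w, f in zip(weights, factors))

# Example: three risk factors fi = 8, 4, 6 with weights Wi = 5, 3, 2
score = weighted_risk_score([8, 4, 6], [5, 3, 2])  # 5*8 + 3*4 + 2*6 = 64
```

As described in Miyake at ¶ 0076, individual per-factor scores would first be normalized (e.g., as distances from a tenant-level mean) before weighting; the sketch shows only the final weighted combination.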
Claims 12-19 are rejected based upon the same rationale as the rejection of claims 1-6, 8, 9, and 11, respectively, since they are the computer program product claims corresponding to the system claims.
Claim 20 is rejected based upon the same rationale as the rejection of claims 1 and 9, since it is the corresponding substantially similar system claim.
Conclusion
The prior art made of record and not relied upon, listed on the attached PTO-892, is considered pertinent to applicant's disclosure and discloses employee risk analysis.
- Roy et al ("Sustainable response system building against insider-led cyber frauds in banking sector: a machine learning approach") disclose a conceptual framework that can be used to ensure a sustainable cyber fraud mitigation ecosystem.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDRE D BOYCE whose telephone number is (571)272-6726. The examiner can normally be reached M-F 10a-6:30p.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rutao (Rob) Wu can be reached at (571) 272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDRE D BOYCE/Primary Examiner, Art Unit 3623 December 26, 2025