DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to the applicant’s filing on 07/12/2024. Claims 1-20 are pending. Claims 1, 9, and 17 are independent.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 07/12/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 3-6, 9-11, 13, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lambotte (US PGPub No. 2024/0265114; hereinafter “Lambotte”), in view of KARPOVSKY et al. (US PGPub No. 2026/0006039; hereinafter “KARPOVSKY”).
As per claim 1: Lambotte discloses a method, comprising:
[generating, using an artificial intelligence (AI) model, a first object embedding] of a first threat intelligence (TI) data object (entity data 112 and/or any data/information described in this disclosure may be present as a vector. As used in this disclosure, a "vector" is a data structure that represents one or more quantitative values and/or measures of home resource data. A vector may be represented as an n-tuple of values, where n is one or more values, as described in further detail below; a vector may alternatively or additionally be represented as an element of a vector space [Lambotte ¶ 0042]), wherein the first TI data object comprises first one or more cybersecurity attributes of a business entity ("cybersecurity" is a practice of protecting systems, networks, and/or programs from "cyber-attacks," which are actions from a malicious entity that aimed at accessing, changing, or otherwise destroying sensitive information, properties, or otherwise business processes from the systems, networks, and/or programs [Lambotte ¶ 0021, Fig. 1, Fig. 7]; With continued reference to FIG. 1, processor 104 is configured to receive entity data 112 from an entity 116. As used in this disclosure, "entity data" are data related to an entity 116. "Entity," as described herein, is an independent existence … In a non-limiting example, entity 116 may include a team, small business, organization, company, and the like thereof. [Lambotte ¶ 0037]; entity data 112 of entity 116 may include personal information. In a non-limiting example, personal information may include identity information of entity 116 such as, without limitation, name, age, gender, address, social security number, and the like thereof. In another non-limiting example, personal information may include account information of entity 116 such as, without limitation, account login credentials, credit card number, billing address, and the like thereof [Lambotte ¶ 0038-0041]);
[obtaining a plurality of second object embeddings, wherein:
each second object embedding represents a respective second TI data object], and the respective second TI data object includes second one or more cybersecurity attributes of a cybersecurity threat (Cybersecurity related data 120 may include personal information and/or device information described above. In some embodiments, cybersecurity related data 120 may include data that potentially identifies one or more cybersecurity threats described above. In a non-limiting example, cybersecurity related data may include a plurality of system configurations such as, without limitation, firewall configurations that allow for the presence of security vulnerabilities. In another non-limiting example, cybersecurity related data may include knowledge of entity 116 understanding and/or following security standards [¶ 0040]; entity data 112 of entity 116 may include personal information. In a non-limiting example, personal information may include identity information of entity 116 such as, without limitation, name, age, gender, address, social security number, and the like thereof. In another non-limiting example, personal information may include account information of entity 116 such as, without limitation, account login credentials, credit card number, billing address, and the like thereof [Lambotte ¶ 0038-0041]);
[for each second object embedding, generating a respective similarity value reflecting a similarity between the first object embedding and the respective second object embedding];
[ranking, based on the similarity values, the plurality of second TI data objects]; and
[identifying, based on the ranking, a subset of the plurality of the second TI data objects that are relevant to the first TI data object].
Lambotte discloses the claimed subject matter as discussed above but does not explicitly disclose generating, using an artificial intelligence (AI) model, a first object embedding; obtaining a plurality of second object embeddings, wherein: each second object embedding represents a respective second TI data object; for each second object embedding, generating a respective similarity value reflecting a similarity between the first object embedding and the respective second object embedding; ranking, based on the similarity values, the plurality of second TI data objects; identifying, based on the ranking, a subset of the plurality of the second TI data objects that are relevant to the first TI data object. However, KARPOVSKY teaches generating, using an artificial intelligence (AI) model, a first object embedding (The models used to generate embedding representations of attack campaign steps are used to generate embeddings of security incident signals. An attack campaign step is determined to be matched to a security incident signal when the corresponding embeddings are within a defined distance. [¶ 0030]; Embedding generator model 544, which may also be a transformer-based large language model or any other type of machine learning model, converts security incident signals to security incident signal embeddings 520 [¶ 0052]; A machine learning model generates embeddings for attack campaign steps, security incident signals, and telemetry query responses [¶ 0003]); obtaining a plurality of second object embeddings, wherein: each second object embedding represents a respective second TI data object (obtaining a plurality of attack campaign step embeddings that represent a plurality of attack campaign steps of an attack campaign, wherein the plurality of attack campaign steps of the attack campaign are inferred from a text description of the attack campaign [¶ 0080]); for each second object embedding, generating a respective similarity value reflecting a similarity between the first object embedding and the respective second object embedding (matching the security incident signal embedding to one of the plurality of attack campaign step embeddings; determining that an instance of the attack campaign has begun based in part on a determination that a threshold of attack campaign step embeddings match individual security incident signal embeddings [¶ 0080]; Match engine 540 may compute a distance within embedding space 546 to determine whether security incident signal embeddings 520 can be ascribed to attack campaign step embeddings 530. A distance between embeddings may be a Euclidian distance, a cosine similarity, or other measure of distance between two vectors. A security incident signal matches with an attack campaign step if the distance between their embedding vectors is less than a defined amount [¶ 0053, Examiner’s Note: computes distance within embedding space for each embedding]); ranking, based on the similarity values, the plurality of second TI data objects (matching the security incident signal embedding to one of the plurality of attack campaign step embeddings [¶ 0080, Examiner’s Note: matching is interpreted as ranking]); identifying, based on the ranking, a subset of the plurality of the second TI data objects that are relevant to the first TI data object (A security incident signal matches with an attack campaign step if the distance between their embedding vectors is less than a defined amount [¶ 0053, Examiner’s Note: one or more matches is a subset]; determining that an instance of the attack campaign has begun based in part on a determination that a threshold of attack campaign step embeddings match individual security incident signal embeddings [¶ 0080]). Lambotte and KARPOVSKY are analogous art because they are from the same field of endeavor of cybersecurity data analysis. Therefore, based on Lambotte in view of KARPOVSKY, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of KARPOVSKY to the system of Lambotte in order to efficiently compare the cybersecurity data through embedding representations. Hence, it would have been obvious to combine the references above to obtain the invention as specified in the instant claim.
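As an illustrative aside (not part of the record of this action), the embed-score-rank-select sequence recited in claim 1, as mapped above to KARPOVSKY's match engine, can be sketched as follows; the function name, array shapes, and top-k cutoff are assumptions for illustration only, not drawn from either reference:

```python
import numpy as np

def rank_by_similarity(first_embedding, second_embeddings, top_k=3):
    """Rank candidate embeddings by cosine similarity to a query
    embedding and return the indices of the top_k most relevant
    (illustrative sketch; names and shapes are assumptions)."""
    # Normalize the query and each candidate to unit length
    q = first_embedding / np.linalg.norm(first_embedding)
    m = second_embeddings / np.linalg.norm(second_embeddings, axis=1, keepdims=True)
    sims = m @ q                      # one similarity value per second embedding
    ranking = np.argsort(sims)[::-1]  # rank candidates, most similar first
    return ranking[:top_k]            # subset identified as relevant
```

For example, a query vector aligned with the first of three candidates would rank that candidate first and return it in the relevant subset.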
As per claim 3: Lambotte in view of KARPOVSKY teaches all the limitations of claim 1. Furthermore, KARPOVSKY discloses wherein: a second TI data object of the plurality of second TI data objects corresponds to a threat actor (obtaining a plurality of attack campaign step embeddings that represent a plurality of attack campaign steps of an attack campaign, wherein the plurality of attack campaign steps of the attack campaign are inferred from a text description of the attack campaign [KARPOVSKY ¶ 0080]; decomposing an attack campaign description into attack campaign steps. Threat intelligence corpus 402 represents a collection of attack campaign descriptions, optionally including indications of compromise or other structured or semi-structured data. Attack campaign 410 is one example of an entry in threat intelligence corpus 402. Attack campaign 410 may include text description 412 and indicators of compromise 414 [KARPOVSKY ¶ 0045-0047]; Indicators of compromise 414 include malicious IP addresses 420 and malware list 422. Malicious IP addresses 420 may be IP addresses, IP address ranges, or other collections of IP addresses that security researchers have found to be involved in cyber-attacks. Similarly, malware list 422 is a list of viruses, spyware, key loggers, and other types of software used in a cyber-attack [KARPOVSKY ¶ 0047]); and the second one or more cybersecurity attributes identify at least one of an industry targeted by the threat actor, or a location targeted by the threat actor (Example properties of data transfer entry 140 in data transfer telemetry log 124 include file path 142, source IP 144, destination IP 146, and transfer size 148. These properties of login attempt 130 and data transfer 140 are illustrative and not limiting [KARPOVSKY ¶ 0040]).
As per claim 4: Lambotte in view of KARPOVSKY teaches all the limitations of claim 3. Furthermore, Lambotte and KARPOVSKY disclose wherein the second one or more cybersecurity attributes further comprises at least one of a motivation of the threat actor, or an indication of whether the threat actor utilizes ransomware (Indicators of compromise 414 include malicious IP addresses 420 and malware list 422. Malicious IP addresses 420 may be IP addresses, IP address ranges, or other collections of IP addresses that security researchers have found to be involved in cyber-attacks. Similarly, malware list 422 is a list of viruses, spyware, key loggers, and other types of software used in a cyber-attack [KARPOVSKY ¶ 0047]; In a nonlimiting example, cybersecurity threats may include threats from a "ransomware" which is a type of malicious software designed to extort properties by blocking access to files, systems, networks, and the like until ransom is paid. In some cases, paying ransom may not guarantee that access will be restored or recovered [Lambotte ¶ 0021]; comparing entity data 112 with cybersecurity metric 132 may include identifying at least one cybersecurity threat classification 136 as a function of the comparison. As used in this disclosure, a "cybersecurity threat classification" is a category of cybersecurity threat. Cybersecurity threat classification 136 may include, without limitation, "Phishing," "social engineering," "ransomware," "malware," and the like thereof [Lambotte ¶ 0056]).
As per claim 5: Lambotte in view of KARPOVSKY teaches all the limitations of claim 1. Furthermore, Lambotte and KARPOVSKY disclose wherein a second TI data object of the plurality of second TI data objects corresponds to: a threat actor (obtaining a plurality of attack campaign step embeddings that represent a plurality of attack campaign steps of an attack campaign, wherein the plurality of attack campaign steps of the attack campaign are inferred from a text description of the attack campaign [KARPOVSKY ¶ 0080]; decomposing an attack campaign description into attack campaign steps. Threat intelligence corpus 402 represents a collection of attack campaign descriptions, optionally including indications of compromise or other structured or semi-structured data. Attack campaign 410 is one example of an entry in threat intelligence corpus 402. Attack campaign 410 may include text description 412 and indicators of compromise 414 [KARPOVSKY ¶ 0045-0047]; Indicators of compromise 414 include malicious IP addresses 420 and malware list 422. Malicious IP addresses 420 may be IP addresses, IP address ranges, or other collections of IP addresses that security researchers have found to be involved in cyber-attacks. Similarly, malware list 422 is a list of viruses, spyware, key loggers, and other types of software used in a cyber-attack [KARPOVSKY ¶ 0047]; Cybersecurity related data 120 may include personal information and/or device information described above. In some embodiments, cybersecurity related data 120 may include data that potentially identifies one or more cybersecurity threats described above. In a non-limiting example, cybersecurity related data may include a plurality of system configurations such as, without limitation, firewall configurations that allow for the presence of security vulnerabilities [Lambotte ¶ 0040]); a cybersecurity vulnerability (Cybersecurity related data 120 may include personal information and/or device information described above. In some embodiments, cybersecurity related data 120 may include data that potentially identifies one or more cybersecurity threats described above. In a non-limiting example, cybersecurity related data may include a plurality of system configurations such as, without limitation, firewall configurations that allow for the presence of security vulnerabilities [Lambotte ¶ 0040]); a malware family (Indicators of compromise 414 include malicious IP addresses 420 and malware list 422. Malicious IP addresses 420 may be IP addresses, IP address ranges, or other collections of IP addresses that security researchers have found to be involved in cyber-attacks. Similarly, malware list 422 is a list of viruses, spyware, key loggers, and other types of software used in a cyber-attack [KARPOVSKY ¶ 0047]); or a cybersecurity report.
As per claim 6: Lambotte in view of KARPOVSKY teaches all the limitations of claim 1. Furthermore, Lambotte and KARPOVSKY disclose wherein generating the respective similarity value comprises calculating a cosine similarity between the first object embedding and a respective second object embedding of the plurality of second object embeddings (A distance between embeddings may be a Euclidian distance, a cosine similarity, or other measure of distance between two vectors. A security incident signal matches with an attack campaign step if the distance between their embedding vectors is less than a defined amount [KARPOVSKY ¶ 0053]; Vectors may be more similar where their directions are more similar, and more different where their directions are more divergent, for instance as measured using cosine similarity as computed using a dot product of two vectors [Lambotte ¶ 0042]).
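For illustration only, the cosine similarity that both references describe as computed from a dot product of two vectors can be sketched as follows (a minimal sketch, not drawn from either reference):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity of two vectors via their dot product,
    normalized by the product of the vector magnitudes (sketch)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)
```

Parallel vectors score 1.0 and orthogonal vectors score 0.0, matching the "more similar where their directions are more similar" characterization quoted from Lambotte ¶ 0042.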
As per claim 9: Lambotte discloses a system, comprising: a memory (Computer system 900 includes a processor 904 and a memory 908 that communicate with each other, and with other components, via a bus 912 [¶ 0163, ¶ 0165, Fig. 9]; Such software may be a computer program product that employs a machine-readable storage medium … As used herein, a machine-readable storage medium does not include transitory forms of signal transmission [¶ 0160]); and a processing device, coupled to the memory, configured to perform operations comprising (Computer system 900 includes a processor 904 and a memory 908 that communicate with each other, and with other components, via a bus 912 [¶ 0163, ¶ 0165, Fig. 9]): The remaining limitations of claim 9 are substantially similar to those of claim 1 above, and are therefore likewise rejected.
As per claim 10: Lambotte in view of KARPOVSKY teaches all the limitations of claim 9. Furthermore, Lambotte discloses wherein the first one or more cybersecurity attributes of the business entity identify at least one of an industry in which the business entity operates, or an operating location of the business entity ("cybersecurity" is a practice of protecting systems, networks, and/or programs from "cyber-attacks," which are actions from a malicious entity that aimed at accessing, changing, or otherwise destroying sensitive information, properties, or otherwise business processes from the systems, networks, and/or programs [Lambotte ¶ 0021, Fig. 1, Fig. 7]; With continued reference to FIG. 1, processor 104 is configured to receive entity data 112 from an entity 116. As used in this disclosure, "entity data" are data related to an entity 116. "Entity," as described herein, is an independent existence … In a non-limiting example, entity 116 may include a team, small business, organization, company, and the like thereof. [Lambotte ¶ 0037]; entity data 112 of entity 116 may include personal information. In a non-limiting example, personal information may include identity information of entity 116 such as, without limitation, name, age, gender, address, social security number, and the like thereof. In another non-limiting example, personal information may include account information of entity 116 such as, without limitation, account login credentials, credit card number, billing address, and the like thereof [Lambotte ¶ 0038-0041]).
As per claim 11: Lambotte in view of KARPOVSKY teaches all the limitations of claim 9. Furthermore, Lambotte discloses wherein the first one or more cybersecurity attributes of the business entity identify attack surface information of the business entity (Cybersecurity related data 120 may include personal information and/or device information described above. In some embodiments, cybersecurity related data 120 may include data that potentially identifies one or more cybersecurity threats described above. In a non-limiting example, cybersecurity related data may include a plurality of system configurations such as, without limitation, firewall configurations that allow for the presence of security vulnerabilities [Lambotte ¶ 0040]; entity data 112 of entity 116 may include personal information. In a non-limiting example, personal information may include identity information of entity 116 such as, without limitation, name, age, gender, address, social security number, and the like thereof. In another non-limiting example, personal information may include account information of entity 116 such as, without limitation, account login credentials, credit card number, billing address, and the like thereof [Lambotte ¶ 0038-0041]).
As per claim 13: Lambotte in view of KARPOVSKY teaches all the limitations of claim 9. Furthermore, KARPOVSKY discloses wherein: generating the respective similarity value is further based on a third object embedding (matching the security incident signal embedding to one of the plurality of attack campaign step embeddings; determining that an instance of the attack campaign has begun based in part on a determination that a threshold of attack campaign step embeddings match individual security incident signal embeddings [KARPOVSKY ¶ 0080]; Match engine 540 may compute a distance within embedding space 546 to determine whether security incident signal embeddings 520 can be ascribed to attack campaign step embeddings 530. A distance between embeddings may be a Euclidian distance, a cosine similarity, or other measure of distance between two vectors. A security incident signal matches with an attack campaign step if the distance between their embedding vectors is less than a defined amount [KARPOVSKY ¶ 0053]); the third object embedding represents a third TI data object (obtaining a plurality of attack campaign step embeddings that represent a plurality of attack campaign steps of an attack campaign, wherein the plurality of attack campaign steps of the attack campaign are inferred from a text description of the attack campaign [KARPOVSKY ¶ 0080]); and the generating the respective similarity value comprises calculating a first cosine distance between the first object embedding and the respective second object embedding (matching the security incident signal embedding to one of the plurality of attack campaign step embeddings; determining that an instance of the attack campaign has begun based in part on a determination that a threshold of attack campaign step embeddings match individual security incident signal embeddings [KARPOVSKY ¶ 0080]; Match engine 540 may compute a distance within embedding space 546 to determine whether security incident signal embeddings 520 can be ascribed to attack campaign step embeddings 530. A distance between embeddings may be a Euclidian distance, a cosine similarity, or other measure of distance between two vectors. A security incident signal matches with an attack campaign step if the distance between their embedding vectors is less than a defined amount [KARPOVSKY ¶ 0053]) and a second cosine distance between the first object embedding and the third object embedding (Match engine 540 may compute a distance within embedding space 546 to determine whether security incident signal embeddings 520 can be ascribed to attack campaign step embeddings 530. A distance between embeddings may be a Euclidian distance, a cosine similarity, or other measure of distance between two vectors. A security incident signal matches with an attack campaign step if the distance between their embedding vectors is less than a defined amount. As illustrated, phishing e-mail embedding 522 matches 542 with phishing attack step embedding 532. Similarly, malware download embedding 526 matches 546 with malware drop step embedding 536 and suspicious process execution embedding 528 matches 548 with malware execution with persistence step embedding 538. In some configurations, matching three out of four attack campaign steps is within threshold 548, such that match engine 540 generates security alert 550 [KARPOVSKY ¶ 0053]).
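Purely for illustration, KARPOVSKY's threshold-based matching (¶ 0053), in which one embedding matches a candidate when the cosine distance between their vectors is less than a defined amount, can be sketched as follows; the threshold value and function names are assumptions, not drawn from the reference:

```python
import math

def cosine_distance(u, v):
    # 1 minus cosine similarity, computed via the dot product
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def match_within(first, candidates, defined_amount=0.2):
    """Return indices of candidate embeddings whose cosine distance to
    `first` is less than the defined amount (threshold is illustrative)."""
    return [i for i, c in enumerate(candidates)
            if cosine_distance(first, c) < defined_amount]
```

A query embedding is thus compared against each of several candidate embeddings (e.g., a second and a third), and only those within the defined distance are treated as matches.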
As per claim 17: Lambotte discloses a non-transitory computer-readable storage medium comprising instructions that, when executed by a processing device, cause the processing device to (Computer system 900 includes a processor 904 and a memory 908 that communicate with each other, and with other components, via a bus 912 [¶ 0163, ¶ 0165, Fig. 9]; Such software may be a computer program product that employs a machine-readable storage medium … As used herein, a machine-readable storage medium does not include transitory forms of signal transmission [¶ 0160]): The remaining limitations of claim 17 are substantially similar to those of claim 1 above, and are therefore likewise rejected.
As per claim 18: Lambotte in view of KARPOVSKY teaches all the limitations of claim 17. Furthermore, KARPOVSKY discloses wherein the first TI data object corresponds to at least one of a cybersecurity alert, or a cyberattack campaign (The models used to generate embedding representations of attack campaign steps are used to generate embeddings of security incident signals. An attack campaign step is determined to be matched to a security incident signal when the corresponding embeddings are within a defined distance. [KARPOVSKY ¶ 0030]; Embedding generator model 544, which may also be a transformer-based large language model or any other type of machine learning model, Converts security incident signals to security incident signal embeddings 520 [KARPOVSKY ¶ 0052]; A machine learning model generates embeddings for attack campaign steps, security incident signals, and telemetry query responses [KARPOVSKY ¶ 0003]; Attack campaign 410 may include text description 412 and indicators of compromise 414 [KARPOVSKY ¶ 0045]).
As per claim 19: Lambotte in view of KARPOVSKY teaches all the limitations of claim 17. Furthermore, Lambotte and KARPOVSKY disclose wherein: the first TI data object corresponds to a threat actor (Cybersecurity related data 120 may include personal information and/or device information described above. In some embodiments, cybersecurity related data 120 may include data that potentially identifies one or more cybersecurity threats described above. In a non-limiting example, cybersecurity related data may include a plurality of system configurations such as, without limitation, firewall configurations that allow for the presence of security vulnerabilities. In another non-limiting example, cybersecurity related data may include knowledge of entity 116 understanding and/or following security standards [Lambotte ¶ 0040]; entity data 112 of entity 116 may include personal information. In a non-limiting example, personal information may include identity information of entity 116 such as, without limitation, name, age, gender, address, social security number, and the like thereof. In another non-limiting example, personal information may include account information of entity 116 such as, without limitation, account login credentials, credit card number, billing address, and the like thereof [Lambotte ¶ 0038-0041]); and a second TI data object of the plurality of second TI data objects corresponds to a cybersecurity report (reports of cyber attacks are often augmented with a natural language description. For example, a list of IP addresses that have been involved in an attack is an IoC that may be combined with a written description of the attack. In some cases the written description may be semi-structured, such as by presenting a list of attack campaign steps taken by an attack. Attack reports may come in the form of a blog post, documentation of a penetration testing tool, or other exposition that has a written description component [KARPOVSKY ¶ 0021]).
As per claim 20: Lambotte in view of KARPOVSKY teaches all the limitations of claim 17. Furthermore, Lambotte discloses wherein the instructions further cause the processing device to filter the subset of the plurality of second TI data objects based on one or more filter criterion indicated by a user of a security platform (receiving entity data 112 from entity 116 may include filtering entity identification data from entity data 112 using secure gateway. As used in this disclosure, "entity identification data" is data that permits the identity of an entity to whom the data applies to be reasonably inferred by either direct or indirect means. In a non-limiting example, entity identification data may include personal information of entity 116 described above, such as, without limitation, social security number (SSN), user identification number, financial account number, personal address information, phone number, and the like thereof. In some embodiments, filtering entity identification data from entity data 112 may include filtering entity identification data from entity data 112 as a function of security gateway. In a non-limiting example, secure gateway may iterate an entity data collection, wherein the entity data collection may include a plurality of entity data 116 associated with a data description [Lambotte ¶ 0054]; In a nonlimiting example, entity device may be a computer and/or smart phone operated by an entity (i.e., user) in a remote location [Lambotte ¶ 0077, Fig. 6]).
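For illustration only, the user-directed filtering recited in claim 20 can be sketched as follows; the dict-based object representation and function name are assumptions for illustration, not drawn from the references:

```python
def filter_subset(subset, criteria):
    """Keep only those TI data objects whose fields satisfy every
    user-indicated filter criterion (illustrative sketch)."""
    return [obj for obj in subset
            if all(obj.get(key) == value for key, value in criteria.items())]
```

For example, a user of a security platform could filter a ranked subset of relevant objects down to only those of a chosen type.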
Claims 2 and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Lambotte, in view of KARPOVSKY, in view of Mohankumar et al. (US PGPub No. 2025/0173726; hereinafter “Mohankumar”).
As per claim 2: Lambotte in view of KARPOVSKY teaches all the limitations of claim 1. Furthermore, KARPOVSKY teaches wherein using the AI model comprises: for each cybersecurity attribute of the first one or more cybersecurity attributes (obtaining a plurality of attack campaign step embeddings that represent a plurality of attack campaign steps of an attack campaign, wherein the plurality of attack campaign steps of the attack campaign are inferred from a text description of the attack campaign [KARPOVSKY ¶ 0080]), [generating, using an embedding sub-model, an intermediate embedding; combining the intermediate embeddings; and generating, using a trained AI sub-model and based on the combined intermediate embeddings, the first object embedding].
Lambotte in view of KARPOVSKY discloses the claimed subject matter as discussed above but does not explicitly disclose generating, using an embedding sub-model, an intermediate embedding; combining the intermediate embeddings; and generating, using a trained AI sub-model and based on the combined intermediate embeddings, the first object embedding. However, Mohankumar teaches generating, using an embedding sub-model, an intermediate embedding; combining the intermediate embeddings; and generating, using a trained AI sub-model and based on the combined intermediate embeddings, the first object embedding (As shown in FIG. 6, the triangulation module 118 may leverage the data sources 140 to extract three sets of attributes (e.g., user attributes 611, product attributes 612, and merchant attributes 613) for a given transaction, may generate embeddings sets representative of each set of attributes, and may combine (e.g., aggregate) the embeddings sets into a single embeddings representative of the transaction, which may be used to train a model [¶ 0040]). Lambotte in view of KARPOVSKY and Mohankumar are analogous art because they are from the same field of endeavor of machine learning. Therefore, based on Lambotte in view of KARPOVSKY in view of Mohankumar, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Mohankumar to the system of Lambotte in view of KARPOVSKY in order to utilize the representation of multiple distinct attributes in the aggregate embedding for a more nuanced detection analysis. Hence, it would have been obvious to combine the references above to obtain the invention as specified in the instant claim.
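As an illustrative aside, the per-attribute embedding and aggregation scheme that the rejection draws from Mohankumar (¶ 0040) can be sketched as follows; the stand-in embedding sub-model, vector size, and averaging choice are assumptions for illustration, not drawn from the reference:

```python
import numpy as np

def embed_attribute(attribute, dim=4):
    """Hypothetical embedding sub-model: a deterministic stand-in that
    maps an attribute string to a fixed-size vector (not a real model)."""
    rng = np.random.default_rng(sum(map(ord, attribute)))
    return rng.standard_normal(dim)

def object_embedding(attributes):
    """Generate one intermediate embedding per attribute, then combine
    them (here by averaging) into a single object embedding."""
    intermediates = [embed_attribute(a) for a in attributes]
    return np.mean(intermediates, axis=0)
```

The combined embedding could then be passed to a downstream trained sub-model, mirroring the aggregation-then-training flow quoted above.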
As per claim 14: Lambotte in view of KARPOVSKY teaches all the limitations of claim 9. The limitations of claim 14 are substantially similar to those of claim 2 above, and therefore are likewise rejected.
As per claim 15: Lambotte in view of KARPOVSKY in view of Mohankumar teaches all the limitations of claim 14. Furthermore, Lambotte discloses wherein the trained AI sub-model comprises an artificial neural network (machine-learning model 224 may be generated by creating an artificial neural network [Lambotte ¶ 0122]).
As per claim 16: Lambotte in view of KARPOVSKY in view of Mohankumar teaches all the limitations of claim 14. Furthermore, Lambotte discloses wherein the trained AI sub-model comprises a transformer (LLM may include an attention mechanism, utilizing a transformer as described further below [Lambotte ¶ 0091]).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Lambotte, in view of KARPOVSKY, in view of ZHAO et al. (US PGPub No. 2025/0272315; hereinafter “ZHAO”).
As per claim 7: Lambotte in view of KARPOVSKY teaches all the limitations of claim 1. Furthermore, KARPOVSKY discloses further comprising [training an AI sub-model of the AI model] (Embedding generator model 544, which may also be a transformer-based large language model or any other type of machine learning model, converts security incident signals to security incident signal embeddings 520 [KARPOVSKY ¶ 0052]; A machine learning model generates embeddings for attack campaign steps, security incident signals, and telemetry query responses [KARPOVSKY ¶ 0003]) [using an unsupervised learning process, wherein the unsupervised learning process comprises adjusting the AI sub-model based on a feedback action of a user of a security platform].
Lambotte in view of KARPOVSKY discloses the claimed subject matter as discussed above but does not explicitly disclose training an AI sub-model of the AI model using an unsupervised learning process, wherein the unsupervised learning process comprises adjusting the AI sub-model based on a feedback action of a user of a security platform. However, ZHAO teaches training an AI sub-model of the AI model using an unsupervised learning process, wherein the unsupervised learning process comprises adjusting the AI sub-model based on a feedback action of a user of a security platform (The custom field-specific embedding model may, in embodiments, be trained using supervised or unsupervised machine learning techniques based on labeled or unlabeled field-specific training datasets, respectively. As discussed above, a field-specific training dataset may, in embodiments, be generated by pairing portions of a pair of documents that are determined to be relevant to each other. In embodiments, the field-specific training dataset may be labeled with a relevancy score or metric indicative of a degree of relevance between the documents in the pair. The unlabeled and/or labeled field-specific training datasets may, in embodiments, be divided into a first subset training dataset for training and a second subset validation dataset for validation [¶ 0025]; the ranking model may be trained based on historical data, and/or user interactions. For instance, the ranking model may, in embodiments, be trained based on user feedback on the relevancy of historical search results [¶ 0030]). Lambotte in view of KARPOVSKY and ZHAO are analogous art because they are from the same field of endeavor of machine learning training.
Therefore, based on Lambotte in view of KARPOVSKY in view of ZHAO, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of ZHAO to the system of Lambotte in view of KARPOVSKY in order to improve training results with the aid of user feedback. Hence, it would have been obvious to combine the references above to obtain the invention as specified in the instant claim.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Lambotte, in view of KARPOVSKY, in view of ZHAO, in view of Vasseur et al. (US PGPub No. 2019/0138938; hereinafter “Vasseur”).
As per claim 8: Lambotte in view of KARPOVSKY in view of ZHAO teaches all the limitations of claim 7. Claim 8 further recites wherein the feedback action of the user comprises at least one of: [providing, to the security platform, a relevance value associated with a relationship between the first TI data object and the second TI data object; or the user engaging with a portion of a TI user interface of the security platform that corresponds to the second TI data object].
Lambotte in view of KARPOVSKY in view of ZHAO discloses the claimed subject matter as discussed above but does not explicitly disclose providing, to the security platform, a relevance value associated with a relationship between the first TI data object and the second TI data object; or the user engaging with a portion of a TI user interface of the security platform that corresponds to the second TI data object. However, Vasseur teaches providing, to the security platform, a relevance value associated with a relationship between the first TI data object and the second TI data object; or the user engaging with a portion of a TI user interface of the security platform that corresponds to the second TI data object (Once AIC 406 detects a network anomaly, AIC 406 may send a custom message to a user interface (UI) via output and visualization interface 318 for review by a user. The message may include information regarding the detected anomaly, such as the network metrics/features that triggered the anomaly detector, the anomaly score of the anomaly, and potentially a flag indicating that the anomaly is exploratory. Indeed, the main functionality of AIC 406 is to detect network anomalies and assess their relevancy by obtaining relevancy feedback from the user [¶ 0069]; AIC 406 may also keep track of the distribution of anomalies reported to the user interface via output and visualization interface 318 and try to maximize the variety of the reported anomalies. By doing so, the relevancy learning by AIC 406 can be accelerated, since the user is able to provide relevancy feedback regarding a variety of anomalies [¶ 0070]). Lambotte in view of KARPOVSKY in view of ZHAO and Vasseur are analogous art because they are from the same field of endeavor of machine learning training. 
Therefore, based on Lambotte in view of KARPOVSKY in view of ZHAO in view of Vasseur, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Vasseur to the system of Lambotte in view of KARPOVSKY in view of ZHAO in order to adjust the model training based on relevancy feedback from the user to further enhance the performance of the trained model (¶ 0077). Hence, it would have been obvious to combine the references above to obtain the invention as specified in the instant claim.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Lambotte, in view of KARPOVSKY, in view of Eggleton et al. (US PGPub No. 2021/0026952; hereinafter “Eggleton”).
As per claim 12: Lambotte in view of KARPOVSKY teaches all the limitations of claim 9. Furthermore, Lambotte discloses wherein: a second TI data object of the plurality of second TI data objects corresponds to a cybersecurity vulnerability (Cybersecurity related data 120 may include personal information and/or device information described above. In some embodiments, cybersecurity related data 120 may include data that potentially identifies one or more cybersecurity threats described above. In a non-limiting example, cybersecurity related data may include a plurality of system configurations such as, without limitation, firewall configurations that allow for the presence of security vulnerabilities [Lambotte ¶ 0040]); and the second one or more cybersecurity attributes [identifies an operating system impacted by the cybersecurity vulnerability].
Lambotte in view of KARPOVSKY discloses the claimed subject matter as discussed above but does not explicitly disclose identifies an operating system impacted by the cybersecurity vulnerability. However, Eggleton teaches identifies an operating system impacted by the cybersecurity vulnerability (The event monitoring client application 122 may further receive one or more vulnerability descriptors. The one or more vulnerability descriptors include properties of one or more known security vulnerabilities of the one or more systems. For example, the one or more properties of the one or more known security vulnerabilities may include any number or combination of: a name of a security vulnerability; application(s) associated with the security vulnerability; application versions associated with the vulnerability; operating systems associated with the vulnerability; operating system versions associated with the vulnerability; a threat level for the vulnerability; a description of the vulnerability; details of malware known to exploit the vulnerability; names of actors known to target the vulnerability; and details of actions that may be taken to address the vulnerability. These one or more vulnerability descriptors may be received with the plurality of properties, and the event descriptor where applicable, or may be received later, e.g. with the system descriptor or in response to sending an indicator of an appropriate user input to the event monitoring server 130. The event monitoring client application 122 may display the one or more properties of the one or more security vulnerabilities. The properties of the one or more security vulnerabilities may be displayed as part of the GUI, and may be displayed by any of the means by which the plurality of properties of the suspicious system event may be displayed [Eggleton ¶ 0052]). Lambotte in view of KARPOVSKY and Eggleton are analogous art because they are from the same field of endeavor of computer security.
Therefore, based on Lambotte in view of KARPOVSKY in view of Eggleton, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Eggleton to the system of Lambotte in view of KARPOVSKY in order to provide details regarding the operating systems to which the security objects correspond, along with corresponding operating system version information, for use with a security response to the vulnerabilities. Hence, it would have been obvious to combine the references above to obtain the invention as specified in the instant claim.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES P MOLES whose telephone number is (703)756-1043. The examiner can normally be reached M-F 8:00am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jung Kim can be reached at (571) 272-3804. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES P MOLES/Examiner, Art Unit 2494
/JUNG W KIM/Supervisory Patent Examiner, Art Unit 2494