Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This is in response to the original filing of 10/07/2024. Claims 1-20 are pending and have been considered below.
Priority
Acknowledgment is made of no claim of foreign priority.
Drawings
The drawings filed on 10/07/2024 are accepted.
Specification
The specification filed on 10/07/2024 is accepted.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
Claims 1-9: the system claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
The system claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. The system claim limitations “analyzer module,” “transformation module,” “data chunk analyzer,” “detector,” and “threat report module” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-9: the claim limitations “analyzer module,” “transformation module,” “data chunk analyzer,” “detector,” and “threat report module” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Therefore, the claims are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Rejections - 35 USC § 102
The information disclosure statement (IDS) submitted on 12/09/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 10, and 19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Lim et al. (U.S. 2024/0378285 A1).
Claim 1: Lim et al. teaches a cyber security system, comprising:
an analyzer module configured to determine whether a file under analysis is likely malicious or not malicious (Fig. 2, items 202-208; par. 35, 55: the cybersecurity event classification module of block 150 trains and deploys models to predict the likelihood of a malicious cybersecurity event occurring within a first predetermined period of time from a current time; par. 6, 9-10: the second model is trained to classify whether at least some of the event embedding vectors represent abnormal or potentially malicious behavior, and the second model may be used to classify whether abnormal or potentially malicious behavior is present),
a transformation module configured to analyze the file under analysis in order
to generate a representation of the file under analysis that includes a simplified summary on information in and behavioral properties about the file under analysis (par. 8: these embeddings provide a field of data that may be manipulated by the second model to determine a summarized sample of data that is used to determine whether a cybersecurity event is likely to occur; par. 25: the second model is a hierarchical temporal event transformer model, and as a result of training the second model, the second model may be used to classify whether abnormal or potentially malicious behavior is present) and
then to feed the representation of the file under analysis into a Large Language Model (LLM) (par. 67, 71: Operation 206 includes training a second model to classify whether at least some of the event embedding vectors represent abnormal or potentially malicious behavior; in some approaches, the event embedding vectors are used as input to train a second model), where the LLM is trained with masked language modelling to create a semantic understanding of the file under analysis that creates a depiction of the file that retains multiple aspects of the information in and behavioral properties about the file under analysis as an embedding, in a space that allows the analyzer module to determine whether the file under analysis is likely malicious or not malicious via how closely the file under analysis as an embedding is related to a known malicious file or a known not malicious file with similar information and behavioral properties, without having to generate some kind of hash (par. 65: the first model may be a bidirectional encoder representations from transformers (BERT) model; in one such approach, the BERT model may employ a transformer neural-network architecture which may be trained using known masked-language modeling (MLM) techniques; accordingly, in some approaches, training the first model may include using MLM; more specifically, the first model may be trained to convert textual log events in a causal manner into embeddings (d-dimensional real vectors, e.g., d=768) using self-supervised MLM; during MLM training, the model learns to predict randomly masked words within any given sentence; for context, each “sentence” may originate from the historical event log data, which may be extracted from semi-structured event logs).
Claim 10: Lim et al. teaches a method for a cyber security system, comprising:
providing an analyzer module to determine whether a file under analysis is likely malicious or not malicious (Fig. 2, items 202-208; par. 35, 55: the cybersecurity event classification module of block 150 trains and deploys models to predict the likelihood of a malicious cybersecurity event occurring within a first predetermined period of time from a current time; par. 6, 9-10: the second model is trained to classify whether at least some of the event embedding vectors represent abnormal or potentially malicious behavior, and the second model may be used to classify whether abnormal or potentially malicious behavior is present),
providing a transformation module to analyze the file under analysis in order
i) to generate a representation of the file under analysis that includes a simplified summary on information in and behavioral properties about the file under analysis (par. 8: these embeddings provide a field of data that may be manipulated by the second model to determine a summarized sample of data that is used to determine whether a cybersecurity event is likely to occur; par. 25: the second model is a hierarchical temporal event transformer model, and as a result of training the second model, the second model may be used to classify whether abnormal or potentially malicious behavior is present) and ii) then to feed the representation of the file into a Large Language Model (LLM) (par. 67, 71: Operation 206 includes training a second model to classify whether at least some of the event embedding vectors represent abnormal or potentially malicious behavior; in some approaches, the event embedding vectors are used as input to train a second model), and
providing the LLM trained with masked language modelling to create a semantic understanding of the file under analysis that creates a depiction of the file that retains multiple aspects of the information in and behavioral properties about the file under analysis as an embedding, in a space that allows the analyzer module to determine whether the file under analysis is likely malicious or not malicious via how closely the file under analysis as an embedding is related to a known malicious file or a known not malicious file with similar information and behavioral properties, without having to generate some kind of hash (par. 65: the first model may be a bidirectional encoder representations from transformers (BERT) model; in one such approach, the BERT model may employ a transformer neural-network architecture which may be trained using known masked-language modeling (MLM) techniques; accordingly, in some approaches, training the first model may include using MLM; more specifically, the first model may be trained to convert textual log events in a causal manner into embeddings (d-dimensional real vectors, e.g., d=768) using self-supervised MLM; during MLM training, the model learns to predict randomly masked words within any given sentence; for context, each “sentence” may originate from the historical event log data, which may be extracted from semi-structured event logs).
Claim 19: Lim et al. teaches a non-transitory memory storage device to store instructions in an executable format to be executed by one or more processors, which when executed are configured to cause a computing device to perform operations as follows, comprising:
using an analyzer module to determine whether a file under analysis is likely malicious or not malicious (Fig. 2, items 202-208; par. 35, 55: the cybersecurity event classification module of block 150 trains and deploys models to predict the likelihood of a malicious cybersecurity event occurring within a first predetermined period of time from a current time; par. 6, 9-10: the second model is trained to classify whether at least some of the event embedding vectors represent abnormal or potentially malicious behavior, and the second model may be used to classify whether abnormal or potentially malicious behavior is present),
using a transformation module to analyze the file under analysis in order i) to generate a representation of the file under analysis that includes a simplified summary on information in and behavioral properties about the file under analysis (par. 8: these embeddings provide a field of data that may be manipulated by the second model to determine a summarized sample of data that is used to determine whether a cybersecurity event is likely to occur; par. 25: the second model is a hierarchical temporal event transformer model, and as a result of training the second model, the second model may be used to classify whether abnormal or potentially malicious behavior is present) and ii)
then to feed the representation of the file into a Large Language Model (LLM) (par. 67, 71: Operation 206 includes training a second model to classify whether at least some of the event embedding vectors represent abnormal or potentially malicious behavior; in some approaches, the event embedding vectors are used as input to train a second model), and using the LLM trained with masked language modelling to create a semantic understanding of the file under analysis that creates a depiction of the file that retains multiple aspects of the information in and behavioral properties about the file under analysis as an embedding, in a space that allows the analyzer module to determine whether the file under analysis is likely malicious or not malicious via how closely the file under analysis as an embedding is related to a known malicious file or a known not malicious file with similar information and behavioral properties, without having to generate some kind of hash (par. 65: the first model may be a bidirectional encoder representations from transformers (BERT) model; in one such approach, the BERT model may employ a transformer neural-network architecture which may be trained using known masked-language modeling (MLM) techniques; accordingly, in some approaches, training the first model may include using MLM; more specifically, the first model may be trained to convert textual log events in a causal manner into embeddings (d-dimensional real vectors, e.g., d=768) using self-supervised MLM; during MLM training, the model learns to predict randomly masked words within any given sentence; for context, each “sentence” may originate from the historical event log data, which may be extracted from semi-structured event logs).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4, 8-9, 13, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Lim et al. (U.S. 2024/0378285 A1) in view of Humphrey et al. (U.S. 2021/0273961 A1).
Claims 4 and 13: Lim et al. fails to teach the following limitations; however, Humphrey et al., in the same field of endeavor, teaches
an autonomous response engine configured to cooperate with the analyzer module to perform one or more mitigation actions to mitigate a cyber threat caused by the malicious file without a need for a human to initiate the mitigation actions, where the autonomous response engine (par. 22-23, 36) is
1) trained with machine learning (par. 116),
2) scripted in software code, or
3) a combination of both to perform one or more mitigation actions (par. 22-23, 36; Fig. 5).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the disclosure of Lim et al. with the additional features of Humphrey et al. in order to be able to assess cyber-attacks at a high level. That is, it is advantageous to view a network compromise as a whole, rather than viewing each affected device individually at a lower level, as suggested by Humphrey et al., par. 13.
Claims 8 and 17: Lim et al. fails to teach the following limitation; however, Humphrey et al., in the same field of endeavor, teaches
a detector configured to use a Bayesian statistical inference approach to dynamically improve how abnormal event scores contribute to whether the event is an indicator of a cyber threat when new data is available, based upon factoring in data collected across a fleet of cyber security appliances, each cyber security appliance containing its own analyzer module (par. 72, 97, 107-109).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the disclosure of Lim et al. with the additional features of Humphrey et al. in order to be able to assess cyber-attacks at a high level. That is, it is advantageous to view a network compromise as a whole, rather than viewing each affected device individually at a lower level, as suggested by Humphrey et al., par. 13.
Claims 9 and 18: Lim et al. fails to teach the following limitations; however, Humphrey et al., in the same field of endeavor, teaches
a threat report module configured to use a second LLM trained to analyze a threat intel report and extract behavioral data (par. 46, 114-116; Fig. 1) in order
1) to match the extracted behavioral data to data currently present in a network under analysis to detect whether a similar potential cyber security incident is present in the network under analysis (par. 32-34, 75-79, 84) and
2) to use the extracted behavioral data as a basis for a generation of a security incident simulation performed by a prediction engine, where the prediction engine is configured to use Artificial Intelligence algorithms trained to perform a machine-learned task of Artificial Intelligence-based simulations of cyberattacks on the network under analysis (par. 84, 172, 220-221, 253).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the disclosure of Lim et al. with the additional features of Humphrey et al. in order to be able to assess cyber-attacks at a high level. That is, it is advantageous to view a network compromise as a whole, rather than viewing each affected device individually at a lower level, as suggested by Humphrey et al., par. 13.
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Lim et al. (U.S. 2024/0378285 A1) in view of Immesoete (U.S. 2024/0378463 A1).
Claims 6 and 15: Lim et al. teaches
wherein the LLM is configured to feed the produced embedding on the file under analysis to at least one of
1) an AI classifier that is trained to examine the embedding to determine whether the file under analysis is likely the malicious file or not likely the malicious file (Lim et al., par. 6, 9) or
2) a relational database that is configured to store the embedding in a clustering area within the relational database (Abstract; claim 1).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the disclosure of Lim et al. with the additional features of Immesoete in order to provide methods and systems for the integration of Transformer models with Knowledge Graphs for improved artificial intelligence applications, as suggested by Immesoete, par. 1.
Allowable Subject Matter
Claims 2-3, 5, 7, 11-12, 14, 16 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is an examiner’s statement of reasons for allowance: the prior art of record, either alone or in combination, fails to teach the limitations as presented in dependent claims 2-3, 5, 7, 11-12, 14, 16, and 20.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
The following prior art is cited to further show the state of the art at the time of applicant’s invention.
Wan et al. (U.S. 2025/0094538 A1) teaches a system that has a computer processor (14) for receiving a dataset, where the dataset is provided with a set of natural language characters. The computer processor provides a representation of the natural language character set as a first input into a machine learning model and a representation of a natural language summary as a second input into the machine learning model. The machine learning model generates a natural language summary of the natural language character set and a label.
Yang et al. (U.S. 2025/0117479 A1) teaches a binary code summarization tool that receives binary code and combines the received binary code with natural language in a prompt for a large language model (LLM). The binary code summarization tool receives a semantic summarization from the LLM relating to the received binary code and evaluates the semantic summarization for malicious functionality in the received binary code.
Chistyakov et al. (U.S. 2021/0097177 A1) teaches a method for detection of malicious files that includes training a mapping model for mapping files in a probability space. A plurality of characteristics of an analyzed file is determined based on a set of rules. A mapping of the analyzed file in probability space is generated based on the determined plurality of characteristics. A first database is searched using the generated mapping of the analyzed file to determine whether the analyzed file is associated with a family of malicious files.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FATOUMATA TRAORE whose telephone number is (571)270-1685. The examiner can normally be reached 6:30-3:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SHEWAYE GELAGAY, can be reached at 571-272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Saturday, February 21, 2026
/FATOUMATA TRAORE/Primary Examiner, Art Unit 2436