Prosecution Insights
Last updated: April 19, 2026
Application No. 18/599,759

KNOWLEDGE AND WISDOM EXTRACTION AND CODIFICATION FOR MACHINE LEARNING APPLICATIONS

Non-Final OA §103

Filed: Mar 08, 2024
Examiner: XIE, EDGAR WANGSHU
Art Unit: 2433
Tech Center: 2400 — Computer Networks
Assignee: Salem Cyber
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
OA Rounds: 1-2
To Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (14 granted / 17 resolved), +24.4% vs TC avg — above average
Interview Lift: +37.5% (strong) for resolved cases with an interview
Typical Timeline: 2y 6m avg prosecution, 15 applications currently pending
Career History: 32 total applications across all art units
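The headline figures above follow from the underlying counts by straightforward arithmetic. A minimal sketch of the likely derivation, assuming the interview lift is a relative (not absolute) lift and using an assumed 72% without-interview baseline (the report does not state the baseline; it is chosen here only because it reproduces the published +37.5% against the 99% with-interview figure):

```python
# Sketch of how the dashboard's headline statistics can be derived
# from raw counts. The 0.72 without-interview rate is an assumed
# illustrative value (not stated in the report); all other inputs
# appear in the report above.

granted, resolved = 14, 17

# Career allow rate: 14/17 ~ 82.4%, shown rounded as 82%
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")

# The +24.4-point delta vs the Tech Center average implies a
# TC average near 82.4 - 24.4 = 58.0%
implied_tc_avg = allow_rate - 0.244
print(f"Implied TC average: {implied_tc_avg:.1%}")

# Relative interview lift: (with - without) / without. With the
# report's 99% with-interview figure and an assumed 72% baseline,
# 0.99 / 0.72 - 1 = 37.5%, matching the reported lift.
with_interview, without_interview = 0.99, 0.72
lift = with_interview / without_interview - 1
print(f"Interview lift: {lift:.1%}")
```

Note that the "+38%" headline and the "+37.5%" detail figure are consistent under rounding; the precise value is kept above.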

Statute-Specific Performance

§101: 15.3% (-24.7% vs TC avg)
§103: 58.0% (+18.0% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 11.9% (-28.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 17 resolved cases.

Office Action

§103

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Detailed Action

This is the initial office action issued in response to patent application 18/599,759, filed on 03/08/2024. Claims 1-20, as originally filed, are currently pending and have been considered below. Claims 1, 8, and 15 are independent claims.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, as follows: The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).

The disclosure of the prior-filed application, Application No. 63/489,245, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. Accordingly, claims 1-20 are not entitled to the benefit of the prior application.

Drawings

The drawings filed on 03/08/2024 are accepted by the examiner.

Claim Objections

Claims 3-4, 10-11, and 17-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5, 7-9, 12, 14-16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Burceanu et al. (US Patent Application Publication No. US 2023/0306106 A1, hereinafter Burceanu) in view of Cobb et al. (US Patent Application Publication No. US 2024/0135019 A1, hereinafter Cobb).

Regarding claim 1, Burceanu discloses:

A method comprising: accessing a set of computer security alerts (Burceanu, ¶[0024], "the training process uses a training corpus 18 comprising a collection of data samples, at least some of which may be annotated.");

generating a first training dataset comprising a plurality of first training examples using the set of computer security alerts, each first training example comprising a computer security alert labeled with one or more characteristics associated with a predetermined cause of the computer security alert (Burceanu, ¶[0024], "the training process uses a training corpus 18 comprising a collection of data samples, at least some of which may be annotated." ¶[0030-0032], "A skilled artisan will appreciate that the illustrated format of data attributes 32a-e (i.e., as an array of numbers) is given only as an example and is not meant to be limiting. Instead, attributes may be encoded as more complicated data objects. Exemplary data attributes 32a-e may include, among others: a blacklist status of the respective page, extracted from multiple sources; lexical features of the respective page—URL length, URL components length, number of special characters, etc.;" ¶[0041-0042], "a category of online threat represented by the respective page (e.g., malware, phishing, defacement); an indicator of a type of obfuscation used in the URL of the respective page;");

training a natural language processing (NLP) model using the first training dataset (Burceanu, ¶[0026], "In some embodiments, threat detector 20 comprises a plurality of interconnected AI modules, which may be embodied as individual artificial neural networks." ¶[0060], "In some embodiments, training uses a corpus 18 of training samples/inputs having the same format as target data 22 that the respective threat detector is expected to process."), the NLP model configured to, when applied to a computer security alert, identify a set of characteristics of a cause of the computer security alert (Burceanu, ¶[0023], "security verdict 26 may include a classification label indicative of a category of threats (e.g., malware family)." ¶[0026], "threat detector 20 comprises a plurality of interconnected AI modules, which may be embodied as individual artificial neural networks. … Each edge of multitask graph 50 is embodied by a transformation module 40a-h configured to perform the respective transformation. For instance, module 40e transforms attributes 32b into attributes 32e, i.e., determines attributes 32e according to attributes 32b." ¶[0030], "A skilled artisan will appreciate that the illustrated format of data attributes 32a-e (i.e., as an array of numbers) is given only as an example and is not meant to be limiting. Instead, attributes may be encoded as more complicated data objects.");

training a neural network model using the second training dataset (Burceanu, ¶[0026], "In some embodiments, threat detector 20 comprises a plurality of interconnected AI modules, which may be embodied as individual artificial neural networks." ¶[0060], "In some embodiments, training uses a corpus 18 of training samples/inputs having the same format as target data 22 that the respective threat detector is expected to process."), the neural network model configured to, when applied to the set of identified characteristics of the identified cause of the computer security alert outputted by the NLP model, generate a measurement of threat of the computer security alert (Burceanu, ¶[0023], "An exemplary security verdict 26 comprises a numerical score indicating a likelihood of a computer security threat. The score may be Boolean (e.g., YES/NO) or may vary gradually between predetermined bounds (e.g., between 0 and 1). In one such example, higher values indicate a higher likelihood of malice." ¶[0026], "threat detector 20 comprises a plurality of interconnected AI modules, which may be embodied as individual artificial neural networks. … Each edge of multitask graph 50 is embodied by a transformation module 40a-h configured to perform the respective transformation. For instance, module 40e transforms attributes 32b into attributes 32e, i.e., determines attributes 32e according to attributes 32b." ¶[0030], "A skilled artisan will appreciate that the illustrated format of data attributes 32a-e (i.e., as an array of numbers) is given only as an example and is not meant to be limiting. Instead, attributes may be encoded as more complicated data objects.").

Burceanu does not explicitly teach the following limitation that Cobb teaches:

generating a second training dataset comprising a plurality of second training examples by generating variants of one or more of the accessed set of computer security alerts, each generated variant computer security alert associated with a variant set of characteristics of a variant cause of the variant computer security alert, and each second training example comprising a generated variant computer security alert labeled as an above-threshold threat or a below-threshold threat (Cobb, ¶[0037], "the training data 230 may be a collection of historical transaction data from various data stores or a single data store and may include automatic or manual classification of whether each transaction is normal or abnormal.").

Burceanu in view of Cobb is analogous art because the references are from the same field of endeavor and the same problem-solving area: namely, they pertain to the field of training and application of AI models. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Burceanu with Cobb to arrive at "generating a second training dataset comprising a plurality of second training examples by generating variants of one or more of the accessed set of computer security alerts, each generated variant computer security alert associated with a variant set of characteristics of a variant cause of the variant computer security alert, and each second training example comprising a generated variant computer security alert labeled as an above-threshold threat or a below-threshold threat," because the training data may be a collection of historical transaction data from various data stores (Cobb, ¶[0037]).
Regarding claim 2, Burceanu in view of Cobb teaches: The method of claim 1, further comprising:

accessing a first target computer security alert (Burceanu, ¶[0024], "the training process uses a training corpus 18 comprising a collection of data samples, at least some of which may be annotated.");

applying the NLP model to the first target computer security alert to identify a first set of target characteristics of a first target cause of the first target computer security alert (Burceanu, ¶[0023], "security verdict 26 may include a classification label indicative of a category of threats (e.g., malware family)." ¶[0026], "threat detector 20 comprises a plurality of interconnected AI modules, which may be embodied as individual artificial neural networks. … Each edge of multitask graph 50 is embodied by a transformation module 40a-h configured to perform the respective transformation. For instance, module 40e transforms attributes 32b into attributes 32e, i.e., determines attributes 32e according to attributes 32b." ¶[0030], "A skilled artisan will appreciate that the illustrated format of data attributes 32a-e (i.e., as an array of numbers) is given only as an example and is not meant to be limiting. Instead, attributes may be encoded as more complicated data objects.");

applying the neural network model to the identified first set of target characteristics to generate a first target measurement of threat of the first target computer security alert (Burceanu, ¶[0023], "An exemplary security verdict 26 comprises a numerical score indicating a likelihood of a computer security threat. The score may be Boolean (e.g., YES/NO) or may vary gradually between predetermined bounds (e.g., between 0 and 1). In one such example, higher values indicate a higher likelihood of malice." ¶[0026], "threat detector 20 comprises a plurality of interconnected AI modules, which may be embodied as individual artificial neural networks. … Each edge of multitask graph 50 is embodied by a transformation module 40a-h configured to perform the respective transformation. For instance, module 40e transforms attributes 32b into attributes 32e, i.e., determines attributes 32e according to attributes 32b." ¶[0030], "A skilled artisan will appreciate that the illustrated format of data attributes 32a-e (i.e., as an array of numbers) is given only as an example and is not meant to be limiting. Instead, attributes may be encoded as more complicated data objects."); and

in response to determining that the first target computer security alert is associated with the above-threshold threat, modifying an interface displayed to a human operator to present first target measurement of threat (Burceanu, ¶[0023], "An exemplary security verdict 26 comprises a numerical score indicating a likelihood of a computer security threat. The score may be Boolean (e.g., YES/NO) or may vary gradually between predetermined bounds (e.g., between 0 and 1). In one such example, higher values indicate a higher likelihood of malice." ¶[0078], "Output devices 78 may include display devices such as monitors and speakers among others, as well as hardware interfaces/adapters such as graphic cards, enabling the respective computing appliance to communicate data to a user.").

Regarding claim 5, Burceanu in view of Cobb teaches: The method of claim 1, wherein the NLP model is trained on information of common security alert names and historic computer security alerts (Cobb, ¶[0037], "the training data 230 may be a collection of historical transaction data from various data stores or a single data store and may include automatic or manual classification of whether each transaction is normal or abnormal.").
Regarding claim 7, Burceanu in view of Cobb teaches: The method of claim 1, wherein the measurement of threat of the computer security alert includes one or more of a level of threat, a security action recommendation, and a request for additional information (Burceanu, ¶[0023], "An exemplary security verdict 26 comprises a numerical score indicating a likelihood of a computer security threat. The score may be Boolean (e.g., YES/NO) or may vary gradually between predetermined bounds (e.g., between 0 and 1). In one such example, higher values indicate a higher likelihood of malice.").

Regarding claim 8, Burceanu discloses: A non-transitory computer readable medium configured to store instructions, the instructions when executed by one or more processors causing the one or more processors to perform operations comprising:

accessing a set of computer security alerts (Burceanu, ¶[0024], "the training process uses a training corpus 18 comprising a collection of data samples, at least some of which may be annotated.");

generating a first training dataset comprising a plurality of first training examples using the set of computer security alerts, each first training example comprising a computer security alert labeled with one or more characteristics associated with a predetermined cause of the computer security alert (Burceanu, ¶[0024], "the training process uses a training corpus 18 comprising a collection of data samples, at least some of which may be annotated." ¶[0030-0032], "A skilled artisan will appreciate that the illustrated format of data attributes 32a-e (i.e., as an array of numbers) is given only as an example and is not meant to be limiting. Instead, attributes may be encoded as more complicated data objects. Exemplary data attributes 32a-e may include, among others: a blacklist status of the respective page, extracted from multiple sources; lexical features of the respective page—URL length, URL components length, number of special characters, etc.;" ¶[0041-0042], "a category of online threat represented by the respective page (e.g., malware, phishing, defacement); an indicator of a type of obfuscation used in the URL of the respective page;");

training a natural language processing (NLP) model using the first training dataset (Burceanu, ¶[0026], "In some embodiments, threat detector 20 comprises a plurality of interconnected AI modules, which may be embodied as individual artificial neural networks." ¶[0060], "In some embodiments, training uses a corpus 18 of training samples/inputs having the same format as target data 22 that the respective threat detector is expected to process."), the NLP model configured to, when applied to a computer security alert, identify a set of characteristics of a cause of the computer security alert (Burceanu, ¶[0023], "security verdict 26 may include a classification label indicative of a category of threats (e.g., malware family)." ¶[0026], "threat detector 20 comprises a plurality of interconnected AI modules, which may be embodied as individual artificial neural networks. … Each edge of multitask graph 50 is embodied by a transformation module 40a-h configured to perform the respective transformation. For instance, module 40e transforms attributes 32b into attributes 32e, i.e., determines attributes 32e according to attributes 32b." ¶[0030], "A skilled artisan will appreciate that the illustrated format of data attributes 32a-e (i.e., as an array of numbers) is given only as an example and is not meant to be limiting. Instead, attributes may be encoded as more complicated data objects.");

training a neural network model using the second training dataset (Burceanu, ¶[0026], "In some embodiments, threat detector 20 comprises a plurality of interconnected AI modules, which may be embodied as individual artificial neural networks." ¶[0060], "In some embodiments, training uses a corpus 18 of training samples/inputs having the same format as target data 22 that the respective threat detector is expected to process."), the neural network model configured to, when applied to the set of identified characteristics of the identified cause of the computer security alert outputted by the NLP model, generate a measurement of threat of the computer security alert (Burceanu, ¶[0023], "An exemplary security verdict 26 comprises a numerical score indicating a likelihood of a computer security threat. The score may be Boolean (e.g., YES/NO) or may vary gradually between predetermined bounds (e.g., between 0 and 1). In one such example, higher values indicate a higher likelihood of malice." ¶[0026], "threat detector 20 comprises a plurality of interconnected AI modules, which may be embodied as individual artificial neural networks. … Each edge of multitask graph 50 is embodied by a transformation module 40a-h configured to perform the respective transformation. For instance, module 40e transforms attributes 32b into attributes 32e, i.e., determines attributes 32e according to attributes 32b." ¶[0030], "A skilled artisan will appreciate that the illustrated format of data attributes 32a-e (i.e., as an array of numbers) is given only as an example and is not meant to be limiting. Instead, attributes may be encoded as more complicated data objects.").

Burceanu does not explicitly teach the following limitation that Cobb teaches:

generating a second training dataset comprising a plurality of second training examples by generating variants of one or more of the accessed set of computer security alerts, each generated variant computer security alert associated with a variant set of characteristics of a variant cause of the variant computer security alert, and each second training example comprising a generated variant computer security alert labeled as an above-threshold threat or a below-threshold threat (Cobb, ¶[0037], "the training data 230 may be a collection of historical transaction data from various data stores or a single data store and may include automatic or manual classification of whether each transaction is normal or abnormal.").

Burceanu in view of Cobb is analogous art because the references are from the same field of endeavor and the same problem-solving area: namely, they pertain to the field of training and application of AI models. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Burceanu with Cobb to arrive at "generating a second training dataset comprising a plurality of second training examples by generating variants of one or more of the accessed set of computer security alerts, each generated variant computer security alert associated with a variant set of characteristics of a variant cause of the variant computer security alert, and each second training example comprising a generated variant computer security alert labeled as an above-threshold threat or a below-threshold threat," because the training data may be a collection of historical transaction data from various data stores (Cobb, ¶[0037]).
Regarding claim 9, Burceanu in view of Cobb teaches: The non-transitory computer readable medium of claim 8, wherein the instructions when executed by one or more processors cause the one or more processors to further perform operations comprising:

accessing a first target computer security alert (Burceanu, ¶[0024], "the training process uses a training corpus 18 comprising a collection of data samples, at least some of which may be annotated.");

applying the NLP model to the first target computer security alert to identify a first set of target characteristics of a first target cause of the first target computer security alert (Burceanu, ¶[0023], "security verdict 26 may include a classification label indicative of a category of threats (e.g., malware family)." ¶[0026], "threat detector 20 comprises a plurality of interconnected AI modules, which may be embodied as individual artificial neural networks. … Each edge of multitask graph 50 is embodied by a transformation module 40a-h configured to perform the respective transformation. For instance, module 40e transforms attributes 32b into attributes 32e, i.e., determines attributes 32e according to attributes 32b." ¶[0030], "A skilled artisan will appreciate that the illustrated format of data attributes 32a-e (i.e., as an array of numbers) is given only as an example and is not meant to be limiting. Instead, attributes may be encoded as more complicated data objects.");

applying the neural network model to the identified first set of target characteristics to generate a first target measurement of threat of the first target computer security alert (Burceanu, ¶[0023], "An exemplary security verdict 26 comprises a numerical score indicating a likelihood of a computer security threat. The score may be Boolean (e.g., YES/NO) or may vary gradually between predetermined bounds (e.g., between 0 and 1). In one such example, higher values indicate a higher likelihood of malice." ¶[0026], "threat detector 20 comprises a plurality of interconnected AI modules, which may be embodied as individual artificial neural networks. … Each edge of multitask graph 50 is embodied by a transformation module 40a-h configured to perform the respective transformation. For instance, module 40e transforms attributes 32b into attributes 32e, i.e., determines attributes 32e according to attributes 32b." ¶[0030], "A skilled artisan will appreciate that the illustrated format of data attributes 32a-e (i.e., as an array of numbers) is given only as an example and is not meant to be limiting. Instead, attributes may be encoded as more complicated data objects."); and

in response to determining that the first target computer security alert is associated with the above-threshold threat, modifying an interface displayed to a human operator to present first target measurement of threat (Burceanu, ¶[0023], "An exemplary security verdict 26 comprises a numerical score indicating a likelihood of a computer security threat. The score may be Boolean (e.g., YES/NO) or may vary gradually between predetermined bounds (e.g., between 0 and 1). In one such example, higher values indicate a higher likelihood of malice." ¶[0078], "Output devices 78 may include display devices such as monitors and speakers among others, as well as hardware interfaces/adapters such as graphic cards, enabling the respective computing appliance to communicate data to a user.").

Regarding claim 12, Burceanu in view of Cobb teaches: The non-transitory computer readable medium of claim 8, wherein the NLP model is trained on information of common security alert names and historic computer security alerts (Cobb, ¶[0037], "the training data 230 may be a collection of historical transaction data from various data stores or a single data store and may include automatic or manual classification of whether each transaction is normal or abnormal.").

Regarding claim 14, Burceanu in view of Cobb teaches: The non-transitory computer readable medium of claim 8, wherein the measurement of threat of the computer security alert includes one or more of a level of threat, a security action recommendation, and a request for additional information (Burceanu, ¶[0023], "An exemplary security verdict 26 comprises a numerical score indicating a likelihood of a computer security threat. The score may be Boolean (e.g., YES/NO) or may vary gradually between predetermined bounds (e.g., between 0 and 1). In one such example, higher values indicate a higher likelihood of malice.").
Regarding claim 15, Burceanu discloses: A system comprising memory with instructions encoded thereon that, when executed by one or more processors, cause the one or more processors to perform operations comprising:

accessing a set of computer security alerts (Burceanu, ¶[0024], "the training process uses a training corpus 18 comprising a collection of data samples, at least some of which may be annotated.");

generating a first training dataset comprising a plurality of first training examples using the set of computer security alerts, each first training example comprising a computer security alert labeled with one or more characteristics associated with a predetermined cause of the computer security alert (Burceanu, ¶[0024], "the training process uses a training corpus 18 comprising a collection of data samples, at least some of which may be annotated." ¶[0030-0032], "A skilled artisan will appreciate that the illustrated format of data attributes 32a-e (i.e., as an array of numbers) is given only as an example and is not meant to be limiting. Instead, attributes may be encoded as more complicated data objects. Exemplary data attributes 32a-e may include, among others: a blacklist status of the respective page, extracted from multiple sources; lexical features of the respective page—URL length, URL components length, number of special characters, etc.;" ¶[0041-0042], "a category of online threat represented by the respective page (e.g., malware, phishing, defacement); an indicator of a type of obfuscation used in the URL of the respective page;");

training a natural language processing (NLP) model using the first training dataset (Burceanu, ¶[0026], "In some embodiments, threat detector 20 comprises a plurality of interconnected AI modules, which may be embodied as individual artificial neural networks." ¶[0060], "In some embodiments, training uses a corpus 18 of training samples/inputs having the same format as target data 22 that the respective threat detector is expected to process."), the NLP model configured to, when applied to a computer security alert, identify a set of characteristics of a cause of the computer security alert (Burceanu, ¶[0023], "security verdict 26 may include a classification label indicative of a category of threats (e.g., malware family)." ¶[0026], "threat detector 20 comprises a plurality of interconnected AI modules, which may be embodied as individual artificial neural networks. … Each edge of multitask graph 50 is embodied by a transformation module 40a-h configured to perform the respective transformation. For instance, module 40e transforms attributes 32b into attributes 32e, i.e., determines attributes 32e according to attributes 32b." ¶[0030], "A skilled artisan will appreciate that the illustrated format of data attributes 32a-e (i.e., as an array of numbers) is given only as an example and is not meant to be limiting. Instead, attributes may be encoded as more complicated data objects.");

training a neural network model using the second training dataset (Burceanu, ¶[0026], "In some embodiments, threat detector 20 comprises a plurality of interconnected AI modules, which may be embodied as individual artificial neural networks." ¶[0060], "In some embodiments, training uses a corpus 18 of training samples/inputs having the same format as target data 22 that the respective threat detector is expected to process."), the neural network model configured to, when applied to the set of identified characteristics of the identified cause of the computer security alert outputted by the NLP model, generate a measurement of threat of the computer security alert (Burceanu, ¶[0023], "An exemplary security verdict 26 comprises a numerical score indicating a likelihood of a computer security threat. The score may be Boolean (e.g., YES/NO) or may vary gradually between predetermined bounds (e.g., between 0 and 1). In one such example, higher values indicate a higher likelihood of malice." ¶[0026], "threat detector 20 comprises a plurality of interconnected AI modules, which may be embodied as individual artificial neural networks. … Each edge of multitask graph 50 is embodied by a transformation module 40a-h configured to perform the respective transformation. For instance, module 40e transforms attributes 32b into attributes 32e, i.e., determines attributes 32e according to attributes 32b." ¶[0030], "A skilled artisan will appreciate that the illustrated format of data attributes 32a-e (i.e., as an array of numbers) is given only as an example and is not meant to be limiting. Instead, attributes may be encoded as more complicated data objects.").

Burceanu does not explicitly teach the following limitation that Cobb teaches:

generating a second training dataset comprising a plurality of second training examples by generating variants of one or more of the accessed set of computer security alerts, each generated variant computer security alert associated with a variant set of characteristics of a variant cause of the variant computer security alert, and each second training example comprising a generated variant computer security alert labeled as an above-threshold threat or a below-threshold threat (Cobb, ¶[0037], "the training data 230 may be a collection of historical transaction data from various data stores or a single data store and may include automatic or manual classification of whether each transaction is normal or abnormal.").

Burceanu in view of Cobb is analogous art because the references are from the same field of endeavor and the same problem-solving area: namely, they pertain to the field of training and application of AI models. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Burceanu with Cobb to arrive at "generating a second training dataset comprising a plurality of second training examples by generating variants of one or more of the accessed set of computer security alerts, each generated variant computer security alert associated with a variant set of characteristics of a variant cause of the variant computer security alert, and each second training example comprising a generated variant computer security alert labeled as an above-threshold threat or a below-threshold threat," because the training data may be a collection of historical transaction data from various data stores (Cobb, ¶[0037]).
Regarding claim 16, Burceanu in view of Cobb teaches: The system of claim 15, wherein the instructions when executed by one or more processors cause the one or more processors to further perform operations comprising: accessing a first target computer security alert (Burceanu, ¶[0024], “the training process uses a training corpus 18 comprising a collection of data samples, at least some of which may be annotated.”); applying the NLP model to the first target computer security alert to identify a first set of target characteristics of a first target cause of the first target computer security alert (Burceanu, ¶[0023], “security verdict 26 may include a classification label indicative of a category of threats (e.g., malware family).” ¶[0026], “threat detector 20 comprises a plurality of interconnected AI modules, which may be embodied as individual artificial neural networks. … Each edge of multitask graph 50 is embodied by a transformation module 40a-h configured to perform the respective transformation. For instance, module 40e transforms attributes 32b into attributes 32e, i.e., determines attributes 32e according to attributes 32b.” ¶[0030], “A skilled artisan will appreciate that the illustrated format of data attributes 32a-e (i.e., as an array of numbers) is given only as an example and is not meant to be limiting. Instead, attributes may be encoded as more complicated data objects.”); applying the neural network model to the identified first set of target characteristics to generate a first target measurement of threat of the first target computer security alert (Burceanu, ¶[0023], “An exemplary security verdict 26 comprises a numerical score indicating a likelihood of a computer security threat. The score may be Boolean (e.g., YES/NO) or may vary gradually between predetermined bounds (e.g., between 0 and 1). 
In one such example, higher values indicate a higher likelihood of malice.” ¶[0026], “threat detector 20 comprises a plurality of interconnected AI modules, which may be embodied as individual artificial neural networks. … Each edge of multitask graph 50 is embodied by a transformation module 40a-h configured to perform the respective transformation. For instance, module 40e transforms attributes 32b into attributes 32e, i.e., determines attributes 32e according to attributes 32b.” ¶[0030], “A skilled artisan will appreciate that the illustrated format of data attributes 32a-e (i.e., as an array of numbers) is given only as an example and is not meant to be limiting. Instead, attributes may be encoded as more complicated data objects.”); and in response to determining that the first target computer security alert is associated with the above-threshold threat, modifying an interface displayed to a human operator to present first target measurement of threat (Burceanu, ¶[0023], “An exemplary security verdict 26 comprises a numerical score indicating a likelihood of a computer security threat. The score may be Boolean (e.g., YES/NO) or may vary gradually between predetermined bounds (e.g., between 0 and 1). In one such example, higher values indicate a higher likelihood of malice.” ¶[0078], “Output devices 78 may include display devices such as monitors and speakers among others, as well as hardware interfaces/adapters such as graphic cards, enabling the respective computing appliance to communicate data to a user.”). Regarding claim 19, Burceanu in view of Cobb teaches: The system of claim 15, wherein the NLP model is trained on information of common security alert names and historic computer security alerts (Cobb, ¶[0037], “the training data 230 may be a collection of historical transaction data from various data stores or a single data store and may include automatic or manual classification of whether each transaction is normal or abnormal.”). 
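Claim 16's inference flow (access an alert, apply the NLP model to identify cause characteristics, apply the neural network to score them, and surface above-threshold alerts to a human operator) can be sketched end to end. Both model callables and the returned dictionary are assumptions for illustration, not interfaces from either reference:

```python
def triage_alert(alert_text, nlp_model, nn_model, threshold=0.5):
    """Hypothetical end-to-end flow matching claim 16's steps."""
    # Step 1: NLP model identifies characteristics of the alert's cause.
    characteristics = nlp_model(alert_text)
    # Step 2: neural network generates the measurement of threat.
    score = nn_model(characteristics)
    # Step 3: only above-threshold alerts modify the operator interface.
    return {"score": score, "display": score >= threshold}
```

Stub models (for example, a lambda returning a fixed feature vector) are enough to exercise the control flow without committing to any particular architecture.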
Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Burceanu et al. (US Patent Application Publication No. US 2023/0306106 A1, hereinafter, Burceanu) in view of Cobb et al. (US Patent Application Publication No. US 2024/0135019 A1, hereinafter, Cobb) and further in view of Ezrielev et al. (US Patent Application Publication No. US 2025/0077955 A1, hereinafter, Ezrielev). Regarding claim 6, Burceanu in view of Cobb teaches: The method of claim 1, wherein generating a first training dataset comprising a plurality of first training examples comprises: Burceanu in view of Cobb does not explicitly teach the following limitation that Ezrielev teaches: accessing a mapping between contextual information and one or more terms included in the computer security alert in each first training example; and labeling each first training example using the contextual information based on the accessed mapping. (Ezrielev, ¶[0076], “Training data repository 200 may include data that defines an association between two pieces of information (e.g., which may be referred to as “labeled data”). For example, in the context of facial recognition, training data repository 200 may include images or video of a person who has already been identified by a user. The relationship between the images or video and the identification may be a portion of labeled data. Any of the training datasets (e.g., 200A) from training data repository 200 may relate the facial attributes of a person to their identifier (e.g., name, username, etc.) 
thereby including any number of portions of labeled data.”). Burceanu in view of Cobb and further in view of Ezrielev is analogous art because the references are from the same field of endeavor and the same problem-solving area, namely the training and application of AI models. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Burceanu in view of Cobb with Ezrielev to arrive at “accessing a mapping between contextual information and one or more terms included in the computer security alert in each first training example; and labeling each first training example using the contextual information based on the accessed mapping,” because any of the training datasets from the training data repository may relate the facial attributes of a person to their identifier (e.g., name, username, etc.), thereby including any number of portions of labeled data (Ezrielev, ¶[0076]). Regarding claim 13, Burceanu in view of Cobb and further in view of Ezrielev teaches: The non-transitory computer readable medium of claim 8, wherein the instructions to generate a first training dataset comprising a plurality of first training examples when executed by one or more processors cause the one or more processors to further perform operations comprising: accessing a mapping between contextual information and one or more terms included in the computer security alert in each first training example; and labeling each first training example using the contextual information based on the accessed mapping (Ezrielev, ¶[0076], “Training data repository 200 may include data that defines an association between two pieces of information (e.g., which may be referred to as “labeled data”). For example, in the context of facial recognition, training data repository 200 may include images or video of a person who has already been identified by a user.
The relationship between the images or video and the identification may be a portion of labeled data. Any of the training datasets (e.g., 200A) from training data repository 200 may relate the facial attributes of a person to their identifier (e.g., name, username, etc.) thereby including any number of portions of labeled data.”) Regarding claim 20, Burceanu in view of Cobb and further in view of Ezrielev teaches: The system of claim 15, wherein the instructions to generate a first training dataset comprising a plurality of first training examples when executed by one or more processors cause the one or more processors to further perform operations comprising: accessing a mapping between contextual information and one or more terms included in the computer security alert in each first training example; and labeling each first training example using the contextual information based on the accessed mapping. (Ezrielev, ¶[0076], “Training data repository 200 may include data that defines an association between two pieces of information (e.g., which may be referred to as “labeled data”). For example, in the context of facial recognition, training data repository 200 may include images or video of a person who has already been identified by a user. The relationship between the images or video and the identification may be a portion of labeled data. Any of the training datasets (e.g., 200A) from training data repository 200 may relate the facial attributes of a person to their identifier (e.g., name, username, etc.) thereby including any number of portions of labeled data.”)

CONCLUSION

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDGAR W XIE whose telephone number is (703)756-4777. The examiner can normally be reached Monday - Friday, 8:00am - 5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JEFFREY PWU, can be reached at (571)272-6798. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDGAR W XIE/
Examiner, Art Unit 2433

/WASIKA NIPA/
Primary Examiner, Art Unit 2433

Prosecution Timeline

- Mar 08, 2024: Application Filed
- Dec 19, 2025: Examiner Interview (Telephonic)
- Dec 23, 2025: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

- Patent 12602475: AGGREGATING INPUT/OUTPUT OPERATION FEATURES EXTRACTED FROM STORAGE DEVICES TO FORM A MACHINE LEARNING VECTOR TO CHECK FOR MALWARE (granted Apr 14, 2026; 2y 5m to grant)
- Patent 12579267: Methods and Systems for Analyzing Environment-Sensitive Malware with Coverage-Guided Fuzzing (granted Mar 17, 2026; 2y 5m to grant)
- Patent 12579281: Dynamic Prioritization of Vulnerability Risk Assessment Findings (granted Mar 17, 2026; 2y 5m to grant)
- Patent 12566844: SYSTEM AND METHOD FOR COLLABORATIVE SMART EVIDENCE GATHERING AND INVESTIGATION FOR INCIDENT RESPONSE, ATTACK SURFACE MANAGEMENT, AND FORENSICS IN A COMPUTING ENVIRONMENT (granted Mar 03, 2026; 2y 5m to grant)
- Patent 12513001: BLOCKCHAIN VERIFICATION OF DIGITAL CONTENT ATTRIBUTIONS (granted Dec 30, 2025; 2y 5m to grant)
Study what changed to get past this examiner, based on the examiner's 5 most recent grants.


Prosecution Projections

- Expected OA Rounds: 1-2
- Grant Probability: 82%
- With Interview: 99% (+37.5% lift)
- Median Time to Grant: 2y 6m
- PTA Risk: Low
Based on 17 resolved cases by this examiner. Grant probability derived from career allow rate.
