Prosecution Insights
Last updated: April 19, 2026
Application No. 18/477,241

LARGE LANGUAGE MODELS FOR ACTOR ATTRIBUTIONS

Status: Final Rejection (§103)
Filed: Sep 28, 2023
Examiner: GREENE, JOSEPH L
Art Unit: 2443
Tech Center: 2400 — Computer Networks
Assignee: CrowdStrike, Inc.
OA Round: 4 (Final)
Grant Probability: 63% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 4y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 63% (347 granted / 550 resolved; +5.1% vs TC avg)
Interview Lift: +36.9% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 4y 2m average prosecution; 48 applications currently pending
Career History: 598 total applications across all art units
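The headline figures above can be reproduced from the raw counts. The sketch below is an assumption about the arithmetic (an additive interview lift and truncated percentages), not the tool's actual code:

```python
# Reproduce the dashboard's headline figures from the counts shown above.
# Assumption: "With Interview" = career allow rate + interview lift, capped at 100%.
granted, resolved = 347, 550

allow_rate = granted / resolved        # 347/550 ≈ 0.631 -> displayed as 63%
interview_lift = 0.369                 # +36.9% lift for cases with an interview

with_interview = min(allow_rate + interview_lift, 1.0)   # ≈ 0.9999

print(int(allow_rate * 100), int(with_interview * 100))  # prints: 63 99
```

That the sum lands at 99.99% but is displayed as 99% suggests the dashboard truncates rather than rounds.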

Statute-Specific Performance

§101: 9.6% (-30.4% vs TC avg)
§103: 61.0% (+21.0% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 550 resolved cases.
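One consistency check on these figures: subtracting each delta from the examiner's rate backs out the Tech Center average, and every statute yields the same ~40% baseline. A minimal Python sketch (figures transcribed from the table above; the underlying metric is assumed to be the rate at which each rejection type leads to allowance):

```python
# Back the Tech Center average out of each row: TC avg = examiner rate - delta.
# (rate, delta vs TC avg) transcribed from the table above, as fractions.
stats = {
    "§101": (0.096, -0.304),
    "§103": (0.610, +0.210),
    "§102": (0.103, -0.297),
    "§112": (0.083, -0.317),
}

tc_avg = {statute: rate - delta for statute, (rate, delta) in stats.items()}

# Every statute backs out to the same ~40% Tech Center baseline estimate.
assert all(abs(v - 0.40) < 1e-9 for v in tc_avg.values())
```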

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

1. Claims 1-20 are currently pending in this application. Claims 1, 9, and 17 are amended as filed on 03/09/2026.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Huang et al. (Pre-Grant Publication No. US 2024/0430279 A1), hereinafter Huang, in view of Nomula et al. (Pre-Grant Publication No. US 2024/0168443 A1), hereinafter Nomula, in view of Ryver (Pre-Grant Publication No. US 2021/0352087 A1), and in further view of Murdock, IV et al. (Pre-Grant Publication No. US 2024/0098055 A1), hereinafter Murdock.

2. With respect to claims 1, 9, and 17, Huang taught a method comprising: deploying a first machine learning model based on first data associated with a first cybersecurity incident of a plurality of cybersecurity incidents (0005, where the cybersecurity incident monitoring is described in 0002 & 0021); training the first ML model based on actor attribution associated with the first cybersecurity incident to deploy a second ML model (0031); receiving second data associated with a second cybersecurity incident of the plurality of cybersecurity incidents (0031, the multiple modules teach the second module); and training, by a processing device for the second ML model using the second data, an attribution of the second cybersecurity incident to an actor (0031).

However, Huang did not explicitly state that the models were being generated, i.e., that training the models included producing the models. On the other hand, Nomula did teach that the models were being generated, i.e., that training the models included producing the models (0013 & 0076. See also: 0069). Both of the systems of Huang and Nomula are directed towards managing machine learning models and therefore, it would have been obvious to a person having ordinary skill in the art, at the time of the effective filing of the invention, to modify the teachings of Huang to utilize generating machine learning models, as taught by Nomula, as it could be argued that Huang already performed this step in the training and deploying features of the invention; however, it is not explicitly stated.

However, Nomula did not explicitly state wherein the second ML model is trained using reasoning data comprising historical analytical processes that produced historical associations between historical cybersecurity incidents and historical threat actors. On the other hand, Ryver did teach wherein the second ML model is trained using reasoning data comprising historical analytical processes that produced historical associations between historical cybersecurity incidents and historical threat actors (0054-0058, where the historical malicious actor data trains the model). Both of the systems of Huang and Ryver are directed towards managing machine learning models and therefore, it would have been obvious to a person having ordinary skill in the art, at the time of the effective filing of the invention, to modify the teachings of Huang to utilize generating machine learning models based on historical malicious actor data, as taught by Ryver, in order to maintain a more accurate model for detecting threats.

However, Huang did not explicitly state investigation data, wherein the investigation data comprises analyst notes corresponding to historical attribution investigations, and that the output comprised a validation of reasoning information identified in the second data; wherein the historical analytical processes include investigation options utilized to research the historical cybersecurity incidents and the investigation data comprises analyst notes documenting the historical cybersecurity incidents corresponding to historical attributions; and wherein the validation of reasoning information comprises comparing an analyst report included in the second data to the historical analytical processes and the investigation data.

On the other hand, Murdock did teach investigation data, wherein the investigation data comprises analyst notes corresponding to historical attribution investigations (0031), and that the output comprised a validation of reasoning information identified in the second data (0031, the factor weights used to train/update further models; accordingly, the weights represent reasoning data under broadest reasonable interpretation); wherein the historical analytical processes include investigation options utilized to research the historical cybersecurity incidents and the investigation data comprises analyst notes documenting the historical cybersecurity incidents corresponding to historical attributions (0031, where the insights of the user teach analyst notes documenting the incident; accordingly, investigation options can simply be any decision data in the record under broadest reasonable interpretation); and wherein the validation of reasoning information comprises comparing an analyst report included in the second data to the historical analytical processes and the investigation data (0030-0031, where it can be seen that the user data is utilized to train the model based on reported user solutions such as system faults, which could potentially be a cybersecurity incident; lastly, Murdock generically teaches dealing with incidents, and an incident specifically being a cybersecurity incident was previously shown by Huang: 0005). Both of the systems of Huang and Murdock are directed towards security analytics and therefore, it would have been obvious to a person having ordinary skill in the art, at the time of the effective filing of the invention, to modify the teachings of Huang to utilize training a model with user notations, as taught by Murdock, in order to look at the totality of relevant data for training a model.

3. As for claims 2 and 10, they are rejected on the same basis as claims 1 and 9 (respectively). In addition, Huang taught identifying the first cybersecurity incident associated with the first data from at least one of: a data archive, scraped content, or an external report (0032, where the external trigger that's reported, at least, teaches the external report limitation).

4. As for claims 3, 11, and 18, they are rejected on the same basis as claims 1, 9, and 17 (respectively). In addition, Ryver taught wherein the reasoning data comprises step-by-step demonstrations of the historical analytical process that produced the associations between the historical cybersecurity incidents and the historical threat actors (0054-0058, where the step process is given to build a historical record of actors and associated incidents).

5. As for claims 4 and 12, they are rejected on the same basis as claims 1 and 9 (respectively). In addition, Huang taught wherein the actor attribution corresponds to a ground truth label for a particular incident of the plurality of cybersecurity incidents, and wherein the actor for the particular incident is identifiable with a threshold level of confidence using the ground truth label (0015 & 0017, where the associated historical data is a ground truth label based on the applicant's definition, located in paragraph 0023, as well as broadest reasonable interpretation).

6. As for claims 5 and 13, they are rejected on the same basis as claims 1 and 9 (respectively). In addition, Huang taught wherein the second data associated with the second cybersecurity incident is a prompt related to the plurality of cybersecurity incidents, the second cybersecurity incident being validated based on the prompt (0028, where the updated events prompt the system under broadest reasonable interpretation).

7. As for claims 6, 14, and 19, they are rejected on the same basis as claims 1, 9, and 17 (respectively). In addition, Huang taught wherein the producing the attribution of the second cybersecurity incident comprises: outputting, by the second ML model, at least one of a prediction, a textual analysis, or an embedding associated with the second data (0017).

8. As for claims 7, 15, and 20, they are rejected on the same basis as claims 1, 9, and 17 (respectively). In addition, Huang taught wherein the textual analysis comprises at least one of: a hypothesis related to the second cybersecurity incident, the validation of the reasoning information associated with a prompt related to the second cybersecurity incident, or a suggestion for an additional prompt related to discovery procedures associated with the second cybersecurity incident (0017, where the prediction, at least, teaches the hypothesis limitation).

9. As for claims 8 and 16, they are rejected on the same basis as claims 1 and 9 (respectively). In addition, Huang taught storing a record of prior cybersecurity incidents in an indexed database based on an embedding generated from the second data (0021, the historical activity is stored in databases in accordance with 0020 & 0058).

Response to Arguments

Applicant's arguments filed 03/09/2026 have been fully considered but they are not persuasive.

10. The applicant argues on page 11 that "the incident management system data is used to build user interest profiles, not to document cybersecurity investigations. Murdock explicitly states this data determines a user's 'areas of expertise' and 'topics of interest' so the system can alert the user when 'a multi-party discussion in which the user is a participant has become relevant for the user.'" However, 0030 shows that the incident management system is designed to manage user incidents such as system faults (which could potentially be a cybersecurity incident), browser issues, and the like.

11. The applicant argues on page 12 that "Murdock discloses a system that uses weights for individual relevance factors such as a user's level of expertise on the topic, how often a user has historically responded to topics, and the magnitude of data input on a topic. These weights are mathematical coefficients for computing relevance scores, not an output that validates the reasoning in an analyst report. In addition, claim 1 as amended requires comparing an analyst report to 'the historical analytical processes and the investigation data,' which is not taught or suggested by Murdock. Murdock's system compares discussion topics to user interest profiles to determine relevance, but Murdock does not teach an analyst report as an input. Instead, Murdock's inputs are multi-party discussion content and user profile data, which are not an analyst report containing cybersecurity reasoning requiring validation. Furthermore, Murdock's output is a 'relevance alert' notifying users that a discussion matches their interests. Murdock discloses 'generating an alert indicating that the multi-party discussion has a topic relevance above a relevance threshold for the user.' This is different from 'validation of reasoning information.'" However, Murdock's stated purpose for the user corpus data is generally optimizing machine learning models (0047). More importantly, optimization includes assigning users the correct topics/channels, but also managing incidents that a user may encounter (0030). Murdock's system does not appear to specifically claim cybersecurity incidents (it does, however, mention security applications), but the system is generally related to training models for optimization and user incident management. As a cybersecurity incident is a type of incident (Huang: 0005), the claimed limitations have been provided.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH L GREENE whose telephone number is (571) 270-3730. The examiner can normally be reached Monday - Thursday, 10:00am - 4:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nicholas R. Taylor, can be reached at 571-272-3889. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSEPH L GREENE/
Primary Examiner, Art Unit 2443

Prosecution Timeline

Sep 28, 2023: Application Filed
Apr 24, 2025: Non-Final Rejection — §103
Jul 18, 2025: Examiner Interview Summary
Jul 18, 2025: Applicant Interview (Telephonic)
Jul 29, 2025: Response Filed
Aug 08, 2025: Final Rejection — §103
Oct 02, 2025: Applicant Interview (Telephonic)
Oct 02, 2025: Examiner Interview Summary
Oct 10, 2025: Response after Non-Final Action
Nov 12, 2025: Request for Continued Examination
Nov 22, 2025: Response after Non-Final Action
Dec 04, 2025: Non-Final Rejection — §103
Feb 02, 2026: Examiner Interview Summary
Feb 02, 2026: Applicant Interview (Telephonic)
Mar 09, 2026: Response Filed
Mar 26, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12568075: METHOD, SYSTEM AND APPARATUS OF AUTHENTICATING USER AFFILIATION FOR AN AVATAR DISPLAYED ON A DIGITAL PLATFORM
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12567425: ENCODING METHOD AND DECODING METHOD
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12566897: ANTI-TAMPER CIRCUIT, LED CABINET AND LED DISPLAY SCREEN
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12563049: SYSTEMS AND METHODS FOR A.I.-BASED MALWARE ANALYSIS ON OFFLINE ENDPOINTS IN A NETWORK
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12531830: METHOD AND ELECTRONIC DEVICE FOR DEVICE IP STATUS CHECKING AND CONNECTION ORCHESTRATION
Granted Jan 20, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 63%
With Interview: 99% (+36.9%)
Median Time to Grant: 4y 2m
PTA Risk: High
Based on 550 resolved cases by this examiner. Grant probability derived from career allow rate.
