Prosecution Insights
Last updated: April 19, 2026
Application No. 18/216,833

TECHNIQUES FOR UTILIZING EMBEDDINGS TO MONITOR PROCESS TREES

Non-Final OA §103
Filed: Jun 30, 2023
Examiner: HERZOG, MADHURI R
Art Unit: 2438
Tech Center: 2400 (Computer Networks)
Assignee: Crowdstrike Inc.
OA Round: 3 (Non-Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 78% (516 granted / 662 resolved; +19.9% vs TC avg), above average
Interview Lift: +11.9% (moderate; measured on resolved cases with interview)
Typical Timeline: 3y 1m average prosecution
Currently Pending: 35
Total Applications: 697 (career history, across all art units)

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 45.7% (+5.7% vs TC avg)
§102: 13.0% (-27.0% vs TC avg)
§112: 17.0% (-23.0% vs TC avg)

Deltas are relative to a Tech Center average estimate. Based on career data from 662 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 have been examined.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/19/2025 has been entered.

Response to Amendment

Claims 1, 8, and 15 have been amended. Applicant's arguments with respect to claims 1, 8, and 15 regarding the new limitation, "producing a natural language explanation that describes, in human-readable terms, which one or more of the plurality of processes in the process tree are relevant in the identification of the malware," have been considered but are moot in view of the new ground of rejection presented in the current Office action.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-3, 6-9, 12-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over prior art of record US 20200167464 to Griffin et al. (hereinafter Griffin) and prior art of record US 20220318377 to Edwards et al. (hereinafter Edwards).

As per claims 1, 8, and 15, Griffin teaches:

A method of detecting malware, the method comprising: generating a process tree embedding corresponding to a process tree, the process tree comprising a plurality of processes (Griffin: [0015] Malicious activity detection system 104 uses a process tree vectorization tool 108 to generate an embedding vector for each of the process trees 106 (i.e., vectorizes each of the process trees 106). The embedding vectors are also referred to herein as vectorized process trees. [0022]: In step 210, malicious activity detection system 104 (see FIG. 1) vectorizes second process trees 112 (see FIG. 1), which specify computer processes that are currently being executed in computer 102 (see FIG. 1). The vectorizing in step 210 results in vectorized second process trees); and

processing, by a processing device, the process tree embedding with a machine learning model to generate an identification of malware associated with the process tree (Griffin: [0024] In step 212, malicious activity detection system 104 (see FIG. 1) provides the vectorized second process trees as input vectors to artificial neural network 110. [0025] After step 212 and prior to step 214, artificial neural network 110 (see FIG. 1) provides an output indicating that a combination of the input vectors provided in step 212 indicates the malicious activity 114 (see FIG. 1)), and

producing a natural language explanation that describes, in human-readable terms (Griffin: [0034]: malicious activity detection system 104 (see FIG. 1) uses a natural language generation engine to generate a text in a natural language that includes a description of the malicious activity 114 (see FIG. 1) based on the one or more other computer-based actions and malicious activity detection system 104 (see FIG. 1) generates an alert that includes the text in the natural language that includes the description of the malicious activity 114 (see FIG. 1) and sends the alert to another computer system for viewing by a human analyst).

Griffin teaches a description of malicious activity but does not explicitly teach: generate an identification of malware. Also, Griffin does not teach: wherein the processing comprises: identifying a target process, from the plurality of processes, that correspond to an execution associated with the malware; identifying, from the plurality of processes, an ancestor process of the target process that is associated with spawning the target process.

generate an identification of malware (Edwards: [0184]: In block 432, the security agent may send an event with a story graph to a policy engine of an enterprise or global security server. This policy engine can then watch for similar malware on other systems within the enterprise or globally. [0200]: Depending on the operating mode, the remedial actions may be full or partial, but in either case may involve rolling back some or all of the work done by the identified malware process, i.e., the malware is identified);

wherein the processing comprises: identifying a target process, from the plurality of processes, that correspond to an execution associated with the malware (Edwards: [0159] FIG. 3 is a block diagram of a process tree 300. Process tree 300 is disclosed particularly in the context of malware detection. In this example, runme.exe 316 has been identified (either as a file or as a process) as suspicious. [0162]: However, task scheduler 340 is a direct parent of PowerShell 344, which may be identified as malicious. PowerShell 344 has an indirect (horizontal) parent-child relationship with PowerShell 320, because PowerShell 320 caused PowerShell 344 to be spawned by task scheduler service 340. [0163]: Any one of runme.exe 316, PowerShell 320, or PowerShell 344 may be the one initially identified as a malicious process);

identifying, from the plurality of processes, an ancestor process of the target process that is associated with spawning the target process (Edwards: [0163]: If PowerShell 344 is the first identified, then the security agent may walk tree 300 to determine that there is an indirect parent-child relationship between PowerShell 344 and PowerShell 320, i.e., PowerShell 320 is identified as the ancestor process of PowerShell 344 (malicious process). The security agent may also determine that PowerShell 320 is a direct descendant of runme.exe 316, i.e., runme.exe is identified as the ancestor process of PowerShell 320. [0164] This may start a deep remediation of runme.exe, which ultimately will encompass all three of runme.exe 316, PowerShell 320, and PowerShell 344);

producing … which one or more of the plurality of processes in the process tree are relevant in the identification of the malware (Edwards: [0087]: compile a report of the plurality of actions, wherein the actions are grouped by the common responsible parent actor; send the report to a machine or human analysis agent. [0088]: wherein the report further associates the plurality of actions with their direct parent actors).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to employ the teachings of Edwards in the invention of Griffin to include the above limitations. The motivation to do so would be to execute a deep remediation, which rolls back changes, and to also execute an AutoRun removal process, which removes persistence (Edwards: [0183]).
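For orientation, the tree walk that the rejection attributes to Edwards (starting from a flagged target process and climbing to the ancestors associated with spawning it) can be sketched in a few lines. This is an illustrative sketch only, not code from either reference; the class, function, and process names (mirroring the Edwards FIG. 3 example) are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class ProcessNode:
    """One process in a process tree, with a pointer to its parent."""
    name: str
    parent: "ProcessNode | None" = None
    children: list = field(default_factory=list)


def spawn(parent: ProcessNode, name: str) -> ProcessNode:
    """Record that `parent` spawned a child process named `name`."""
    child = ProcessNode(name, parent=parent)
    parent.children.append(child)
    return child


def ancestors(node: ProcessNode) -> list[str]:
    """Walk upward from a flagged target process, collecting every
    ancestor on the spawning chain back to the root."""
    chain = []
    while node.parent is not None:
        node = node.parent
        chain.append(node.name)
    return chain


# Mirror the Edwards example: runme.exe -> PowerShell -> task scheduler -> PowerShell
root = ProcessNode("runme.exe")
ps1 = spawn(root, "powershell_1")
sched = spawn(ps1, "task_scheduler")
ps2 = spawn(sched, "powershell_2")

# If powershell_2 is the first process identified as malicious,
# the walk recovers its full spawning chain for deep remediation.
print(ancestors(ps2))  # ['task_scheduler', 'powershell_1', 'runme.exe']
```

In this toy form, remediating the flagged process together with everything `ancestors()` returns corresponds to the "deep remediation" rationale the rejection cites from Edwards [0164].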
As per claims 2 and 18, Griffin in view of Edwards teaches:

The method of claim 1, wherein processing the process tree embedding with the machine learning model to generate the identification of malware associated with the process tree comprises: processing the process tree embedding with the machine learning model to generate a classification of the process tree as being associated with malware (Griffin: [0015]. [0022]: In step 210, malicious activity detection system 104 (see FIG. 1) vectorizes second process trees 112 (see FIG. 1), which specify computer processes that are currently being executed in computer 102 (see FIG. 1). The vectorizing in step 210 results in vectorized second process trees. [0024] In step 212, malicious activity detection system 104 (see FIG. 1) provides the vectorized second process trees as input vectors to artificial neural network 110. [0025] After step 212 and prior to step 214, artificial neural network 110 (see FIG. 1) provides an output indicating that a combination of the input vectors provided in step 212 indicates the malicious activity 114 (see FIG. 1)); and

responsive to the classification indicating that the process tree is associated with malware, generating, by the processing device, the identification of the ancestor process of the target process (Griffin: [0025]: artificial neural network 110 (see FIG. 1) provides an output indicating that a combination of the input vectors provided in step 212 indicates the malicious activity 114 (see FIG. 1). Edwards: [0149]: Process tree 200 may be built to help track malicious or possibly malicious activity by an application. [0163]: If PowerShell 344 is the first identified, then the security agent may walk tree 300 to determine that there is an indirect parent-child relationship between PowerShell 344 and PowerShell 320, i.e., PowerShell 320 is identified as the ancestor process of PowerShell 344 (malicious process). The security agent may also determine that PowerShell 320 is a direct descendant of runme.exe 316, i.e., runme.exe is identified as the ancestor process of PowerShell 320).

The examiner provides the same rationale to combine prior arts Griffin and Edwards as in claims 1 and 15 above.

As per claims 3, 9, and 16, Griffin in view of Edwards teaches:

The method of claim 1, wherein generating the process tree embedding corresponding to the process tree comprises: generating a process embedding corresponding to a first process of the process tree; and generating the process tree embedding based on the process embedding (Griffin: [0015] Malicious activity detection system 104 uses a process tree vectorization tool 108 to generate an embedding vector for each of the process trees 106 (i.e., vectorizes each of the process trees 106). [0029] The vectorizing of process trees in step 204 and step 210 creates embedding vectors (i.e., process embeddings) that provide a compact representation of computer processes and the relative meanings of the computer processes. In one embodiment, malicious activity detection system 104 (see FIG. 1) requires that data from the first and second process trees 106 and 112 (see FIG. 1) be integer encoded so that a unique integer represents each process).

As per claims 6, 12, and 19, Griffin in view of Edwards teaches:

The method of claim 1, wherein generating the process tree embedding corresponding to the process tree comprises: generating process embeddings for each of the plurality of processes of the process tree; and aggregating the process embeddings to generate the process tree embedding (Griffin: [0015] Malicious activity detection system 104 uses a process tree vectorization tool 108 to generate an embedding vector for each of the process trees 106 (i.e., vectorizes each of the process trees 106). [0029] The vectorizing of process trees in step 204 and step 210 creates embedding vectors (i.e., process embeddings) that provide a compact representation of computer processes and the relative meanings of the computer processes. In one embodiment, malicious activity detection system 104 (see FIG. 1) requires that data from the first and second process trees 106 and 112 (see FIG. 1) be integer encoded so that a unique integer represents each process).

As per claims 7, 13, and 20, Griffin in view of Edwards teaches:

The method of claim 1, wherein generating the process tree embedding corresponding to the process tree comprises: generating the process tree embedding based on respective metadata of each of the plurality of processes of the process tree (Griffin: [0012] In one embodiment, an augmented threat response system generates a process tree, creates an embedding vector for each detailed process taxonomy of the process tree (i.e., vectorizes the process taxonomies), associates the taxonomies with running processes, and analyzes each process sub-tree and associated sub-trees to proactively determine whether the sub-trees represent a contextual sub-task that has the capability for malicious behavior (i.e., to recognize a threat vector). It was well known to one of ordinary skill in the art that a process taxonomy of a process tree includes the names (metadata) of the executing processes. [0022]: In step 210, malicious activity detection system 104 (see FIG. 1) vectorizes second process trees 112 (see FIG. 1), which specify computer processes that are currently being executed in computer 102. [0029] The vectorizing of process trees in step 204 and step 210 creates embedding vectors (i.e., process embeddings) that provide a compact representation of computer processes and the relative meanings of the computer processes).
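The aggregation step recited in claims 6, 12, and 19 (per-process embeddings pooled into a single process tree embedding) can be illustrated with a toy sketch. The hash-based per-process vector and element-wise mean pooling below are illustrative stand-ins chosen for this example; they are not the claimed method or anything disclosed in Griffin, which uses a learned embedding layer.

```python
import hashlib


def embed_process(name: str, dim: int = 8) -> list[float]:
    """Toy per-process embedding: a deterministic vector derived from
    a hash of the process name, scaled into [0, 1]. A real system
    would use a learned embedding layer instead."""
    digest = hashlib.sha256(name.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]


def embed_tree(process_names: list[str]) -> list[float]:
    """Aggregate per-process embeddings into one tree embedding by
    element-wise mean pooling (one simple aggregation choice)."""
    vecs = [embed_process(n) for n in process_names]
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]


# One fixed-size vector now represents the whole tree, regardless of
# how many processes it contains.
tree_vec = embed_tree(["explorer.exe", "cmd.exe", "powershell.exe"])
print(len(tree_vec))  # 8
```

Mean pooling is used here only because it is the simplest permutation-invariant aggregation; sum pooling, max pooling, or a recurrent/graph encoder over the tree structure would fit the same claim language.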
As per claim 14, Griffin in view of Edwards teaches:

The system of claim 8, wherein the processing device is further to: responsive to the identification of malware associated with the process tree, initiate remediation on one or more of the plurality of processes of the process tree (Griffin: In step 214, responsive to artificial neural network 110 (see FIG. 1) providing the aforementioned output, malicious activity detection system 104 (see FIG. 1) generates and sends (or otherwise presents) remediation recommendation(s) 116 (see FIG. 1) and/or performs remedial action(s) based on remediation recommendation(s) 116 (see FIG. 1). [0037] In one embodiment, malicious activity detection system 104 (see FIG. 1) (1) configures attributes of the remedial action(s) in one or more policies and (2) determines that an amount of risk associated with the malicious activity exceeds a threshold amount of risk, where performing the remedial action(s) in step 214 is performed automatically based on (i) the one or more policies and (ii) the amount of risk exceeding the threshold amount of risk).

Claims 4, 5, 10, 11, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Griffin in view of Edwards as applied to claims 3, 9, and 16 above, and further in view of prior art of record US 20240330446 to Bulut et al. (hereinafter Bulut).
As per claims 4, 10, and 17, Griffin in view of Edwards teaches:

wherein generating the process embedding comprises submitting metadata associated with the first process (Griffin: [0012] In one embodiment, an augmented threat response system generates a process tree, creates an embedding vector for each detailed process taxonomy of the process tree (i.e., vectorizes the process taxonomies), associates the taxonomies with running processes, and analyzes each process sub-tree and associated sub-trees to proactively determine whether the sub-trees represent a contextual sub-task that has the capability for malicious behavior (i.e., to recognize a threat vector). It was well known to one of ordinary skill in the art that a process taxonomy of a process tree includes the names (metadata) of the executing processes. [0022]: In step 210, malicious activity detection system 104 (see FIG. 1) vectorizes second process trees 112 (see FIG. 1), which specify computer processes that are currently being executed in computer 102. [0029] The vectorizing of process trees in step 204 and step 210 creates embedding vectors (i.e., process embeddings) that provide a compact representation of computer processes and the relative meanings of the computer processes. In one embodiment, malicious activity detection system 104 (see FIG. 1) uses an embedding layer for neural networks on the first and second process trees 106 and 112).

Griffin in view of Edwards does not teach: submitting metadata associated with the first process to a large language model (LLM). However, Bulut teaches: submitting metadata associated with the first process to a large language model (LLM) (Bulut: [0019]: In some embodiments, a security specific dataset may be generated from a set of security documents, such as security logs, alerts, and threat intelligence documents. A security log may include records of security events, such as login/logout activity (process), including associated time stamps (metadata), locations (metadata), usernames, IP addresses (metadata), and computer names for each security event. [0024] In one embodiment, a security specific LLM may be deployed to generate search results for a knowledge base of security logs. The security specific LLM may be used to create embedding representations for each of the documents in the knowledge base. Also, [0075]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to employ the teachings of Bulut in the invention of Griffin in view of Edwards to include the above limitations. The motivation to do so would be to improve the performance and energy efficiency of machine learning systems that generate security related information and detect security related anomalies and events (Bulut: [0017]).

As per claims 5 and 11, Griffin in view of Edwards and Bulut teaches:

The method of claim 4, wherein the metadata comprises at least one of an operating system process identifier (ID) of the first process, a unique generated process ID (LIPID) of first process, an operating system process ID of an ancestor in the process tree of first process, a LIPID of the ancestor in the process tree of the first process, a filename of an executable image of the first process, a command line used to create the first process, a filename of an executable image of the ancestor in the process tree of the first process, a command line of the ancestor in the process tree of the first process, or an identification of an action that caused a generation of the metadata for the first process (Griffin: [0012] In one embodiment, an augmented threat response system generates a process tree, creates an embedding vector for each detailed process taxonomy of the process tree (i.e., vectorizes the process taxonomies), associates the taxonomies with running processes, and analyzes each process sub-tree and associated sub-trees to proactively determine whether the sub-trees represent a contextual sub-task that has the capability for malicious behavior (i.e., to recognize a threat vector). It was well known to one of ordinary skill in the art that a process taxonomy of a process tree includes the filenames of executable images (metadata) of the executing processes. [0022]: In step 210, malicious activity detection system 104 (see FIG. 1) vectorizes second process trees 112 (see FIG. 1), which specify computer processes that are currently being executed in computer 102. [0029] The vectorizing of process trees in step 204 and step 210 creates embedding vectors (i.e., process embeddings) that provide a compact representation of computer processes and the relative meanings of the computer processes).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MADHURI R HERZOG, whose telephone number is (571) 270-3359. The examiner can normally be reached 8:30 AM-4:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Taghi Arani, can be reached at (571) 272-3787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

MADHURI R. HERZOG
Primary Examiner
Art Unit 2438

/MADHURI R HERZOG/
Primary Examiner, Art Unit 2438

Prosecution Timeline

Jun 30, 2023: Application Filed
Mar 27, 2025: Non-Final Rejection (§103)
Apr 24, 2025: Applicant Interview (Telephonic)
Apr 24, 2025: Examiner Interview Summary
Jul 02, 2025: Response Filed
Aug 15, 2025: Final Rejection (§103)
Sep 23, 2025: Applicant Interview (Telephonic)
Sep 23, 2025: Examiner Interview Summary
Oct 17, 2025: Response after Non-Final Action
Nov 19, 2025: Request for Continued Examination
Nov 30, 2025: Response after Non-Final Action
Jan 09, 2026: Non-Final Rejection (§103)
Mar 11, 2026: Applicant Interview (Telephonic)
Mar 11, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603766: QKD SWITCHING SYSTEM AND PROTOCOLS (2y 5m to grant; granted Apr 14, 2026)
Patent 12592925: METHOD AND SYSTEM FOR AUTHENTICATING A USER ON AN IDENTITY-AS-A-SERVICE SERVER WITH A TRUSTED THIRD PARTY (2y 5m to grant; granted Mar 31, 2026)
Patent 12592820: SYSTEMS AND METHODS FOR DIGITAL RETIREMENT OF INFORMATION HANDLING SYSTEMS (2y 5m to grant; granted Mar 31, 2026)
Patent 12587383: METHOD AND SYSTEM FOR OUT-OF-BAND USER IDENTIFICATION IN THE METAVERSE VIA BIOGRAPHICAL (BIO) ID (2y 5m to grant; granted Mar 24, 2026)
Patent 12556550: THREAT DETECTION PLATFORMS FOR DETECTING, CHARACTERIZING, AND REMEDIATING EMAIL-BASED THREATS IN REAL TIME (2y 5m to grant; granted Feb 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 90% (+11.9%)
Median Time to Grant: 3y 1m
PTA Risk: High

Based on 662 resolved cases by this examiner. Grant probability derived from career allow rate.
