Prosecution Insights
Last updated: April 19, 2026
Application No. 18/558,361

EXTRACTING DEVICE, EXTRACTING METHOD, AND EXTRACTING PROGRAM

Final Rejection: §101, §103, §112

Filed: Oct 31, 2023
Examiner: POUDEL, SAMIKSHYA NMN
Art Unit: 2436
Tech Center: 2400 (Computer Networks)
Assignee: NTT, Inc.
OA Round: 2 (Final)

Grant Probability: 44% (Moderate)
Projected OA Rounds: 3-4
Projected Time to Grant: 2y 10m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 44% (grants 44% of resolved cases; 8 granted / 18 resolved; -13.6% vs. TC avg)
Interview Lift: +80.0% (strong), comparing resolved cases with and without an interview
Typical Timeline: 2y 10m avg. prosecution; 29 applications currently pending
Career History: 47 total applications across all art units

Statute-Specific Performance

§101: 16.2% (-23.8% vs. TC avg)
§103: 54.8% (+14.8% vs. TC avg)
§102: 17.5% (-22.5% vs. TC avg)
§112: 11.5% (-28.5% vs. TC avg)

TC averages are estimates; based on career data from 18 resolved cases.

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 10/31/2023 was filed. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claims 2, 8, and 12 are objected to because of the following informalities: in line 3, “a longest common subsequence” should read “the longest common subsequence”; in line 5, “a sequence of a plurality of signatures” should read “the sequence of a plurality of signatures”. Appropriate correction is required.

Claims 3, 9, and 13 are objected to because of the following informalities: in line 6, “a plurality of log groups” should read “the plurality of log groups”; in line 8, “a variance value” should read “the variance value”. Appropriate correction is required.

Claims 4, 10, and 14 are objected to because of the following informalities: in line 2, “a plurality of log groups” should read “the plurality of log groups”. Appropriate correction is required.

Claim Interpretation

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that use the phrase “step of” and are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Such claim limitations are: “a step of collecting”, “a step of referring”, “a step of extracting”, “a step of calculating”, and “a step of outputting” in claim 6. Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid such interpretation.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2, 3, and 4 recite the limitation "the log group" in lines 3, 9, and 5, respectively. It is unclear which log group of claim 1 (i.e., the first, the second, each log group, or the third) this limitation refers back to, and it therefore lacks proper antecedent basis. The examiner suggests clarifying the scope of the claim language. Dependent claims are also rejected for inheriting the deficiencies set forth above for the independent claims. Appropriate correction is required. Similarly, claims 8-10 and 12-14 recite the same claim limitations as claims 2-4 and are therefore rejected for the same reasons. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.

Independent claims 1, 6, and 7:

Step 1: Claim 1 is drawn to “a device”, claim 6 is drawn to a “method”, and claim 7 is drawn to “a program”; therefore each of these claim groups falls under one of the four categories of statutory subject matter (processes/methods, machines/products/apparatus, manufactures, and compositions of matter).

Step 2A, Prong 1: Claims 1, 6, and 7 are directed to a judicially recognized exception, an abstract idea, without significantly more.
Each of claims 1, 6, and 7 recites the limitation “collecting a log of a computer to be investigated”, which is mere data gathering, together with the limitations “extracting a first log group which matches a signature indicated by a rule from the collected logs, wherein the rule includes an ordered list of a plurality of signatures that indicate an attack on the computer, and the ordered list includes the plurality of signatures in order of characteristic of the attack”, “extracting a second log group in which a longest common subsequence between a chronological sequence of signatures which match logs in the extracted first log group and a sequence of a plurality of signatures indicated in the rule is the longest”, “calculating, for each log group in which the longest common subsequence is the longest, a variance value of a time difference between each log which is adjacent in time series in said each log group”, and “outputting the longest common subsequence in a third log group with a minimum calculated variance value as an attack trace candidate”, which, under their broadest reasonable interpretation, enumerate mental evaluations and abstract ideas. Other than reciting a generic “processor” (claim 1), nothing in the claims precludes the steps from practically being performed in the human mind. For example, but for the “processor” language, the claims encompass a user visually and manually grouping data records, comparing sequences, calculating statistical metrics (variance), and selecting the longest common subsequence with minimum variance. Furthermore, the longest common subsequence (LCS) is a well-known mathematical algorithm, and variance is a basic statistical calculation. The mere nominal recitation of a generic computer component (a computer processor) to automate these steps amounts to nothing more than abstract mental and mathematical concepts (see MPEP 2106.04(a)(2)(I)(III)).
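Because the rejection turns on the LCS and variance steps being routine computations, it may help to see how compact they are in code. The following is an illustrative sketch only; the rule, the signature names, and the (timestamp, signature) log layout are hypothetical and are not taken from the application:

```python
def lcs_length(a, b):
    """Standard dynamic-programming longest-common-subsequence length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def gap_variance(logs):
    """Variance of the time differences between chronologically adjacent logs."""
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(logs, logs[1:])]
    mean = sum(gaps) / len(gaps)
    return sum((g - mean) ** 2 for g in gaps) / len(gaps)

# Hypothetical rule: signatures ordered by characteristic of the attack.
rule = ["scan", "exploit", "beacon"]

# Hypothetical log groups: chronologically ordered (timestamp, signature) pairs.
groups = [
    [(0, "scan"), (10, "exploit"), (20, "beacon")],  # regular 10 s gaps
    [(0, "scan"), (3, "exploit"), (40, "beacon")],   # irregular gaps
    [(0, "scan"), (5, "beacon")],                    # shorter match
]

# Keep the groups whose LCS against the rule is longest ...
best = max(lcs_length([s for _, s in g], rule) for g in groups)
candidates = [g for g in groups if lcs_length([s for _, s in g], rule) == best]
# ... and output the candidate with the minimum variance of adjacent time gaps.
trace = min(candidates, key=gap_variance)
print(trace)  # the group with the regular 10-second gaps
```

Nothing here is specific to computers: the DP table and the variance formula are the same calculations a person could, in principle, carry out by hand, which is the thrust of the mental-process analysis above.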
Step 2A, Prong 2: Claim 6 does not recite any additional elements or steps that would integrate the abstract idea into a practical application. Claims 1 and 7, however, recite the additional elements of “a processor” to execute the computer program instructions and a “computer-readable non-transitory recording medium” to store computer program instructions. The computer processor and the computer-readable storage media are recited at a high level of generality (i.e., as generic computer components performing generic computer functions to process and to store data, respectively). These generic computer functions are no more than mere instructions to apply the exception using generic computer components. The combination of these additional elements does not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (MPEP 2106.05(f)).

Step 2B: The additional elements of “a processor” to execute the computer program instructions and a “computer-readable non-transitory recording medium” to store computer program instructions are no more than generic, off-the-shelf computer components, and the Symantec, TLI, OIP Techs, and Versata court decisions cited in MPEP 2106.05(d)(II) indicate that mere collection/receipt of data over a network and/or storing and retrieving information in memory are well-understood, routine, and conventional functions when claimed in a merely generic manner (see MPEP 2106.05(d)(II)(IV)). As such, claims 1, 6, and 7 are not patent eligible.

Dependent claims 2-5, 8-11, and 12-15:

Step 1: Claims 2-5 are drawn to “a device”, claims 8-11 are drawn to a “method”, and claims 12-15 are drawn to “a program”; therefore each of these claims falls under one of the four categories of statutory subject matter (processes/methods, machines/products/apparatus, manufactures, and compositions of matter).
Steps 2A-2B: Dependent claims 2-5, 8-11, and 12-15 are also ineligible for the same reasons given with respect to claims 1, 6, and 7. Claims 2-5, 8-11, and 12-15 further recite mental evaluations and abstract ideas describing the LCS, the variance, and outputting the longest LCS with minimum variance, or just the LCS (MPEP 2106.04(a)(2)(I)). They fail to recite any additional elements or steps that might integrate the abstract idea into a practical application. As such, claims 2-5, 8-11, and 12-15 are not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-15 are rejected under 35 U.S.C. 103 as being unpatentable over Aoki (US 20180046800 A1) in view of Kohout (US 20160080404 A1) and in further view of Dennison (US 9043894 B1).
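Before the claim-by-claim mapping, the inter-arrival-variance feature that the Kohout and Dennison combination relies on can be illustrated in a few lines: regular, beacon-like traffic produces a low variance of the gaps between consecutive flows, while bursty traffic does not. This is a sketch of the general technique only; the timestamps are invented for illustration and are not drawn from either reference:

```python
def inter_arrival_variance(timestamps):
    """Variance of the intervals between consecutive outgoing flows.
    Low variance suggests regular, beacon-like traffic; high variance
    suggests irregular (e.g., human-driven) activity."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = sum(gaps) / len(gaps)
    return sum((g - mean) ** 2 for g in gaps) / len(gaps)

beaconing = [0, 60, 120, 180, 240]  # hypothetical check-ins every 60 s
browsing = [0, 4, 95, 97, 230]      # hypothetical bursty human activity

print(inter_arrival_variance(beaconing))  # 0.0 (perfectly periodic)
print(inter_arrival_variance(browsing))   # large (irregular)
```

Selecting the candidate with the minimum of this variance is, in effect, selecting the most periodic candidate, which is the rationale the rejection attributes to Dennison below.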
Regarding claim 1: An extraction device comprising a processor configured to execute operations comprising:

collecting a log of a computer to be investigated (Aoki: The event means an event that represents each of characteristics when certain characteristics are observed in communications. For example, the event may be an event in which a communication with a specific communication destination is included according to an analysis of a device log (i.e., a log of a computer to be investigated) recorded by a firewall, a web proxy, or the like, [0036]; The IF unit 110 is, for example, a network interface card (NIC) or the like, and transmits and receives various kinds of data to and from an external apparatus. For example, the IF unit 110 receives, as the monitoring target NW analysis result, a result of analysis of a device log or the like in a firewall, a web proxy, or the like installed in the monitoring target NW, [0058]; The event sequence storage unit 120 and the detection event sequence storage unit 121 appropriately store information handled by the sequence generation unit 130, the detection sequence extraction unit 140, and the detection unit 150, [0059]). [The examiner interprets the IF unit collecting the monitored target NW analysis result, which comes from device logs (e.g., firewall, proxy) in the monitored network, as collecting a log of a computer to be investigated.]

extracting a first log group which matches a signature indicated by a rule from the collected logs, wherein the rule includes an ordered list of a plurality of signatures that indicate an attack on the computer, and the ordered list includes the plurality of signatures in order of characteristic of the attack (Aoki: the event sequence is a sequence in which the monitoring target NW analysis results are arranged in chronological order for each of hosts in the monitoring target NW, or a sequence in which the malware communication analysis results are arranged in chronological order for each of malware samples, [0036]; in FIG. 3, an event detected for each of malware identifiers is stored in association with an event type and an event occurrence time, similarly to the monitoring target NW analysis result, [0037]; The sequence generation unit 130 of the detection device 100 includes an exclusion event extraction unit 131 and an event sequence generation unit 132, uses the monitoring target NW analysis results and the malware communication analysis result as inputs, and generates an event sequence for each of the inputs. The sequence generation unit 130 generates the event sequence from events that match a rule (i.e., a signature) characterizing a communication among communications in the monitoring target network and communications caused by malware and that are acquired for each of the identifiers that distinguish among terminals in the monitoring target network or pieces of malware, by taking into account the order of occurrence of the events, [0038]). [The examiner interprets the system taking the monitored target NW analysis result (i.e., the collected logs) as an input, with the sequence generation unit generating sequences of events in chronological order that match a rule characterizing the malware (i.e., a signature), as this limitation.]

extracting a second log group in which a longest common subsequence between a chronological sequence of signatures which match logs in the extracted first log group and a sequence of a plurality of signatures indicated in the rule is the longest (Aoki: the event sequence is a sequence in which the monitoring target NW analysis results are arranged in chronological order for each of hosts in the monitoring target NW, or a sequence in which the malware communication analysis results are arranged in chronological order for each of malware samples, [0036]; the representative event sequence extraction unit 142 generates, from the common event sequences, a digraph in which nodes represent events, edges represent the order of occurrence of the events, and weights of the edges represent the numbers of occurrences of before-after relationships of the events, calculates a sum of the weights for each of simple paths of the digraph, and uses a simple path indicating the maximum weight as the representative event sequence, [0066]; the common event sequence extraction unit 141 (i.e., the extraction unit) sets the event sequences with the similarities at a predetermined level or higher in the same cluster in the hierarchical clustering (Step S303), [0128]; the common event sequence extraction unit 141 extracts longest common subsequences from among common subsequences between the event sequences in the same cluster (Step S306), and then extracts longest common subsequences longer than a predetermined length as the common event sequences (Step S307), [0131]). [The examiner interprets extracting longest common subsequences between the subsequences of event sequences (i.e., a log group with a malware signature) that are arranged in chronological order as this limitation.]

calculating, for each log group in which the longest common subsequence is the longest (Aoki: the common event sequence extraction unit 141 sets the event sequences with the similarities at a predetermined level or higher in the same cluster in the hierarchical clustering (Step S303), [0128]; the common event sequence extraction unit 141 extracts longest common subsequences from among common subsequences between the event sequences in the same cluster (Step S306), and then extracts longest common subsequences longer than a predetermined length as the common event sequences (Step S307), [0131]). [The examiner interprets the common event sequence extraction unit performing, for each cluster (i.e., each log group), the operation of identifying the longest common subsequence within that group as this limitation.]

and outputting the longest common subsequence in a third log group (Aoki: the representative event sequence extraction unit determines whether determination on all of the simple paths has been performed (Step S405); when determining that the determination on all of the simple paths has been performed (YES at Step S405), the representative event sequence extraction unit 142 outputs, as the representative event sequence of the cluster, a representative event sequence with the maximum weight (i.e., the longest common subsequence in the log group) among the representative event sequences (Step S406), [0137]). [The examiner interprets outputting, as the representative event sequence of the cluster, the representative event sequence with the maximum weight among the representative event sequences as this limitation.]

Aoki does not appear to explicitly teach: calculating a variance value of a time difference between each log which is adjacent in time series in said each log group; or the longest common subsequence in a third log group with a minimum calculated variance value as an attack trace candidate.
However, Kohout teaches calculating a variance value of a time difference between each log which is adjacent in time series in said each log group (Kohout: After persistent connections are identified using the loop of FIG. 2A, a second loop is run for each of the identified persistent connections (step 245), and a feature vector is created for each of the identified persistent connections (i.e., log group) on the basis of the statistics collected for that connection (step 250), [0042]; Flows inter-arrival times variance measures the variance of time intervals between consecutive outgoing flows (i.e., each log) for the given persistent connection, [0056]). [The examiner interprets performing processing per persistent connection (i.e., log group) and calculating the statistical variance between adjacent timestamps as this limitation.]

Therefore, it would have been obvious to a PHOSITA before the effective filing date to modify the teaching of Aoki to include the concept of calculating a variance value of a time difference between each log which is adjacent in time series in said each log group, as taught by Kohout, for the purpose of measuring the variance of time intervals between consecutive outgoing flows for the given persistent connection [Kohout: 0056].

Aoki and Kohout do not appear to explicitly teach the longest common subsequence in a third log group with a minimum calculated variance value as an attack trace candidate. However, Dennison teaches the longest common subsequence in the log group with a minimum calculated variance value as an attack trace candidate (Dennison: In the example of block 317A, the system may calculate a variance of the particular connection pair series (i.e., the longest common subsequence). The variance may, for example, provide an indication of the regularity, or periodicity, of the connection pairs over time.
Higher variances may indicate that the connection pair is less likely to be related to malware beaconing activity, as malware beaconing activity may generally occur at very regular intervals; thus, lower variances may indicate that the connection pair is more likely to be related to malware beaconing activity (Col 12, lines 5-13). At block 318, the system may determine which connection pairs have beaconing scores that satisfy a particular threshold. For example, the system may determine that any beaconing pairs having beaconing scores below a particular variance are likely to represent malware beaconing activity. Accordingly, the beaconing malware pre-filter system may designate and use those connection pairs as seeds (Col 12, lines 28-34)). [The examiner interprets computing the variance of each candidate time series, with the seed generation logic picking the one with the lowest variance and designating it for downstream processing, as outputting the longest common subsequence in the log group with a minimum calculated variance value as an attack trace candidate.]

Therefore, it would have been obvious to a PHOSITA before the effective filing date to modify the teaching of Aoki and Kohout to include the concept of the longest common subsequence in the log group with a minimum calculated variance value as an attack trace candidate, as taught by Dennison, for the purpose of providing an indication of the regularity, or periodicity, of the connection pairs over time: higher variances may indicate that the connection pair is less likely to be related to malware beaconing activity, as malware beaconing activity may generally occur at very regular intervals.
Thus, lower variances may indicate that the connection pair is more likely to be related to malware beaconing activity [Dennison: Col 12, lines 5-13].

Regarding claim 2: Aoki, Kohout, and Dennison teach the extraction device according to claim 1, wherein the longest common subsequence represents a length of a longest common subsequence between a sequence of signatures when the log group matching any of the signatures indicated in the rule is re-arranged in a chronological sequence and a sequence of a plurality of signatures indicated in the rule (Aoki: the event sequence is a sequence in which the monitoring target NW analysis results are arranged in chronological order for each of hosts in the monitoring target NW, or a sequence in which the malware communication analysis results are arranged in chronological order for each of malware samples, [0036]; the event matching unit 143 re-calculates the match rate based on the following equation: first match rate = (the LCS length between the candidate for the detection event sequence and the sequence in the monitoring target NW) / (the sequence length of the candidate for the detection event sequence) (i.e., the longest length), [0099]; the common event sequence extraction unit 141 extracts longest common subsequences from among common subsequences between the event sequences in the same cluster (Step S306).
The common event sequence extraction unit 141 then extracts longest common subsequences longer than a predetermined length as the common event sequences (Step S307), [0131]). [The examiner interprets extracting longest common subsequences (i.e., the longest length) between the subsequences of event sequences (i.e., a log group with a malware signature) that are arranged in chronological order as limitation 2.]

Regarding claim 3: Aoki, Kohout, and Dennison teach the extraction device according to claim 1, the processor further configured to execute operations comprising: determining whether there is a plurality of log groups in which the longest common subsequence is the longest, wherein, when the determination indicates that there is a plurality of log groups in which the longest common subsequence is the longest, the calculating further comprises calculating, for each log group in which the longest common subsequence is the longest (Aoki: the common event sequence extraction unit 141 sets the event sequences with the similarities at a predetermined level or higher in the same cluster in the hierarchical clustering (Step S303), [0128]; the common event sequence extraction unit 141 determines whether a process of extracting the common event sequences from all of the clusters (i.e., the plurality of log groups) is performed (Step S304).
When determining that the process of extracting the common event sequences from all of the clusters is performed (YES at Step S304), the common event sequence extraction unit 141 ends the common event sequence extraction process, [0129]; in contrast, when determining that the process of extracting the common event sequences from all of the clusters is not performed (NO at Step S304), the common event sequence extraction unit 141 specifies a cluster from which the common event sequences are to be extracted (Step S305), [0130]; the common event sequence extraction unit 141 extracts longest common subsequences from among common subsequences between the event sequences in the same cluster (Step S306), and then extracts longest common subsequences longer than a predetermined length as the common event sequences (Step S307), [0131]). [The examiner interprets checking and looping over clusters (i.e., a plurality of log groups) and identifying the longest common subsequence within each cluster until the system finds the common event sequences from all of the clusters as the determining limitation.]

Aoki does not appear to explicitly teach: calculating a variance value of a time difference between each log which is adjacent in time series in the log group. However, Kohout teaches calculating a variance value of a time difference between each log which is adjacent in time series in the log group (Kohout: After persistent connections are identified using the loop of FIG. 2A, a second loop is run for each of the identified persistent connections (step 245).
A feature vector is created for each of the identified persistent connections (i.e., log group) on the basis of the statistics collected for that connection (step 250), [0042]; Flows inter-arrival times variance measures the variance of time intervals between consecutive outgoing flows (i.e., each log) for the given persistent connection, [0056]). [The examiner interprets performing processing per persistent connection (i.e., log group) and calculating the statistical variance between adjacent timestamps as this limitation.]

Therefore, it would have been obvious to a PHOSITA before the effective filing date to modify the teaching of Aoki to include the concept of calculating a variance value of a time difference between each log which is adjacent in time series in the log group, as taught by Kohout, for the purpose of measuring the variance of time intervals between consecutive outgoing flows for the given persistent connection [Kohout: 0056].

Regarding claim 4: Aoki, Kohout, and Dennison teach the extraction device according to claim 3, wherein, when the determination indicates that there is not a plurality of log groups in which the longest common subsequence is the longest, the outputting further comprises outputting the longest common subsequence in the log group in which the longest common subsequence is the longest as an attack trace candidate (Aoki: the common event sequence extraction unit 141 determines whether a process of extracting the common event sequences from all of the clusters (i.e., the plurality of log groups) is performed (Step S304).
When determining that the process of extracting the common event sequences from all of the clusters is performed (YES at Step S304), the common event sequence extraction unit 141 ends the common event sequence extraction process, [0129]; in contrast, when determining that the process of extracting the common event sequences from all of the clusters is not performed (NO at Step S304), the common event sequence extraction unit 141 specifies a cluster from which the common event sequences are to be extracted (Step S305), [0130]; the representative event sequence extraction unit 142 determines whether determination on all of the simple paths has been performed (Step S405), and when determining that the determination on all of the simple paths has been performed (YES at Step S405), outputs, as the representative event sequence of the cluster, a representative event sequence with the maximum weight (i.e., the longest common subsequence in the log group) among the representative event sequences (Step S406), [0137]; the common event sequence extraction unit 141 extracts longest common subsequences from among common subsequences between the event sequences in the same cluster (Step S306).
The common event sequence extraction unit 141 then extracts longest common subsequences longer than a predetermined length as the common event sequences (Step S307), [0131]) [Examiner interprets that checking and looping over clusters (i.e., a plurality of log groups), performing the operations of identifying the longest common subsequence within each cluster until the system finds the common event sequences from all of the clusters (i.e., a plurality of log groups), and outputting, as the representative event sequence of the cluster, a representative event sequence with the maximum weight (i.e., the longest common subsequence in the log group) among the representative event sequences from other clusters as limitation 4];

Regarding claim 5, Aoki, Kohout, and Dennison teach the extraction device according to claim 1, wherein the outputting further comprises outputting a log group in which the calculated variance value is the smallest (Dennison, In the example of block 317A, the system may calculate a variance of the particular connection pair series. The variance may, for example, provide an indication of the regularity, or periodicity, of the connection pairs over time. Higher variances may indicate that the connection pair is less likely to be related to malware beaconing activity, as malware beaconing activity may generally occur at very regular intervals. Thus, lower variances may indicate that the connection pair is more likely to be related to malware beaconing activity (Col 12, lines 5-13). At block 318, the system may determine which connection pairs have beaconing scores that satisfy a particular threshold. For example, the system may determine that any beaconing pairs having beaconing scores below a particular variance are likely to represent malware beaconing activity.
Accordingly, the beaconing malware pre-filter system may designate and use those connection pairs as seeds (Col 12, lines 28-34)) [Examiner interprets that computing, for each candidate time series, its variance, and seed generation logic picking the one with the lowest variance and designating it for downstream processing as outputting a log group in which the calculated variance value is the smallest]; Therefore, it would have been obvious to PHOSITA before the effective filing date to modify the teaching of Aoki and Kohout to include a concept of outputting a log group in which the calculated variance value is the smallest as taught by Dennison for the purpose of providing an indication of the regularity, or periodicity, of the connection pairs over time, in that higher variances may indicate that the connection pair is less likely to be related to malware beaconing activity, as malware beaconing activity may generally occur at very regular intervals, and thus lower variances may indicate that the connection pair is more likely to be related to malware beaconing activity [Dennison: Col 12, lines 5-13].

Claims 6 and 7 recite commensurate subject matter as claim 1. Therefore, they are rejected for the same reasons, except for the additional elements, which Aoki further teaches: an extraction method and program (Aoki, a method, [0001], [0015]); a computer-readable non-transitory recording medium storing computer-executable program instructions that when executed by a processor cause a computer to execute operations comprising (Aoki, a computer readable recording medium, and cause a computer to read and execute the program recorded in the recording medium to implement the same processes, [0175]).

Regarding claims 8-11 and 12-15, Claims 8-11 and 12-15 recite commensurate subject matter as claims 2-5. Therefore, they are rejected for the same reasons.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20200342095 A1: “relates to a technique for generating an attack detection rule”
US 10230744 B1: “relates generally to computer security techniques, and more particularly to techniques for identifying suspicious communications associated with computer security attacks, such as malware attacks”
US 20120124667 A1: “relates to a machine-implemented method for determining whether a to-be-analyzed software is a known malware, more particularly to a machine-implemented method for determining whether a to-be-analyzed software is a known malware or a variant of the known malware”

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMIKSHYA POUDEL, whose telephone number is (703) 756-1540. The examiner can normally be reached 7:30 AM - 5:00 PM, Mon-Fri.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SHEWAYE GELAGAY, can be reached at (571) 272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/S.N.P./
Examiner, Art Unit 2436

/SHEWAYE GELAGAY/
Supervisory Patent Examiner, Art Unit 2436
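The claim mappings above turn on three computational ideas: the variance of time gaps between logs that are adjacent in time series within a log group (cited to Kohout), selecting the log group whose variance is smallest (cited to Dennison), and extracting the longest common subsequence of signatures across log groups (cited to Aoki). The following is a minimal Python sketch of those ideas only; the function names and data are purely illustrative and are not taken from the claims or the cited references.

```python
# Illustrative sketch, not the claimed implementation or any cited reference's code.
from statistics import pvariance

def interarrival_variance(timestamps):
    """Variance of the time differences between logs adjacent in time series."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pvariance(gaps) if gaps else 0.0

def longest_common_subsequence(a, b):
    """Classic dynamic-programming LCS over two signature sequences."""
    m, n = len(a), len(b)
    dp = [[[] for _ in range(n + 1)] for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + [a[i]]
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j], key=len)
    return dp[m][n]

# Hypothetical log groups: connection -> list of (timestamp, signature) logs.
log_groups = {
    "conn_a": [(0, "sig1"), (10, "sig2"), (20, "sig3"), (30, "sig4")],
    "conn_b": [(0, "sig1"), (3, "sig5"), (25, "sig2"), (26, "sig3")],
}

# Claim 3/5 idea: per-group inter-arrival variance; output the group with the
# smallest variance (regular, beaconing-like traffic has near-constant gaps).
variances = {name: interarrival_variance([t for t, _ in logs])
             for name, logs in log_groups.items()}
most_regular = min(variances, key=variances.get)

# Claim 2/4 idea: longest common subsequence of the groups' signature sequences.
seqs = [[s for _, s in logs] for logs in log_groups.values()]
lcs = longest_common_subsequence(seqs[0], seqs[1])
```

In this toy run, `conn_a` has perfectly regular ten-unit gaps, so its variance is zero and it is the group selected under the claim 5 mapping, while the shared `sig1 → sig2 → sig3` ordering is what the LCS step surfaces as a common subsequence.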

Prosecution Timeline

Oct 31, 2023
Application Filed
Aug 06, 2025
Non-Final Rejection — §101, §103, §112
Oct 31, 2025
Interview Requested
Nov 12, 2025
Applicant Interview (Telephonic)
Nov 12, 2025
Examiner Interview Summary
Dec 11, 2025
Response Filed
Mar 12, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591663
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Mar 31, 2026
Patent 12470379
LINK ENCRYPTION AND KEY DIVERSIFICATION ON A HARDWARE SECURITY MODULE
2y 5m to grant Granted Nov 11, 2025
Patent 12452254
SECURE SIGNED FILE UPLOAD
2y 5m to grant Granted Oct 21, 2025
Patent 12341788
NETWORK SECURITY SYSTEMS FOR IDENTIFYING ATTEMPTS TO SUBVERT SECURITY WALLS
2y 5m to grant Granted Jun 24, 2025
Patent 12292969
Provenance Inference for Advanced CMS-Targeting Attacks
2y 5m to grant Granted May 06, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
44%
Grant Probability
99%
With Interview (+80.0%)
2y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
