Prosecution Insights
Last updated: April 19, 2026
Application No. 18/799,680

METHODS AND APPARATUS TO IMPLEMENT A DETERMINISTIC INDICATOR AND CONFIDENCE SCORING MODEL

Final Rejection — §103, §DP
Filed
Aug 09, 2024
Examiner
CHAI, LONGBIT
Art Unit
2431
Tech Center
2400 — Computer Networks
Assignee
Musarubra US LLC
OA Round
2 (Final)
Grant Probability: 88% — Favorable
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% — above average (647 granted / 737 resolved; +29.8% vs TC avg)
Interview Lift: +32.3% — strong lift among resolved cases with interview
Avg Prosecution: 2y 9m typical timeline (23 currently pending)
Career History: 760 total applications across all art units
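The headline examiner statistics above follow directly from the raw counts. A minimal sketch (not the vendor's actual formula) of that arithmetic:

```python
# Reproduce the examiner statistics shown above from the raw counts.
granted = 647          # from "647 granted / 737 resolved"
resolved = 737

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")    # rounds to the displayed 88%

# "+29.8% vs TC avg" is in percentage points, so the implied
# Tech Center average allow rate can be recovered by subtraction.
tc_delta = 29.8
implied_tc_avg = allow_rate * 100 - tc_delta
print(f"Implied Tech Center average: {implied_tc_avg:.1f}%")
```

The 88% figure is simple rounding of 647/737; the implied Tech Center average is an inference from the stated delta, not a number published on this page.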

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 36.7% (-3.3% vs TC avg)
§102: 30.4% (-9.6% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 737 resolved cases
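Working backward from the listed numbers, every statute's implied Tech Center average comes out to the same 40.0%, consistent with the caption calling the black line an estimate. A short check:

```python
# Recover the implied Tech Center average from each statute's listed
# rate and its "vs TC avg" delta (deltas are in percentage points).
examiner_rate = {"101": 14.4, "103": 36.7, "102": 30.4, "112": 8.0}
tc_delta      = {"101": -25.6, "103": -3.3, "102": -9.6, "112": -32.0}

for statute in examiner_rate:
    implied = examiner_rate[statute] - tc_delta[statute]
    print(f"§{statute}: implied TC avg = {implied:.1f}%")   # 40.0% for all four
```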

Office Action

§103, §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

DETAILED ACTION

Claims 1-20 are currently pending.

Response to Arguments

As per claims 1, 8 & 15, the rejections under 35 U.S.C. 101 set forth on 10/18/2025 have been withdrawn in view of the remarks and claim amendments filed on 3/4/2026.

As per claims 1, 8 & 15, Applicant's arguments with respect to the instant claims have been fully considered but are moot in view of the new ground(s) of rejection necessitated by Applicant's amendment; see the following section for the rationale supporting the corresponding prior-art rejections set forth below.

As per claims 1, 8 & 15, Applicant asserts that the prior art does not teach "assigning the first DISC score to the first indicator based on the similarity between the first indicator and the second indicator," because Masaki's teaching at Paragraph [0048] regarding a degree of similarity between sentences included in attack information pieces is not the same as assigning the first DISC score to the first indicator based on that similarity (Remarks: Page 10, Paras. 5-7). Examiner respectfully disagrees, for the following reasons.

(a) Under MPEP § 2145, one cannot show non-obviousness by attacking references individually where the rejections are based on combinations of references. See Keller, 642 F.2d at 425. Further, the test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference (i.e., assigning the first DISC score to the first indicator based on the similarity), but rather what the combined teachings of those references would have suggested to those of ordinary skill in the art (i.e., assigning the first DISC score to the first indicator based on the similarity between the first indicator and the second indicator). See Keller, 642 F.2d at 425.

(b) In light of that, the primary reference Sanchez teaches: (b-1) sending a query with an IOC indicator (i.e., a threat indicator) to a security knowledge base (Sanchez: Col. 18 Lines 54-65 and Col. 2 Lines 49-51); (b-2) determining whether the IOC indicator has been seen in the security knowledge base and, if so, to what extent — i.e., whether it has been assigned a threat level including a severity, a deterministic likelihood, and a confidence score of the threat level (Sanchez: Col. 20 Lines 37-39); and (b-3) in order to predict whether the target malicious data of the first indicator is associated with an active spear-phishing (malware) campaign (i.e., an identification of a campaign) (Sanchez: Col. 19 Lines 32-34), making a determination first based on (A) an initial affectedness score regarding to what extent (i.e., a likelihood) a system is affected by such a malicious entity (Sanchez: Col. 19 Lines 12-18), as a current/instant deterministic indicator, and then adjusted by (B) relative weighting(s) assigned to the source knowledge bases, i.e., a deterministic indicator and confidence scoring (DISC) score reflecting the credibility (weighting) of the source of the indicator (Sanchez: Col. 16 Lines 61-65 & Col. 22 Lines 44-67). Accordingly, the secondary reference Masaki teaches determining a degree of similarity between a series of occurrences of malicious events illustrated by sentences, comparing instant malicious-attack information with reference attack data records — e.g., a frequency of occurrence or an order of occurrence as illustrated in a sentence of the attack information, statistical information thereof, or the like (Masaki: Para. [0048]). Thereby, (b-4) the scoring level associated with the target malicious data of the first indicator can be divided into three categories: (i) a severity of the threat (Sanchez: Col. 19 Lines 14-18), as a lethality component; (ii) a likelihood that the IOC is meant for the attack (Sanchez: Col. 19 Lines 12-18), as a determinism component; and (iii) a credibility level (weighting) of the source of an intelligence base (primary reference), incorporated with a degree of similarity (secondary reference), as one type of malicious analytical indicator designated as a deterministic indicator and confidence scoring (DISC) score — i.e., assigning a DISC score to a first indicator based on a degree of similarity in order to predict whether the target malicious data of the first indicator is associated with an active spear-phishing (malware) campaign (e.g., an identification of a campaign) (Sanchez: Col. 19 Lines 32-34 & Col. 16 Lines 61-65; Masaki: see above), meeting the recited claim language. This is also consistent with the disclosure of the instant specification (SPEC-PG.PUB: Para. [0024]). As such, Applicant's arguments are respectfully traversed.
Double Patenting

The nonstatutory (or provisional) double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. See In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground, provided the conflicting application or patent is shown to be commonly owned with this application. See 37 CFR 1.130(b). Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).

Claims 1-20 are rejected under the judicially created doctrine of double patenting as being unpatentable over claims 1-24 of U.S. Patent No. 12,093,382. Although the conflicting claims are not identical, they are not patentably distinct from each other: the differing feature — determining a similarity between the first indicator and the second indicator — is disclosed by Masaki et al. (WO 2021/144954) (see below). Because the listed claims of the U.S. Patent contain virtually every element of the listed claims of the instant application, they anticipate the claims of the instant application. The claims of the instant application therefore are not patentably distinct from the earlier patent claims and are unpatentable under obviousness-type double patenting.

A later patent claim is not patentably distinct from an earlier patent claim if the later claim is obvious over, or anticipated by, the earlier claim. In re Longi, 759 F.2d at 896, 225 USPQ at 651 (affirming a holding of obviousness-type double patenting because the claims at issue were obvious over claims in four prior art patents); In re Berg, 140 F.3d at 1437, 46 USPQ2d at 1233 (Fed. Cir. 1998) (affirming a holding of obviousness-type double patenting where a patent application claim to a genus is anticipated by a patent claim to a species within that genus); Eli Lilly & Co. v. Barr Laboratories, Inc. (Fed. Cir., on petition for rehearing en banc, decided May 30, 2001).

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sanchez et al. (U.S. Patent 11,194,905), in view of Masaki et al. (WO 2021/144954), and further in view of Cazazos (U.S. Patent Pub. 2017/0068816).

As per claims 1, 8 & 15, Sanchez teaches an apparatus comprising: at least one memory (Sanchez: Figure 2); machine-readable instructions (Sanchez: Figure 2); and at least one processor circuit to be programmed by the machine-readable instructions (Sanchez: Figure 2) to:

obtain a first indicator (Sanchez: FIG. 8 / E-816 & Col. 4 Lines 11-22: collecting a first indicator (IOC1) as a first cyber-threat indicator associated with target malicious data from a set of IOCs (Indicators of Compromise) such as IOC1, IOC2, ... & IOCn);

determine whether the first indicator has a deterministic indicator and confidence model (DISC) score assigned to it in an indicator database (Sanchez: Col. 18 Lines 54-65, Col. 20 Lines 37-39, Col. 2 Lines 49-51, Col. 19 Lines 8-21 & 26-39, Col. 16 Lines 61-65, and Col. 22 Lines 44-67): (a) sending a query with an IOC indicator (i.e., a threat indicator) to a security knowledge base; (b) determining whether the IOC indicator has been seen in the security knowledge base and, if so, to what extent — i.e., whether it has been assigned a threat level including a severity, a deterministic likelihood, and a confidence score of the threat level; (c) in order to predict whether the target malicious data of the first indicator (e.g., a malicious IP address or a URL link) is associated with an active spear-phishing (malware) campaign (i.e., an identification of a campaign) (Sanchez: Col. 19 Lines 32-34), making a determination first based on (c-1) an initial affectedness score regarding to what extent (i.e., a likelihood) a system is affected by such a malicious entity (Sanchez: Col. 19 Lines 12-18), as a current/instant deterministic indicator, and then adjusted by (c-2) relative weighting(s) assigned to the source knowledge bases, i.e., a confidence scoring (DISC) score reflecting the credibility (weighting) of the source of the indicator (Sanchez: Col. 16 Lines 61-65 & Col. 22 Lines 44-67). Accordingly, (d) the scoring level associated with the target malicious data of the first indicator can be divided into three categories: (d-1) a severity of the threat (Sanchez: Col. 19 Lines 14-18), as a lethality component; (d-2) a likelihood that the IOC is meant for the attack (Sanchez: Col. 19 Lines 12-18), as a determinism component; and (d-3) a credibility (weighting) of the source of an intelligence base, as a confidence scoring (DISC) score (Sanchez: Col. 16 Lines 61-65). This is consistent with the disclosure of the instant specification (SPEC-PG.PUB: Para. [0024]);

in response to a determination that the first indicator does not have the DISC score, compute a first DISC score for the first indicator, wherein to compute the first DISC score the at least one processor circuit is to: compare the first indicator to a second indicator in the indicator database to determine a similarity (see below: Masaki) between the first indicator and the second indicator (Sanchez: Col. 19 Lines 53-60 and Col. 20 Lines 39-43: determining the accuracy of a target IOC indicator associated with a particular attack campaign based on its relative impact level of affectedness as compared with other stored indicator(s), across many different customer environments, in the intelligent (indicator) database).

However, Sanchez does not expressly disclose (i) determining a similarity between the first indicator and the second indicator, wherein (ii) the similarity is determined using a neural network.
First, Masaki (& Sanchez) teaches determining a similarity between the first indicator and the second indicator (Sanchez: see above; Masaki: Para. [0048] & Para. [0051] Lines 1-6): (a) a similarity of the attack information pieces (i.e., one type of threat indicator) is determined based on a comparison between a degree of similarity and a predetermined threshold, and (b) when the degree of similarity exceeds the threshold, a plurality of attack information pieces (e.g., the first and the second) are determined to be similar to each other. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate determining a similarity between the first indicator and the second indicator, because Masaki teaches an alternative, effective, and secure way to determine a similarity of attack information pieces based on a threshold comparison (see above), within Sanchez's system of determining the accuracy of a target IOC indicator associated with a particular attack campaign based on its relative impact level of affectedness as compared with other stored indicators, across many different customer environments, in the intelligent (indicator) database (see above).

Further, Cazazos (& Sanchez as modified) teaches determining a similarity using a neural network (Sanchez | Masaki: see above; Cazazos: Para. [0004] Lines 12-20: determining a similarity between two object entities by applying a machine-learning algorithm such as a deep neural network (DNN) algorithm, and subsequently identifying whether a target executable is malware by applying the resulting malware detection model). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate determining a similarity using a neural network, because Cazazos teaches an alternative, effective, and secure way to determine similarities between two object entities by applying a DNN algorithm (see above), within Sanchez's system described above.

The combination thus teaches assigning the first DISC score to the first indicator based on the similarity between the first indicator and the second indicator (Sanchez | Cazazos: see above): (a) a similarity of the attack information pieces is determined based on a comparison between a degree of similarity and a predetermined threshold, and when exceeding the threshold, the attack information pieces (e.g., the first and the second) are determined to be similar (Masaki: Para. [0048] & Para. [0051] Lines 1-6); and (b) in order to predict whether the target malicious data of the first indicator (e.g., a malicious IP address or a URL link) is associated with an active spear-phishing (malware) campaign (i.e., an identification of a campaign) (Sanchez: Col. 19 Lines 32-34), the determination is first based on (b-1) an initial affectedness score regarding to what extent (i.e., a likelihood) a system is affected by such a malicious entity (Sanchez: Col. 19 Lines 12-18), as a current/instant deterministic indicator, and then adjusted by (b-2) relative weighting(s) assigned to the source knowledge bases, i.e., a confidence scoring (DISC) score reflecting the credibility (weighting) of the source of the indicator (Sanchez: Col. 16 Lines 61-65 & Col. 22 Lines 44-67). Accordingly, (c) the scoring level can be divided into three categories: (c-1) a severity of the threat (Sanchez: Col. 19 Lines 14-18), as a lethality component; (c-2) a likelihood that the IOC is meant for the attack (Sanchez: Col. 19 Lines 12-18), as a determinism component; and (c-3) a credibility (weighting) of the source of an intelligence base, as a confidence scoring (DISC) score (Sanchez: Col. 16 Lines 61-65). This is consistent with the disclosure of the instant specification (SPEC-PG.PUB: Para. [0024]).
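The claim-1 flow the examiner maps above can be read as a short lookup-or-derive routine. The sketch below is an illustrative reading of the claim language only, not Sanchez's, Masaki's, or Cazazos's actual system; the indicator names, feature vectors, and the cosine-similarity stand-in for the claimed neural network are all hypothetical.

```python
import math

# Hypothetical indicator database: indicator -> (feature vector, DISC score).
indicator_db = {
    "evil.example.com": ([0.9, 0.1, 0.4], 85.0),
}

def similarity(a, b):
    """Cosine similarity — a stand-in for the claimed neural-network model."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def disc_score(indicator, features):
    """Return the stored DISC score, or derive one from the most similar
    stored indicator (per the claim: assign based on the similarity)."""
    if indicator in indicator_db:        # claims 7 & 14: a failed lookup means no score
        return indicator_db[indicator][1]
    best = max(indicator_db.values(), key=lambda v: similarity(features, v[0]))
    score = similarity(features, best[0]) * best[1]   # similarity-weighted assignment
    indicator_db[indicator] = (features, score)
    return score

print(disc_score("evil2.example.com", [0.8, 0.2, 0.5]))
```

Scaling the neighbor's score by the similarity is just one way to make the assignment "based on the similarity"; the claim does not dictate a formula.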
As per claims 2-3, 9-10 & 16-17, Sanchez as modified teaches using a neural network to determine a code similarity between the first indicator and the second indicator (Sanchez: see above; Cazazos: Para. [0004] Lines 12-20: determining similarities between call graphs by applying a machine-learning algorithm such as a deep neural network (DNN) algorithm to the determined (code) similarity, and identifying whether a target executable is malware by applying the built malware detection model to the target executable code).

As per claims 4, 11 & 18, Sanchez as modified teaches selecting a current DISC score from the indicator database, the selection based on a difference between the current DISC score and the DISC score of the first indicator; determining whether the difference exceeds a threshold, the threshold based on at least one of a lethality component, a determinism component, or a confidence component; and, in response to a determination that the difference does not exceed the threshold, reporting the identification of the campaign to an entity (Masaki: Para. [0048] & Para. [0051] Lines 1-6: a similarity of the attack information pieces (i.e., one type of threat indicator) is determined based on a comparison between a degree of similarity and a predetermined threshold, and when exceeding the threshold, a plurality of attack information pieces (e.g., the first and the second) are determined to be similar to each other).

As per claims 5-6, 12-13 & 19-20, the instant claims are directed to claimed content having functionality corresponding to claim 1 and are rejected under a similar rationale.
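The claim-4/11/18 logic recited above reduces to a threshold test on a score difference. A hedged sketch of that recitation only — the component values, their combination into a threshold, and the function name are hypothetical illustrations, not drawn from either reference:

```python
def should_report(current_disc, first_disc, lethality, determinism, confidence):
    """Report the campaign only when the difference between the current DISC
    score and the first indicator's DISC score does not exceed a threshold
    based on the lethality, determinism, and confidence components."""
    # The claim requires the threshold be based on "at least one of" the
    # three components; here, simply their mean.
    threshold = (lethality + determinism + confidence) / 3
    return abs(current_disc - first_disc) <= threshold

# Scores close together relative to the threshold -> report the campaign.
print(should_report(82.0, 78.0, lethality=10, determinism=6, confidence=8))  # True
```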
As per claims 7 & 14, Sanchez as modified teaches wherein the determination that the first indicator does not have the DISC score includes at least one of a failed query to the indicator database or a value of a query to the indicator database being zero (Sanchez: Col. 18 Lines 54-65, Col. 20 Lines 37-39, Col. 2 Lines 49-51, Col. 19 Lines 8-21 & 26-39, Col. 16 Lines 61-65, and Col. 22 Lines 44-67: (a) sending a query with an IOC indicator (also a threat indicator) to a security knowledge base; and (b) determining whether the IOC indicator has been seen in the security knowledge base and, if so, to what extent — i.e., whether (and what) threat level has been assigned, including a severity, a deterministic likelihood, and a confidence score of the threat level (see above)).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension-of-time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LONGBIT CHAI, whose telephone number is (571) 272-3788. The examiner can normally be reached Monday-Friday, 9:00am-5:00pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lynn D. Feild, can be reached at 571-272-2092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Longbit Chai/
Longbit Chai, E.E. Ph.D.
Primary Examiner, Art Unit 2431
No. #2540 – 2026

Prosecution Timeline

Aug 09, 2024
Application Filed
Nov 30, 2025
Non-Final Rejection — §103, §DP
Mar 04, 2026
Response Filed
Mar 23, 2026
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574418
CONFIDENTIAL RESOURCE TRUSTED DOMAIN MIGRATION STRATEGY
2y 5m to grant — Granted Mar 10, 2026
Patent 12568099
FINDING ANOMALOUS PATTERNS
2y 5m to grant — Granted Mar 03, 2026
Patent 12568086
AUTOMATIC SECURITY COVERAGE EXPANSION OF CLOUD SECURITY POSTURE MANAGEMENT (CSPM) ASSETS
2y 5m to grant — Granted Mar 03, 2026
Patent 12563097
Systems and methods for tag-based policy enforcement for dynamic cloud workloads
2y 5m to grant — Granted Feb 24, 2026
Patent 12563102
DYNAMIC ATTRIBUTE BASED EDGE-DEPLOYED SECURITY
2y 5m to grant — Granted Feb 24, 2026
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 88%
With Interview: 99% (+32.3%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 737 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month