Prosecution Insights
Last updated: April 19, 2026
Application No. 18/826,095

USER MODEL-BASED DATA LOSS PREVENTION

Non-Final OA: §103, §DP
Filed
Sep 05, 2024
Examiner
CHAI, LONGBIT
Art Unit
2431
Tech Center
2400 — Computer Networks
Assignee
Armorblox LLC
OA Round
1 (Non-Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (647 granted / 737 resolved), +29.8% vs. TC average (above average)
Interview Lift: +32.3% among resolved cases with an interview
Typical Timeline: 2y 9m average prosecution; 23 applications currently pending
Career History: 760 total applications across all art units

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 36.7% (-3.3% vs TC avg)
§102: 30.4% (-9.6% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
TC averages are estimates. Based on career data from 737 resolved cases.
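These figures are internally consistent: if each "vs TC avg" delta is read as (examiner's rate minus the Tech Center average), every statute implies the same TC baseline. A quick sketch checking that, with all numbers copied from the figures above (variable names are ours, not the report's):

```python
# Career allow rate, as reported above (647 granted / 737 resolved).
granted, resolved = 647, 737
allow_rate = granted / resolved
print(f"allow rate: {allow_rate:.1%}")  # ≈ 87.8%, displayed as 88%

# Statute-specific rates and their "vs TC avg" deltas, from the table above.
by_statute = {
    "101": (0.144, -0.256),
    "103": (0.367, -0.033),
    "102": (0.304, -0.096),
    "112": (0.080, -0.320),
}

# If delta = rate - tc_average, the implied TC average per statute is:
implied_tc_avg = {s: rate - delta for s, (rate, delta) in by_statute.items()}
print(implied_tc_avg)  # every statute backs out the same ≈ 40.0% baseline
```

That all four statutes recover the same ~40% baseline suggests a single estimated Tech Center average was applied across the chart.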

Office Action

§103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Currently pending claims are 1–20.

Claim Objections

Claim 1 is objected to because of the following informalities, and the Examiner respectfully requests correction as follows: "one or more processors" should be replaced with "one or more hardware processors" (or "one or more processor devices"). The Examiner notes that a "computer processor" could be read as a software processor (e.g., a word processor). Appropriate correction is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. See In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent is shown to be commonly owned with this application. See 37 CFR 1.130(b). Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).

Claims 1–20 are rejected under the judicially created doctrine of obviousness-type double patenting as being unpatentable over claims 1–17 of U.S. Patent No. 11,349,873. Although the conflicting claims are not identical, they are not patentably distinct from each other because, for example, the difference between the two applications, such as a set of features from a set of received electronic messages specifying linguistic characteristics of a particular user, is well known in the field in view of Wasserblat et al. (U.S. Patent 8,145,562: Figure 2 / E-205, Col. 13 Lines 2–5 and 26–28, Col. 14 Lines 18–23 and 30–38, and Col. 5 Lines 21–31). Accordingly, the listed claims of the patent contain virtually every element of the listed claims of the instant application and thus anticipate them. The claims of the instant application therefore are not patentably distinct from the earlier patent claims and as such are unpatentable over obviousness-type double patenting. A later patent claim is not patentably distinct from an earlier patent claim if the later claim is obvious over, or anticipated by, the earlier claim. In re Longi, 759 F.2d at 896, 225 USPQ at 651 (affirming a holding of obviousness-type double patenting because the claims at issue were obvious over claims in four prior art patents); In re Berg, 140 F.3d 1437, 46 USPQ2d 1233 (Fed. Cir. 1998) (affirming a holding of obviousness-type double patenting where a patent application claim to a genus is anticipated by a patent claim to a species within that genus); Eli Lilly and Company v. Barr Laboratories, Inc., United States Court of Appeals for the Federal Circuit, on petition for rehearing en banc (decided May 30, 2001).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1–4, 6–11, 13–18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Himler et al. (U.S. Patent 9,774,626) in view of Wasserblat et al. (U.S. Patent 8,145,562).

As per claims 1, 8 and 15, Himler teaches a system comprising: one or more processors (Himler: FIG. 5); and one or more non-transitory computer-readable media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising (Himler: FIG. 5):

receiving an electronic message from a purported sending user that purportedly is an actual user (Himler: see above & Col. 6 Line 66 – Col. 7 Line 14, Col. 10 Lines 32–35 and Col. 12 Lines 59–65: analyzing whether a received message is from a trusted (true) sender (i.e., an actual user) or from a purported sending user that maliciously poses as the sending user, based on the true sender's history logs (records)).

Himler teaches using a statistical linguistic analysis to identify linguistic elements (characteristics) of malicious messages (Himler: see above & Col. 10 Lines 47–60). However, Himler does not expressly disclose characterizing first linguistic characteristics of the purported sending user from the electronic message.

Wasserblat (and Himler) teaches characterizing first linguistic characteristics of the purported sending user from the electronic message (Himler: see above & Col. 10 Lines 47–60: using a statistical linguistic analysis to identify linguistic elements (characteristics) of the malicious messages) || (Wasserblat: Figure 2 / E-205, Col. 13 Lines 2–5 and 26–28, Col. 14 Lines 18–23 and 30–38, and Col. 5 Lines 21–31: (a) utilizing a text linguistic model for training and analyzing the interactions of textual/linguistic features (i.e., messages exchanged) as input parameters, from a particular sending user extracted from the associated interactions as received, to assess a similarity level and fraud by determining an identity risk score and checking (comparing) whether the fraud risk score exceeds a predetermined threshold, wherein (b) the behavior features are compared against a user profile of a particular sending user (i.e., a given user identity) during the analysis process (Col. 5, Lines 21–31)).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Himler to use a machine-learned user model describing at least linguistic features of the sending user's electronic messages, because Wasserblat's teaching can alternatively, effectively and securely utilize a text linguistic model for training and analyzing the interactions of textual/linguistic features (i.e., messages exchanged) as input parameters, from the sending users extracted from the associated interactions as received, to assess a similarity level and fraud by determining an identity risk score and checking (comparing) whether the fraud risk score exceeds a predetermined threshold, wherein the behavior features are compared against a user profile of a particular user (i.e., a given user identity) during the analysis process (see above), within Himler's system of analyzing, using a cybersecurity protection mechanism, a received message sent from a user (employee) of an organization (i.e., an enterprise user) (see above).

identifying a model that is trained to identify second linguistic characteristics of the actual user (Himler: see above & Col. 10 Line 65 – Col. 11 Line 5 and Lines 12–17, Col. 8 Lines 6–13 and Col. 12 Lines 59–65: using linguistic machine learning models enhanced with semantic parsing techniques and other natural language processing techniques which are trained to recognize patterns of linguistic elements indicative of legitimate messages from a trusted (true) sender (i.e., an actual user) based on the true sender history logs (records)) || (Wasserblat: see above);

determining, using the model, a difference between the first linguistic characteristics and the second linguistic characteristics (Himler: see above & Col. 10 Line 65 – Col. 11 Line 5 and Lines 12–17, Col. 8 Lines 6–13 and Col. 12 Lines 59–65: determining a difference of the identified patterns of linguistic elements between the first and second linguistic characteristics of both malicious and legitimate messages based on the true sender history logs (records)) || (Wasserblat: see above);

determining, based at least in part on the difference, that the purported sending user is not the actual user (Himler: see above & Col. 10 Lines 32–46 and Col. 12 Lines 59–65: based on a similarity score relative to a threshold amount when compared to a trusted message (from a true/trusted user) based on the true sender history log (record)) || (Wasserblat: see above); and

in response to determining that the purported sending user is not the actual user, executing a security action on the electronic message (Himler: see above & Col. 8 Lines 58–67: e.g., deleting the message and reporting to a cybersecurity analyzer server) || (Wasserblat: see above).

As per claims 2, 9 and 16, Himler as modified teaches determining that the difference between the first linguistic characteristics and the second linguistic characteristics exceeds a threshold, wherein executing the security action includes altering the electronic message to include an indication that the electronic message is not from the actual user (Himler: see above & Col. 7 Lines 12–14 and Col. 6 Lines 25–32 and 58–65) || (Wasserblat: see above).

As per claims 3, 10 and 17, Himler as modified teaches generating a security policy signature that represents content of the electronic message (Himler: see above & Col. 10 Lines 32–40 and Col. 12 Lines 7–9: generating and using a probabilistic hashing (or a vector space modeling), as a security policy signature, to compare, at least, a hashed value (i.e., obfuscating, e.g., a mini-part of identity or a specific (communication) domain, etc., as found in a portion of the text message body as part of the specific prominent features (see (b) above)) to determine (as part of a user identity) whether or not it originated from a trusted or malicious sender); generating a security policy for an enterprise with which the actual user is associated (Himler: Col. 12 Lines 59–65, Col. 13 Lines 23–29 and 4–9, and Col. 8 Lines 1–13: analyzing the received messages using (factor-specific) feature values as input characteristics incorporated into a machine learning model to identify (classify) a sender of the received message associated with an enterprise environment for the purpose of cybersecurity protection), the security policy being configured to use the security policy signature to identify potentially malicious electronic messages (Himler: see above & Col. 10 Lines 32–40 and Col. 12 Lines 7–9: generating and using a probabilistic hashing (or a vector space modeling) as a security policy signature for identifying potentially malicious messages); and implementing the security policy on electronic messages communicated with users associated with the enterprise (Himler: see above).

As per claims 4, 11 and 18, Himler as modified teaches determining a score for the electronic message, the score being indicative of a risk of the electronic message to the enterprise (Himler: see above: based on a weighted composite similarity score and a designated threshold); and in response to the score exceeding a security threshold, preventing transmission of the electronic message to a receiving user (Himler: see above & Col. 8 Lines 58–67: the created security policy can, e.g., (a) delete the received message and report to a cybersecurity analyzer server to prevent transmission of the electronic message to a receiving user if the risk score exceeds the designated threshold, and (b) forward (i.e., transmit) the message if the risk score falls below the threshold).

As per claims 6, 13 and 20, Himler as modified teaches wherein the actual user is a member of an enterprise (Himler: see above & Col. 10 Lines 32–33 and Col. 8 Lines 10–13: a trusted sender (i.e., an internal sender or trusted external sender) as a member of an enterprise), the operations further comprising: determining that the purported sending user is not a member of the enterprise (Himler: see above & Col. 10 Lines 32–35: if it is not certain that the sender is a trusted sender (i.e., an internal sender or trusted external sender) as a member of an enterprise, with respect to a purported sending user); and in response to determining that the purported sending user is not a member of the enterprise, accessing an obfuscated version of the second linguistic characteristics (Himler: see above & Col. 10 Lines 36–46: using and accessing a probabilistic hashing (or a vector space modeling) to compare, at least, a hashed value (i.e., obfuscating, e.g., a mini-part of identity or a specific (communication) domain, etc., as found in a portion of the text message body) to determine (as part of a user identity) whether or not it originated from a trusted or malicious sender).

As per claims 7 and 14, Himler as modified teaches obfuscating the second linguistic characteristics by applying a privacy-preserving one-way hash to the second linguistic characteristics to generate a user identity for the purported sending user (Himler: see above & Col. 10 Lines 36–46: using and accessing a probabilistic hashing (or a vector space modeling) to compare, at least, a hashed value (i.e., obfuscating, e.g., a mini-part of identity or a specific (communication) domain, etc., as found in a portion of the text message body) to determine (as part of a user identity) whether or not it originated from a trusted or malicious sender).

Allowable Subject Matter

Claims 5, 12 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
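As a reading aid only, the operations recited in claims 1, 7 and 14 (extract a purported sender's linguistic characteristics, compare them against a model of the actual user, execute a security action when the difference exceeds a threshold, and obfuscate stored characteristics with a privacy-preserving one-way hash) can be sketched roughly as below. Every name, feature and threshold here is a hypothetical illustration, not the applicant's or either cited reference's implementation:

```python
import hashlib
import re

def linguistic_characteristics(text: str) -> dict:
    """Toy feature extractor standing in for the claimed 'linguistic
    characteristics' (a real system would use richer NLP features)."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
        "words_per_sentence": len(words) / max(len(sentences), 1),
    }

def difference(first: dict, second: dict) -> float:
    """Simple L1 distance between two characteristic vectors."""
    return sum(abs(first[k] - second[k]) for k in first)

def screen_message(message: str, user_model: dict, threshold: float) -> str:
    """Compare the purported sender's characteristics against the actual
    user's model; take a (toy) security action when they diverge."""
    first = linguistic_characteristics(message)
    if difference(first, user_model) > threshold:
        return "quarantine"  # stand-in for the claimed security action
    return "deliver"

def obfuscate(characteristics: dict) -> str:
    """Privacy-preserving one-way hash over the characteristics, in the
    spirit of the obfuscation recited in claims 7 and 14."""
    canonical = ",".join(
        f"{k}={v:.3f}" for k, v in sorted(characteristics.items())
    )
    return hashlib.sha256(canonical.encode()).hexdigest()
```

For example, a model built from a user's routine message will deliver an identically styled message but quarantine one whose word and sentence statistics diverge beyond the threshold; the SHA-256 digest lets the characteristics be stored or compared without retaining the raw values.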
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LONGBIT CHAI, whose telephone number is (571) 272-3788. The examiner can normally be reached Monday–Friday, 9:00 am–5:00 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lynn D. Feild, can be reached at 571-272-2092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Longbit Chai/
Longbit Chai, E.E., Ph.D.
Primary Examiner, Art Unit 2431
No. #2564 – 2025

Prosecution Timeline

Sep 05, 2024
Application Filed
Jan 11, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574418
CONFIDENTIAL RESOURCE TRUSTED DOMAIN MIGRATION STRATEGY
2y 5m to grant; granted Mar 10, 2026
Patent 12568099
FINDING ANOMALOUS PATTERNS
2y 5m to grant; granted Mar 03, 2026
Patent 12568086
AUTOMATIC SECURITY COVERAGE EXPANSION OF CLOUD SECURITY POSTURE MANAGEMENT (CSPM) ASSETS
2y 5m to grant; granted Mar 03, 2026
Patent 12563097
Systems and methods for tag-based policy enforcement for dynamic cloud workloads
2y 5m to grant; granted Feb 24, 2026
Patent 12563102
DYNAMIC ATTRIBUTE BASED EDGE-DEPLOYED SECURITY
2y 5m to grant; granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 99% (+32.3%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 737 resolved cases by this examiner. Grant probability derived from career allow rate.
