Prosecution Insights
Last updated: April 19, 2026
Application No. 18/502,480

INFORMATION SHARING FOR CYBERATTACK RECOGNITION AND RESPONSE

Non-Final OA — §103, §112
Filed: Nov 06, 2023
Examiner: WORKU, SARON MATTHEWOS
Art Unit: 2408
Tech Center: 2400 — Computer Networks
Assignee: Wells Fargo Bank, N.A.
OA Round: 3 (Non-Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (12 granted / 18 resolved) — above average, +8.7% vs TC avg
Interview Lift: strong, +53.6% among resolved cases with an interview
Typical Timeline: 2y 7m avg prosecution; 30 applications currently pending
Career History: 48 total applications across all art units

Statute-Specific Performance

§101: 2.8% (-37.2% vs TC avg)
§103: 46.6% (+6.6% vs TC avg)
§102: 37.0% (-3.0% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Deltas shown vs Tech Center average estimate • Based on career data from 18 resolved cases
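The statute-specific deltas above are internally consistent: subtracting each reported delta from the examiner's rate recovers the same implied Tech Center baseline for every statute. A minimal sketch using the figures from this page (the subtraction is an assumed methodology, not something the tool documents):

```python
# Examiner allow rates after each rejection type, and reported deltas vs the
# Tech Center average (both in percentage points, from the table above).
examiner_rate = {"101": 2.8, "103": 46.6, "102": 37.0, "112": 10.5}
delta_vs_tc   = {"101": -37.2, "103": 6.6, "102": -3.0, "112": -29.5}

# Implied TC baseline: examiner rate minus the reported delta.
implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}

# Every statute implies the same ~40.0% baseline, so the deltas check out.
print(implied_tc_avg)
```

That all four statutes resolve to one 40.0% figure suggests the tool measures each delta against a single Tech Center average rather than per-statute baselines.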

Office Action

§103 §112
DETAILED ACTION

This Office action is in response to applicant’s submission filed on November 25, 2025. Claim 9 was previously canceled. Claims 1-8 and 10-20 are pending and rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant’s submission filed on November 25, 2025 has been entered.

Response to Amendment

This communication is in response to the amendment filed on November 25, 2025. The Examiner acknowledges amended claims 1, 11, and 20. Claim 9 was previously canceled. Claims 1-8 and 10-20 are pending and rejected.

Response to Arguments

Applicant’s arguments (Remarks) filed November 25, 2025 have been fully considered but are moot. Applicant’s arguments with respect to claims 1, 11, and 20 are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The remainder of applicant’s amendments is already recited, in reworded form, in present and past dependent claims: “do not share with each other the information about attributes of the peer attack but that share access to the data warehouse” is already read upon by the prior art under the original limitation. The amended claims remain taught by the cited prior art as detailed in the rejection below. Therefore, the rejections are maintained on the same rationale.
See also the §103 rejection below. For the above reasons, the Examiner maintains that Connell and Stephan teach each and every limitation as currently claimed. Applicant amended claims 1, 11, and 20; therefore the prior 112(b) rejection has been withdrawn and a new 112(b) rejection has been issued. See the Claim Rejections below.

Claim Rejections - 35 USC § 112

Claims 1-8 and 10-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The term “about” in claims 1, 6-8, 10-11, and 16-20 is a relative term which renders the claims indefinite. The term “about” is not defined by the claims, the specification does not provide a standard for measuring what “about” encompasses, introducing a lack of clarity, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. For example, in “information about activity” and “information about attributes of a peer attack,” it is unclear what the “information” entails due to the word “about.” Suggested alternatives are “activity information,” “information of an activity,” or “information of activities.” The dependent claims included in the statement of rejection but not specifically addressed in the body of the rejection inherit the deficiencies of their parent claims and do not resolve them; therefore, they are rejected on the same rationale as applied to their parent claims above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8 and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2020/0358807 A1 to Connell et al. (hereinafter, “Connell”) in view of US 2018/0109558 A1 to Stephan et al. (hereinafter, “Stephan”).

Regarding claim 1, Connell discloses: A method comprising: outputting, by a computing system operated by a first entity and to a data warehouse, information about activity within a first network operated by the first entity (“collecting network parameters of a network associated with an enterprise at risk from cyber security threats” [0006]; “The ETL 102 further performs a transform function to the data from the enterprise network parameter data source 104 and data from the threat intelligence data sources 106. This transform function forms the data into relevant threat intelligence data useable within the cyber security threat assessment system 100.
The ETL 102 then loads the relevant threat intelligence data into a data warehouse and analytics system 108 and also into a data storage database 110” [0026] [Examiner notes that the network information collected would substantially include any and all information including normal and abnormal activity]), receiving, by the computing system and from the data warehouse, information about attributes of a peer attack directed to a second network operated by a second entity (“The data warehouse and analytics module 108 analyzes the relevant threat intelligence data to produce various enterprise metrics regarding cyber security of the target enterprise network. These enterprise metrics may include a holistic threat assessment score representing an overall risk level to the enterprise, individual threat rating score for each type of technology utilized within the enterprise network, various potential financial losses attributable to a potential cyber attack including losses attributable per technology and an overall enterprise loss, and a technology heat map that provides a geographic representation of cyber threats for the enterprise. 
In certain embodiments, the holistic threat assessment score for the target enterprise is normalized against a plurality of other enterprises also at risk of a cyber attack” [0027]; “passive scans of an organization's externally-facing technologies” [0080] [Examiner notes that both the system and the data warehouse are in communication when analyzing and gathering information based on any real or potential peer attacks happening within the entities within the enterprise or the enterprise as a whole itself]), wherein the first entity and the second entity are distinct organizations (“An enterprise, such as a corporation, not-for-profit organization or other such entity, typically owns, deploys and manages network connected systems” [0003] [Examiner notes that the enterprise contains many entities as it is usually referred to as a collection of systems, services, or entities]; “vulnerabilities in supply chain networks for the enterprise” [0025] [Examiner notes that this text highlights that an enterprise's functional scope includes external and distinct organizations that could be completely separate]) and marketplace competitors that do not share with each other the information about attributes of the peer attack but that share access to the data warehouse (“The threat intelligence data sources 106 provide data external to the target enterprise yet still relevant to the enterprise network. 
This threat intelligence data may include dark web data; technology vulnerability data; deep web data; upstream, downstream and peer network threats; data from hacker discussion boards; changes to behavioral Tactics, Techniques and Procedures (TTP); global internet infrastructure vulnerabilities; vulnerabilities in supply chain networks for the enterprise; technical capabilities, tactics, techniques and history of a hacker or group of hackers and any such other threat intelligence data external to the network of the target enterprise yet still relevant that network” [0025] [Examiner notes that the enterprise's functional scope includes external and distinct organizations, which could be separate and even marketplace competitors; it is not just limited to a single entity. External threat intelligence (like vulnerabilities and supply chain risks) highlights that competitors aren't completely isolated. They often share dependencies and attack surfaces (such as access to a data warehouse). Mapping these connections shows how a first entity (company A) and a second entity (company B) can both be vulnerable in similar or different ways, even though they are competing in the same marketplace. Examiner also notes that peer attack attributes are not listed in the threat intelligence data, so they are not shared]), and applying, by the computing system, a model to identify a network asset configured to perform an operation (“Once each of Ts, Vs, and Al are determined for each IP address they are summed together and multiplied by a normalization factor to arrive at a cyber security threat score for the IP address. In the illustrated embodiment shown in FIG. 9, the normalization factor is 0.091. This is performed for each IP address identified as part of the target enterprise to arrive at the holistic cyber security threat score for the target enterprise, such as Threat Beta shown in FIG. 6.
The normalization factor is a calibration factor used to establish a norm, or median score as the cyber security threat score for each IP address is a metric designed to show a deviation from a norm. In establishing a range of scores and a median, survey samples were made from companies across all industry classifications as established by the industry standard Global Industry Classification Standard (GICS)” [0063-0064]; “The score may then be adjusted based on whether the associated cyber threat actors have the capability, patterns of behavior and a desire to gain access to the data and information of the target enterprise” [0044]; “In the illustrated embodiment, the normalization factor of 0.091 was used to calibrate the algorithm to match the median metric when viewed across a statistically significant survey of representative companies across all GICS classifications. Periodically, the survey is repeated to discern if recalibration is necessary due to new technologies, the relative state of cyber security preparedness across industries, or other substantive developments that may affect a target enterprise's information technology ecosystems” [0065]; [Examiner notes that this text shows that the level indicators indicate automated analysis. To get those Ts, Vs, and Al values, the system must analyze the nature of the threats and vulnerabilities – Examiner interprets this as a machine-learning model, especially since these texts show a system that adapts to change (e.g., new tech, new threat trends), which supports the need for retraining. Since actions are based on threat predictions, it is reasonable to interpret that it needs ongoing accuracy in order to remain completely dependable (naturally supporting retraining). The weighted metrics also depend on historical data and can be updated or retrained to reflect new threat types including peer attacks.
Multiple dimensions are assessed per IP address, an evaluation which includes recognizing Ts (threats) based on known attack patterns – such as those observed in peer attacks. These components represent threat indicators, vulnerabilities, and asset-level factors derived from analyzing the asset's behavior. The normalization factor is also based on industry-wide data, meaning that the model compares assets and their vulnerabilities against peer organizations. Since the threat score “is a metric designed to show a deviation from a norm”, it shows if an asset exhibits vulnerabilities or attack patterns similar to peers who have been attacked. The scans also identify attack vectors or vulnerabilities previously observed in peer environments. The scoring transforms the vulnerability detection into a numerical risk value, which is exactly what models are designed to do. Also, since the first entity and second entity are both considered part of the same enterprise, then the model can be used across both, any, and all entities within that enterprise]); and outputting, by the computing system and to the network asset, a control signal to modify the operation (“The goal of Threat Beta βi by technology is to trigger a call to action for the larger Threat Betas in a company's passive scan set of technologies” [0088] [Examiner notes that this trigger acts as a type of signal or input that causes the network asset's cybersecurity system (overall system) to do something]).
Connell does not explicitly disclose: including information indicated as normal activity and information indicated as abnormal activity; wherein the information about attributes of the peer attack includes information about events occurring in the second network during the peer attack that has been processed to remove references to the second entity and to remove privacy data associated with any customers of the second entity.

However, Stephan discloses: including information indicated as normal activity and information indicated as abnormal activity (“At least one object of this disclosure is to enable owners of networks and/or devices to be made aware of malicious clients/actors/devices which might try to communicate with or infiltrate such networks and/or devices” [0009] [Examiner notes that a person of ordinary skill in the art would understand “abnormal activity” to include network behavior generated by malicious actors communicating with the network. The disclosure’s malicious actors communicating with a network are represented as network information indicated as abnormal activity, while non-malicious communications are represented as normal activity]); wherein the information about attributes of the peer attack includes information about events occurring in the second network during the peer attack that has been processed to remove references to the second entity and to remove privacy data associated with any customers of the second entity (““Sensitive communication” includes, but is not limited to, any communication containing personally identifiable data such as names, locations, financial information and/or healthcare information” [0039]; “obfuscate the true identity to make it easier for people to share sensitive threat data” [0085]; “that all of the threat data does not need to be stored globally, and that data which is sensitive to an organization can remain on premise and remain in control of the organization” [0102]; “In at least one embodiment
within this disclosure, when data is extracted from a data source, the system can remove sensitive and/or private information prior to or during the extraction. The sensitive and/or private information can be stored local to the data source, either at the data source itself or within a device in the same local network as the data source” [0119] [Examiner notes that the second reference is relied upon to show that activity data that is sent out can be enhanced and/or modified to exclude or obfuscate information about a specific network/entity]).

Thus, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains, to combine the method of Connell with the added structure of Stephan because such removal of sensitive information can help to maintain the privacy of the data source 110, which provides the data to the processor 108/ETL module 112 [Stephan 0142].

Claim 11 recites substantially the same limitation as claim 1, in the form of a computing system operated by a first entity for implementing the corresponding method; therefore, it is rejected under the same rationale. The Examiner notes that the reference teaches “processing circuitry and a storage device, wherein the processing circuitry has access to the storage device” by stating “The system 1000 may include one or more processors 1002… and storage devices 1014. Although not explicitly shown in FIG. 10, each component provided may be interconnected physically, communicatively, and/or operatively for inter-component communications in order to realize functionality ascribed to the various entities identified…” [0091].

Claim 20 recites substantially the same limitation as claim 1, in the form of a non-transitory computer readable media comprising computer readable program code for implementing the corresponding method; therefore, it is rejected under the same rationale.
Regarding claims 2 and 12, a combination of Connell-Stephan discloses all limitations of claims 1/11. Furthermore, Connell discloses: wherein outputting the control signal includes: outputting the control signal to protect the first network against an attack having the attributes of the peer attack (“The final function, f5( . . . ), that combines the four individual functions is managed by analysts such that resulting Threat Beta for technologies are reasonable and drives appropriate actions. The process of combining the four functions is done as appropriate to value for the data in the functions. Specifically, the number of ports found and CVE Score have the predominance of weight in the calculation, while exploit databases and Dark Mentions have lower impact on Threat Beta. After executing a passive scan of an organization's internet structure, Threat Betas for each technology are calculated and presented to analysts. The calculated Threat Betas assist in prioritizing organizational efforts that are based on known vulnerabilities and the number of instances of the technology. The aggregate Threat Beta for the organization is used to estimate overall cyber event impact. The goal of Threat Beta βi by technology is to trigger a call to action for the larger Threat Betas in a company's passive scan set of technologies” [0086-0088] [Examiner notes that the model identifies a network asset vulnerable to a peer attack and assigns a Threat Beta βi score reflecting that risk, which triggers protective actions. Upon receiving the signal, the system controlling the asset executes defense mechanisms targeted at mitigating peer attack characteristics]).

Regarding claims 3 and 13, a combination of Connell-Stephan discloses all limitations of claims 2/12. Furthermore, Connell discloses: wherein outputting the control signal further includes: deploying a new security control within the first network (“The final function, f5( . . .
), that combines the four individual functions is managed by analysts such that resulting Threat Beta for technologies are reasonable and drives appropriate actions” [0086]; “The goal of Threat Beta βi by technology is to trigger a call to action for the larger Threat Betas in a company's passive scan set of technologies” [0088]).

Regarding claims 4 and 14, a combination of Connell-Stephan discloses all limitations of claims 2/12. Furthermore, Connell discloses: wherein outputting the control signal further includes: modifying the operation of an existing security control within the first network (“The final function, f5( . . . ), that combines the four individual functions is managed by analysts such that resulting Threat Beta for technologies are reasonable and drives appropriate actions” [0086]; “The goal of Threat Beta βi by technology is to trigger a call to action for the larger Threat Betas in a company's passive scan set of technologies” [0088]).

Regarding claims 5 and 15, a combination of Connell-Stephan discloses all limitations of claims 1/11. Furthermore, Connell discloses: retraining, by the computing system, the model to recognize the attributes of the peer attack (“In the illustrated embodiment, the normalization factor of 0.091 was used to calibrate the algorithm to match the median metric when viewed across a statistically significant survey of representative companies across all GICS classifications.
Periodically, the survey is repeated to discern if recalibration is necessary due to new technologies, the relative state of cyber security preparedness across industries, or other substantive developments that may affect a target enterprise's information technology ecosystems” [0065]; “Once each of Ts, Vs, and Al are determined for each IP address they are summed together and multiplied by a normalization factor to arrive at a cyber security threat score for the IP address” [0063] [Examiner notes that these texts show a system that adapts to change (e.g., new tech, new threat trends), which supports the need for retraining. Since actions are based on threat predictions, it is reasonable to interpret that it needs ongoing accuracy in order to remain completely dependable (naturally supporting retraining). The weighted metrics also depend on historical data and can be updated or retrained to reflect new threat types including peer attacks]).

Regarding claims 6 and 16, a combination of Connell-Stephan discloses all limitations of claims 1/11. Furthermore, Connell discloses: wherein outputting information about activity within the first network includes: regularly outputting information about activity within the first network (“collecting network parameters of a network associated with an enterprise at risk from cyber security threats” [0006]; “The ETL 102 further performs a transform function to the data from the enterprise network parameter data source 104 and data from the threat intelligence data sources 106. This transform function forms the data into relevant threat intelligence data useable within the cyber security threat assessment system 100. The ETL 102 then loads the relevant threat intelligence data into a data warehouse and analytics system 108 and also into a data storage database 110” [0026] [Examiner notes that in order to collect and store relevant information, it is natural for a system to periodically output information after collecting it]).
Regarding claims 7 and 17, a combination of Connell-Stephan discloses all limitations of claims 1/11. Furthermore, Connell discloses: wherein outputting information about activity within the first network includes: outputting information about an attack occurring within the first network (“The data warehouse and analytics module 108 analyzes the relevant threat intelligence data to produce various enterprise metrics regarding cyber security of the target enterprise network. These enterprise metrics may include a holistic threat assessment score representing an overall risk level to the enterprise, individual threat rating score for each type of technology utilized within the enterprise network, various potential financial losses attributable to a potential cyber attack including losses attributable per technology and an overall enterprise loss, and a technology heat map that provides a geographic representation of cyber threats for the enterprise. In certain embodiments, the holistic threat assessment score for the target enterprise is normalized against a plurality of other enterprises also at risk of a cyber attack” [0027] [Examiner notes that this text portrays a clear output of attack-relevant data as there is a threat assessment score (an explicit quantification of risk), threat rating per technology (shows which asset is more vulnerable), potential financial losses attributable to an attack (ties risk directly to real world consequences), and more importantly a technology heat map that actually shows where attacks are emerging from or focused on, which is based on real threat actor activity]).

Regarding claims 8 and 18, a combination of Connell-Stephan discloses all limitations of claims 1/11.
Connell does not explicitly disclose: wherein outputting information about activity within the first network includes: outputting processed information, wherein the processed information includes data that has been modified to remove references to the first entity and to remove privacy information associated with any customers of the first entity. However, Stephan discloses: wherein outputting information about activity within the first network includes: outputting processed information, wherein the processed information includes data that has been modified to remove references to the first entity and to remove privacy information associated with any customers of the first entity (““Sensitive communication” includes, but is not limited to, any communication containing personally identifiable data such as names, locations, financial information and/or healthcare information” [0039]; “obfuscate the true identity to make it easier for people to share sensitive threat data” [0085]; “that all of the threat data does not need to be stored globally, and that data which is sensitive to an organization can remain on premise and remain in control of the organization” [0102]; “In at least one embodiment within this disclosure, when data is extracted from a data source, the system can remove sensitive and/or private information prior to or during the extraction. The sensitive and/or private information can be stored local to the data source, either at the data source itself or within a device in the same local network as the data source” [0119] [Examiner notes that the second reference is coming in to show that activity data that is sent out can be enhanced and/or modified to not include or obfuscate information about a specific network/entity]). 
Thus, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains, to combine the method of Connell with the added structure of Stephan because such removal of sensitive information can help to maintain the privacy of the data source 110, which provides the data to the processor 108/ETL module 112 [Stephan 0142].

Regarding claim 10, a combination of Connell-Stephan discloses all limitations of claim 1. Furthermore, Connell discloses: receiving, by the computing system and from the data warehouse, information about normal activity taking place at a third network operated by a third entity, wherein the first entity and the third entity are marketplace competitors (“collecting network parameters of a network associated with an enterprise at risk from cyber security threats” [0006]; “The ETL 102 further performs a transform function to the data from the enterprise network parameter data source 104 and data from the threat intelligence data sources 106. This transform function forms the data into relevant threat intelligence data useable within the cyber security threat assessment system 100. The ETL 102 then loads the relevant threat intelligence data into a data warehouse and analytics system 108 and also into a data storage database 110” [0026] [Examiner interprets the enterprise, being connected to all of the entities and the data warehouse, as having access to all entities and being able to identify any activity within them. Examiner notes that both the network parameters and the umbrella of relevant threat intelligence data (normal activity is necessary to produce a baseline against which to compare abnormal activity) could include normal activity data within the network]).
Claim 19 recites substantially the same limitation as claim 10, in the form of a computing system for implementing the corresponding method; therefore, it is rejected under the same rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure: Heuser et al. (US 2025/0150464 A1) teaches techniques for maintaining and using a warehouse of data about potential or actual cyberattack threats for an industry. In one example, this disclosure describes a method that includes outputting, by a computing system operated by a first entity and to a data warehouse, information about activity within a first network operated by the first entity; receiving, by the computing system and from the data warehouse, information about attributes of a peer attack directed to a second network operated by a second entity, wherein the first entity and the second entity may be marketplace competitors; applying, by the computing system, a model to identify a network asset included within the first network that is vulnerable to an attack having the attributes of the peer attack; and outputting, by the computing system and to the network asset, a control signal to modify the operation of the network asset.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARON MATTHEWOS WORKU whose telephone number is (703)756-1761. The examiner can normally be reached Monday - Friday, 9:30 am - 6:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Linglan Edwards, can be reached on 571-270-5440.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SARON MATTHEWOS WORKU/
Examiner, Art Unit 2408

/TECHANE GERGISO/
Primary Examiner, Art Unit 2408
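The per-IP scoring that the Examiner maps to the "model" limitation (Connell [0063]-[0065]) reduces to a simple formula: sum the threat (Ts), vulnerability (Vs), and asset (Al) components for an IP address and scale by a normalization factor, 0.091 in Connell's illustrated embodiment. A minimal sketch; the component values and the sum-based enterprise aggregation are assumptions for illustration, only the per-IP formula and factor come from the quoted passages:

```python
NORMALIZATION_FACTOR = 0.091  # Connell's illustrated calibration value

def ip_threat_score(ts: float, vs: float, al: float) -> float:
    """Per-IP score: (Ts + Vs + Al) scaled by the normalization factor."""
    return (ts + vs + al) * NORMALIZATION_FACTOR

def holistic_score(components_per_ip):
    """Aggregate per-IP scores into a holistic enterprise score.

    Connell computes a score per identified IP address; a plain sum is
    assumed here as the aggregation step.
    """
    return sum(ip_threat_score(ts, vs, al) for ts, vs, al in components_per_ip)

# Hypothetical Ts/Vs/Al components for three enterprise IP addresses.
score = holistic_score([(4, 3, 4), (2, 5, 1), (6, 2, 3)])
```

Because the factor calibrates scores to an industry-wide median, each per-IP score reads as a deviation from that norm, which is the hook the Examiner uses for the peer-comparison argument.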

Prosecution Timeline

Nov 06, 2023
Application Filed
Jun 10, 2025
Non-Final Rejection — §103, §112
Aug 19, 2025
Interview Requested
Sep 03, 2025
Applicant Interview (Telephonic)
Sep 04, 2025
Examiner Interview Summary
Sep 15, 2025
Response Filed
Sep 29, 2025
Final Rejection — §103, §112
Nov 03, 2025
Interview Requested
Nov 25, 2025
Response after Non-Final Action
Dec 17, 2025
Request for Continued Examination
Dec 31, 2025
Response after Non-Final Action
Jan 10, 2026
Non-Final Rejection — §103, §112
Mar 19, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547939
SYSTEM AND A METHOD FOR PERFORMING A PRIVACY-PRESERVING DISTRIBUTION SIMILARITY TESTS BETWEEN A PLURALITY OF DATASETS
2y 5m to grant Granted Feb 10, 2026
Patent 12524579
SRAM PHYSICALLY UNCLONABLE FUNCTION (PUF) MEMORY FOR GENERATING KEYS BASED ON DEVICE OWNER
2y 5m to grant Granted Jan 13, 2026
Patent 12513013
Dynamic Cross-Node Multidimensional Hashchain Network-Based Meta-Content Enabler for Real-Time Content Based Anomaly Detection
2y 5m to grant Granted Dec 30, 2025
Patent 12475240
PROTECTED CONTENT CONTAMINATION PREVENTION
2y 5m to grant Granted Nov 18, 2025
Patent 12470519
INTRA-VLAN TRAFFIC FILTERING IN A DISTRIBUTED WIRELESS NETWORK
2y 5m to grant Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
67%
Grant Probability
99%
With Interview (+53.6%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
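The "99% With Interview" figure is consistent with applying the +53.6% interview lift multiplicatively to the 67% base rate and capping the result, though the tool does not document its formula. A hedged sketch of one combination that reproduces the numbers shown:

```python
def grant_prob_with_interview(base: float, lift: float, cap: float = 0.99) -> float:
    """Apply a relative interview lift to a base grant probability.

    Assumed formula: base * (1 + lift), clamped to a cap. With the page's
    figures, 0.67 * 1.536 = 1.029, which the cap clamps to 0.99,
    matching the 99% shown above.
    """
    return min(base * (1.0 + lift), cap)

p = grant_prob_with_interview(0.67, 0.536)
```

An additive combination (0.67 + 0.536, capped) would land on the same 99% here, so the displayed figures cannot distinguish the two; either way the uncapped value exceeds 1, which is why treating the result as a literal probability deserves caution.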
