Prosecution Insights
Last updated: April 19, 2026
Application No. 18/627,414

SYSTEM AND METHOD FOR SIGNAL PROCESSING FOR CYBER SECURITY

Status: Final Rejection (§103)
Filed: Apr 04, 2024
Examiner: KNACKSTEDT, JACOB BENEDICT
Art Unit: 2408
Tech Center: 2400 — Computer Networks
Assignee: Royal Bank of Canada
OA Round: 2 (Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (37 granted / 42 resolved), above average (+30.1% vs TC avg)
Interview Lift: +16.7% among resolved cases with interview (strong)
Typical Timeline: 2y 8m average prosecution; 21 applications currently pending
Career History: 63 total applications across all art units
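The headline figures above can be reproduced from the counts the dashboard reports. The sketch below is illustrative only: the variable names are hypothetical, and the implied Tech Center average assumes the "+30.1% vs TC avg" figure is a percentage-point delta, which the page does not state explicitly.

```python
# Reproduce the dashboard's examiner stats from the reported counts.
# Inputs (37 granted, 42 resolved, +30.1 vs TC avg) come from this page;
# everything else is an assumption for illustration.

granted = 37
resolved = 42

allow_rate = granted / resolved  # career allow rate
print(f"Career allow rate: {allow_rate:.1%}")  # -> 88.1%, shown as 88%

# If "+30.1% vs TC avg" is a percentage-point delta, the implied
# Tech Center average allow rate would be:
tc_delta_pts = 30.1
implied_tc_avg = allow_rate * 100 - tc_delta_pts
print(f"Implied TC average: {implied_tc_avg:.1f}%")  # -> 58.0%
```

Note that the "+16.7% interview lift" cannot be reconstructed the same way, since the page does not give the with/without-interview case counts.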

Statute-Specific Performance

§101: 8.5% (-31.5% vs TC avg)
§103: 61.6% (+21.6% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§112: 14.8% (-25.2% vs TC avg)
Tech Center averages are estimates; based on career data from 42 resolved cases.

Office Action

§103
DETAILED ACTION

This office action is in response to the amendment filed on 02/18/2026. Claims 1-20 are pending and examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to amended claims 1, 12, and 20 have been fully considered but are moot in view of the new grounds of rejection.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 9-10, 12-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Siddiqui (US 12,323,435 B2), hereinafter Siddiqui, in view of Almaz (US 2022/0070201 A1), hereinafter Almaz, in further view of Reddy (US 2019/0311367 A1), hereinafter Reddy.

Regarding Claims 1, 12, and 20

Siddiqui teaches:

A computer-implemented system for signal processing for cyber fraud detection, the system comprising: (Siddiqui Col. 2 Ln. 25-35 teaches a computer-implemented method performed by a system for mitigating the risk of fraud related to streaming content consumption.)

a processor; and a non-transitory memory storing one or more sets of instructions that when executed by the processor, causes the system to: (Siddiqui Fig. 2 teaches a processor and a memory for executing the method.)

receive a trigger signal for fraud detection, the trigger signal comprising an event indicator and entity data associated with an entity profile stored in a database; (Siddiqui Col. 12 Ln. 40-52 teaches that the first level breach analyzer 304 is configured to identify the user interaction data, which is flagged (i.e., trigger signal) by both the data filter module 302 (for example, by highlighting the outliers) and the anomaly detector 306 (for example, by identifying the anomalies), and to select such user interaction data as a potential fraud event and forward such information to the risk module 308 (i.e., event indicator). Col. 13 Ln. 43-50 teaches that the rule management module is configured to receive the user breach profile (i.e., entity data).)

determine, based on the trigger signal, a risk signal processing model comprising a plurality of risk components, each risk component associated with a respective weighing factor; (Siddiqui Col. 11-12 Ln. 53-67 and 1-5 teaches that the risk module 308 is configured to receive the users' interaction data corresponding to the set of users associated with outliers/anomalies from the first level breach analyzer 304 (i.e., receiving the information containing the flagged information). The baseline threshold values may be different for each user and are defined by the content provider for each user or a group of users. In another example embodiment, the baseline threshold values may be adapted or modified dynamically based on the analysis of the detection of outliers. For example, the baseline threshold value related to a number of user logins can be changed to 'three' from 'four' when there are fewer detections of outliers related to user logins (i.e., respective weighing factor). Col. 14 Ln. 45-55 teaches that the risk scoring module 312 is configured to generate a risk profile for the user based on the user breach profiles related to the risk parameters, which includes information related to the current outlier/anomalous behavior as well as the breach history information.)

compute, based on the risk signal processing model, a respective risk signal for each of the plurality of risk components; (Siddiqui Col. 15 Ln. 43-50 teaches that a cumulative risk score may be generated based on risk scores corresponding to different parameters (i.e., risk components), and a user may be classified or labeled into a category from among a plurality of categories (i.e., risk signal) based on the cumulative score.)

process the respective risk signal for each of the plurality of risk components in real time or near real time to generate an aggregated risk signal; and (Siddiqui Col. 15 Ln. 43-50 teaches that a cumulative risk score may be generated based on risk scores corresponding to different parameters, and a user may be classified or labeled into a category from among a plurality of categories based on the cumulative score. Col. 12 Ln. 19-27 teaches that the 'outliers' provide an indication of user interaction data standing out when compared to preset baseline threshold values. The outlier analysis does not take into account a change in a user's current interaction vis-à-vis how the user has interacted in the past or how other users are interacting given a time of the day, day of the week, type of content, and the like (i.e., real time updating).)

generate, based on the aggregated risk signal, a fraud or cyber security alert signal. (Siddiqui Col. 14 Ln. 6-30 teaches that if the rule is violated, an error message may be generated, which may confirm that a fraudulent event has occurred. In one example embodiment, increasing the severity may include increasing the number of actions, types of actions, and the like, taken against the user that may limit the user's ability to commit fraud (i.e., aggregation).)

Siddiqui does not appear to explicitly teach, but in related art:

wherein the event indicator identifies one of a plurality of different logical phases of interaction with an electronic service platform; (Almaz ¶ 4 teaches monitoring a plurality of electronically-observable actions of an entity, the plurality of electronically-observable actions of the entity corresponding to a respective plurality of events enacted by the entity (i.e., logical phase of interaction); converting the plurality of electronically-observable actions of the entity to electronic information representing the plurality of actions of the entity; and identifying an anomalous event from the plurality of events enacted by the entity. ¶ 47 teaches that a user may use an endpoint device 304 to access and browse a particular website on the Internet. In this example, the individual actions performed by the user to access and browse the website constitute a cyber behavior. As another example, a user may use an endpoint device 304 to download a data file from a particular system at a particular point in time. In this example, the individual actions performed by the user to download the data file, and associated temporal information, such as a time-stamp associated with the download, constitute a cyber behavior. In these examples, the actions are enacted within cyberspace, in combination with associated temporal information, which makes them electronically-observable.)

to generate a risk sub-component score for at least one of: an account risk component, a platform session risk component, an external entity risk component, or a transaction risk component; (Almaz ¶ 73 teaches that the security analytics system 118 may be implemented to perform risk-adaptive operations to access risk scores associated with the same user account, but accrued on different endpoint devices (i.e., account risk and external entity risk).)

It would have been obvious to one with ordinary skill in the art, prior to the applicant's earliest effective filing date, to combine the teachings of Siddiqui with Almaz, to modify the system for mitigating the risk of fraud with the analysis of multiple event types of Almaz. The motivation to do so: Almaz ¶ 35 teaches that this provides a useful and concrete result of performing security analytics functions to mitigate security risk.

Siddiqui in view of Almaz does not appear to explicitly teach, but in related art:

updating, based on a stream of the risk sub-component scores, a risk signal processing model comprising a plurality of risk components including the account risk component, the platform session risk component, the external entity risk component and the transaction risk component (Reddy ¶ 78 teaches that the multiple risk scores aggregated into a total risk score are a customer risk score indicating a risk level associated with this customer of the new transaction, a transaction risk score indicating a risk level associated with the new transaction, and a geo risk score indicating the risk level associated with the location of the transaction as well as the destination (i.e., sub-component scores). ¶ 116 teaches that, using the transaction data (i.e., updating), processing logic identifies time-based behavior over a period of time using the fingerprint and historical data (processing block 1503). In one embodiment, identifying time-based behavior over a period of time comprises automatically extracting topologies of suspicious behavior by extracting and inferring features (i.e., risk signal processing).)

route the trigger signal to a plurality of worker data processes associated with the identified logical phase of the trigger signal, each of the associated plurality of worker data processes processing the trigger signal in parallel (Reddy ¶ 81 teaches that each signal comprises a plurality of threat vectors measured simultaneously in a time unit. The collection of signals is organized as a financial genome in which various threat vectors are linked by their similarity.)

It would have been obvious to one with ordinary skill in the art, prior to the applicant's earliest effective filing date, to combine the teachings of Siddiqui in view of Almaz with the aggregating score of multiple features of Reddy. The motivation to do so: Reddy ¶ 2 teaches improving the effectiveness of transaction surveillance and suspicious activity monitoring.

Regarding Claims 2 and 13

Siddiqui-Almaz-Reddy teaches: The system of claim 1 (Siddiqui-Almaz-Reddy teaches the parent claim above.), wherein the risk signal is obtained from one or more external databases or websites pertaining to the entity profile. (Siddiqui Col. 11 Ln. 49-53 teaches that the data filter module 302 may be capable of processing user interaction data (i.e., entity profile information) corresponding to millions of users of streaming content (i.e., website).)

Regarding Claims 3 and 14

Siddiqui-Almaz-Reddy teaches: The system of claim 2 (Siddiqui-Almaz-Reddy teaches the parent claim above.), wherein obtaining the risk signal from one or more external databases or websites comprises: obtaining a risk signal associated with a risk level of an IP address. (Siddiqui Col. 12 Ln. 57-65 teaches that the risk module 308 is configured to identify at least one of a user, a user identifier, a user's device, a user's IP address, etc. from the received user interaction data and thereafter fetch or retrieve relevant historical details from a user history database 360 in the storage module 164.)

Regarding Claim 4

Siddiqui-Almaz-Reddy teaches: The system of claim 3 (Siddiqui-Almaz-Reddy teaches the parent claim above.), wherein processing the risk signal in real time or near real time to generate intelligence comprises: determining a high risk score based on a fraudulent history associated with the IP address. (Siddiqui Col. 14 Ln. 50-60 teaches a module configured to determine a plurality of risk scores vis-à-vis various risk parameters for the user and thereafter label the user as a high-risk user, a moderate-risk user, a low-risk user, etc. Each risk score corresponds to a user breach profile, where the user breach profile indicates which risk parameter is breached and the risk score indicates a score (i.e., a numerical value) related to the breach of the risk parameter.)

Regarding Claims 5 and 15

Siddiqui-Almaz-Reddy teaches: The system of claim 1, wherein the one or more sets of instructions when executed by the processor, further causes the system to: (Siddiqui-Almaz-Reddy teaches the parent claim above.) based on the generated risk score, automatically generate an electronic signal causing a graphical user interface representing a risk alert to be displayed to one or more devices. (Siddiqui Col. 16 Ln. 31-41 teaches that the user with the label L1 posing a very low security threat to the digital platform may be provided an option to utilize services of the content provider at a subsidized rate (e.g., a 20% reduction in payment charges to access premium services of the digital platform). In such cases, the system 150 may be caused to display a message or a notification for the user 102 on a user interface (UI) of a corresponding electronic device (i.e., risk alert).)

Regarding Claims 6 and 16

Siddiqui-Almaz-Reddy teaches: The system of claim 1 (Siddiqui-Almaz-Reddy teaches the parent claim above.), wherein the one or more sets of instructions when executed by the processor, further causes the system to generate a command signal to deactivate or lock an account associated with the entity profile. (Siddiqui Col. 14 Ln. 15-25 teaches that some examples of negative actions may include, but are not limited to, blocking a playback of content for the user, logging the user out on multiple devices, restricting all access from the user's account and blacklisting the user (i.e., deactivate or lock).)

Regarding Claims 9 and 18

Siddiqui-Almaz-Reddy teaches: The system of claim 1 (Siddiqui-Almaz-Reddy teaches the parent claim above.), wherein the trigger signal is initiated by a login attempt associated with the entity profile. (Siddiqui Col. 1 Ln. 53-57 teaches that a rule may be set in place for a number of logins, say 5 logins allowed within a preset timeframe, such as one hour.)

Regarding Claims 10 and 19

Siddiqui-Almaz-Reddy teaches: The system of claim 1 (Siddiqui-Almaz-Reddy teaches the parent limitation above.), wherein the trigger signal is automatically generated by one or more predefined tasks. (Siddiqui Col. 1 Ln. 50-55 teaches that predefined rules are put in place to raise an alarm or execute an action to limit the scope of the fraudulent action.)

Claims 7, 8, 11, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Siddiqui-Almaz-Reddy in view of Jakobsson (US 2023/0006976 A1), hereinafter Jakob.

Regarding Claims 7 and 17

Siddiqui-Almaz-Reddy teaches: The system of claim 1. (Siddiqui-Almaz-Reddy teaches the parent claim above.) Siddiqui-Almaz-Reddy does not appear to explicitly teach, but in related art: wherein the entity profile is associated with a user account providing access to one or more digital assets. (Jakob ¶ 320 teaches detection by using a machine-learning module trained to detect fraud, theft or other abuse, and/or by receiving a report from a user previously or currently associated with the digital asset.) It would have been obvious to one with ordinary skill in the art, prior to the applicant's earliest effective filing date, to combine the teachings of Siddiqui-Almaz-Reddy with Jakob, to modify the system for mitigating the risk of fraud with the digital assets associated with a user of Jakob. The motivation to do so: Jakob ¶ 91, to mitigate abuse of a digital asset.

Regarding Claim 8

Siddiqui-Almaz-Reddy-Jakob teaches: The system of claim 7 (Siddiqui-Almaz-Reddy-Jakob teaches the parent limitation above.), wherein the one or more digital assets comprises one or more of: digital assets, digital currency, encrypted user data, financial assets, and credit history. (Jakob ¶ 320 teaches detection by using a machine-learning module trained to detect fraud, theft or other abuse, and/or by receiving a report from a user previously or currently associated with the digital asset.) The motivation given in Claim 7 is equally applicable to the above claim.

Regarding Claim 11

Siddiqui-Almaz-Reddy-Jakob teaches: The system of claim 10 (Siddiqui-Almaz-Reddy-Jakob teaches the parent claim above.), wherein the one or more predefined tasks comprises one of: an electronic money transfer, an access request from an external party, a credit history request. (Jakob ¶ 145 teaches that the data collection capabilities of any media wallet application described herein can also be implemented outside the context of an NFT platform and/or in a dedicated application and/or in an application unrelated to the storage of fungible tokens (i.e., cryptocurrency) and/or NFTs. Jakob ¶ 109 teaches a module trained to detect theft and/or other abuse, among numerous other techniques as appropriate for the particular digital assets being transferred (i.e., a transfer of fungible tokens, which is considered an electronic money transfer) and the particular platforms being used.) The motivation given in Claim 7 is equally applicable to the above claim.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 11,461,458 B2, Measuring Data-breach Propensity.

Applicant's amendment necessitated the new grounds of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB BENEDICT KNACKSTEDT, whose telephone number is (703) 756-5608. The examiner can normally be reached Monday-Friday, 8:00 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Linglan Edwards, can be reached at (571) 270-5440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.B.K./ Examiner, Art Unit 2408
/LINGLAN EDWARDS/ Supervisory Patent Examiner, Art Unit 2408
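The claim-1 pipeline the rejection maps prior art against (a trigger signal routed to parallel worker processes, per-component risk signals combined via weighing factors into an aggregated score that drives an alert) can be sketched as follows. This is an illustrative reconstruction of the claim language only, not the applicant's actual implementation; all names, weights, and the threshold are hypothetical.

```python
# Hypothetical sketch of the claimed pipeline: route a trigger signal to
# worker processes (one per risk component), score each component, then
# combine the scores with per-component weighing factors into an
# aggregated risk signal that gates a fraud/cyber-security alert.
from concurrent.futures import ThreadPoolExecutor

# hypothetical weighing factors for the four claimed risk components
WEIGHTS = {"account": 0.3, "session": 0.2, "external": 0.2, "transaction": 0.3}

def score_component(component: str, trigger: dict) -> float:
    # placeholder sub-component scoring; a real system would apply
    # per-component rules or models to the event and entity data
    return trigger["signals"].get(component, 0.0)

def aggregate_risk(trigger: dict, threshold: float = 0.5) -> tuple[float, bool]:
    # workers process the same trigger signal in parallel, mirroring the
    # claim's "worker data processes ... processing the trigger signal
    # in parallel"
    with ThreadPoolExecutor(max_workers=len(WEIGHTS)) as pool:
        scores = dict(zip(WEIGHTS, pool.map(
            lambda c: score_component(c, trigger), WEIGHTS)))
    aggregated = sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
    return aggregated, aggregated >= threshold  # alert when above threshold

trigger = {"event": "login", "signals": {"account": 0.9, "transaction": 0.6}}
risk, alert = aggregate_risk(trigger)
print(f"aggregated risk {risk:.2f}, alert={alert}")  # -> aggregated risk 0.45, alert=False
```

A weighted sum is only one plausible reading of "aggregated risk signal"; the claims do not specify the aggregation function.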

Prosecution Timeline

Apr 04, 2024: Application Filed
Sep 16, 2025: Non-Final Rejection (§103)
Feb 18, 2026: Response Filed
Mar 27, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596633: VULNERABILITY DETECTION METHOD AND DEVICE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591692: METHODS FOR SECURING DATA (granted Mar 31, 2026; 2y 5m to grant)
Patent 12579300: ELECTRONIC APPARATUS AND CONTROL METHOD THEREFOR (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579124: ZERO-CODE APPROACH FOR MODEL VERSION UPGRADES (granted Mar 17, 2026; 2y 5m to grant)
Patent 12566885: DATA PROCESSING SYSTEMS AND METHODS FOR AUTOMATICALLY DETECTING TARGET DATA TRANSFERS AND TARGET DATA PROCESSING (granted Mar 03, 2026; 2y 5m to grant)
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 88%
With Interview: 99% (+16.7%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 42 resolved cases by this examiner. Grant probability derived from career allow rate.
