Prosecution Insights
Last updated: April 19, 2026
Application No. 18/212,455

Intelligent Search Engine for Detecting Unauthorized Activity

Non-Final OA §103
Filed: Jun 21, 2023
Examiner: KNACKSTEDT, JACOB BENEDICT
Art Unit: 2408
Tech Center: 2400 — Computer Networks
Assignee: BANK OF AMERICA CORPORATION
OA Round: 3 (Non-Final)
Grant Probability: 88% (Favorable)
OA Rounds: 3-4
To Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 88% (37 granted / 42 resolved), +30.1% vs TC avg (above average)
Interview Lift: strong, +16.7% among resolved cases with interview
Typical Timeline: 2y 8m average prosecution; 21 currently pending
Career History: 63 total applications across all art units

Statute-Specific Performance

§101: 8.5% (-31.5% vs TC avg)
§103: 61.6% (+21.6% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§112: 14.8% (-25.2% vs TC avg)

Tech Center average estimates shown for comparison. Based on career data from 42 resolved cases.

Office Action

§103
DETAILED ACTION

This office action is in response to the application filed on 11/21/2025. Claims 1-6, 8-14, and 16-20 are pending and examined. Claims 7 and 15 are cancelled.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/21/2025 has been entered.

Response to Arguments

Applicant's arguments with respect to amended claims 1, 9, and 17 have been fully considered but are moot in view of the new grounds of rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 9-11, and 17-19 are rejected under 35 U.S.C.
103 as being unpatentable over Jones (US 2024/0364730 A1), hereinafter Jones, in view of Jayaraman (US 2024/0281944 A1), hereinafter Jayaraman, in further view of Narendranathan (US 2024/0333699 A1), hereinafter Naren, in further view of Plymouth (US 2013/0293363 A1), hereinafter Ply, in further view of Muddu (US 2019/0173893 A1), hereinafter Muddu.

Regarding Claims 1, 9, and 17, Jones teaches:

A method, comprising: at a computing platform comprising at least one processor, a communication interface, and memory: (Jones ¶ 176, 179 teaches a system, method, or article of manufacture and a computer readable medium that includes a non-transitory computer readable storage medium impressed with computer program instructions that, when executed on a processor, implement the methods described above.)

receiving, by the at least one processor, input from a computing device associated with an entity; (Jones ¶ 54 teaches, usual context represents one or more contexts from which users usually access customer endpoints.)

determining, by the at least one processor, that the entity is a human entity; (Jones ¶ 46 teaches, "step-up authentication" refers to an additional security challenge to an ostensible user to establish their identity beyond initial credentials received in response to a request for a userID and password or from an SSO source. Examples of step-up include multi-factor authentication, CAPTCHA, biometrics, and similar techniques.)

receiving, by the at least one processor, the identity information from the one or more data sources; comparing, by the at least one processor, using an artificial intelligence model, data from the received input with the identity information from the one or more data sources; (Jones ¶ 70-72 and 82-84 teaches, examples of extracted features include request information such as source IP address, a user-agent string, and other such information that can be found in a request header or payload. Once the access prediction service receives features from the signal node, the access prediction service provides those features to ML models and receives a risk score.)

based on the comparison, assigning, by the at least one processor, using the artificial intelligence model, a risk score associated with the entity, wherein the risk score determines a presence of potential unauthorized activity associated with the entity; (Jones ¶ 62-65 teaches, the feedback from the access prediction service can include a risk score that reflects whether the request is anomalous for that particular userID.)

receiving, by the at least one processor, feedback data on the risk score assigned by the artificial intelligence model; and (Jones ¶ 62-65 teaches, the feedback from the access prediction service can include a risk score that reflects whether the request is anomalous for that particular userID.)

automatically and continuously updating, by the at least one processor, based on the feedback data, the artificial intelligence model. (Jones ¶ 100 and 119 teaches, Bayesian inference is a class of techniques that update the probability of a hypothesis as more evidence becomes available. In further enhancements, the machine learning models can include supervised and/or semi-supervised learning models, trained on labeled data based on authentication journey event data. The models can be trained on data labeled by which terminal node was reached by the authentication journey, the data also including a variety of feature categories.)

Jones does not appear to explicitly teach, but in related art:

responsive to determining that the entity is a human entity, querying, by the at least one processor, one or more data sources for identity information related to the entity; (Jayaraman ¶ 157 and 218 teaches, the verification analysis compares the received authentication data to stored user authentication data to determine whether the received and stored authentication data sets match.
The identity management service, thus, determines whether a correct user name, password, PIN, biometric data, device identification, or other authentication data is received. Following the verification analysis (i.e., responsive to), some embodiments retrieve additional elements of end user data (i.e., identity information) that can be verified against transfer data or augmentation data included as part of an electronic transfer instrument.)

It would have been obvious to one of ordinary skill in the art, prior to the applicant's earliest effective filing date, to combine the teachings of Jones with Jayaraman, to modify the access prediction service method of Jones with the retrieving of user-related information after verification. The motivation to do so, per Jayaraman ¶ 121, is enhanced security and accuracy.

Jones in view of Jayaraman does not appear to explicitly teach, but in related art:

wherein the data from the received input with the identity information from the one or more data sources further comprises: identifying, via the artificial intelligence model, a set of fact points about a known user, generating, via the artificial intelligence model using the identified set of fact points about the known user, an authentication question, wherein the identified set of fact points about the known user constitutes a correct response to the authentication question (Naren ¶ 50 teaches, the user is prompted to answer authentication questions based on a corpus of pre-selected user data that is received corresponding to a combination of, for example, any of credit-based, clinical-based, calendar entry-based, or claims-based information, that is used to generate KBA questions and answers using Artificial Intelligence and Machine Learning techniques. It can be determined whether to authenticate the user (e.g., whether to reset the user's possession factor credentials) in response to whether the user correctly answers a requisite number of current dynamic, personalized knowledge-based authentication questions.)

It would have been obvious to one of ordinary skill in the art, prior to the applicant's earliest effective filing date, to combine the teachings of Jones in view of Jayaraman with Naren, to modify the access prediction service method of Jones with the retrieving of user-related information after verification of Jayaraman with the AI-generated authentication questions of Naren. The motivation to do so, per Naren ¶ 50, is to authenticate a user.

Jones-Jayaraman-Naren does not appear to explicitly teach, but in related art:

wherein the one or more data sources comprise financial institution data sources (Ply ¶ 32 teaches, an alert event signal may be generated based on information from credit bureaus, news agencies, regulating agencies, other financial institutions, and other third party sources.)

transmitting a notification of the presence of potential unauthorized activity associated with the entity; and (Ply ¶ 137 teaches, a high rate of response for fraudulent transaction alerts based on suspicious transaction locations (i.e., unauthorized activity). ¶ 139 teaches, the alert event signal may include information related to the alert to be provided, which may include the type of alert, the alert message, the customer account, and/or the transaction.)

displaying, via the computing platform and a user interface, a display notification of the presence of potential unauthorized activity associated with the entity, wherein the user interface further displays an alert including information associated with the alert, a provide feedback option, and […].
(Ply Abstract teaches, alerts to one or more customers at an optimized time and communication channel based on at least customer preferences, transactions, activities, usage patterns, and other information; and for allowing customers to directly respond and communicate feedback to alerts (i.e., provide feedback) that are provided to complete various account actions, including to "snooze" one or more alerts. ¶ 139 teaches, the alert event signal may include information related to the alert to be provided, which may include the type of alert, the alert message, the customer account, and/or the transaction.)

It would have been obvious to one of ordinary skill in the art, prior to the applicant's earliest effective filing date, to combine the teachings of Jones-Jayaraman-Naren with Ply, to modify the access prediction service method of Jones with the retrieving of user-related information after verification of Jayaraman with the AI-generated authentication questions of Naren with the financial alerts of Ply. The motivation to do so, per Ply ¶ 9, is to optimally provide alerts to the customers but also ensure that such alerts provide proper notice to customers.

Jones-Jayaraman-Naren-Ply does not appear to explicitly teach, but in related art:

the risk score associated with the entity … an additional details option (Muddu ¶ 459-464 teaches, "Threat Review" view 4000 can identify a particular threat by its type and provides a summary description 4002 along with a threat score 4003. Threats Review view 4000 additionally prompts the user to take "Actions" 4010, view additional "Details" 4011, or set up a "Watchlist" 4021. By clicking on the "Actions" tab 4010, the user can select from several options, as shown in FIG.)

It would have been obvious to one of ordinary skill in the art, prior to the applicant's earliest effective filing date, to combine the teachings of Jones-Jayaraman-Naren-Ply with Muddu, to modify the access prediction service method of Jones with the retrieving of user-related information after verification of Jayaraman with the AI-generated authentication questions of Naren with the financial alerts of Ply with the detailed alert dashboard of Muddu. The motivation to do so, per Muddu ¶ 137, is to improve threat detection and targeted response by using a variety of threat indicators.

Regarding Claims 3, 11, and 19, Jones-Jayaraman-Naren-Ply-Muddu teaches:

The computing platform of claim 1 (Jones-Jayaraman-Naren-Ply-Muddu teaches the parent claim above), further including instructions that, when executed, cause the computing platform to:

retrieve a predetermined threshold; (Jones ¶ 62-64 teaches, a risk score can indicate high risk, for example, by exceeding a risk threshold, or by falling within a range indicating a high-risk level.)

compare the risk score to the predetermined threshold; and (Jones ¶ 62-64 teaches, a risk score can indicate high risk, for example, by exceeding a risk threshold, or by falling within a range indicating a high-risk level.)

based on the comparison, determine an occurrence of unauthorized activity associated with the entity when the risk score is above the predetermined threshold. (Jones ¶ 62-64 teaches, a risk score can indicate high risk, for example, by exceeding a risk threshold, or by falling within a range indicating a high-risk level.)
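The threshold logic recited in claims 3, 11, and 19 (and mapped to Jones ¶ 62-64) reduces to a single comparison. The sketch below is illustrative only: the threshold value and the function names are hypothetical, not drawn from the claims or the reference.

```python
# Illustrative sketch of the claimed threshold check: retrieve a predetermined
# threshold, compare the model's risk score to it, and determine unauthorized
# activity when the score is above the threshold. Values are invented.

RISK_THRESHOLD = 0.75  # the "predetermined threshold" (hypothetical value)


def unauthorized_activity(risk_score: float, threshold: float = RISK_THRESHOLD) -> bool:
    """Flag potential unauthorized activity when the score exceeds the threshold."""
    return risk_score > threshold


print(unauthorized_activity(0.91))  # True: high-risk score
print(unauthorized_activity(0.40))  # False: low-risk score
```

A score exactly at the threshold is not flagged here, since the claim language requires the score to be "above" the predetermined threshold.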
Regarding Claims 2, 10, and 18, Jones-Jayaraman-Naren-Ply-Muddu teaches:

The computing platform of claim 1 (Jones-Jayaraman-Naren-Ply-Muddu teaches the parent claim above):

transmitting the authentication question to a computing device associated with the entity; receiving, from the computing device associated with the entity, response data responsive to the authentication question; (Jones ¶ 75 teaches, MFA node 225 presents an MFA challenge to the user, if the request is routed to MFA node 225. The MFA challenge requires that the client provide verification factors beyond the userID and password that the client previously presented, such as providing answers to security questions that match previously provided security questions.)

comparing the response data to the set of fact points about the known user; and (Jones ¶ 75 teaches, MFA node 225 presents an MFA challenge to the user, if the request is routed to MFA node 225. The MFA challenge requires that the client provide verification factors beyond the userID and password that the client previously presented, such as providing answers to security questions that match previously provided security questions. The outcome determines how the request itself is further handled, and post-journey handling of the request itself is not a focus of this disclosure. The outcome is either the success node or the failure node once the request has been routed through the inner tree node. (i.e., the answer must be compared for an outcome to be determined.))

based on matching the set of fact points about the known user to the response data, authenticating the entity. (Jones ¶ 75 teaches, other configurations of decision node 224 can supplement or supplant the behavior of decision node 224 described above, based on the customer's particular access policies or requirements. MFA node 225 presents an MFA challenge to the user; the MFA challenge requires that the client provide verification factors beyond the userID and password that the client previously presented, such as providing answers to security questions that match previously provided security questions. The outcome determines how the request itself is further handled, and post-journey handling of the request itself is not a focus of this disclosure. The outcome is either the success node or the failure node once the request has been routed through the inner tree node. (i.e., a success is a match.))

wherein comparing, using the artificial intelligence model, the data from the received input with the identity information from the one or more data sources further comprises identifying a set of fact points about a known user; (Jones ¶ 83 teaches, the access prediction service also provides feedback to the MFA authentication journey. The feedback provided by the access prediction service includes the risk score. The risk score can be based on a combination of the ML model risk sub-score and a heuristic risk sub-score based on the results of one or more heuristic rules. The feedback can also include an explanation of the risk score. The explanation indicates a feature that was anomalous. Examples of explanations include unusual city, unusual OS version, unusual browser family, and other features received from the signal node (i.e., identity information with fact points) when those features are part of an unusual context for the userID. (i.e., the ML model compares the received information to a baseline to generate the score.))

Claims 4, 12, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Jones-Jayaraman-Naren-Ply-Muddu as applied to claim 1, in further view of Li (US 2024/0319990 A1), hereinafter Li.
Regarding Claims 4, 12, and 20, Jones-Jayaraman-Naren-Ply-Muddu teaches:

The computing platform of claim 1 (Jones-Jayaraman-Naren-Ply-Muddu teaches the parent claim above).

Jones-Jayaraman-Naren-Ply-Muddu does not appear to explicitly teach, but in related art:

wherein automatically and continuously updating the artificial intelligence model based on the feedback data comprises adjusting the risk score assigned by the artificial intelligence model based on the feedback data. (Li ¶ 101 teaches, alternatively or in addition, the threshold is adjusted based on feedback 216 received by the machine learning platform 136 implementing the version difference algorithm 138.)

It would have been obvious to one of ordinary skill in the art, prior to the applicant's earliest effective filing date, to combine the teachings of Jones-Jayaraman-Naren-Ply-Muddu with Li, to modify the access prediction service method of Jones with the retrieval of user-related information after verification of Jayaraman with the AI-generated authentication questions of Naren with the financial alerts of Ply with the detailed alert dashboard of Muddu with the adjusting threshold of Li. The motivation to do so constitutes applying a known technique of adjusting a threshold based on system feedback to known devices and/or methods for detecting bot users ready for improvement to yield predictable results.

Claims 5-6 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Jones-Jayaraman-Naren-Ply-Muddu as applied to claim 1, in further view of Smyth (US 2018/0034842 A1), hereinafter Smyth.

Regarding Claims 5 and 13, Jones-Jayaraman-Naren-Ply-Muddu teaches:

The computing platform of claim 1 (Jones-Jayaraman-Naren-Ply-Muddu teaches the parent claim above).

Jones-Jayaraman-Naren-Ply-Muddu does not appear to explicitly teach, but in related art:

wherein the one or more data sources comprise historical data sources and publicly available data sources. (Smyth Fig. 2, ¶ 20-21 teaches the concept: the data sources 110 may be a shared public repository of information or a proprietary repository of information that is not available to the public. The data processor 120 creates a combined data set 123 of historical vulnerability information obtained from data sources.)

It would have been obvious to one of ordinary skill in the art, prior to the applicant's earliest effective filing date, to combine the teachings of Jones-Jayaraman-Naren-Ply-Muddu with Smyth, to modify the access prediction service method of Jones with the retrieval of user-related information after verification of Jayaraman with the AI-generated authentication questions of Naren with the data sources of Smyth. The motivation to do so, per Smyth ¶ 48, is to improve the stability of the prediction.

Regarding Claims 6 and 14, Jones-Jayaraman-Naren-Ply-Muddu-Smyth teaches:

The computing platform of claim 1 (Jones-Jayaraman-Naren-Ply-Muddu teaches the parent claim above), wherein the one or more data sources comprise social media data sources. (Smyth ¶ 20-21 teaches, other public data sources include VirusTotal Samples and Reports 230 and online discussions 240, including those on social media and discussion forums.) The motivation given for Claim 5 is equally applicable to the above claims.

Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Jones-Jayaraman-Naren-Ply-Muddu as applied to claim 1, in further view of Palan (US 12,244,608 B2), hereinafter Palan.

Regarding Claims 8 and 16, Jones-Jayaraman-Naren-Ply-Muddu teaches:

The computing platform of claim 1 (Jones-Jayaraman-Naren-Ply-Muddu teaches the parent claim above).

Jones-Jayaraman-Naren-Ply-Muddu does not appear to explicitly teach, but in related art:

wherein determining that the entity is a human entity includes identifying an input speed associated with the received input. (Palan Col. 6 Ln. 40-67 and Col. 10 Ln.
37-57 teaches, these parameters may be compared with parameters associated with other users, verified human users, and/or automated bots, to determine whether a submitted response is anomalous relative to other human users. User interaction (e.g., via occasional direct interaction with a user in connection with and/or alternative to advertisement rendering), including user "fingerprints" generated based on user interaction (e.g., typing cadence, pointer click timing in connection with interactive activities, etc.), and/or other information and/or messages obtained from devices proximate to a user and/or an associated device may be used in connection with generating Turing scores associated with a user and/or an associated device.)

It would have been obvious to one of ordinary skill in the art, prior to the applicant's earliest effective filing date, to combine the teachings of Jones-Jayaraman-Naren-Ply-Muddu with Palan, to modify the access prediction service method of Jones with the AI-generated authentication questions of Naren with the detection of a human user based on input speed of Palan. The motivation to do so, per Palan Col. 8 Ln. 44-47, is to identify anomalous user behavior.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US 12,143,405 B2 - Malicious Computing Attacks During Suspicious Device Behavior
US 11,757,914 B1 - Automated Responsive Message To Determine A Security Risk Of A Message Sender

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB BENEDICT KNACKSTEDT, whose telephone number is (703) 756-5608. The examiner can normally be reached Monday-Friday, 8:00 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Linglan Edwards, can be reached at (571) 270-5440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.B.K./
Examiner, Art Unit 2408

/LINGLAN EDWARDS/
Supervisory Patent Examiner, Art Unit 2408
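As a rough illustration of the claimed authentication flow the rejection maps across the references (fact points about a known user feed a generated knowledge-based authentication question, per the claim 1/2 limitations mapped to Naren; input speed screens for human entities, per the claim 8/16 limitations mapped to Palan), the sketch below uses invented fact data, a simple question template, and a hypothetical typing-speed cutoff. None of these values or names come from the claims or the cited art, and a real system would use an AI model over the recited data sources rather than a template.

```python
# Hypothetical sketch of the claimed KBA flow: identify fact points about a
# known user, generate a question whose correct answer is a fact point, and
# authenticate on a matching response. An input-speed heuristic illustrates
# the human-entity determination. All data and thresholds are invented.

from dataclasses import dataclass


@dataclass
class AuthQuestion:
    prompt: str
    answer: str  # the fact point that constitutes the correct response


def generate_question(fact_points: dict) -> AuthQuestion:
    """Build an authentication question from a known user's fact points.

    A deployed system would use an AI model over financial, historical, and
    social media data sources; this sketch simply templates one stored fact.
    """
    field, value = next(iter(fact_points.items()))
    return AuthQuestion(prompt=f"What is the {field} on file?", answer=value)


def authenticate(question: AuthQuestion, response: str) -> bool:
    """Authenticate the entity when the response matches the fact point."""
    return response.strip().lower() == question.answer.strip().lower()


def is_plausibly_human(chars_typed: int, seconds: float, max_cps: float = 20.0) -> bool:
    """Input-speed heuristic: implausibly fast typing suggests a bot."""
    return (chars_typed / seconds) <= max_cps


facts = {"city of the first account": "Charlotte"}
question = generate_question(facts)
print(question.prompt)                        # What is the city of the first account on file?
print(authenticate(question, " Charlotte "))  # True
print(authenticate(question, "Dallas"))       # False
print(is_plausibly_human(40, 10.0))           # True: 4 chars/sec is plausible
```

The normalization in `authenticate` (trimming and lowercasing) is one design choice for matching free-text responses; a production system would likely tolerate broader variation in phrasing.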

Prosecution Timeline

Jun 21, 2023: Application Filed
Apr 15, 2025: Non-Final Rejection — §103
Jul 18, 2025: Response Filed
Aug 19, 2025: Final Rejection — §103
Nov 21, 2025: Request for Continued Examination
Dec 06, 2025: Response after Non-Final Action
Jan 12, 2026: Non-Final Rejection — §103
Feb 25, 2026: Interview Requested
Mar 11, 2026: Examiner Interview Summary
Mar 11, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596633
VULNERABILITY DETECTION METHOD AND DEVICE
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591692
METHODS FOR SECURING DATA
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12579300
ELECTRONIC APPARATUS AND CONTROL METHOD THEREFOR
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12579124
ZERO-CODE APPROACH FOR MODEL VERSION UPGRADES
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12566885
DATA PROCESSING SYSTEMS AND METHODS FOR AUTOMATICALLY DETECTING TARGET DATA TRANSFERS AND TARGET DATA PROCESSING
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 88%
With Interview: 99% (+16.7%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 42 resolved cases by this examiner. Grant probability derived from career allow rate.
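The headline allow rate can be reproduced from the career data above (37 granted of 42 resolved). The implied Tech Center average below is a back-calculation from the stated +30.1% delta, an assumption about how this dashboard computes its comparison rather than a figure it reports directly.

```python
# Reproducing the dashboard's headline allow rate from the examiner's career
# data (37 granted / 42 resolved), plus the Tech Center average implied by
# the stated +30.1% delta (an assumption about the dashboard's methodology).

granted, resolved = 37, 42
allow_rate = granted / resolved

print(f"Career allow rate: {allow_rate:.0%}")           # 88%
print(f"Implied TC average: {allow_rate - 0.301:.0%}")  # 58%
```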
