Prosecution Insights
Last updated: April 19, 2026
Application No. 18/444,905

AUTOMATIC CREDENTIAL GENERATION FOR AUTHENTICATION PENETRATION TESTING

Final Rejection (§102, §103)
Filed: Feb 19, 2024
Examiner: HO, DAO Q
Art Unit: 2432
Tech Center: 2400 (Computer Networks)
Assignee: Cisco Technology Inc.
OA Round: 2 (Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (above average; 565 granted of 679 resolved; +25.2% vs Tech Center average)
Interview Lift: +32.5% for resolved cases with interview
Typical Timeline: 2y 9m average prosecution; 31 applications currently pending
Career History: 710 total applications across all art units

Statute-Specific Performance

§101: 11.6% allow rate (-28.4% vs TC avg)
§103: 36.3% allow rate (-3.7% vs TC avg)
§102: 23.7% allow rate (-16.3% vs TC avg)
§112: 19.9% allow rate (-20.1% vs TC avg)
Tech Center averages are estimates. Figures are based on career data from 679 resolved cases.
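The per-statute figures above are internally consistent if each "vs TC avg" value is read as a percentage-point delta. A quick sketch (purely illustrative; the page does not publish the Tech Center baseline itself):

```python
# Allow rates and "vs TC avg" deltas as shown above, in percent.
allow_rates = {"101": 11.6, "103": 36.3, "102": 23.7, "112": 19.9}
deltas = {"101": -28.4, "103": -3.7, "102": -16.3, "112": -20.1}

# Reading each delta as a percentage-point difference, the implied
# Tech Center average works out the same (40.0%) for every statute.
implied_tc_avg = {s: round(allow_rates[s] - deltas[s], 1) for s in allow_rates}
print(implied_tc_avg)

# The headline career allow rate also checks out: 565 of 679 resolved.
print(round(100 * 565 / 679, 1))  # 83.2, displayed as 83%
```

The uniform 40.0% implied baseline suggests the dashboard computes all four deltas against a single Tech Center average rather than per-statute baselines.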

Office Action

Grounds of rejection: §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Response to Amendment

This is a reply to the application filed on 9/25/2025, in which claims 1, 3-11, 13-15 and 17-23 are pending. Claims 2, 12 and 16 are cancelled. Claims 21-23 are newly added.

Response to Arguments

Claim Rejections - 35 U.S.C. § 102 and 35 U.S.C. § 103: Applicant's arguments with respect to claims 1, 3-11, 13-15 and 17-23 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-11, 13-15 and 17-23 are rejected under 35 U.S.C. 103 as being unpatentable over Ben David et al. (US 20220321550 A1; hereinafter Ben) in view of Romero Zambrano et al. (US 20220182397 A1; hereinafter Romero), further in view of Deardorff et al. (US 20200145446 A1; hereinafter Deardorff).

Regarding claims 1, 11 and 15, Ben discloses a method comprising: gathering public credentials from a plurality of credential databases (scraping of the at least one external data source 203 to extract any leaked data as soon as it is published, and for instance storing scraped data 204 at a dedicated scraped data database for future analysis, e.g., to extract credentials; in some embodiments, scraping of data is carried out periodically at predetermined times [Ben; ¶32; Figs. 2-3 and associated texts]); generating, using machine learning logic, one or more sets of the public credentials for each of one or more credential categories (identify and extract only credentials from the data in the at least one external data source 203, for instance extract passwords from a long textual passage, e.g., by applying a machine learning algorithm; the processor 201 applies a machine learning algorithm 206 on (monitored) data of the at least one data source 203 in order to identify at least one potential leaked credential 205 using at least one neural network (NN) 207, for instance monitoring the at least one data source 203 to retrieve potential credentials. A neural network (e.g., NN implementing machine learning) may refer to an information processing paradigm that may include nodes, referred to as neurons, organized into layers, with links between the neurons.
The links may transfer signals between neurons and may be associated with weights. A NN may be configured or trained for a specific task, e.g., pattern recognition or classification. Training a NN for the specific task may involve adjusting these weights based on examples [Ben; ¶33-34; Figs. 2-3 and associated texts]); gathering enterprise data for an enterprise, the enterprise data including user data of enterprise users of enterprise assets (manage credentials within the computer network 210; in some embodiments, the system 200 includes a virtual appliance embedded within the computer network 210 to create an interface between the active directory application 213 and the processor 201 [Ben; ¶30-34; Figs. 2-3 and associated texts]); categorizing, using the machine learning logic, the enterprise data [into at least one credential category of the one or more credential categories, wherein the at least one credential category includes credential generation rules for generating credentials associated with the at least one credential category] (parsed into groups of text, such as terms within a sentence, that potentially include at least one credential therein [Ben; ¶36-38; Figs. 2-3 and associated texts]); and generating, using the machine learning logic, a plurality of testing credentials based on the enterprise data and the credential generation rules of the at least one credential category (identified potential credential samples are tagged in order to improve the identification by the machine learning algorithm 206. For example, "Password!" may resemble a password, but it may appear in two different contexts: the first context can result in the term "Password!" being an actual password; a second context can result in the term "Password!" being a phrase or other form of text. If the potential credential is, for instance, within the sentence "It is known that Password! is one of the most common passwords in the world", it may not be identified as a leaked credential, while in the sentence "User: admin, Password: Password!" it may be identified as a leaked credential. The input includes regular text, and the machine learning algorithm 206 may directly extract 305 at least one potential credential. Once at least one potential credential is extracted and/or identified, it may be added 306 to a dataset of credentials for future use by the machine learning algorithm 206 [Ben; ¶36-43; Figs. 3-4 and associated texts]).

Ben discloses mitigating leakage of credentials of a user of a computer network, including monitoring at least one data source to scrape data that is compatible with credential data, applying a machine learning algorithm to the scraped data to identify at least one potential leaked credential, wherein the at least one potential leaked credential is identified using at least one neural network, authenticating the identified at least one potential leaked credential against a database of valid credentials of the computer network, and replacing credentials corresponding to the at least one leaked credential. Ben does not explicitly disclose categorizing, using the machine learning logic, the enterprise data into at least one credential category of the one or more credential categories, wherein the at least one credential category includes credential generation rules for generating credentials associated with the at least one credential category; however, in a related and analogous art, Romero teaches this feature.
In particular, Romero teaches categorizing data into various groups, such as location, user/identification, credential types, etc., under a predefined collection similarity metric; this may include data such as, for example, credential hash sets, credential hash lists, user agent reputation data, IP address reputation data, authorization protocol identification, client application identification, origin location familiarity, origin device familiarity, origin location repetition, legacy risk scores, or legacy security service data [Romero; ¶24, 49, 70-74, 95-96, 181, 284-285, 306, 319; Figs. 3, 6-7, 9-10 and associated texts]; performing penetration [testing] of the enterprise assets using the plurality of testing credentials (identity attack, also referred to as "identity spray attack"; typically performed by an external or insider attacker who is acting beyond the scope of the authority granted to them by the owner of a monitored network, but may also be part of a penetration test or quality control test; testing with data, tuning model parameters, selecting which signals to present for classification, and operably linking the model to other software through an interface [Romero; ¶215, 286]).

It would have been obvious before the effective filing date of the claimed invention to modify Ben in view of Romero, with the motivation to improve the machine learning model by adapting to changed attacker behavior through retraining with updated data, making the model-based approach more effective over time than rigid statistical or heuristic detection approaches [Romero; Abstract; ¶5-6].

The Ben-Romero combination discloses leakage of user credentials and identity spray attack detection. The Ben-Romero combination does not explicitly disclose performing penetration testing of the enterprise assets using the plurality of testing credentials; however, in a related and analogous art, Deardorff teaches this feature.
In particular, Deardorff teaches penetration testing that detects vulnerabilities based on data, credentials and exploits [Deardorff; ¶43-47, 52-57]. It would have been obvious before the effective filing date of the claimed invention to modify the Ben-Romero combination in view of Deardorff to use the spray attack method in penetration testing, with the motivation to detect and prevent unauthorized access with stolen credentials.

Regarding claim 3, the Ben-Romero-Deardorff combination discloses further comprising: updating the machine learning logic based on results of the penetration testing (the machine learning algorithm 206 may directly extract 305 at least one potential credential; once at least one potential credential is extracted and/or identified, it may be added 306 to a dataset of credentials for future use by the machine learning algorithm 206 [Ben; ¶36-43; Figs. 3-4 and associated texts]).

Regarding claims 4, 13 and 17, the Ben-Romero-Deardorff combination discloses wherein categorizing, using the machine learning logic, the enterprise data into at least one credential category of the one or more credential categories includes: determining characteristics of the enterprise data or identifying terms included in the enterprise data; and matching the characteristics or terms of the enterprise data to at least one credential characteristic or term associated with the at least one credential category (determining that at least one potential credential identified 304 within the chunk of text corresponds to a context of credentials, and is added 306 to a dataset of credentials for future use by the machine learning algorithm [Ben; ¶34-40]).
Regarding claims 5, 14, 18 and 22, the Ben-Romero-Deardorff combination discloses wherein the plurality of testing credentials include a credential spray comprising a plurality of passwords (identity spray attacks; a machine learning model classifies account access attempts as authorized or unauthorized, based on dozens of different pieces of information [Romero; ¶24, 49, 70-74, 95-96, 181, 284-285, 306, 319; Figs. 3, 6-7, 9-10 and associated texts]). The motivation would have been to improve the machine learning model by adapting to changed attacker behavior through retraining with updated data, making the model-based approach more effective over time than rigid statistical or heuristic detection approaches [Romero; Abstract; ¶5-6].

Regarding claim 6, the Ben-Romero-Deardorff combination discloses the method of claim 5, wherein the credential spray further comprises a plurality of usernames (perform identity spray attack detection steps, which include (a) noting an attempt to access an account, (b) determining whether the account is under an identity spray attack, (c) in response to determining that the account is under the identity spray attack, utilizing the machine learning model to classify the attempt, and (d) in response to classifying the attempt as an unauthorized attempt, applying a security enhancement to the account. The embodiment enhances cybersecurity by detecting behavior which indicates an identity attack and by applying the security enhancement. Unlike attack detection approaches that rely on statistics alone or heuristics alone, such an embodiment's attack detection utilizes the machine learning model, which can be continuously retrained to adapt to changes in attacker behavior.
Some embodiments provide or use a method for adaptively detecting identity spray attacks, including: noting an attempt to access an account of a computing system; determining whether the account is under an identity spray attack; when the determining determines that the account is under the identity spray attack, utilizing a machine learning model to classify the attempt, the machine learning model configured according to training data which includes user agent reputation data and IP address reputation data; and when the classifying classifies the attempt as an unauthorized attempt, applying a security enhancement to the account. In particular, in some embodiments, the method enhances cybersecurity by detecting behavior which indicates an identity attack and by imposing an access restriction security enhancement in response to the behavior, e.g., by locking an account, blocking an IP address, or requiring additional authentication before access to an account is allowed [Romero; ¶5-6, 24, 49, 70-74, 95-96; Figs. 3, 6-7, 9-10 and associated texts]). The motivation would have been to improve the machine learning model by adapting to changed attacker behavior through retraining with updated data, making the model-based approach more effective over time than rigid statistical or heuristic detection approaches [Romero; Abstract; ¶5-6].

Regarding claims 7 and 20, the Ben-Romero-Deardorff combination discloses wherein the one or more credential categories include at least one of: a user name category; a date of birth category; a hobbies category; and an industry specific term category (e.g., email addresses, usernames and/or passwords [Ben; ¶30-34; Figs. 2-3 and associated texts]).
Regarding claims 8 and 21, the Ben-Romero-Deardorff combination discloses the method of claim 1, wherein gathering the enterprise data includes: gathering the enterprise data from public data sources including social media sources; and gathering the enterprise data from private data sources of an enterprise network (from external data and from an internal active directory [Ben; ¶30-34; Figs. 2-3 and associated texts]).

Regarding claims 9 and 19, the Ben-Romero-Deardorff combination discloses wherein generating, using the machine learning logic, the one or more sets of the public credentials for each of the one or more credential categories includes: training the machine learning logic to identify at least one credential characteristic of the public credentials in order to generate the one or more sets of the public credentials for each of the one or more credential categories (once at least one potential credential is extracted and/or identified, it may be added to a dataset of credentials for future use by the machine learning algorithm [Ben; ¶36-43; Figs. 3-4 and associated texts]).

Regarding claims 10 and 23, the Ben-Romero-Deardorff combination discloses the method of claim 1, further comprising: gathering new enterprise data for the enterprise users; and performing the categorizing and the generating of the plurality of testing credentials based on the new enterprise data (the active directory and database are constantly updating [Ben; ¶36-43; Figs. 2-4 and associated texts]).

Internet Communications

Applicant is encouraged to submit a written authorization for Internet communications (PTO/SB/439, http://www.uspto.gov/sites/default/files/documents/sb0439.pdf) in the instant patent application to authorize the examiner to communicate with the applicant via email. The authorization will allow the examiner to better practice compact prosecution.
The written authorization can be submitted via one of the following methods only: (1) Central Fax, which can be found in the Conclusion section of this Office action; (2) regular postal mail; (3) EFS-Web; or (4) the service window on the Alexandria campus. EFS-Web is the recommended way to submit the form, since this allows the form to be entered into the file wrapper within the same day (system dependent). Written authorization submitted via other methods, such as direct fax to the examiner or email, will not be accepted. See MPEP § 502.03.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAO Q HO, whose telephone number is (571) 270-5998. The examiner can normally be reached 7:00am - 5:00pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jeffrey Nickerson, can be reached at (469) 295-9235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAO Q HO/
Primary Examiner, Art Unit 2432
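The independent claim as recited in the rejection can be sketched in a few lines of code, purely as an illustration of the claimed flow (category names, generation rules, and user data below are all hypothetical; nothing here is taken from the application or the cited references, and the claim recites machine learning logic where this toy uses fixed rules):

```python
# Hypothetical per-category credential generation rules: each rule maps
# one piece of enterprise data to a list of candidate passwords.
RULES = {
    "hobbies": lambda v: [v, v.capitalize() + "123", v + "!"],
    "date_of_birth": lambda v: [v.replace("-", ""), v[:4]],
}

def generate_testing_credentials(enterprise_data):
    """enterprise_data: {category: [values]} -> flat list of candidate
    passwords (the 'credential spray' of claims 5/14/18/22)."""
    spray = []
    for category, values in enterprise_data.items():
        rule = RULES.get(category)
        if rule is None:
            continue  # no generation rule for this category
        for value in values:
            spray.extend(rule(value))
    return spray

users = ["alice", "bob"]  # usernames join the spray per claim 6
passwords = generate_testing_credentials(
    {"hobbies": ["chess"], "date_of_birth": ["1990-07-04"]}
)
# Pair every username with every candidate password, as a password
# sprayer would, then hand the pairs to the penetration-testing step.
attempts = [(u, p) for u in users for p in passwords]
print(len(passwords), len(attempts))  # 5 10
```

The examiner's mapping splits this flow across the references: Ben supplies the scraping and ML-based credential extraction, Romero the categorization and spray-attack context, and Deardorff the penetration-testing step that consumes the generated credentials.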

Prosecution Timeline

Feb 19, 2024: Application Filed
Jun 25, 2025: Non-Final Rejection (§102, §103)
Sep 11, 2025: Applicant Interview (Telephonic)
Sep 11, 2025: Examiner Interview Summary
Sep 25, 2025: Response Filed
Jan 14, 2026: Final Rejection (§102, §103)
Mar 25, 2026: Examiner Interview Summary
Mar 25, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603778: APPARATUS AND METHOD FOR GENERATING AN NFT VAULT (2y 5m to grant; granted Apr 14, 2026)
Patent 12598169: System and Method for Early Detection of Duplicate Security Association of IPsec Tunnels (2y 5m to grant; granted Apr 07, 2026)
Patent 12587852: METHOD AND APPARATUS FOR MANAGING LICENSES FOR DATA IN M2M SYSTEM (2y 5m to grant; granted Mar 24, 2026)
Patent 12585736: SYSTEMS AND METHODS FOR AUTHENTICATION AND AUTHORIZATION FOR SOFTWARE LICENSE MANAGEMENT (2y 5m to grant; granted Mar 24, 2026)
Patent 12572378: SECURE ARBITRATION MODE TO BUILD AND OPERATE WITHIN TRUST DOMAIN EXTENSIONS (2y 5m to grant; granted Mar 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
Grant Probability With Interview: 99% (+32.5%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate

Based on 679 resolved cases by this examiner. Grant probability is derived from the career allow rate.
