Prosecution Insights
Last updated: April 19, 2026
Application No. 18/777,854

PERFORM USER VALIDATION USING LOCAL RESOURCES

Status: Final Rejection (§103)
Filed: Jul 19, 2024
Examiner: SHAIFER HARRIMAN, DANT B
Art Unit: 2434
Tech Center: 2400 — Computer Networks
Assignee: Kyndryl Inc.
OA Round: 2 (Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 81% (625 granted / 771 resolved), +23.1% vs TC avg — above average
Interview Lift: +17.2% among resolved cases with interview — a strong lift
Typical Timeline: 3y 0m average prosecution; 33 applications currently pending
Career History: 804 total applications across all art units
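The headline percentages follow from simple arithmetic on the career counts above. The sketch below reproduces them in Python; treating the interview lift as a straight addition to the base rate is an assumption, inferred from how the dashboard reports 81% + 17.2% ≈ 98%.

```python
# Reproduce the examiner's headline figures from the raw counts.
# Counts come from the dashboard; the additive-lift model is an assumption.
granted = 625
resolved = 771

allow_rate = granted / resolved            # career allow rate
interview_lift = 0.172                     # +17.2% reported for interviewed cases

print(f"Career allow rate: {allow_rate:.1%}")                   # 81.1%, shown as 81%
print(f"With interview:    {allow_rate + interview_lift:.1%}")  # 98.3%, shown as 98%
```

The raw ratio (625/771 ≈ 81.1%) matches the displayed 81%, and adding the lift lands on the displayed 98% figure after rounding.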

Statute-Specific Performance

§101: 19.7% (-20.3% vs TC avg)
§103: 34.2% (-5.8% vs TC avg)
§102: 14.2% (-25.8% vs TC avg)
§112: 15.6% (-24.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 771 resolved cases.
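The per-statute deltas imply a common Tech Center baseline. A quick Python check, assuming the delta is simply the examiner's rate minus the TC average (an assumption about how the dashboard computes "vs TC avg"), recovers it:

```python
# Back out the implied Tech Center average from each per-statute rate
# and its "vs TC avg" delta, as reported above (values in percent).
stats = {
    "101": (19.7, -20.3),
    "103": (34.2, -5.8),
    "102": (14.2, -25.8),
    "112": (15.6, -24.4),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # rate = tc_avg + delta  =>  tc_avg = rate - delta
    print(f"Section {statute}: examiner {rate}% vs implied TC avg {tc_avg:.1f}%")
```

All four statutes back out to the same ~40.0% baseline, consistent with a single TC-wide average estimate rather than a per-statute one.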

Office Action (§103)
DETAILED ACTION

Examiner's Note: The Examiner has pointed out particular references contained in the prior art of record within the body of this action for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply. Applicant, in preparing the response, should consider fully the entire reference as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's remarks filed on 01/28/2026 have been fully considered. Regarding claim[s] 1 – 20 under the anticipatory rejection, applicant's remarks are moot because the new ground of rejection does not rely on all the references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Therefore, see the office action below. The examiner will address all other remarks that do not concern the prior art rejections, if any, in the office action below.

Response to Amendment

Status of the instant application: Claim[s] 1 – 20 are pending in the instant application. Regarding claim[s] 1 – 20 under the anticipatory rejection, applicant's claim amendments have been considered; therefore, the rejections are withdrawn. However, there is a new prior art rejection issued on the claims to address applicant's newly added claim amendments. See the office action below.

Claim Interpretation

Regarding claim[s] 5, 13, applicant's claim amendments have been considered; therefore, the objections are withdrawn.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claim(s) 1 – 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chitragar et al. [US PGPUB # 2024/0273177] in view of Anderson et al. [US PAT # 11924214].

As per claim 1.
Chitragar does teach a computer-implemented method [Chitragar, paragraph: 0004, lines 1 – 4, According to one embodiment, a method, computer system, and computer program product for CAPTCHA image generation and verification is provided] comprising:

in response to receiving a request for access [Chitragar, paragraph: 0011, As previously described, authentication relates to determining whether someone or something is actually who or what it purported to be. In the realm of computing, authentication relates to a user providing credentials that verify the user's identity to operate a device or program or access data.] from a user [Chitragar, Figure # 2, steps: 206, 208, paragraph: 0040, Then, at 206, the CAPTCHA image generation and verification program 150 prompts the user with a set of random attribute options and one or more options from the generated set. Once the image is generated, the CAPTCHA image generation and verification program 150 may present the image to the user and prompt the user with a set of random attribute options where a subset of one or more options within the set of random attribute options is from the generated set of n attributes. The CAPTCHA image generation and verification program 150 may display the generated image and the set of random attribute options to the user on a device display screen with a prompt for the user to select one or more options within the set of random attribute options that are depicted within the generated and displayed image. Then further in paragraph: 0044, lines 1 – 10, Next, at 208, the CAPTCHA image generation and verification program 150 receives user [i.e. applicant's user] selections to the prompt [i.e. applicant's in response to….]. As previously described, the CAPTCHA image generation and verification program 150 may prompt the user to select one or more options from the displayed random attributes that are depicted in the generated image.],

selecting a selected resource type from a plurality of resource types [Chitragar, paragraph: 0036, lines 8 – 16, For example, the user may type the prompts to the engine through user interactions with a peripheral device, such as a device within device set 123. However, the CAPTCHA image generation and verification program 150 may generate the prompts in the form of a set of n attributes to be input. The set of n attributes may include any preconfigured number of attributes. In one or more embodiments, the n attributes may comprise nouns, adjectives, adverbs, verbs, or any other grammatical parts of speech capable of being input to an artificial intelligence image generator to create an image];

selecting a selected user resource from a user resource pool…………………………………., the selected user resource having the selected resource type [Chitragar, paragraph: 0036, lines 8 – 16, However, the CAPTCHA image generation and verification program 150 may generate the prompts in the form of a set of n attributes to be input. The set of n attributes may include any preconfigured number of attributes. In one or more embodiments, the n attributes may comprise nouns, adjectives, adverbs, verbs, or any other grammatical parts of speech capable of being input to an artificial intelligence image generator to create an image];

executing a machine learning model to output a prompt in response to inputting the selected user resource [Chitragar, paragraph: 0016, lines 1 – 4, According to one embodiment, a CAPTCHA image generation and verification program may generate a CAPTCHA image and verify it automatically against a machine learning and/or artificial intelligence engine.]…………………………………..;

causing execution of a generative artificial intelligence (AI) engine to output generated resources in response to inputting the prompt [Chitragar, paragraph: 0039, lines 23 – 33, Once the set of n attributes are inputted to the artificial intelligence image generator, the CAPTCHA image generation and verification program 150 may generate, or instruct the artificial intelligence image generator to generate, the image based on the five attributes. For example, continuing the above situation where the set of n attributes includes the words "ocean", "boy", "summer", "cup", and "hut", the CAPTCHA image generation and verification program 150 may generate a beach scene-type image that depicts a boy playing by the ocean in summer with a cup and a hut in the background.];

and performing an authentication by presenting the generated resources and the selected user resource [Chitragar, paragraph: 0045, lines 18 – 22, Once the CAPTCHA image generation and verification program 150 verifies the user selection is correct based on the verification process, the CAPTCHA image generation and verification program 150 may allow the user to proceed to the desired destination or program.].

Chitragar does not clearly teach the claim limitations of: "……..from at least one of a user device of the user or user cloud resources of the user,……………………….;" "……..from at least one of a user device of the user or user cloud resources of the user,……………………….;"

However, Anderson does teach the claim limitations of: "……..from at least one of a user device of the user or user cloud resources of the user,……………………….[Figure # 2, and col. 8, lines 33 – 42, Returning to method 200, at step 214, the access management client 105 receives and unpacks the authentication message received from the identity platform 104. This involves retrieving the list of roles assigned to the user [i.e. applicant's…user resource pool], presenting the list on a display of the user device 102 [i.e. applicant's..from at least of a user device], and prompting the user to select one or more roles from the list that the user wishes to assume. Upon receiving selection of a role [i.e. applicant's…selecting a selected user resource from a user resource pool], the access management client 105 is configured to forward the authentication message and role selection to the cloud platform 106 at step 216. Further of Figure # 2, and col. 8, lines 50 – 60, At step 218 in response to receiving the authentication message and role selection, the cloud platform 106 validates the authentication message and returns security credentials associated with the selected role to the access management client 105. Security credentials are managed by the cloud platform 106 and are associated with a set of permissions for accessing resources on the cloud platform 106. These credentials are provided to a requesting party so that the requesting party can use the credentials when requesting access to cloud resources];" "……..from at least one of a user device of the user or user cloud resources of the user,……………………….[Figure # 2, and col. 6 thru col. 10];"

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chitragar and Anderson in order for the generation and selection of n attributes by a user for implementation of the CAPTCHA authentication of Chitragar to include periodically generating and implementing the CAPTCHA of Anderson. This would allow for a fine-grained and continuous authentication method to be sure the user is who they say they are. See Col. 4, lines 59 – 61 of Anderson.

As per claim 2. Chitragar does teach the computer-implemented method of claim 1, further comprising, in response to receiving a selection of the selected user resource for the authentication, granting the access [Chitragar, paragraph: 0045, lines 18 – 22, Once the CAPTCHA image generation and verification program 150 verifies the user selection is correct based on the verification process, the CAPTCHA image generation and verification program 150 may allow the user to proceed to the desired destination or program.].

As per claim 3. Chitragar does teach the computer-implemented method of claim 1, further comprising, in response to receiving a selection of at least one of the generated resources for the authentication, denying the access [Chitragar, paragraph: 0045, lines 1 – 6, Then, at 210, the CAPTCHA image generation and verification program 150 verifies the user selections against the generated set. Upon receiving the user selections, the CAPTCHA image generation and verification program 150 may verify whether the selections are accurate and verify or incorrect and deny].

As per claim 4.
Chitragar does teach the computer-implemented method of claim 1, further comprising, in response to receiving a selection of at least one of the generated resources for the authentication, performing a security action [Chitragar, paragraph: 0045, lines 1 – 6, Then, at 210, the CAPTCHA image generation and verification program 150 verifies the user selections against the generated set. Upon receiving the user selections, the CAPTCHA image generation and verification program 150 may verify whether the selections are accurate and verify or incorrect and deny].

As per claim 5. Chitragar does teach the computer-implemented method of claim 1, wherein the generated resources have the selected resource type the same as the selected user resource [Chitragar, paragraph: 0035, lines 1 – 9, According to at least one embodiment, the CAPTCHA image generation and verification program 150 may generate a set of n attributes that may in turn be used to generate an image using an artificial intelligence and machine learning engine. The CAPTCHA image generation and verification program 150 may then generate a second set of attributes to present to a user for the verification process. The separate set of attributes may include a subset that include one or more of the set of n attributes].

As per claim 6. Chitragar does teach the computer-implemented method of claim 1, wherein the selected user resource is configured to be perceptible by at least one of a plurality of human senses, the generated resources being equally perceptible by the at least one of the plurality of human senses [Chitragar, paragraph: 0014, text-to-image generation is now capable of ingesting one or more word prompts and generating an image, or a series of images, containing the characteristics of the prompt.].

As per claim 7. Chitragar does teach the computer-implemented method of claim 1, further comprising causing monitoring of the user device from which the user resource pool was derived [Chitragar, paragraph: 0036, lines 8 – 16, For example, the user may type the prompts to the engine through user interactions with a peripheral device, such as a device within device set 123. However, the CAPTCHA image generation and verification program 150 may generate the prompts in the form of a set of n attributes to be input. The set of n attributes may include any preconfigured number of attributes. In one or more embodiments, the n attributes may comprise nouns, adjectives, adverbs, verbs, or any other grammatical parts of speech capable of being input to an artificial intelligence image generator to create an image].

As per claim 8. Chitragar does teach the computer-implemented method of claim 7, further comprising in response to detecting an attempt to search the user device during the authentication, performing a security action [Chitragar, paragraph: 0045, lines 1 – 6, Then, at 210, the CAPTCHA image generation and verification program 150 verifies the user selections against the generated set. Upon receiving the user selections, the CAPTCHA image generation and verification program 150 may verify whether the selections are accurate and verify or incorrect and deny].

As per system claim 9 that includes the same or similar claim limitations as computer method claim 1 and is similarly rejected. ***The examiner notes that applicant's recited: "memory," "computer instructions," and "one or more processors," is taught by the prior art of Chitragar at paragraph: 0019.

As per system claim 10 that includes the same or similar claim limitations as computer method claim 2 and is similarly rejected.
As per system claim 11 that includes the same or similar claim limitations as computer method claim 3 and is similarly rejected.
As per system claim 12 that includes the same or similar claim limitations as computer method claim 4 and is similarly rejected.
As per system claim 13 that includes the same or similar claim limitations as computer method claim 5 and is similarly rejected.
As per system claim 14 that includes the same or similar claim limitations as computer method claim 6 and is similarly rejected.
As per system claim 15 that includes the same or similar claim limitations as computer method claim 7 and is similarly rejected.
As per system claim 16 that includes the same or similar claim limitations as computer method claim 8 and is similarly rejected.

As per computer program product that includes a computer readable storage medium claim 17, that includes the same or similar claim limitations as computer method claim 1 and is similarly rejected. ***The examiner notes that the recited computer program product is taught by the prior art of Chitragar at paragraph: 0019.

As per computer program product that includes a computer readable storage medium claim 18, that includes the same or similar claim limitations as computer method claim 2 and is similarly rejected.
As per computer program product that includes a computer readable storage medium claim 19, that includes the same or similar claim limitations as computer method claim 3 and is similarly rejected.
As per computer program product that includes a computer readable storage medium claim 20, that includes the same or similar claim limitations as computer method claim 8 and is similarly rejected.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANT SHAIFER HARRIMAN whose telephone number is (571) 272-7910. The examiner can normally be reached M - F: 9am to 5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kambiz Zand, can be reached at 571-272-3811. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANT B SHAIFER HARRIMAN/
Primary Examiner, Art Unit 2434
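The conclusion's deadline rules reduce to date arithmetic: three months from the mailing date without extension fees, six months as the absolute statutory maximum. A minimal Python sketch of that calculation, using the Feb 16, 2026 mailing date from the timeline below; it deliberately ignores weekend/holiday rollover and the advisory-action nuance, so it is an illustration of the rule, not docketing software.

```python
# Sketch of the reply-deadline arithmetic for a final Office action:
# a THREE-MONTH shortened statutory period, extendable under 37 CFR 1.136(a),
# but never beyond SIX MONTHS from the mailing date.
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping to the last day of the target month."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

mailing_date = date(2026, 2, 16)                 # Final Rejection mailed (from timeline)
shortened_period = add_months(mailing_date, 3)   # reply due without extension fees
statutory_max = add_months(mailing_date, 6)      # absolute cutoff

print("Reply due (no extension):", shortened_period)          # 2026-05-16
print("Last possible day (with extensions):", statutory_max)  # 2026-08-16
```

The month-clamping helper matters for end-of-month mailing dates (e.g. Jan 31 + 1 month lands on Feb 28, not an invalid Feb 31).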

Prosecution Timeline

Jul 19, 2024: Application Filed
Oct 31, 2025: Non-Final Rejection — §103
Jan 16, 2026: Interview Requested
Jan 22, 2026: Examiner Interview Summary
Jan 22, 2026: Applicant Interview (Telephonic)
Jan 28, 2026: Response Filed
Feb 16, 2026: Final Rejection — §103
Apr 02, 2026: Interview Requested
Apr 09, 2026: Applicant Interview (Telephonic)
Apr 09, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598179: Systems and methods for cloud-centric biometric step-up and authentication (2y 5m to grant; granted Apr 07, 2026)
Patent 12598164: SYSTEM AND METHOD FOR ENCRYPTING AND DECRYPTING DATA (2y 5m to grant; granted Apr 07, 2026)
Patent 12587559: TIME-BASED APPROACHES IN MALWARE SIMULATION FOR RESPONSIVE MEASURE DEPLOYMENT (2y 5m to grant; granted Mar 24, 2026)
Patent 12556584: CUSTOMER-SECURED TELEMETRY IN A ZERO-TRUST COMPUTING ENVIRONMENT (2y 5m to grant; granted Feb 17, 2026)
Patent 12537803: Using Tonal Bits for Secure Messaging (2y 5m to grant; granted Jan 27, 2026)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 81% (98% with interview, +17.2%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate

Based on 771 resolved cases by this examiner. Grant probability derived from career allow rate.
