Prosecution Insights
Last updated: April 19, 2026
Application No. 18/526,353

METHODS AND SYSTEMS FOR ACOUSTIC AUTHENTICATION

Final Rejection — §103, §DP
Filed: Dec 01, 2023
Examiner: NGUYEN, NHAT HUY T
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: Worldpay LLC
OA Round: 4 (Final)
Grant Probability: 54% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 3y 5m
With Interview: 79%

Examiner Intelligence

Career Allow Rate: 54% (185 granted / 341 resolved; -0.7% vs TC avg)
Interview Lift: strong, +25.1% for resolved cases with interview
Typical Timeline: 3y 5m average prosecution; 59 currently pending
Career History: 400 total applications across all art units
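The headline figures above are simple ratios. A minimal sketch of the arithmetic, assuming the "+25.1% interview lift" is an additive percentage-point adjustment (an assumption, but one that matches the displayed 54% and 79% figures):

```python
# Career allow rate: grants divided by resolved cases (counts from the stats above).
granted, resolved = 185, 341
allow_rate = granted / resolved

# Assumption: the interview lift is additive in percentage points.
interview_lift = 0.251
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.0%}")   # 54%
print(f"With interview:    {with_interview:.0%}")  # 79%
```
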

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 54.7% (+14.7% vs TC avg)
§102: 16.9% (-23.1% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 341 resolved cases
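The four deltas are mutually consistent: subtracting each delta from its allowance rate recovers the same implied Tech Center baseline. A quick sketch, with the rates and deltas taken from the table above:

```python
# (allowance rate %, delta vs TC average %) per statute, from the table above.
stats = {
    "101": (11.0, -29.0),
    "103": (54.7, +14.7),
    "102": (16.9, -23.1),
    "112": (10.7, -29.3),
}

# The TC average implied by each row is simply rate minus delta.
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"\u00a7{statute}: implied TC average {tc_avg:.1f}%")  # 40.0% in every row
```

Every row implies the same 40.0% baseline, which suggests the page uses a single TC-wide average estimate rather than per-statute averages.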

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

A Terminal Disclaimer has been filed for patents 11,874,915 and 11,544,370. Claims 21-40 are pending for examination. Claims 21-40 are rejected under 35 U.S.C. §103.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 21-25, 27-32 and 34-39 are rejected under 35 U.S.C. 103 as being unpatentable over Park (U.S. 8,502,780, hereinafter Park) in view of Paul (U.S. 2014/0240262, hereinafter Paul) in further view of Assadollahi (U.S. 2008/0072143, hereinafter Assadollahi).

As to Claim 21, Park teaches a method for generating an acoustic authentication data entry interface, the method comprising: displaying a first section of a visual user interface element to a user that is without one or more first visual indicia of a first character associated with the first section (Park, col. 10 lines 20-36, figs. 7a-7b, col. 8 lines 64-66: a protection layer is displayed on top of the keyboard (a first section), so the user cannot view the keyboard unless the user wears the HMD.
Data displayed on the first UI 513 is unseen); logging the character as a part of authentication data of the user based on the user gesture corresponding to the first type of gesture (Park, col. 10 lines 14-19 and 44-47: "In FIG. 7(b), when the user touches a specific key on the keypad 715 of the external device 710, the HMD 720 may display a touched key 725 on the second UI, as a feedback indicating the touched key"; window 721 outputs data corresponding to the input signal from the keypad);

displaying a second section of the visual user interface element to the user (Park, col. 10 lines 44-47: "In FIG. 7(b), when the user touches a specific key on the keypad 715 of the external device 710, the HMD 720 may display a touched key 725 on the second UI, as a feedback indicating the touched key").

Park does not explicitly disclose: playing an audio recording of the character associated with the section, based on displaying the section; based on playing the audio recording; detecting a user gesture performed in association with the section, based on prompting the user to make the selection of the character or the non-selection of the character; and based on determining whether the user gesture performed in association with the first section corresponds to the selection of the first character or the non-selection of the first character.
Paul teaches: playing an audio recording of the character associated with the section, based on displaying the section (Paul, ¶0068 lines 1-9, ¶0069 lines 1-3, ¶0061 lines 1-4: the user selects a letter by tapping the screen after the message corresponding to the letter is spoken);

based on playing the audio recording (Paul, ¶0068 lines 1-9, ¶0069 lines 1-3, ¶0061 lines 1-4);

detecting a user gesture performed in association with the section, based on prompting the user to make the selection of the character or the non-selection of the character (Paul, ¶0068 lines 1-9, ¶0069 lines 1-3, ¶0061 lines 1-4; Paul, ¶0071 lines 1-7: a non-selection (a first and second touch on the screen) causes the display screen to scroll to the next section); and

based on determining whether the user gesture performed in association with the first section corresponds to the selection of the first character or the non-selection of the first character (Paul, ¶0068 lines 1-9, ¶0069 lines 1-3, ¶0061 lines 1-4; ¶0071 lines 1-7).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the user interface of Park to instead be the user interface taught by Paul, with a reasonable expectation of success. The motivation would be to "permit users who are visually impaired to understand what information is currently being displayed by the terminal and how this information is arranged on the terminal's display" (Paul, ¶0042) (Teaching, Suggestion or Motivation).
Park in view of Paul does not explicitly disclose: prompting the user to make a selection of the character by making a first type of user gesture or to make a non-selection of the character by making a second type of user gesture.

Assadollahi teaches: prompting the user to make a selection of the character by making a first type of user gesture or to make a non-selection of the character by making a second type of user gesture (Assadollahi, ¶0012, last 4 lines: "the processor may cause the display to prompt the user to navigate to and select one of the displayed candidate words or enter a desired word using the data entry device by selecting letters given as entries of the main menu"; the system prompts the user to select a character).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the user interface of Park in view of Paul with the user prompt taught by Assadollahi, with a reasonable expectation of success. The motivation would be to provide "an improved method for reducing the number of keystrokes necessary to enter text for devices without letter/character keys" (Assadollahi, ¶0005) (Teaching, Suggestion or Motivation).

As to Claim 22, besides Claim 21, Park in view of Paul in further view of Assadollahi teaches wherein the visual user interface element is displayed in a virtual reality environment (Park, col. 10 lines 20-36, figs. 7a-7b, col. 8 lines 64-66: a protection layer is displayed on top of the keyboard, so the user cannot view the keyboard unless the user wears the HMD; data displayed on the first UI 513 is unseen).

As to Claim 23, besides Claim 21, Park in view of Paul in further view of Assadollahi teaches wherein the visual user interface element is displayed in a touchscreen of a mobile device (Paul, ¶0019 line 9: touch screen).
As to Claim 24, besides Claim 21, Park in view of Paul in further view of Assadollahi teaches wherein the first type of gesture is one of a grabbing motion, a dragging motion and a dropping motion, a pointing motion, or a tapping motion (Paul, ¶0068 lines 1-2: touch gesture on the keypad window).

As to Claim 25, besides Claim 21, Park in view of Paul in further view of Assadollahi teaches wherein the second type of gesture is one of a swiping motion or a scrolling motion (Paul, ¶0071 lines 1-7: a non-selection (a first and second touch on the screen) causes the display screen to scroll to the next section).

As to Claim 27, besides Claim 21, Park in view of Paul in further view of Assadollahi teaches wherein the authentication data comprises one or more of a personal identification number (PIN), a password, and an answer to a security challenge question (Park, col. 14 lines 1-3: the user ID and password are entered).

As to Claim 28, Park teaches a device comprising: a memory configured to store instructions (Park, col. 6 line 22: memory); and a processor configured to execute the instructions to perform operations (Park, col. 6 line 14: processor). The rest of the Claim is rejected for the same reasons as Claim 21.

Claims 29, 30, 31, 32 and 34 are rejected for the same reasons as Claims 22, 23, 24, 25 and 27, respectively. Claims 35, 36, 37, 38 and 39 are rejected for the same reasons as Claims 21, 22, 23, 24 and 25, respectively.

Claims 26, 33 and 40 are rejected under 35 U.S.C.
103 as being unpatentable over Park and Paul in view of Assadollahi in further view of Newman (U.S. 9,196,111, hereinafter Newman).

As to Claim 26, Park in view of Paul in further view of Assadollahi does not explicitly disclose: generating a random sequence of characters that includes characters constituting the authentication data of the user; and generating the visual user interface element comprising a plurality of sections, each section of the visual user interface element being associated with a respective character of the generated random sequence of characters.

Newman teaches: generating a random sequence of characters that includes characters constituting the authentication data of the user (Newman, col. 8 lines 2-5, col. 5 lines 16-22, fig. 6, fig. 1 item 102: characters (keys) are dynamically displayed); and generating the visual user interface element comprising a plurality of sections, each section being associated with a respective character of the generated random sequence of characters (Newman, col. 8 lines 2-5, col. 5 lines 16-22, fig. 6, fig. 1 item 102: fig. 6 displays a plurality of sections (keys) for the dynamically displayed keys).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the user interface of Park in view of Paul in further view of Assadollahi to instead be the user interface taught by Newman, with a reasonable expectation of success. The motivation would be to "make it difficult to obtain a customer's PIN by tracking the hand movement of a customer using an ATM" (Newman, abstract).

As to Claims 33 and 40, the Claims are rejected for the same reasons as Claim 26.
Response to Arguments

Section 103 rejections:

As to Claims 21, 28 and 35, Applicant argues that Assadollahi does not disclose "prompting the user ..." and "displaying a second section of the visual user interface element ..." (second paragraph of page 11). Applicant's arguments are not persuasive because the current combination is based on teaching, suggestion or motivation found in the prior art.

As to Claims 21, 28 and 35, Applicant argues that no reason for the expectation of success is given (third and fourth paragraphs of page 10 and first paragraph of page 11 of the remarks). Applicant's arguments are not persuasive because the current combination is based on teaching, suggestion or motivation found in the prior art; reasons for an expectation of success are not necessary.

As to Claims 21, 28 and 35, Applicant argues that the motivation to combine with Magee is only apparent with knowledge of Applicant's invention (last paragraph of page 11 of the remarks). Applicant's arguments are not persuasive because Assadollahi teaches "prompting the user to make a selection of the character by making a first type of user gesture or to make a non-selection of the character by making a second type of user gesture" (Assadollahi, ¶0012, last 4 lines: "the processor may cause the display to prompt the user to navigate to and select one of the displayed candidate words or enter a desired word using the data entry device by selecting letters given as entries of the main menu"). The limitation "displaying a second section of the visual user interface element to the user ..." is taught by Park and Paul. See the current rejection(s) for details.

Double Patenting Rejections:

Applicant filed the Terminal Disclaimers; therefore, the double patenting rejection(s) of the Claims are respectfully withdrawn.
In the future, statutory double patenting may be raised if the instant Claims have the same scope as the cited patents 11,874,915 and/or 11,544,370.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NHAT HUY T NGUYEN, whose telephone number is (571) 270-7333. The examiner can normally be reached M-F, 12:00-8:00 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Viker Lamardo, can be reached at 571-270-5871. The fax number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NHAT HUY T NGUYEN/
Primary Examiner, Art Unit 2147

Prosecution Timeline

Dec 01, 2023
Application Filed
Dec 01, 2023
Response after Non-Final Action
Aug 24, 2024
Non-Final Rejection — §103, §DP
Jan 21, 2025
Response Filed
Mar 19, 2025
Final Rejection — §103, §DP
May 23, 2025
Response after Non-Final Action
Jun 25, 2025
Request for Continued Examination
Jun 28, 2025
Response after Non-Final Action
Sep 06, 2025
Non-Final Rejection — §103, §DP
Dec 10, 2025
Response Filed
Mar 16, 2026
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530116
MEDIA CAPTURE LOCK AFFORDANCE FOR GRAPHICAL USER INTERFACE
Granted Jan 20, 2026 (2y 5m to grant)
Patent 12504866
AUTOMATED TAGGING OF CONTENT ITEMS
Granted Dec 23, 2025 (2y 5m to grant)
Patent 12489720
INFERRING ASSISTANT ACTION(S) BASED ON AMBIENT SENSING BY ASSISTANT DEVICE(S)
Granted Dec 02, 2025 (2y 5m to grant)
Patent 12463859
ENABLING AN OPERATOR TO RESOLVE AN ISSUE ASSOCIATED WITH A 5G WIRELESS TELECOMMUNICATION NETWORK USING AR GLASSES
Granted Nov 04, 2025 (2y 5m to grant)
Patent 12443419
ADJUSTING EMPHASIS OF USER INTERFACE ELEMENTS BASED ON USER ATTRIBUTES
Granted Oct 14, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 54%
With Interview: 79% (+25.1%)
Median Time to Grant: 3y 5m
PTA Risk: High
Based on 341 resolved cases by this examiner. Grant probability derived from career allow rate.
