Prosecution Insights
Last updated: April 19, 2026
Application No. 18/391,886

MULTI-PARTICIPANT VOICE ORDERING

Status: Non-Final OA (§101)
Filed: Dec 21, 2023
Examiner: VILLENA, MARK
Art Unit: 2658
Tech Center: 2600 — Communications
Assignee: SoundHound AI IP LLC
OA Round: 1 (Non-Final)

Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 10m
Grant Probability With Interview: 85%

Examiner Intelligence

Career Allow Rate: 70% (above average; 334 granted / 478 resolved; +7.9% vs TC avg)
Interview Lift: +15.5% (strong) for resolved cases with an interview
Typical Timeline: 3y 10m average prosecution; 22 applications currently pending
Career History: 500 total applications across all art units

Statute-Specific Performance

§101: 13.7% (-26.3% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§102: 20.4% (-19.6% vs TC avg)
§112: 5.0% (-35.0% vs TC avg)

Based on career data from 478 resolved cases; the comparison baseline is a Tech Center average estimate.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 04/10/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Drawings

The drawings were submitted on 12/21/2023. These drawings have been reviewed and are accepted by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) and does not include additional elements that amount to significantly more than the judicial exception.

Step 2A, Prong One: The claims recite mathematical concepts. For example, claim 1 recites "calculating a first voice feature vector," "calculating a second voice feature vector," and "determining that the second voice feature vector and the first voice feature vector have a difference greater than a threshold" (see also claims 2, 5, 8-10, 15-17). Limitations such as "computing a distance between points represented by the vectors in a multidimensional space" and "computing the voice feature vector as a vector of aggregate voice features" recite mathematical relationships and calculations. The claims also recite mental processes, namely observations, evaluations, and decisions that could be performed conceptually in the human mind, including evaluating whether two feature sets differ by more than a threshold and then deciding which "item" to act upon.
Step 2A, Prong Two: The claims are "computer-implemented" and include steps such as "receiving a first spoken utterance," "performing automatic speech recognition," "storing," "outputting an indication of the status," "modifying a first item," "modifying a second item," and "ordering." These are generic computer functions involving data gathering, mathematical processing, decision-making, and output. Merely applying an abstract idea on a generic computer, or using conventional speech recognition, does not integrate the exception into a practical application. See Alice Corp. v. CLS Bank Int'l, 573 U.S. 208 (2014); Credit Acceptance Corp. v. Westlake Servs., 859 F.3d 1044 (Fed. Cir. 2017).

The claims do not recite an improvement to the functioning of the computer or to another technology or technical field. The steps of "performing automatic speech recognition," "identifying a start of voice activity in audio," "matching the recognized words to a word pattern," and computing vector distances are invoked as tools to process information and make a decision about which "item" to modify or order. There is no recitation of a specific technological improvement. Constraints such as "the period of time being less than thirty seconds," "members of a list," or "outputting an indication of the status" are field-of-use limitations and post-solution activity that do not meaningfully limit the abstract idea. Tying the decision outcome to "modifying a first item" versus "modifying a second item," or to "ordering or modifying," simply uses the result of the mathematical/mental evaluation in a business or workflow context, which remains an abstract application.

Step 2B: Beyond the abstract ideas, the claims recite generic computer implementation: receiving audio, performing automatic speech recognition, storing, computing mathematical measures (vectors, distances, thresholds), and outputting indications or status.
The specification, as reflected by the claim language, does not require any unconventional hardware or a particular machine.

Allowable Subject Matter

Claims 1-19 would be allowable if rewritten or amended to overcome the rejection(s) under 35 U.S.C. 101 set forth in this Office action.

Regarding claims 1, 6, and 13, Kim et al. (US 20200027456 A1) teaches:

- "receiving a first spoken utterance that specifies a type of item to modify" (par. 0317; 'For example, after the electronic device 100 outputs a sound signal such as "A wants chanpon, B wants ganjajangmyeon, and C wants fried rice. Would you like to place an order in a Chinese restaurant?", if user A enters a voice input such as "Change it to ganjajangmyeon", the electronic device 100 may place an order for food with a changed menu item for user A.');
- "calculating a first voice feature vector from the first spoken utterance" (par. 0257; speech feature extraction);
- "in response to the first spoken utterance, modifying a first item of the specified type" (par. 0317; 'In this case, the electronic device 100 may output a UI (user interface) or a UX (user experience) requesting user A to reconfirm the change request.');
- "storing the first voice feature vector in relation to the first item" (par. 0277; 'The prediction database 270 may store information about at least one of text data obtained from the voice data, the speaker, the weight related to the speaker information, applications executable in the electronic device 100, states of the applications, and/or keywords.');
- "receiving a second spoken utterance to modify an item of the specified type" (par. 0173; 'For example, the electronic device 100 may collect conversations between user A and user B located near the electronic device 100 prior to receiving a wake-up utterance.');
- "calculating a second voice feature vector from the second spoken utterance" (par. 0257; 'Speech feature extraction').
However, the Examiner deems that the prior art of record, whether taken alone or in combination, fails to teach, inter alia, "in response to determining that the second voice feature vector and the first voice feature vector have a difference greater than a threshold, modifying a second item of the specified type" in combination with the other claimed features.

Conclusion

Other pertinent prior art is cited in the PTO-892 for the applicant's consideration.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK VILLENA, whose telephone number is (571) 270-3191. The examiner can normally be reached 10 am - 6 pm EST, Monday through Friday.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MARK VILLENA/
Examiner, Art Unit 2658
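The limitation the examiner found allowable over Kim — acting on a second item when the two voice feature vectors differ by more than a threshold — amounts to a vector-distance check. The sketch below is illustrative only: the feature-extraction step is omitted, and the function names, vector contents, and threshold value are assumptions, not taken from the application.

```python
import math

def euclidean_distance(v1, v2):
    """Distance between two voice feature vectors, treated as points
    in a multidimensional space (as the claims describe)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

def select_item_to_modify(first_vector, second_vector, threshold=0.5):
    """If the second utterance's voice features differ from the stored
    first-speaker features by more than the threshold, infer a different
    speaker and modify a second item; otherwise modify the first item."""
    if euclidean_distance(first_vector, second_vector) > threshold:
        return "second_item"
    return "first_item"
```

In this reading, the stored first vector acts as a lightweight speaker signature: a near-zero distance routes the modification back to the first speaker's order, while a large distance opens a second item for the new speaker.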

Prosecution Timeline

Dec 21, 2023: Application Filed
Jan 24, 2026: Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

- Patent 12591407: ROBUST VOICE ACTIVITY DETECTOR SYSTEM FOR USE WITH AN EARPHONE (granted Mar 31, 2026; 2y 5m to grant)
- Patent 12592232: SYSTEMS, METHODS, AND APPARATUSES FOR DETECTING AI MASKING USING PERSISTENT RESPONSE TESTING IN AN ELECTRONIC ENVIRONMENT (granted Mar 31, 2026; 2y 5m to grant)
- Patent 12586581: ELECTRONIC DEVICE CONTROL METHOD AND APPARATUS (granted Mar 24, 2026; 2y 5m to grant)
- Patent 12578922: Natural Language Processing Platform For Automated Event Analysis, Translation, and Transcription Verification (granted Mar 17, 2026; 2y 5m to grant)
- Patent 12573394: ESTIMATION METHOD, RECORDING MEDIUM, AND ESTIMATION DEVICE (granted Mar 10, 2026; 2y 5m to grant)

Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 85% (+15.5%)
Median Time to Grant: 3y 10m
PTA Risk: Low

Based on 478 resolved cases by this examiner; grant probability is derived from the career allow rate.
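The headline figures above are simple arithmetic on the examiner's career data; the sketch below shows one plausible derivation, assuming the dashboard rounds to whole percentage points (the rounding convention is an assumption, not documented by the source):

```python
# Career data reported for this examiner.
granted, resolved = 334, 478
interview_lift = 0.155  # reported allowance lift for cases with an interview

# Career allow rate, used directly as the baseline grant probability.
career_allow_rate = granted / resolved          # ~0.6987
with_interview = career_allow_rate + interview_lift

baseline_pct = round(career_allow_rate * 100)   # 70
interview_pct = round(with_interview * 100)     # 85
```

The 70% and 85% shown in the projections fall out directly: 334/478 rounds to 70%, and adding the 15.5-point interview lift rounds to 85%.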
