Prosecution Insights
Last updated: April 19, 2026
Application No. 18/652,614

ROUTING NATURAL LANGUAGE COMMANDS TO THE APPROPRIATE APPLICATIONS

Non-Final OA — §101, §DP
Filed: May 01, 2024
Examiner: SHIN, SEONG-AH A
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Amazon Technologies, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (321 granted / 409 resolved; +16.5% vs TC avg) — above average
Interview Lift: +20.5% in resolved cases with an interview — strong
Avg Prosecution: 2y 9m typical timeline; 25 applications currently pending
Career History: 434 total applications across all art units
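The headline figures above reduce to simple ratios over the examiner's resolved cases. A minimal sketch of that arithmetic (the function names are illustrative, and the 62% Tech Center baseline is back-derived from the +16.5% delta shown, not taken from USPTO data):

```python
# Reproduce the examiner-level dashboard figures from raw counts.
# The inputs are the numbers shown above; the formulas are assumptions.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point allowance lift for cases with an interview."""
    return rate_with - rate_without

rate = allow_rate(granted=321, resolved=409)
print(f"Career allow rate: {rate:.0f}%")           # ~78%
print(f"vs TC average:     {rate - 62.0:+.1f}pp")  # 62% baseline is an assumption
```

321 / 409 ≈ 78.5%, which rounds to the 78% shown on the card.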

Statute-Specific Performance

§101: 20.8% (-19.2% vs TC avg)
§103: 45.2% (+5.2% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 409 resolved cases
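The per-statute figures are the same kind of ratio arithmetic. A sketch that reproduces the deltas shown, assuming a flat 40.0% Tech Center average per statute (an assumption, but all four displayed deltas back-derive to exactly that value):

```python
# Allowance rate by statute (taken from the chart above) and delta
# vs an assumed 40.0% Tech Center average for each statute.
statute_rates = {"101": 20.8, "103": 45.2, "102": 16.7, "112": 7.1}
tc_average = 40.0  # assumption, implied by the deltas shown

for statute, rate in statute_rates.items():
    delta = rate - tc_average
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```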

Office Action

§101, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application is being examined under the pre-AIA first to invent provisions.

Status of Claims
Claims 2-21 are pending in this application. Claim 1 is canceled.

Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114.

Response to Arguments

Regarding Rejection under 35 U.S.C. 101
Applicant’s arguments with respect to the rejections have been fully considered but they are not persuasive. Regarding claim 2, the Applicant argues that the rejection under 35 U.S.C. 101 is improper because, under the amended claim, it would be humanly impossible to determine the probability of the next command/input being directed at each of the applications of the electronic device even before the command/input is received (REMARKS, page 8 of 14, 2nd paragraph through page 10 of 14, 1st paragraph). However, the Examiner respectfully disagrees that the rejection under 35 U.S.C. 101 is improper, because newly amended claim 2 is still directed to an abstract idea. The patent-eligibility analysis below follows the 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence, dated July 2024, and the Memorandum on Subject Matter Eligibility Declarations, dated December 4, 2025. Applicant’s invention helps achieve better human functionality in the field of language processing using a generic computer. Even though the disclosed invention is described in the background as improving computer technology, the claim provides no meaningful limitations such that this improvement is realized.
Under the broadest reasonable interpretation, the terms of the claim are presumed to have their plain meaning consistent with the specification as it would be interpreted by one of ordinary skill in the art. The claim limitation “determine … command history data, and environment context data of the electronic device… wherein the determining is by the electronic device prior to receiving the natural language input” does not include any details about how a probability is determined; it could be performed in the human mind and requires no more than the performance of generic computer functions (e.g., collecting data, calculating). Moreover, claim 2 recites the additional element of an “electronic device”. The computer is recited at a high level of generality (i.e., as performing a generic computer function and being used to apply the exception) such that it amounts to no more than mere instructions to apply the exception using a generic computer. Independent claims 9 and 16 are similar to claim 2. Dependent claims 3-8, 10-15, and 17-21 are also directed to processes which manipulate data, which can be performed by a human and implemented by a generic computer. Accordingly, the limitations of the claims are not sufficient to add significantly more to improve technological functionality. As such, claims 2-21 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter, and the rejection is maintained at this time. Please see the rejection below for the full analysis.

Regarding Rejection under 35 U.S.C. 103
Applicant’s amendment and arguments with respect to the rejections have been fully considered but are moot because the arguments do not apply to any of the references being used in the current rejection.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO internet Web site contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 2-21 are rejected on the ground of nonstatutory double patenting over claims 1, 3, and 10 of U.S. Patent No. 9,734,839. Although the claims at issue are not identical, they are not patentably distinct from each other because adding inherent and/or unnecessary limitations/steps and rearranging the claims would be within the level of one of ordinary skill in the art. It is well settled that the insertion of an element, e.g., “select, based at least in part on the first application probability, the second application probability, the first matching probability, and the second matching probability, the first application to receive the command in the one or more words and to perform at least one operation associated with the next command”, and its function is an obvious expedient if the remaining elements perform the same function as before. In re Karlson, 136 USPQ 184 (CCPA 1963). Also note Ex parte Rainu, 168 USPQ 375 (Bd. App. 1969). Insertion of a reference element or step whose function is not needed would be obvious to one of ordinary skill in the art.

Instant Application No. 18/652,614:

2. A method comprising: receiving a natural language input generated from an electronic device of a user; determining, based on the electronic device, user-profile data associated with the user; determining, from the user-profile data, historical data associated with the user; determining, by the electronic device and based on the user-profile data, command history data, and environment context data of the electronic device, based on the user-profile data, a first probability for a first application to present on the electronic user device, the first probability representing the likelihood the user will interact with the first application, wherein the determining is by the electronic device prior to receiving the natural language input; and identifying, by the language model, an application to invoke based at least in part on the natural language input, the first probability, and the historical data.

3. The method as recited in claim 2, further comprising: determining a command based at least in part on the natural language input, wherein identifying the application is further based at least in part on the command.

4. The method as recited in claim 2, further comprising: determining an identity of the user associated with the natural language input, wherein determining the user-profile data is further based at least in part on the identity.

5. The method as recited in claim 2, further comprising: storing command data representing a command previously received by the electronic device, the command associated with the application, wherein identifying the application is further based at least in part on the command data.

6. The method as recited in claim 2, further comprising: determining content data based at least in part on the natural language input; and causing output of content corresponding to the content data.

7. The method as recited in claim 6, wherein determining the content data comprises: sending at least a portion of the natural language input to the application; and receiving the content data from the application.

8. The method as recited in claim 6, wherein causing output of the content represented by the content data comprises at least one of: outputting, using one or more speakers, sound corresponding to the content data; or displaying, using a display, an image corresponding to the content data.

U.S. Patent No. 9,734,839:

1. A voice controlled system comprising: one or more processors; computer-readable media accessible by the one or more processors; a first application and a second application stored on the computer-readable media to be executed by the one or more processors; a microphone to receive audio input; a speech recognition module to identify first data from a signal representing the audio input, the first data including text representing one or more words; and a command router to determine, using second data that is different from the first data, a first application probability of the first application being a recipient of a next command, wherein the second data is available to the command router prior to identification of the first data, determine, using the second data, a second application probability of the second application being a recipient of the next command, provide, to the first application, the text, receive, from the first application, a first matching probability indicating a degree of matching between the one or more words and a command which the first application can interpret, provide, to the second application, the text, receive, from the second application, a second matching probability indicating a degree of matching between the one or more words and a command which the second application can interpret, and select, based at least in part on the first application probability, the second application probability, the first matching probability, and the second matching probability, the first application to receive the command in the one or more words and to perform at least one operation associated with the next command.

3. The voice controlled system of claim 1, wherein the first application probability is based on at least one of a command history, a user profile, or an environmental context of the voice controlled device.

10. The voice controlled system of claim 1, wherein the first matching probability is determined based at least in part on a context specific to the first application, the context including at least one of a state of the first application, a history of commands implemented by the first application, or a user profile associated with the first application.

Claims 2-21 are rejected on the ground of nonstatutory double patenting over claims 7, 12, and 13 of U.S. Patent No. 11,152,009. Although the claims at issue are not identical, they are not patentably distinct from each other because adding inherent and/or unnecessary limitations/steps and rearranging the claims would be within the level of one of ordinary skill in the art. It is well settled that the insertion of an element, e.g., “score indicating a first correspondence between the first text data and the second text data”, and its function is an obvious expedient if the remaining elements perform the same function as before. In re Karlson, 136 USPQ 184 (CCPA 1963). Also note Ex parte Rainu, 168 USPQ 375 (Bd. App. 1969). Insertion of a reference element or step whose function is not needed would be obvious to one of ordinary skill in the art.

Instant Application No. 18/652,614:

2.
A method comprising: receiving a natural language input generated from an electronic device of a user; determining, based on the electronic device, user-profile data associated with the user; determining, from the user-profile data, historical data associated with the user; determining, by the electronic device and based on the user-profile data, command history data, and environment context data of the electronic device, based on the user-profile data, a first probability for a first application to present on the electronic user device, the first probability representing the likelihood the user will interact with the first application, wherein the determining is by the electronic device prior to receiving the natural language input; and identifying, by the language model, an application to invoke based at least in part on the natural language input, the first probability, and the historical data.

3. The method as recited in claim 2, further comprising: determining a command based at least in part on the natural language input, wherein identifying the application is further based at least in part on the command.

4. The method as recited in claim 2, further comprising: determining an identity of the user associated with the natural language input, wherein determining the user-profile data is further based at least in part on the identity.

5. The method as recited in claim 2, further comprising: storing command data representing a command previously received by the electronic device, the command associated with the application, wherein identifying the application is further based at least in part on the command data.

6. The method as recited in claim 2, further comprising: determining content data based at least in part on the natural language input; and causing output of content corresponding to the content data.

7. The method as recited in claim 6, wherein determining the content data comprises: sending at least a portion of the natural language input to the application; and receiving the content data from the application.

8. The method as recited in claim 6, wherein causing output of the content represented by the content data comprises at least one of: outputting, using one or more speakers, sound corresponding to the content data; or displaying, using a display, an image corresponding to the content data.

U.S. Patent No. 11,152,009:

7. A method comprising: receiving audio data from an electronic device; generating first text data based at least in part on the audio data; determining a first application from a plurality of applications; determining a second application from the plurality of applications; determining a word represented by the first text data; determining that second text data also represents the word, the second text data associated with a first group of commands that the first application can process; determining, based at least in part on the second text data also representing the word, a first score indicating a first correspondence between the first text data and the second text data; determining a second score indicating a second correspondence between the first text data and third text data, the third text data associated with a second group of commands that the second application can process; and sending an output to the electronic device according to the first score and the second score.

12. The method of claim 7, further comprising: determining a history of commands implemented by the first application, wherein the determining the first application is based at least in part on the history of commands.

13. The method of claim 7, further comprising: storing a user profile associated with the first application, wherein the determining the first application is based at least in part on the user profile.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 2-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claim 2 recites “receiving a natural language input generated from an electronic device of a user; determining, based on the electronic device, user-profile data associated with the user; determining, from the user-profile data, historical data associated with the user; determining, by the electronic device and based on the user-profile data, command history data, and environment context data of the electronic device, based on the user-profile data, a first probability for a first application to present on the electronic user device, the first probability representing the likelihood the user will interact with the first application, wherein the determining is by the electronic device prior to receiving the natural language input; and identifying, by the language model, an application to invoke based at least in part on the natural language input, the first probability, and the historical data”. The limitations of “receiving…”, “determining…”, “determining…”, “determining…”, and “identifying…” constitute a process that, under its broadest reasonable interpretation, could be performed in the human mind and requires no more than the performance of generic computer functions (e.g., collecting data, calculating). More specifically, a human reads/listens to the message from another person, determines the person’s previous message/comment, and identifies the job/application to invoke.

This judicial exception is not integrated into a practical application. In particular, claim 2 recites the additional element of an “electronic device”. The computer is recited at a high level of generality (i.e., as performing a generic computer function and being used to apply the exception) such that it amounts to no more than mere instructions to apply the exception using a generic computer. Accordingly, the additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using a computer amounts to no more than mere instructions to apply an exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

With respect to claims 9 and 16, these claims are similar to claim 2 and recite the additional elements of a “processor” and “computer-readable media”. The processor and memory are recited at a high level of generality (i.e., as a generic processor performing generic computer functions and being used to apply the exception) such that they amount to no more than mere instructions to apply the exception using a generic computer component as well. These claims do not remedy the failure to integrate the judicial exception into a practical application and further fail to include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to dependent claims 3-8, 10-15, and 17-21, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Therefore, claims 2-21 are rejected.

Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Please see attached form PTO-892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEONG-AH A. SHIN, whose telephone number is (571) 272-5933. The examiner can normally be reached 9 AM-3 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre-Louis Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SEONG-AH A SHIN/
Seong-ah A. Shin
Primary Examiner, Art Unit 2659
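As context for the double patenting discussion, the command-router logic recited in claim 1 of the '839 patent (per-application prior probabilities computed before the utterance is recognized, per-application matching probabilities returned for the recognized text, then a combined selection) can be sketched roughly as follows. This is an illustrative reading, not the patented implementation; in particular, multiplying the two probabilities is an assumption, since the claim only requires selecting "based at least in part on" the four values:

```python
# Hypothetical sketch of the selection step from claim 1 of
# U.S. Patent No. 9,734,839. All names here are illustrative.

def route_command(apps, text, app_probabilities):
    """Pick the application to receive the recognized command.

    apps: mapping of app name -> callable returning a matching
          probability for the given text (per-app scoring).
    app_probabilities: prior probability, per app, of being the
          recipient of the next command (available before the
          utterance was recognized).
    """
    best_app, best_score = None, -1.0
    for name, match_fn in apps.items():
        # Combine prior and matching probability (assumed product).
        score = app_probabilities[name] * match_fn(text)
        if score > best_score:
            best_app, best_score = name, score
    return best_app

# Toy applications with keyword-based matching probabilities.
apps = {
    "music":   lambda text: 0.9 if "play" in text else 0.1,
    "weather": lambda text: 0.9 if "forecast" in text else 0.1,
}
priors = {"music": 0.6, "weather": 0.4}
print(route_command(apps, "play some jazz", priors))  # -> music
```

Here "play some jazz" scores 0.6 × 0.9 for the music app against 0.4 × 0.1 for weather, so the router selects music; the claimed system additionally performs the selected operation on the chosen application.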

Prosecution Timeline

May 01, 2024
Application Filed
Apr 24, 2025
Non-Final Rejection — §101, §DP
Jul 29, 2025
Response Filed
Sep 19, 2025
Final Rejection — §101, §DP
Nov 13, 2025
Response after Non-Final Action
Dec 05, 2025
Request for Continued Examination
Dec 20, 2025
Response after Non-Final Action
Feb 05, 2026
Non-Final Rejection — §101, §DP
Mar 23, 2026
Applicant Interview (Telephonic)
Apr 02, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598095 — DISPLAY DEVICE — 2y 5m to grant — Granted Apr 07, 2026
Patent 12591452 — INVOKING AN AUTOMATED ASSISTANT TO PERFORM MULTIPLE TASKS THROUGH AN INDIVIDUAL COMMAND — 2y 5m to grant — Granted Mar 31, 2026
Patent 12585696 — REDUCING METADATA TRANSMITTED WITH AUTOMATED ASSISTANT REQUESTS — 2y 5m to grant — Granted Mar 24, 2026
Patent 12555568 — DEVICE CONTROL METHOD AND APPARATUS, READABLE STORAGE MEDIUM AND CHIP — 2y 5m to grant — Granted Feb 17, 2026
Patent 12554935 — COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA — 2y 5m to grant — Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 99% (+20.5%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 409 resolved cases by this examiner. Grant probability derived from career allow rate.
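The "With Interview" projection appears to be the base grant probability plus the interview lift, rounded up and capped; a minimal sketch under that assumption (the formula and the 99% cap are guesses, not disclosed by the tool):

```python
import math

# Assumed formula for the with-interview projection shown above:
# base grant probability plus interview lift, capped at 99%.
def with_interview(base_pct: float, lift_pp: float, cap: float = 99.0) -> float:
    """Projected grant probability after an examiner interview."""
    return min(base_pct + lift_pp, cap)

p = with_interview(78.0, 20.5)             # 98.5
print(f"With interview: {math.ceil(p)}%")  # displayed as 99%
```

78% + 20.5pp gives 98.5%, which matches the 99% figure once rounded up.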
