Prosecution Insights
Last updated: April 19, 2026
Application No. 18/361,791

METHOD AND SYSTEM FOR PROACTIVE INTERACTION

Non-Final OA §103
Filed: Jul 28, 2023
Examiner: SIRJANI, FARIBA
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Soundhound Inc.
OA Round: 3 (Non-Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 10m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (414 granted / 547 resolved) — above average, +13.7% vs TC avg
Interview Lift: +31.0% (with vs. without interview, among resolved cases with an interview)
Avg Prosecution: 2y 10m (typical timeline); 31 applications currently pending
Total Applications: 578 across all art units

Statute-Specific Performance

§101: 14.1% (-25.9% vs TC avg)
§102: 14.7% (-25.3% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 547 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

DETAILED ACTION

Claims 1-8, 10-14 and 16-21 are pending. Claims 1, 12 and 16 are independent and have been amended. Claim 9 is canceled by the most recent amendments and its substance is included in the independent claims as a portion of the amendment language. This application was published as U.S. 20240046923. Apparent priority: 2 August 2022. Applicant's amendments and arguments have been considered but are either unpersuasive or moot in view of the new grounds of rejection.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/31/2025 has been entered.

Response to Amendments and Arguments

While the language of the amended claim is clear, there are two issues whose clarification may help further prosecution:

1) The added limitation states "determining the query domain within a list of domains applicable to the user;" which is interpreted as "determining that the query domain is present within a list of domains applicable to the user;" and yet Applicant points to the NO branch of step S120 in Figure 18 and to Figure 19, which pertains to an "Instruction to give notification about failure of setting." The claim language, as amended, does not say "determining that the query domain is not present within a list of domains applicable to the user." The claim is examined according to its language.
2) The added language is not connected to the remaining limitations, especially as the arguments cast doubt on the intent. For example, the limitation following the above could say: "storing the query and the condition as registration information in association with the user, in response to the determining that the query domain is present within the list of domains applicable to the user;" to show that the "storing" is linked to and performed in response to the "determining," rather than happening whether or not the "query domain" is actually in the list.

Applicant's arguments are moot in view of the new or modified grounds of rejection that address the added language. Note that Applicant has addressed only Yao and has not provided any arguments with respect to Li. Please address all of the references applied to the claim.

Claim 1 is amended as follows:

1. A computer-implemented method of query processing comprising:
receiving, from a user, a setting expression including a query representing information to be provided or an action to be performed, and a condition specifying a situation for triggering the query;
extracting, by natural language interpretation, the query and the condition;
determining a query domain that identifies a domain to which the query belongs;
determining the query domain within a list of domains applicable to the user;
storing the query and the condition as registration information in association with the user;
monitoring, by obtaining data corresponding to a trigger type defined in the registration information, for occurrence of the situation specified by the condition; and
in response to determining that the situation has occurred, initiating a proactive interaction with the user by outputting an inquiry expression including a question that requests an answer meaning affirmative or negative from the user; and
upon receiving an affirmative answer from the user, generate the information to be provided or an action to be performed within the query domain.

Claim 16 is a system counterpart of method Claim 1, and Claim 12 is another method claim like Claim 1 but broader, omitting one limitation. Applicant has referred to [0129]-[0130] of the Specification as filed and to Figures 18-19 as support. Figure 16 is also added by the Examiner.

[Four figure images reproduced in the record (grayscale PNGs).]

Paragraphs 129-130 correspond to the following paragraphs of the published application:

[0115] Referring to FIG. 18, in step S120, main server 100 determines whether or not the domain specified in step S118 is included in a list. The list means a list of domains applicable to user 300. In one implementation, the list is stored in storage 103. When main server 100 determines that the domain is included in the list (YES in step S120), control proceeds to step S122. When main server 100 determines that the domain is not included in the list (NO in step S120), control proceeds to step S140.

[0116] Referring to FIG. 19, in step S140, main server 100 instructs user terminal 200 to give a notification about failure of setting. Thereafter, main server 100 ends the process.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
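For orientation, the amended Claim 1 recited above describes a register-then-monitor flow. The following is a minimal sketch of one reading of that flow; it is illustrative only, is not part of the record, and every identifier in it is hypothetical:

```python
# Illustrative sketch only: hypothetical names, not from the application or the record.
from dataclasses import dataclass

@dataclass
class Registration:
    user: str
    query: str       # information to be provided / action to be performed
    condition: str   # situation that triggers the query

def extract(setting_expression):
    """Toy natural-language interpretation for expressions of the form
    'when <condition>, <query>' (stand-in for the claimed NLU step)."""
    condition, query = setting_expression.split(", ", 1)
    return query, condition.removeprefix("when ")

def classify_domain(query):
    """Stand-in for 'determining a query domain.'"""
    return "smart_home" if "lights" in query else "other"

def register_setting(user, setting_expression, domains_for_user, store):
    """Extract query/condition, check the query domain against the user's
    domain list, and store the registration only when the domain is listed
    (the Examiner's stated reading of the disputed limitation)."""
    query, condition = extract(setting_expression)
    if classify_domain(query) not in domains_for_user:
        return None  # cf. NO branch of S120 -> notify failure of setting (Fig. 19)
    reg = Registration(user, query, condition)
    store.append(reg)
    return reg

def on_situation(reg, ask_user):
    """On occurrence of the condition, proactively ask a yes/no question
    and act only on an affirmative answer."""
    answer = ask_user(f"You asked: '{reg.query}' when '{reg.condition}'. Do it now?")
    return f"performed: {reg.query}" if answer == "yes" else None
```

In this sketch the registration is stored only when the query domain is present in the user's list; whether the "storing" is in fact conditioned on the "determining" is exactly the ambiguity the Examiner flags above.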
Claims 1-8, 12-14, 16-19 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Yao (U.S. 20190129938) in view of Li (U.S. 20230153348) and further in view of Badr (U.S. 20200184156).

Regarding Claim 1, Yao teaches:

1. A computer-implemented method of query processing comprising: [Yao, Figure 1: input of "voice input 203" or "text input 204" at the "input interface 201," and receiving "voice output 205," "text output 206" or "network command output 208" at the "output interface 202." Figure 9 shows the hardware, including processing modules 1528, processors 1501, memory 1503, storage device 1508 and other peripherals. "Embodiments of the invention provide a natural language understanding (NLU) system that performs relatively complex task automations via verbal or voice instructions. Normally, task automations have a trigger condition, and a series of one or more actions that would require a user's selection of an option via manual input. This is because the underlying machine logic requires knowledge and classification capabilities from multiple domains that conventional personal assistants are not capable of. Embodiments of the inventive system and methods provide a solution to this complex task by analyzing trigger and action domains, pinpointing appropriate APIs, extracting corresponding API parameters, and automatically fulfilling API calls for the user. The system is configured to parse natural language commands into API calls. It analyzes both the trigger and action APIs and prompts a user for any missing information if necessary." Abstract.]

receiving, from a user, a setting expression including a query representing information to be provided or an action to be performed, and a condition specifying a situation for triggering the query; [Yao, Figure 7: "receive an input text originated from a user to perform an action in response to a condition 701." The action teaches the query, and the condition teaches the condition of the claim. Figure 6: "When I get home, turn on my lights."]

extracting, by natural language interpretation, the query and the condition; [Yao, Figure 7, 702 performs NLP on the input: "[0052] …In operation 702, processing logic performs an NLP operation on the input text to determine a first domain associated with the condition and a second domain associated with the action…." See also Figure 8, 802. Note that the "query" of the claim is taught by the "action" of Yao, and the "condition" of the claim is taught by the "trigger condition" of Yao. "[0018] Accordingly, embodiments of the invention provide a natural language understanding (NLU) system that performs relatively complex task automations via verbal or voice instructions. Normally, task automations have a trigger condition, and a series of one or more actions that would require a user's selection of an option via manual input…"]

determining a query domain that identifies a domain to which the query belongs; [Yao: the challenge that Yao sets out to resolve is identifying the domain of the command/query/task stated by the user: "[0003] … Challenges arise when a system needs to identify multiple intentions from various domains in a single complex sentence. …." Figure 6 shows a mapping of the domains to tasks and the actions associated with the tasks. Figure 7, 702: the domains pertaining to the input command/query are determined by performing NLP on the input. Figure 8, 802 also includes determining the domain/API from the voice input at 801. "[0023] Based on the NLP operation, a first domain associated with the condition and a second domain associated with the task are determined. A domain may simply be a name or term mentioned in the input text. Alternatively, a domain may be determined based on the NLP operation that understands the user's intent."]

determining the query domain within a list of domains applicable to the user; [Yao does not teach a list of domains applicable to the user.
Rather, Yao refers to a list of domains that satisfy a trigger condition that is present at the time of the command. Figure 7, 704 and 705.]

storing the query and the condition as registration information in association with the user; [Yao, Figure 7: the remaining steps require that the input (including the command/action/query and the condition) first be stored in memory. See also Figure 6 and "[0041] The task configuration and their APIs are then stored in task configuration database 230. An example of task configuration database 230 is shown in FIG. 6 according to one embodiment…." Figure 6 includes the Task 601, including the Trigger 602 and the Action/Command/Query 603. This step can also be mapped to the "task configuration database 230" of Figure 3, shown in Figure 6.]

monitoring, by obtaining data corresponding to a trigger type defined in the registration information, for occurrence of the situation specified by the condition; and [Yao: "[0048] For example, extended from the above example, a user may say 'when I get home, turn on my lights and send a text to my wife that I arrived home.' In this particular example, there is one trigger API for location services, a first action API to turn on the smart lights, and a second action API to send a text to user's wife…" "[0049] One or more actions can also be performed in response to multiple trigger API calls. For example, a user can say 'when I get home and at sunset, turn on my lights and turn my thermostat to 70 degrees.' In this situation, there will be at least two trigger API calls: 1) location service, 2) weather service, and/or 3) time service. There will be at least two action API calls: 1) smart lights and 2) thermostat." The conditions of getting home and sunset have to be satisfied before the command/action is carried out. "[0024] The first API is referred to as a trigger API and the second API is referred to as an action API. Task manager 120 is configured to monitor the trigger API to detect any notification or event, for example, in a form of a message or signal, received from a first application via the first API. In response to a notification or event received from the trigger API, task manager 120 and/or NLU system 110 examines the notification or event to determine whether the condition specified in the input text has been satisfied. If it is determined that the condition has been satisfied, task manager 120 transmits a control command, for example, in a form of a request, to a second application via the second API. The control command includes information or parameter(s) to request the second application to perform the task specified by the input text." "[0041] … Task manager 120 then monitors the API events and sends control commands to proper APIs to perform the requested tasks in response to the proper API events."]

in response to determining that the situation has occurred, initiating a proactive interaction with the user by outputting an inquiry expression including a question that requests an answer meaning affirmative or negative from the user; and [Yao, Figure 8, 808: "Provide voice/visual Feedback/Confirmation." According to the examples provided by the instant application (and Figure 15 of the instant application), the intent of the "inquiry expression" is a "confirmation question," which is also taught by Yao: "[0040] …In addition, to confirm with user, the system can provide a voice feedback via voice output 205 if a speaker is present, and/or a visual feedback via visual output 206 if a display is present. The feedback includes enough details to notify the user with what the system has understood and options to cancel/modify API calls should the user desires." Figure 8: "[0054] In operation 803, processing logic determine whether there is any ambiguity regarding the domains associated with the input. If so, in operation 804, processing logic prompts the user to clarify and resolve the ambiguity. Otherwise in operation 805, processing logic determines API parameters for the domain APIs. In operation 806, processing logic determines whether there is any parameter missing. If so, in operation 807, processing logic may prompt the user for missing parameter. Alternatively, processing logic invokes an external information provider or third-party vendor to obtain the missing parameters. Otherwise in operation 808, processing logic optionally provides audio and/or visual feedback to the user to confirm what the system understands and the actions will be performed.
In operation 809, processing logic monitor a trigger API and in response to a trigger event, performs the configured action via an action API."]

upon receiving an affirmative answer from the user, generate the information to be provided or an action to be performed within the query domain. [Yao performs the task once the condition is satisfied or the ambiguities are resolved by answers from the user. Figures 7 and 8 of Yao. "[0024] The first API is referred to as a trigger API and the second API is referred to as an action API. Task manager 120 is configured to monitor the trigger API to detect any notification or event, for example, in a form of a message or signal, received from a first application via the first API. In response to a notification or event received from the trigger API, task manager 120 and/or NLU system 110 examines the notification or event to determine whether the condition specified in the input text has been satisfied. If it is determined that the condition has been satisfied, task manager 120 transmits a control command, for example, in a form of a request, to a second application via the second API. The control command includes information or parameter(s) to request the second application to perform the task specified by the input text."]

A confirmation is often in the shape of a question ("You asked for the lights to be turned on, right?"). However, Yao does not teach question-format confirmations, and a reference is added which teaches this feature expressly.

Li teaches, and the teaching suggests:

in response to determining that the situation has occurred, initiating a proactive interaction with the user by outputting an inquiry expression including a question that requests an answer meaning affirmative or negative from the user; and [Li, Figure 4: the "task-specific dialog editor 408" asks the user questions. Li, [0047], provided below, and Figure 3 teach that Questions A and B ask for a Yes or No answer. "[0051] The task-specific dialog editor 408 includes a user interface to interactive receive data associated with a dialog from a user. In aspects, the task-specific dialog editor 408 generates a dialog tree that includes rules and conditions associated with a dialog associated with a task…." Figure 3 shows the confirmation questions that Li asks the user, and the rules for asking those questions so as not to annoy the user. "[0047] The candidate responses 324 includes a list of candidate responses for responding to a query. For example, the candidate responses 324 includes three candidates: A) 'The Peony Kitchen is a fancy Chinese food restaurant. Would you like to book a table for five there?' B) 'Anything else?' C) 'How many people are in your party?' In aspects, the classification layer may generate the candidate responses 324 by combining the task-specific, rule-based classification and the transformer-based dialog embedding. ..."]

upon receiving an affirmative answer from the user, generate the information to be provided or an action to be performed within the query domain. [Li, Figure 3: the question "B) Anything else? 324" is a Yes or No question, and if the user answers No, the system will proceed to perform the task of reserving a place. The question could have been "Is that all?", to which an answer of Yes causes the task to be performed. This scenario is not taught but is suggested by the teachings of Li.]

Yao and Li pertain to query-and-response or task-assignment dialog systems, and it would have been obvious to combine the expressly shown question-type confirmation of Li with the system of Yao, which teaches confirmation feedback but does not expressly state that the confirmation is in the form of a question, for completeness. A confirmation must almost always (inherently) be in the form of a question, whereas a feedback may be a mere announcement.
This combination falls under combining prior art elements according to known methods to yield predictable results, or use of a known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.

Yao has a list of actions corresponding to a trigger condition, but Yao and Li do not teach a list of domains applicable to the user. Badr teaches:

determining the query domain within a list of domains applicable to the user; [Badr teaches an "access control list" for each user, which determines to which domains a particular user may have access. "[0005] An automated assistant that serves a first user may not have access to user-controlled resources of another user. For example, the first user may not be able to instruct an automated assistant that serves the first user to add an item to someone else's shopping list, or to determine whether someone else is available for a meeting at a particular time/location. Moreover, some tasks may require engagement by multiple users….." "[0006] … An access control list may include resources to which the automated assistant serving the second user device has access, as well as at least one or more subsets of those resources to which automated assistants serving other users have access. The automated assistant serving the first user may check (or as described below may have one or more cloud-based 'services' check) the access control list associated with the second user to determine whether the first user has appropriate access rights as regards the second user. If the user has appropriate access, then action may be taken in response to the task request (e.g., responded to, undertaken, etc.)"]

Yao/Li and Badr pertain to voice commands and to conditional voice commands, and it would have been obvious to combine the features of Badr, which provide an access control list that permits a certain user to issue commands to a PDA, with the system of the combination, to include user restrictions as one of the conditions of the command. This combination falls under combining prior art elements according to known methods to yield predictable results, or use of a known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.

Regarding Claim 2, Yao teaches:

2. The computer-implemented method according to claim 1, wherein the inquiry expression further comprises added content based on a type of the query. [Yao, Figure 3: the "API Parameter Determination Module 214" collects data/parameters/content to be added to the Query/Command/Domain API.]

Regarding Claim 3, Yao teaches:

3. The computer-implemented method according to claim 1, wherein the setting expression is received as voice input and converted into text using speech recognition. [Yao, Figure 8, 801: "receive a voice/text input." See also Figures 2 and 3 and "voice input 203." "[0047] … The user can directly speak to its mobile phone and transcribed text can be captured by NLU system 110 and processed by task manager 120 as described above…." Transcription teaches speech recognition. See also: "[0053] … Referring to FIG. 8, in operation 801, processing logic receives a text input. The text input may be converted from a voice phrase or sentence spoken by a user or a recorded audio stream using speech recognition…."]

Regarding Claim 4, Yao teaches:

4.
The computer-implemented method according to claim 1, further comprising obtaining an input of a specific message, wherein accepting the setting expression is performed in response to obtaining the specific message. [Yao, Figure 8: if the trigger and the action are considered ambiguous at 803, the system goes to "Prompt user to clarify 804," the response to which at "receive a voice/text input 801" teaches the "obtaining an input of a specific message" of the claim.]

(The "specific message" needs to be defined with further particularity in the claim. The Specification appears to be looking for pre-stored formats. Figure 16, S102, and [0102] and [0143]. "[0102] …An exemplary registration message is 'set a query and condition.'….")

Regarding Claim 5, Yao teaches:

5. The computer-implemented method according to claim 1, further comprising:

identifying grammar with which the query matches by natural language interpretation of the query; [Yao, Figure 3 shows the flow of information in an "NLU system 110," which receives the input from the user (204 or 207). "[0029] …In one embodiment, domain determination module 211 determines one or more domains using a domain predictive model 221, which is configured to predict a domain based on a phrase, term, or sentence of the input text…." Predicting/determining a domain based on a "sentence" teaches the "identifying grammar" of the claim.] (Note the definition in the instant application, which refers to a sentence structure. [0047].)

identifying a domain to which the grammar belongs; [Yao, Figure 3, "Domain Determination Module 211." "[0029] In response to a text input, domain determination module 211 is configured to determine one or more domains associated with the input text using an NLP process. …"]

determining whether the domain is registered in a list stored in the memory; and [Yao, Figure 5, showing a correspondence between a Domain 501 and one or more APIs 502. "[0031] API determination module 212 may perform a search in an API database based on a domain to obtain an API corresponding to the domain. Specifically, according to one embodiment, API determination module 212 performs a lookup operation in domain/API mapping table 222 based on a domain identifier (ID) determined by domain determination module 211. An example of domain/API mapping table 222 is shown in FIG. 5 according to one embodiment. Referring to FIG. 5, domain/API mapping table 500 may represent domain/API mapping table 222 of FIGS. 2 and 3. Domain/API mapping table 500 contains a number of mapping entries.…."]

avoiding registration of the query and the condition in the memory in response to the domain not being registered in the list. [Yao, Figure 3: this limitation is the inevitable result of not finding the corresponding API for a domain in the table of Figure 5. In Figure 3, if the process goes well and the domain and its corresponding APIs are determined, the process moves to "Task Manager 120" and "[0041] The task configuration and their APIs are then stored in task configuration database 230. …" However, in Figure 3, there has to be an "API Determination 212" as a result of "Domain/API mapping tables 222" in order for the process to move forward to collecting parameters and then to the execution of the API. When no API is determined at 212, the process cannot progress; no alternative path is provided. Accordingly, no storing of the task configuration 230 / "registration of the query and condition" can take place.]

Regarding Claim 6, Yao teaches:

6.
The computer-implemented method according to claim 5, wherein the obtaining a setting expression includes receiving information that specifies a user corresponding to the setting expression among at least two users, the list is associated with information that specifies at least one user among the at least two users, and the determining whether the domain is registered in a list includes: specifying a user corresponding to the setting expression on which the domain is based, and specifying the list associated with the user. [Yao disambiguates the command according to the user: if the user asks for directions home, the system has to know who the user is before it can determine where home is. Figure 3, "User preferences 223." Thus the user profile or user preferences are consulted, which indicates more than one user. Additionally, some commands, such as "call Jeff," require access to contacts, which includes a list of second users. "[0038] In the above example when the user says 'when I get home turn on my lights,' at least one of the parameters will be the address of user's home. In this example, parameter determination module 214 parses the input text in view of the rules or grammars 224 to determine that a home address of the user is needed as a parameter. Typically, the home address should have been previously registered and stored either under the user contact, address book, or user profile. Parameter determination module 214 may communicate with proper components to obtain the home address of the user, for example, via a corresponding API. For example, parameter determination module 214 may access the address book or contacts of the user to obtain the home address of the user via a particular API that can be utilized to access the address book or contacts."]

Yao, however, does not expressly mention two or more users. Neither does Li. Badr teaches:

wherein the obtaining a setting expression includes receiving information that specifies a user corresponding to the setting expression among at least two users, the list is associated with information that specifies at least one user among the at least two users, and the determining whether the domain is registered in a list includes: specifying a user corresponding to the setting expression on which the domain is based, and specifying the list associated with the user. [Badr teaches that the automated assistant distinguishes between users, and each user is authorized to dictate a particular set of tasks, such that the automated assistant rejects a user's command depending on the identity of the user. Figure 1 shows users 140A and 140B, one a man and one a woman (i.e., two distinct users). See Figure 6, where the user issues a command and the device responds with "Unfortunately I don't have permission to do that…." Figure 9 shows the flow chart, 908: "Determine that task request relates to second user and check access control list to determine whether first user has appropriate access rights regarding second user."]

Yao/Li and Badr pertain to voice commands and to conditional voice commands, and it would have been obvious to combine the features of Badr, which demonstrate one type of interaction between two different users when a device is used by both, with the system of the combination, to include user restrictions as one of the conditions of the command. This combination falls under combining prior art elements according to known methods to yield predictable results, or use of a known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.

Regarding Claim 7, Yao teaches:

7. The computer-implemented method according to claim 1, further comprising:

specifying a type of the query based on the setting expression; and [Yao, Figure 3, Domain Determination at 211 and 221.
See [0048]-[0049] for examples of Domain/Query Type determination for “when I get home, turn on my lights and send a text to my wife that I arrived home,” “when I get home, turn on my lights and turn my thermostat to 70 degrees,” “when I get home and at sunset, turn on my lights and turn my thermostat to 70 degrees.”] generating the inquiry expression based on the type. [Yao, Figure 3, the API calls by the “Task Manager 120” are generated in response to the Domain/Query type that is determined at 211 and 212 and the API parameters that are collected by the NLU system 110.] Regarding Claim 8, Yao teaches: 8. The computer-implemented method according to claim 7, wherein the type identifies contents to be added to the query in the inquiry expression. [Yao, Figure 3, “API Parameter Determination Module 214” collects data/parameters/content to be added to the Query/Command/Domain API.] Regarding Claim 12, Yao teaches: 12. A computer-implemented method of query processing comprising: receiving, from a user, a setting expression including a query and a condition specifying a situation for triggering the query; [Yao, refer to the rejection of Claim 1.] determining a query domain that identifies a domain to which the query belongs; [Yao, refer to the rejection of Claim 1.] determining the query domain within a list of domains applicable to the user; [Yao, refer to the rejection of Claim 1.] storing the query and the condition as registration information in association with the user; [Yao, refer to mapping of Claim 1.] monitoring, by obtaining data corresponding to a trigger type defined in the registration information, for occurrence of the situation specified by the condition; and [Yao, refer to the rejection of Claim 1.] 
in response to determining that the situation has occurred, initiating a proactive interaction with the user by outputting an inquiry expression including a question that requests an answer meaning affirmative or negative from the user; and [Yao, refer to the rejection of Claim 1.] upon receiving an affirmative answer from the user, generate the information to be provided or an action to be performed within the query domain. [Yao, refer to the rejection of Claim 1.]

This Claim is like Claim 1, except that it has been broadened to exclude the “extracting, by natural language interpretation, the query and the condition” which is part of Claim 1.

Regarding Claim 13, Yao teaches:

13. The computer-implemented method according to claim 12, further comprising: receiving, by a terminal, the setting expression via voice; and [Yao, Figures 2 and 3, “Voice Input 203.”] transmitting, by the terminal, the setting expression to a server. [Yao, Figure 1, “Servers 103,” “Information Sources 104,” and “IoT devices 102,” all of which can be servers ([0021]), in communication with “other devices (e.g., smart phones) 105.” Figure 4 shows the communication with “External Information Sources 104,” which are servers, and [0021] teaches that commands to household items are transmitted to “IoT device 102,” which can be servers. “[0021] … IoT devices can be any device that is accessible via the Internet, such as, for example, smart lights, thermostats)…..”]

Regarding Claim 14, Yao teaches:

14. The computer-implemented method according to claim 13, further comprising transmitting, by the terminal to the server, data for determination as to whether the situation specified by the condition has occurred. [Yao, Figure 3, “external information sources 104,” which can be servers, may perform some determinations on the input command that are outside the capabilities of the NLU System 110, such as image identification. “[0050] According to one embodiment, in some situations, if there is an API parameter that cannot be determined, the system may invoke an external help by accessing an external information provider to obtain the information that is helpful to determine the missing API parameter. For example, a user may say “if I add a photo to Instagram, change my room lights to match its theme or subject matter.” In this example, there is a trigger API of Instagram to determine a new photo has been added and an action API to configure smart lights. However, one of the parameters associated with the smart lights in this example will be: 1) color and/or 2) emitting or flashing patterns. In order to determine the parameters of the smart lights, the theme or subject matter of the photo has to be determined. The system may or may not be equipped with such a capability to determine the theme or subject matter of a photo (e.g., indoor vs. outdoor, city vs. rural area, or sunny day vs. raining day). Accordingly, the system can invoke an external or third-party service provider (e.g., information provider system 104) to perform an image analysis on the photo via a separate API communication protocol.”]

Claim 16 is a system claim with limitations corresponding to the limitations of Claim 1 and is rejected under similar rationale. Additionally, Yao teaches:

16. A system comprising: memory storing instructions that are executable; and [Yao, Figure 9, “storage devices 1508.”] one or more processing devices to execute the instructions to perform operations comprising: [Yao, Figure 9, “processors 1501.”] …

Claim 17 is a system claim with limitations corresponding to the limitations of Claim 5 and is rejected under similar rationale.

Regarding Claim 18, Yao teaches:

18. The system of claim 16, wherein the operations further comprise: identifying one or more of a query type, a query domain, a trigger type, a trigger value, a trigger repeat, and a trigger rule of the query. [Yao, Figure 8, “determine trigger/action domains/APIs 802” and “Determine API parameters 805,” which are domain parameters for both actions/commands/queries and triggers.]

Claim 19 is a system claim with limitations corresponding to the limitations of Claim 2 and is rejected under similar rationale.

19. The system of claim 18, wherein trigger type is extracted via natural language interpretation. [Yao, Figure 7, 702 performs NLP on the input: “[0052] …In operation 702, processing logic performs an NLP operation on the input text to determine a first domain associated with the condition and a second domain associated with the action….” See also Figure 8, 802. Note that the action is the query/command and the condition is the condition/trigger.]

Claim 21 is a system claim with limitations corresponding to the limitations of Claim 3 and is rejected under similar rationale.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Yao, Li, and Badr, and further in view of Kurani (U.S. 11,321,688).

Regarding Claim 10: in Yao, Figure 5, “location services” are included in the domains 501, and “[a] location service (e.g., GPS application)” ([0030]) may relate to driving but does not necessarily do so. Neither Li nor Badr teaches a vehicle-related situation. Kurani teaches:

10. The computer-implemented method according to claim 1, wherein the situation includes a situation relating to a vehicle. [Kurani is directed to context-aware vehicle-based operations. Figure 5 of Kurani shows that the system receives operating data relating to the vehicle (504), determines the context/conditions based on the operating data (506), and then either facilitates the performance of a task by the user or not (514). Thus the condition/situation of the task relates to the operating data of the vehicle.]
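The registration-and-trigger flow that the examiner maps onto Yao for Claims 12 and 18 above (setting expression → query-domain check against a per-user list of domains → stored registration information with trigger fields → monitoring → yes/no inquiry) can be sketched as follows. This is a minimal illustration of the claimed steps only; all names (`Registration`, `register`, `check_trigger`, the sample domain lists) are hypothetical and appear in neither the application nor the cited art.

```python
from dataclasses import dataclass

# Hypothetical per-user lists of applicable domains (cf. Claims 1/12:
# "determining the query domain within a list of domains applicable to the user").
USER_DOMAINS = {
    "alice": ["smart_home", "messaging"],
    "bob": ["smart_home"],
}

@dataclass
class Registration:
    """Registration information (cf. Claim 18): query fields plus trigger fields."""
    user: str
    query: str          # e.g. "turn on my lights"
    query_domain: str   # e.g. "smart_home"
    trigger_type: str   # e.g. "location"
    trigger_value: str  # e.g. "home"

def register(user, query, domain, trigger_type, trigger_value):
    """Store the query and condition only if the domain is in the user's list."""
    if domain not in USER_DOMAINS.get(user, []):
        return None  # cf. Figs. 18/19: notify the user that the setting failed
    return Registration(user, query, domain, trigger_type, trigger_value)

def check_trigger(reg, observed):
    """Monitor observed data; on a match, emit a yes/no inquiry expression."""
    if observed.get(reg.trigger_type) == reg.trigger_value:
        return f"You are {reg.trigger_value}. Shall I {reg.query}? (yes/no)"
    return None

reg = register("alice", "turn on my lights", "smart_home", "location", "home")
print(check_trigger(reg, {"location": "home"}))
# → You are home. Shall I turn on my lights? (yes/no)
```

Note that the sketch covers both readings flagged in the examiner's clarity remark: the YES branch (domain present, registration stored) and the NO branch (domain absent, setting fails).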
Yao/Li/Badr and Kurani pertain to conditional performance of tasks and functions, and it would have been obvious to combine the vehicle data as the condition for the task, from Kurani, with the system of the combination, which already contemplates a user in transit to his home. This combination falls under combining prior art elements according to known methods to yield predictable results, or use of a known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.

Claims 11 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yao, Li, and Badr, and further in view of Novitchenko (U.S. 2022/0284901).

Regarding Claim 11: Yao’s examples of conditional commands include satisfaction of a time condition (“at sunset”), satisfaction of a location condition (“when I get home”), or performance of a first act as a condition for a second command (“if I add a photo … change my room lights …”). The conditions of Yao are not evaluated based on the frequency of occurrence of a situation, and Li and Badr do not teach this feature either. Novitchenko teaches:

11. The computer-implemented method according to claim 1, wherein the condition defines a frequency of occurrence of the situation. [Novitchenko, Figure 8, 808: “In accordance with a determination that the suggestion criteria are satisfied, providing a suggestion indicating that the determined task may be performed using the digital assistant …” indicates that the performance of the task is conditioned on context, which is input at 802: “receive context data associated with the electronic device.” The context data is dependent on the number of times / frequency of occurrence of certain events. “[0250] In some examples, context data indicates a number of times that the electronic device has provided a suggestion indicating that a particular task may be performed by a digital assistant of the electronic device to the user. In some examples, the context data further indicates a number of times the electronic device has provided a suggestion for the task in a particular period of time (e.g., a week, a month, etc.). For example, the context data can indicate a number of times in the past month that the electronic device has provided a suggestion indicating that the task of sending a text message (e.g., via a messaging application stored on the electronic device) can be performed using a digital assistant of the electronic device. ….” “[0252] In some examples, the context data indicates a frequency of one or more user behaviors. For example, the context data can indicate the frequency at which the user sets an alarm in a clock application (as well as values for the alarm time parameter of the set alarms), the frequency at which the electronic device communicates with each user-specific contact stored on the electronic device (e.g., via text message and/or phone call), the frequency at which certain websites are visited in a browser application (e.g., browser module 247), the frequency at which the user visits certain locations with the electronic device, the frequency at which the user interacts with software applications stored on the electronic device (e.g., opens a software application, uses a feature within a software application, etc.), and so forth.”]

Yao/Li/Badr and Novitchenko are directed to conditional commands, and it would have been obvious to include the frequency of occurrence of a certain event, from Novitchenko, as a type of condition prerequisite to performance of a command, in order either to make the more frequent events a precondition to perform the command or to prevent the command from being performed, depending on policy. This combination falls under combining prior art elements according to known methods to yield predictable results, or use of a known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.

Claim 20 is a system claim with limitations corresponding to the limitations of Claim 11 and is rejected under similar rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARIBA SIRJANI, whose telephone number is (571) 270-1499. The examiner can normally be reached 9 to 5, M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Fariba Sirjani/
Primary Examiner, Art Unit 2659
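The frequency-of-occurrence condition that the examiner maps from Novitchenko ([0250], [0252]) onto Claims 11 and 20 can be sketched as an event count over a rolling time window. This is an illustrative reading of "the condition defines a frequency of occurrence of the situation"; the class and parameter names (`FrequencyCondition`, `threshold`, `window`) are hypothetical and taken from neither the application nor Novitchenko.

```python
from collections import deque

class FrequencyCondition:
    """Condition satisfied when an event has occurred at least `threshold`
    times within the last `window` seconds (cf. Novitchenko [0252]: context
    data indicating the frequency of user behaviors)."""

    def __init__(self, threshold, window):
        self.threshold = threshold
        self.window = window
        self.events = deque()  # timestamps of observed occurrences

    def observe(self, timestamp):
        """Record one occurrence; return True once the frequency condition holds."""
        self.events.append(timestamp)
        # Drop occurrences that have fallen out of the rolling window.
        while self.events and self.events[0] <= timestamp - self.window:
            self.events.popleft()
        return len(self.events) >= self.threshold

# e.g. a condition that holds once the user has set an alarm
# at least 3 times in the past week
cond = FrequencyCondition(threshold=3, window=7 * 24 * 3600)
```

Under this reading, the trigger-monitoring step of Claim 12 would feed each observed event into `observe()` and initiate the proactive inquiry only when it returns True.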

Prosecution Timeline

Jul 28, 2023
Application Filed
May 13, 2025
Non-Final Rejection — §103
Aug 19, 2025
Response Filed
Oct 01, 2025
Final Rejection — §103
Dec 31, 2025
Request for Continued Examination
Jan 20, 2026
Response after Non-Final Action
Mar 09, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603099
SELF-ADJUSTING ASSISTANT LLMS ENABLING ROBUST INTERACTION WITH BUSINESS LLMS
2y 5m to grant Granted Apr 14, 2026
Patent 12579482
Schema-Guided Response Generation
2y 5m to grant Granted Mar 17, 2026
Patent 12572737
GENERATIVE THOUGHT STARTERS
2y 5m to grant Granted Mar 10, 2026
Patent 12537013
AUDIO-VISUAL SPEECH RECOGNITION CONTROL FOR WEARABLE DEVICES
2y 5m to grant Granted Jan 27, 2026
Patent 12492008
Cockpit Voice Recorder Decoder
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+31.0%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 547 resolved cases by this examiner. Grant probability derived from career allow rate.
