DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
2. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
3. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Fan (US Pub. No. 2025/0349291) in view of Channapattan (US Pub. No. 2025/0110976).
4. Regarding claims 1, 11, and 20, Fan teaches a method, an apparatus and a tangible, non-transitory, computer-readable medium storing program instructions comprising: identifying, by a device, a type of task and its context indicated by a prompt sent by a user for input to a language model; authorizing, by the device, the prompt for input to the language model based on the type of task and its context (Para:0016-0019 teaches a system may receive a user input as speech. For example, a user may speak an input to a device. The device may send audio data, representing the spoken input, to the system. The system may perform ASR processing on the audio data to generate ASR data (e.g., text data, token data, etc.) representing the user input. The system may perform processing on the ASR data to determine an action responsive to the user input. The system may be configured to process the ASR data using one or more language models (e.g., one or more large language models (LLMs)) to determine one or more components configured to perform one or more functions potentially responsive to the user input (e.g., generate a potential response/action responsive to the user input). For example, in response to the user input “book me a flight to Seattle,” the system may book a flight to Seattle and output information of the booked flight. For further example, in response to the user input “lock the front door,” the system may actuate a “front door” smart lock to a locked position. As another example, in response to the user input “Please plan a 4-person trip to [Location] from [Date 1] to [Date 2],” the language model(s) may determine one or more components (e.g., an API, a skill component, an LLM agent component, etc.) configured to book a flight ticket and book a hotel. To select one or more of the components to respond to the user input, the system may request data from the one or more components including a potential response to the user input.
The system may cause the one or more components to perform the actions (e.g., booking the identified flight and hotel), for example, in response to the user authorizing the system to do so. The potential response includes natural language data that may be used to respond to the user input. The component is configured to/will perform with respect to the user input);
Fan teaches all the above claimed limitations but fails to teach making, by the device, a determination as to whether a response of the language model to the prompt is authorized to be returned to the user based on the response, the type of task, and its context; and preventing, by the device, the response from being returned to the user when the determination indicates that the response is not authorized to be returned.
Channapattan teaches making, by the device, a determination as to whether a response of the language model to the prompt is authorized to be returned to the user based on the response, the type of task, and its context (Fig.2 and Para:0083 teaches the prompt injection prevention module 240 may perform techniques that support the prevention of the insertion of a prompt or other language into the natural language user query. In some cases, the prompt injection prevention module 240 may parse the natural language user query to determine whether the query includes language or prompts that may potentially be malicious or that may violate a constraint configured by the natural language interface system 230. For example, a user may include unintentional or malicious content in the natural language user query that may cause the natural language interface system 230 to inject potentially harmful information into the machine learning model and to output malicious output, such as a malicious model-generated machine-readable query. Such malicious or unintentional language or prompts may cause an output that may include sensitive information, information not associated with the client organization, information that the client organization may not have access to, or other information that violates a constraint configured by the natural language interface system 230, thus creating security vulnerabilities for the identity management system 220);
and preventing, by the device, the response from being returned to the user when the determination indicates that the response is not authorized to be returned (Fig.2 and Para:0083 teaches the data identifying potential malicious language or prompts or constraints or rules for identifying such language/prompts may be maintained in the database 290 and the prompt injection prevention module 240 may access the data in the database 290 to analyze the user query to identify any language or prompts that may potentially be malicious or that may violate a constraint. In some cases, the prompt injection prevention module 240 may reject the user query when such language is identified.
The prompt injection prevention module 240 may output a notification, such as via the natural language UI 210, indicating the removal of the identified language from the user query or the rejection of the user query).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fan to include making, by the device, a determination as to whether a response of the language model to the prompt is authorized to be returned to the user based on the response, the type of task, and its context; and preventing, by the device, the response from being returned to the user when the determination indicates that the response is not authorized to be returned, as taught by Channapattan. Such a modification would prevent or reduce a likelihood of bad or malicious data being injected into the machine learning model, which may degrade the performance and accuracy of subsequent outputs of the machine learning model (Para:0083).
5. Regarding claims 2 and 12, Channapattan teaches the method and the apparatus wherein control over the prompt is retained by an intermediate layer prior to sending the prompt for input to the language model (Para:0083 teaches prior to sending the request, determining that the prompt meets safety criteria. The prompt injection prevention module 240 may perform techniques that support the prevention of the insertion of a prompt or other language into the natural language user query. In some cases, the prompt injection prevention module 240 may parse the natural language user query to determine whether the query includes language or prompts that may potentially be malicious or that may violate a constraint configured by the natural language interface system 230. For example, a user may include unintentional or malicious content in the natural language user query that may cause the natural language interface system 230 to inject potentially harmful information into the machine learning model and to output malicious output, such as a malicious model-generated machine-readable query. Such malicious or unintentional language or prompts may cause an output that may include sensitive information, information not associated with the client organization, information that the client organization may not have access to, or other information that violates a constraint configured by the natural language interface system 230, thus creating security vulnerabilities for the identity management system 220. In some cases, data identifying potential malicious language or prompts or constraints or rules for identifying such language/prompts may be maintained in the database 290 and the prompt injection prevention module 240 may access the data in the database 290 to analyze the user query to identify any language or prompts that may potentially be malicious or that may violate a constraint.
In some cases, the prompt injection prevention module 240 may modify the user query to remove the identified language from the user query. In some cases, the prompt injection prevention module 240 may reject the user query when such language is identified. The prompt injection prevention module 240 may output a notification, such as via the natural language UI 210, indicating the removal of the identified language from the user query or the rejection of the user query. Such techniques may prevent or reduce a likelihood of bad or malicious data being injected into the machine learning model which may degrade the performance and accuracy of subsequent outputs of the machine learning model. Para:0084 teaches The anonymization module 250 may perform techniques that support anonymizing the user query. For instance, the anonymization module 250 may parse the natural language user query to identify personally-identifiable or sensitive information (such as user names, login credentials, account numbers, social security numbers, financial information, or the like) included in the user query. When identified, the anonymization module 250 may modify the user query to replace the personally-identifiable information with a placeholder value. By way of example, if a user name “Jack Smith” is identified in the user query, the anonymization module 250 may modify the user query to replace “Jack Smith” with the placeholder “$user_name_1$.” The anonymization module 250 may further cache, or otherwise persist or store, the identified personally-identifiable information. Such techniques may prevent or reduce a likelihood of having personal or sensitive information injected into the machine learning model. In some cases, the anonymization module 250 may perform the described anonymization techniques prior to the prompt injection prevention module 240 performing the described prompt injection prevention technique).
6. Regarding claims 3 and 13, Channapattan teaches the method and the apparatus, wherein control over the response of the language model to the prompt is retained by an intermediate layer while making the determination as to whether the response of the language model to the prompt is authorized to be returned to the user (Para:0083 teaches prior to sending the request, determining that the prompt meets safety criteria. The prompt injection prevention module 240 may perform techniques that support the prevention of the insertion of a prompt or other language into the natural language user query. In some cases, the prompt injection prevention module 240 may parse the natural language user query to determine whether the query includes language or prompts that may potentially be malicious or that may violate a constraint configured by the natural language interface system 230. For example, a user may include unintentional or malicious content in the natural language user query that may cause the natural language interface system 230 to inject potentially harmful information into the machine learning model and to output malicious output, such as a malicious model-generated machine-readable query. Such malicious or unintentional language or prompts may cause an output that may include sensitive information, information not associated with the client organization, information that the client organization may not have access to, or other information that violates a constraint configured by the natural language interface system 230, thus creating security vulnerabilities for the identity management system 220.
In some cases, data identifying potential malicious language or prompts or constraints or rules for identifying such language/prompts may be maintained in the database 290 and the prompt injection prevention module 240 may access the data in the database 290 to analyze the user query to identify any language or prompts that may potentially be malicious or that may violate a constraint. In some cases, the prompt injection prevention module 240 may modify the user query to remove the identified language from the user query. In some cases, the prompt injection prevention module 240 may reject the user query when such language is identified. The prompt injection prevention module 240 may output a notification, such as via the natural language UI 210, indicating the removal of the identified language from the user query or the rejection of the user query. Such techniques may prevent or reduce a likelihood of bad or malicious data being injected into the machine learning model which may degrade the performance and accuracy of subsequent outputs of the machine learning model. Para:0084 teaches The anonymization module 250 may perform techniques that support anonymizing the user query. For instance, the anonymization module 250 may parse the natural language user query to identify personally-identifiable or sensitive information (such as user names, login credentials, account numbers, social security numbers, financial information, or the like) included in the user query. When identified, the anonymization module 250 may modify the user query to replace the personally-identifiable information with a placeholder value. By way of example, if a user name “Jack Smith” is identified in the user query, the anonymization module 250 may modify the user query to replace “Jack Smith” with the placeholder “$user_name_1$.” The anonymization module 250 may further cache, or otherwise persist or store, the identified personally-identifiable information. 
Such techniques may prevent or reduce a likelihood of having personal or sensitive information injected into the machine learning model. In some cases, the anonymization module 250 may perform the described anonymization techniques prior to the prompt injection prevention module 240 performing the described prompt injection prevention technique).
7. Regarding claims 4 and 14, Fan teaches the method and the apparatus, further comprising: making a determination as to whether the prompt is authorized for the input to the language model based on a comparison of the type of task and its context to a task control policy (Para:0016-0019 teaches a system may receive a user input as speech. For example, a user may speak an input to a device. The device may send audio data, representing the spoken input, to the system. The system may perform ASR processing on the audio data to generate ASR data (e.g., text data, token data, etc.) representing the user input. The system may perform processing on the ASR data to determine an action responsive to the user input. The system may be configured to process the ASR data using one or more language models (e.g., one or more large language models (LLMs)) to determine one or more components configured to perform one or more functions potentially responsive to the user input (e.g., generate a potential response/action responsive to the user input). For example, in response to the user input “book me a flight to Seattle,” the system may book a flight to Seattle and output information of the booked flight. For further example, in response to the user input “lock the front door,” the system may actuate a “front door” smart lock to a locked position. As another example, in response to the user input “Please plan a 4-person trip to [Location] from [Date 1] to [Date 2],” the language model(s) may determine one or more components (e.g., an API, a skill component, an LLM agent component, etc.) configured to book a flight ticket and book a hotel. To select one or more of the components to respond to the user input, the system may request data from the one or more components including a potential response to the user input.
The system may cause the one or more components to perform the actions (e.g., booking the identified flight and hotel), for example, in response to the user authorizing the system to do so. The potential response includes natural language data that may be used to respond to the user input. The component is configured to/will perform with respect to the user input).
8. Regarding claims 5 and 15, Fan in view of Channapattan teaches the method and the apparatus, further comprising: blocking the prompt from being input to the language model when the type of task and its context violate the task control policy (Channapattan: Para:0083 teaches the prompt injection prevention module 240 may perform techniques that support the prevention of the insertion of a prompt or other language into the natural language user query. In some cases, the prompt injection prevention module 240 may parse the natural language user query to determine whether the query includes language or prompts that may potentially be malicious or that may violate a constraint configured by the natural language interface system 230. For example, a user may include unintentional or malicious content in the natural language user query that may cause the natural language interface system 230 to inject potentially harmful information into the machine learning model and to output malicious output, such as a malicious model-generated machine-readable query. Such malicious or unintentional language or prompts may cause an output that may include sensitive information, information not associated with the client organization, information that the client organization may not have access to, or other information that violates a constraint configured by the natural language interface system 230, thus creating security vulnerabilities for the identity management system 220. In some cases, data identifying potential malicious language or prompts or constraints or rules for identifying such language/prompts may be maintained in the database 290 and the prompt injection prevention module 240 may access the data in the database 290 to analyze the user query to identify any language or prompts that may potentially be malicious or that may violate a constraint.
In some cases, the prompt injection prevention module 240 may modify the user query to remove the identified language from the user query. In some cases, the prompt injection prevention module 240 may reject the user query when such language is identified. The prompt injection prevention module 240 may output a notification, such as via the natural language UI 210, indicating the removal of the identified language from the user query or the rejection of the user query. Such techniques may prevent or reduce a likelihood of bad or malicious data being injected into the machine learning model which may degrade the performance and accuracy of subsequent outputs of the machine learning model).
9. Regarding claims 6 and 16, Fan in view of Channapattan teaches the method and the apparatus further comprising: causing the prompt to be reengineered prior to being input to the language model when the type of task and its context violate the task control policy (Channapattan: Para:0083 teaches the data identifying potential malicious language or prompts or constraints or rules for identifying such language/prompts may be maintained in the database 290 and the prompt injection prevention module 240 may access the data in the database 290 to analyze the user query to identify any language or prompts that may potentially be malicious or that may violate a constraint. In some cases, the prompt injection prevention module 240 may modify the user query to remove the identified language from the user query).
10. Regarding claims 7 and 17, Channapattan teaches the method and the apparatus, further comprising: logging a task control policy violation associated with the prompt (Channapattan: Para:0083 teaches the prompt injection prevention module 240 may perform techniques that support the prevention of the insertion of a prompt or other language into the natural language user query. In some cases, the prompt injection prevention module 240 may parse the natural language user query to determine whether the query includes language or prompts that may potentially be malicious or that may violate a constraint configured by the natural language interface system 230. For example, a user may include unintentional or malicious content in the natural language user query that may cause the natural language interface system 230 to inject potentially harmful information into the machine learning model and to output malicious output, such as a malicious model-generated machine-readable query. Such malicious or unintentional language or prompts may cause an output that may include sensitive information, information not associated with the client organization, information that the client organization may not have access to, or other information that violates a constraint configured by the natural language interface system 230, thus creating security vulnerabilities for the identity management system 220. In some cases, data identifying potential malicious language or prompts or constraints or rules for identifying such language/prompts may be maintained in the database 290 and the prompt injection prevention module 240 may access the data in the database 290 to analyze the user query to identify any language or prompts that may potentially be malicious or that may violate a constraint. In some cases, the prompt injection prevention module 240 may modify the user query to remove the identified language from the user query.
In some cases, the prompt injection prevention module 240 may reject the user query when such language is identified. The prompt injection prevention module 240 may output a notification, such as via the natural language UI 210, indicating the removal of the identified language from the user query or the rejection of the user query. Such techniques may prevent or reduce a likelihood of bad or malicious data being injected into the machine learning model which may degrade the performance and accuracy of subsequent outputs of the machine learning model).
11. Regarding claims 8 and 18, Channapattan teaches the method and the apparatus, further comprising: generating, based on logs of task control policy violations, characterizations of task control policy violations across a plurality of prompts (Para:0083 teaches prior to sending the request, determining that the prompt meets safety criteria. The prompt injection prevention module 240 may perform techniques that support the prevention of the insertion of a prompt or other language into the natural language user query. In some cases, the prompt injection prevention module 240 may parse the natural language user query to determine whether the query includes language or prompts that may potentially be malicious or that may violate a constraint configured by the natural language interface system 230. For example, a user may include unintentional or malicious content in the natural language user query that may cause the natural language interface system 230 to inject potentially harmful information into the machine learning model and to output malicious output, such as a malicious model-generated machine-readable query. Such malicious or unintentional language or prompts may cause an output that may include sensitive information, information not associated with the client organization, information that the client organization may not have access to, or other information that violates a constraint configured by the natural language interface system 230, thus creating security vulnerabilities for the identity management system 220. In some cases, data identifying potential malicious language or prompts or constraints or rules for identifying such language/prompts may be maintained in the database 290 and the prompt injection prevention module 240 may access the data in the database 290 to analyze the user query to identify any language or prompts that may potentially be malicious or that may violate a constraint.
In some cases, the prompt injection prevention module 240 may modify the user query to remove the identified language from the user query. In some cases, the prompt injection prevention module 240 may reject the user query when such language is identified. The prompt injection prevention module 240 may output a notification, such as via the natural language UI 210, indicating the removal of the identified language from the user query or the rejection of the user query. Such techniques may prevent or reduce a likelihood of bad or malicious data being injected into the machine learning model which may degrade the performance and accuracy of subsequent outputs of the machine learning model. Para:0084 teaches the anonymization module 250 may perform techniques that support anonymizing the user query. For instance, the anonymization module 250 may parse the natural language user query to identify personally-identifiable or sensitive information (such as user names, login credentials, account numbers, social security numbers, financial information, or the like) included in the user query. When identified, the anonymization module 250 may modify the user query to replace the personally-identifiable information with a placeholder value. By way of example, if a user name “Jack Smith” is identified in the user query, the anonymization module 250 may modify the user query to replace “Jack Smith” with the placeholder “$user_name_1$.” The anonymization module 250 may further cache, or otherwise persist or store, the identified personally-identifiable information. Such techniques may prevent or reduce a likelihood of having personal or sensitive information injected into the machine learning model. In some cases, the anonymization module 250 may perform the described anonymization techniques prior to the prompt injection prevention module 240 performing the described prompt injection prevention technique).
12. Regarding claims 9 and 19, Fan teaches the method and the apparatus, further comprising: parsing the prompt to generate a prompt characterization, wherein the prompt characterization includes an indication of the type of the task (Para:0022 teaches using one or more language models to select from one or more potential responses provided by one or more different types of components. The system is configured to receive and process potential responses from different types of components, such as APIs, skill components, and LLM-based agent components in order to perform an action responsive to the user input. The system may process to determine one or more components configured to generate responses associated with a user request, and receive, from the one or more components, potential responses from the components. The system may process the potential responses, as well as contextual information associated with the user input, to select one or more of the potential responses that are responsive to the user request. In some cases, the system may generate a summary of the one or more selected responses and/or, if the one or more selected responses include a potential action(s), cause the potential action(s) to be performed by the corresponding components. For example, in response to receiving a user input of “What is the weather for today,” the system may process to determine one or more components configured to generate potential responses associated with the user input (e.g., weather skill components, LLM agents finetuned for weather inquiries) and receive, from the one or more components, a potential response of “It is currently 70 degrees, with a high of 75 and a low of 68” from a first component and a potential response of “The weather for today is expected to be mostly sunny, but with a chance of rain in the late afternoon” from a second component.
The system may determine that the potential responses from both components are responsive to the user input and generate a summary of the responses such as “It is expected to be mostly sunny today, with a high of 75 and a low of 68, but with a chance of rain in the late afternoon,” which may be output to the user, e.g., as audio or visual information).
13. Regarding claim 10, Fan teaches the method, wherein the context includes a file from a retrieval-augmented generation system (Para:0016-0019 teaches a system may receive a user input as speech. For example, a user may speak an input to a device. The device may send audio data, representing the spoken input, to the system. The system may perform ASR processing on the audio data to generate ASR data (e.g., text data, token data, etc.) representing the user input. The system may perform processing on the ASR data to determine an action responsive to the user input. The system may be configured to process the ASR data using one or more language models (e.g., one or more large language models (LLMs)) to determine one or more components configured to perform one or more functions potentially responsive to the user input (e.g., generate a potential response/action responsive to the user input). For example, in response to the user input “book me a flight to Seattle,” the system may book a flight to Seattle and output information of the booked flight. For further example, in response to the user input “lock the front door,” the system may actuate a “front door” smart lock to a locked position. As another example, in response to the user input “Please plan a 4-person trip to [Location] from [Date 1] to [Date 2],” the language model(s) may determine one or more components (e.g., an API, a skill component, an LLM agent component, etc.) configured to book a flight ticket and book a hotel. To select one or more of the components to respond to the user input, the system may request data from the one or more components including a potential response to the user input. The system may cause the one or more components to perform the actions (e.g., booking the identified flight and hotel), for example, in response to the user authorizing the system to do so. The potential response includes natural language data that may be used to respond to the user input.
The component is configured to/will perform with respect to the user input).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEREENA T CATTUNGAL whose telephone number is (571)270-0506. The examiner can normally be reached Mon-Fri, 7:30 AM-5 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynn Feild can be reached at 571-272-2092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DEREENA T CATTUNGAL/ Primary Examiner, Art Unit 2431