DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
2. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
In determining whether the claims are subject matter eligible, the Examiner applies the 2019 USPTO Patent Eligibility Guidelines. (2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, Jan. 7, 2019.)
Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes—claim 1 recites a method, which is a process.
Step 2A, prong one: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes—the limitations identified below are each, as drafted, a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind:
receiving, using a communication device, a prompt data from a user device;
receiving, using a communication device, a policy data from an organization device associated with the organization;
analyzing, using a processing device, the prompt data based on the policy data;
generating, using the processing device, an output data based on the analyzing, wherein the generating is based on large language model (LLM);
storing, using a storage device, each of the prompt data, the output data, and an identifier associated with at least one of the user device, the organization device, and the organization;
transmitting...the output data to the user device.
Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No—the judicial exception is not integrated into a practical application. The preamble recites “A method for facilitating analysis of a model”. Models are among the basic tools of scientific and technological work, and they are used in every technological problem-solving area. The claim does not tie the mental processes identified above to any particular real-world technological problem.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. The additional limitations are as follows:
receiving, using a communication device, at least one model data... (insignificant pre-solution activity, i.e. data gathering)
transmitting, using the communication device, the at least one result to the at least one user device; (insignificant post-solution activity, i.e. transmitting a result)
generating, using the processing device, an output based on the analyzing of the LLM (insignificant extra-solution activity, i.e. generating an outcome)
storing, using a storage device, each of the prompt data, the output data, and an identifier associated with at least one of the user device, the organization device, and the organization; (insignificant post-solution activity)
transmitting, using the communication device, the output data to the user device; (insignificant post-solution activity, i.e. transmitting results)
None of these additional limitations modifies the abstract ideas identified above such that the claim as a whole amounts to significantly more than these abstract ideas.
For the reasons above, claim 1 is rejected as being directed to non-patentable subject matter under §101. This rejection applies equally to independent claim 11, which recites a system, as well as to dependent claims 2-10 and 12-20.
Dependent claims 2 and 12 each recite additional mental processes and insignificant extra-solution activity:
generating...a citation data based on the output data; (mental process)
transmitting...the citation data to the user device; (insignificant extra-solution activity, i.e. transmitting data)
Dependent claims 3 and 13 each recite additional mental processes and insignificant extra-solution activity:
receiving, using the communication device, an organization document from the organization device; (insignificant extra-solution activity, i.e. receiving input data)
analyzing... the organization document; (mental process)
generating...the citation data based on the analyzing of the organization document; (mental process)
Dependent claims 4 and 14 each recite additional mental processes and insignificant extra-solution activity:
generating, using the processing device, a query data based on the prompt data; (mental process)
transmitting, using the communication device, the query data to at least one external database; and (insignificant extra-solution activity, i.e. transmitting query data)
receiving...a response data from the at least one external database, wherein the output data comprises the response data (insignificant post-solution activity, i.e. receiving a result)
Dependent claims 5 and 15 each recite additional mental processes and insignificant extra-solution activity:
receiving, using the communication device, an audit request from the organization device; and (insignificant extra-solution activity, i.e. receiving request)
generating...an audit trail data; (mental process)
transmitting...the audit trail data to the organization device (insignificant post-solution activity, i.e. transmitting the data)
Dependent claims 6 and 16 each recite additional mental processes and insignificant extra-solution activity:
generating...an alert based on the analyzing; and transmitting...the alert to at least one of the user device and the organization device (insignificant extra-solution activity, i.e. alerting the user of a particular outcome)
Dependent claims 7 and 17 each recite additional mental processes and insignificant extra-solution activity:
storing is performed on a blockchain (insignificant post-solution activity, i.e. storing a result)
Dependent claims 8 and 18 each recite additional mental processes and insignificant extra-solution activity:
storing...a session identifier data in association with the prompt data. (insignificant post-solution activity, i.e. storing)
Dependent claims 9 and 19 each recite additional mental processes and insignificant extra-solution activity:
identifying...sensitive data in the prompt data; and (mental process)
generating...a place holder data based on the sensitive data. (insignificant post-solution activity)
replacing,… the sensitive data with the place holder data in the prompt data (insignificant extra-solution activity)
Dependent claims 10 and 20 each recite additional mental processes and insignificant extra-solution activity:
generating...a modified prompt data based on each of the prompt data and the policy data; and (mental process)
transmitting...the modified prompt data to the user device; and receiving, using the communication device, the output data based on the modified prompt data. (insignificant post-solution activity, i.e. transmitting a result)
Taken alone, their additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation.
Claim Rejections - 35 USC § 102
3. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
4. Claims 1-4, 8, 10-14, 18, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Wang (US Pub. No. 2025/0006196).
5. Regarding claims 1 and 11, Wang teaches a method and a system of managing an interaction with a large language model (LLM), the method comprising: receiving, using a communication device, a prompt data from a user device associated with an organization; receiving, using the communication device, a policy data from an organization device associated with the organization, wherein the policy data is associated with the organization; analyzing, using a processing device, the prompt data based on the policy data; generating, using the processing device, an output data based on the analyzing, wherein the generating is based on the LLM (Para:0014 teaches a system for generating a prompt usable by a language model to determine an action responsive to a user input. The system may be configured to respond to natural language (e.g., spoken or typed) user inputs. For example, in response to the user input “what is today's weather,” the system may output weather information for the user's geographic location. As another example, in response to the user input “what are today's top stories,” the system may output one or more news stories. For further example, in response to the user input “tell me a joke,” the system may output a joke to the user. As another example, in response to the user input “book me a flight to Seattle,” the system may book a flight to Seattle and output information of the booked flight. For further example, in response to the user input “lock the front door,” the system may actuate a “front door” smart lock to a locked position.
Fig.1 and Para:0043 teaches the language model prompt generation component 150 may also include in the prompt data an instruction to output a response that satisfies certain conditions. Such conditions may relate to generating a response that is unbiased (toward protected classes, such as gender, race, age, etc.), non-harmful, profanity-free, etc. For example, the prompt data may include “Please generate a polite, respectful, and safe response and one that does not violate protected class policy”);
storing, using a storage device, each of the prompt data, the output data, and an identifier associated with at least one of the user device, the organization device, and the organization; and transmitting, using the communication device, the output data to the user device (Fig.4 and Para:0057-0058 teaches the action plan execution component 180 sends (at step 20) the response data 425 to the language model prompt generation component 150 to generate a new prompt for the language model 160. The language model prompt generation component 150 may be configured to generate an updated prompt for the language model 160. For example, for a subsequent iteration of processing using the LLM orchestrator 130 (e.g., generation of a subsequent prompt for the language model 160 during processing of a current user input) the language model prompt generation component 150 may be configured to generate a prompt that includes one or more previous prompts generated by the language model prompt generation component 150 during processing of the current user input. For example, during the previous iteration of processing (e.g., after generating the previous prompt at step 6, after generating the previous model output at step 7, etc.) the LLM orchestrator 130 may store information associated with the processing performed (e.g., the generated prompt, the model output, etc.) in an agent memory storage 310. The agent memory storage 310 may, therefore, include various information associated with one or more previous iterations of processing by the LLM orchestrator 130 for the current user input/the user input data 127. As such, when the language model prompt generation component 150 receives the response data 425, the language model prompt generation component 150 may query (step 21) the agent memory storage 310 for previous iteration data 430 representing the information associated with one or more previous iterations of processing by the LLM orchestrator 130 for the current user input. 
In some embodiments, the LLM orchestrator 130 may further store the responsive information represented by the response data 425 in the agent memory storage 310.
Fig.6 and Para:0089-0090 teaches the system 100 may include profile storage for storing a variety of information related to individual users, groups of users, devices, etc. that interact with the system. As used herein, a “profile” refers to a set of data associated with a user, group of users, device, etc. The data of a profile may include preferences specific to the user, device, etc.; input and output capabilities of the device; internet connectivity information; user bibliographic information; subscription information, as well as other information. The profile storage 670 may include one or more user profiles, with each user profile being associated with a different user identifier/user profile identifier. Each user profile may include various user identifying data. Each user profile may also include data corresponding to preferences of the user. Each user profile may also include preferences of the user and/or one or more device identifiers, representing one or more devices of the user. For instance, the user account may include one or more IP addresses, MAC addresses, and/or device identifiers, such as a serial number, of each additional electronic device associated with the identified user account).
6. Regarding claims 2, 12 Wang teaches the method and the system further comprising: generating, using the processing device, a citation data based on the output data; and transmitting, using the communication device, the citation data to the user device (Figs.3-4 and Para:0045 teaches the LLM orchestrator 130 (e.g., the action plan generation component 170 or another component of the LLM orchestrator 130) may determine whether the language model 160 output satisfies certain conditions. Such conditions may relate to checking whether the output includes biased information (e.g., bias towards a protected class), harmful information (e.g., violence-related content, harmful content), profanity, content based on model hallucinations, etc. A model hallucination refers to when a model (e.g., a language model) generates a confident response that is not grounded in any of its training data. For example, the model may generate a response including a random number, which is not an accurate response to an input prompt, and then the model may continue to falsely represent that the random number is an accurate response to future input prompts. To check for an output being based on model hallucinations, the LLM orchestrator 130 may use a knowledge base, web search, etc. [which is the citation data herein] to fact-check information included in the output).
7. Regarding claims 3, 13 Wang teaches the method and the system further comprising: receiving, using the communication device, an organization document from the organization device; and analyzing, using the processing device, the organization document, wherein the generating of the citation data is further based on the analyzing of the organization document (Figs.3-4, Para:0043-0045 teaches the LLM orchestrator 130 (e.g., the action plan generation component 170 or another component of the LLM orchestrator 130) may determine whether the language model 160 output satisfies certain conditions. Such conditions may relate to checking whether the output includes biased information (e.g., bias towards a protected class), harmful information (e.g., violence-related content, harmful content), profanity, content based on model hallucinations, etc. A model hallucination refers to when a model (e.g., a language model) generates a confident response that is not grounded in any of its training data. For example, the model may generate a response including a random number, which is not an accurate response to an input prompt, and then the model may continue to falsely represent that the random number is an accurate response to future input prompts. To check for an output being based on model hallucinations, the LLM orchestrator 130 may use a knowledge base, web search, etc. [which is the citation data herein] to fact-check information included in the output).
8. Regarding claims 4, 14 Wang teaches the method and the system further comprising: generating, using the processing device, a query data based on the prompt data; transmitting, using the communication device, the query data to at least one external database; and receiving, using the communication device, a response data from the at least one external database, wherein the output data comprises the response data (Figs.3-4, Para:0043-0045 teaches the LLM orchestrator 130 (e.g., the action plan generation component 170 or another component of the LLM orchestrator 130) may determine whether the language model 160 output satisfies certain conditions. Such conditions may relate to checking whether the output includes biased information (e.g., bias towards a protected class), harmful information (e.g., violence-related content, harmful content), profanity, content based on model hallucinations, etc. A model hallucination refers to when a model (e.g., a language model) generates a confident response that is not grounded in any of its training data. For example, the model may generate a response including a random number, which is not an accurate response to an input prompt, and then the model may continue to falsely represent that the random number is an accurate response to future input prompts. To check for an output being based on model hallucinations, the LLM orchestrator 130 may use a knowledge base, web search, etc. to fact-check information included in the output).
9. Regarding claims 8 and 18, Wang teaches the method and the system further comprising storing, using the storage device, a session identifier data in association with the prompt data (Para:0080 teaches the session identifier associated with the prompt data).
10. Regarding claims 10, 20 Wang teaches the method and the system further comprising: generating, using the processing device, a modified prompt data based on each of the prompt data and the policy data; transmitting, using the communication device, the modified prompt data to the user device; and receiving, using the communication device, an approval data from the user device, wherein the approval data represents an acceptance of the modified prompt data by a user of the user device, wherein the output data is generated based on the modified prompt data (Para:0063-0064 teaches the action plan generation component 170 may parse the model output data to determine an action plan representing the action (e.g., InfoQA.get_answer (“question”: “How many people live in Paris?”)), and the action plan execution component 180 sends the action request to the API provider (e.g., the search component 540), which may determine corresponding responsive information (e.g., 2.14 million people, which is the population of Paris). The language model prompt generation component 150 may use the responsive information and the previous prompt to generate an updated prompt. Based on processing the foregoing example prompt, the language model 160 may output model output data: “Thought: I need to generate a response; Response: 2.14 million people live in Paris,” or the like.
Para:0065 teaches the LLM orchestrator 130 may perform one or more iterations of processing (with respect to steps 10-16 or steps 10-23) until the LLM orchestrator 130 determines that a stopping condition has been met. For example, the LLM orchestrator 130 may determine that a stopping condition has been met if the LLM orchestrator 130 determines that the user input data 127 does not include a user input (e.g., the user input data 127 does not include data, the user input data 127 includes an error value (e.g., a NULL value), the user input data 127 does not include text or tokens, etc.). As another example, the LLM orchestrator 130 may determine that a stopping condition has been met if the LLM orchestrator 130 determines that a particular type of action has been performed as a result of the processing of the LLM orchestrator 130 (e.g., a response has been output to the user, such as an audio response (e.g., output of audio generated by the TTS component 520), a visual response, etc.).
Claim Rejections - 35 USC § 103
11. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
12. Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US Pub. No. 2025/0006196) in view of Maggiore (US Pub. No. 2023/0392935).
13. Regarding claims 5 and 15, Wang teaches all the claimed limitations but fails to teach the method and the system further comprising: receiving, using the communication device, an audit request from the organization device; generating, using the processing device, an audit trail data based on the audit request; and transmitting, using the communication device, the audit trail data to the organization device.
Maggiore teaches receiving, using the communication device, an audit request from the organization device; generating, using the processing device, an audit trail data based on the audit request; and transmitting, using the communication device, the audit trail data to the organization device (Para:0067 and 0145 teach transmitting an audit trail).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Wang to include receiving, using the communication device, an audit request from the organization device; generating, using the processing device, an audit trail data based on the audit request; and transmitting, using the communication device, the audit trail data to the organization device, as taught by Maggiore, as such a setup would enhance anomaly detection, reduce false positives, and improve efficiency.
14. Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US Pub. No. 2025/0006196) in view of Guo (US Pub. No. 2025/0349290).
15. Regarding claims 6 and 16, Wang teaches all the above claimed limitations but fails to teach the method and the system further comprising: generating, using the processing device, an alert based on the analyzing; and transmitting, using the communication device, the alert to at least one of the user device and the organization device.
Guo teaches generating, using the processing device, an alert based on the analyzing; and transmitting, using the communication device, the alert to at least one of the user device and the organization device (Para:0031 teaches the LLM orchestrator component 130 may receive input data, which may be processed in a similar manner as the user input data 127. The input data may be received in response to detection of an event such as change in device state (e.g., front door opening, garage door opening, TV turned off, etc.), occurrence of an acoustic event (e.g., baby crying, appliance beeping, etc.), presence of a user (e.g., a user approaching the user device 110, a user entering the home, etc.). In some embodiments, the system 100 may process the input data and generate a response/output. For example, the input data may include data corresponding to the event, such as sensor data (e.g., image data, audio data, proximity sensor data, short-range wireless signal data, etc.), a description associated with the timer, the time of day, a description of the change in weather, an indication of the device state that changed, etc. The system 100 may process the input data and may perform an action. For example, in response to detecting a garage door opening, the system 100 may cause garage lights to turn on, living room lights to turn on, etc. As another example, in response to detecting an oven beeping, the system 100 may cause a user device 110 (e.g., a smartphone, a smart speaker, etc.) to present an alert to the user. The LLM orchestrator component 130 may process the input data to generate tasks that may cause the foregoing example actions to be performed).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Wang to include generating, using the processing device, an alert based on the analyzing; and transmitting, using the communication device, the alert to at least one of the user device and the organization device, as taught by Guo, as such a setup would monitor for, detect, and generate specific alerts in LLMs for safety and efficiency.
16. Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US Pub. No. 2025/0006196) in view of Miller (US Pub. No. 2025/0045675).
17. Regarding claims 7 and 17, Wang teaches all the above claimed limitations but fails to teach the method and the system, wherein the storing is performed on a blockchain.
Miller teaches that the storing is performed on a blockchain (Para:0042 and 0087 teach storing the data in a blockchain).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Wang such that the storing is performed on a blockchain, as taught by Miller, as such a setup would ensure the security, transparency, and trustworthiness of the data.
18. Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US Pub. No. 2025/0006196) in view of Luitjens (US Pub. No. 2024/0346162).
19. Regarding claims 9 and 19, Wang teaches all the above claimed limitations but fails to teach the method and the system further comprising: identifying, using the processing device, a sensitive data in the prompt data; generating, using the processing device, a place-holder data based on the sensitive data; and replacing, using the processing device, the sensitive data with the place-holder data in the prompt data.
Luitjens teaches identifying, using the processing device, a sensitive data in the prompt data; generating, using the processing device, a place-holder data based on the sensitive data; and replacing, using the processing device, the sensitive data with the place-holder data in the prompt data (Para:0006-0007 and Para:0058 teaches replacing the sensitive data with a placeholder in the prompt data).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Wang to include identifying, using the processing device, a sensitive data in the prompt data; generating, using the processing device, a place-holder data based on the sensitive data; and replacing, using the processing device, the sensitive data with the place-holder data in the prompt data, as taught by Luitjens, as such a setup would ensure data privacy and security.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEREENA T CATTUNGAL whose telephone number is (571)270-0506. The examiner can normally be reached Mon-Fri, 7:30 AM-5 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynn Feild can be reached at 571-272-2092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DEREENA T CATTUNGAL/Primary Examiner, Art Unit 2431