Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
2. Claims 1-20 have been examined.
Claim Interpretation
Optional Language
3. Claim 1, “selecting the example… based on whether the automated agent action in the example is validated to be proper based on the control”.
Claim 3, “wherein a scenario is generated… in response to a prompt provided to the large language model… and a prompt request to generate…”.
Claim 4, “evaluating whether the automated agent… when responding to a simulated user request…” The “selecting” is conditional as it is based on whether or not “the example is validated to be proper based on the control”. Similarly, “wherein a scenario is generated…” and “evaluating…” are also conditional as they depend on the receipt of a “prompt provided to the large language model… and a prompt request to generate one or more scenarios…” and on “responding to a simulated user request”, respectively. As a result, the language will not differentiate the respective claims from the prior art (MPEP 2103 I C).
Intended Use
4. Claim 2, “… one or more parameters set to test whether the automated agent follows the control in response to a simulated user request”. Claims 10 and 18 recite similar language.
Claim 9, “A non-transitory computer readable storage medium… the program being executable by a processor to generate training data using a simulated user, the method comprising: …” Claim 17, “A system … comprising:…; and one or more modules… and executed by the one or more processors to generate… provide… access… and select…” The limitations “set to test…”, “the method” and “to generate…” describe the intended use of the “one or more parameters”, of the execution of a program to generate training data, and of the execution of the “one or more modules”, respectively. According to the MPEP, such language will not differentiate the claims from the prior art (MPEP 2103 I C).
Not positively recited
5. Claim 4, “… the simulated user request generated based on the scenario”. Claim 7, “wherein the automated agent action is evaluated…” Claim 8, “wherein the example is stored…”
The terms “generated based on the scenario”, “evaluated” and “stored” are not positively recited (i.e. they do not refer to previous method steps of “generating”, “evaluating” and “storing”) and, therefore, will not differentiate the claims from the prior art.
35 U.S.C. 101
6. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
7. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e. a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
8. Claim 1 recites:
generating one or more scenarios by a first… on a first… based on a control;
providing a simulated user based on the scenario, the simulated user provided by a [second user];
accessing an example of an interaction between an … agent and the simulated users by the first…, wherein each example is associated with an action by the … agent, wherein the action is associated with the control, wherein the example including a subset of the interaction; and,
selecting the example as training data for a subsequent learning process based on whether the … agent action in the example is validated to be proper based on the control.
Therefore, the claim recites “training a user”, which is a commercial or legal interaction and/or managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) (i.e. organizing human activity) and thus an abstract idea.
The additional elements of “application”, “server”, and “automated” represent the use of a computer, or computer technology, as a tool to implement the training of a user and/or generally link the abstract idea to a particular technological environment or field of use. And, as the additional elements do no more than represent the use of a computer, or computer technology, as a tool to perform the training of a user and/or generally link the abstract idea to a particular technological environment or field of use, they do not improve computer functionality or provide an improvement to another technology or technological field.
Hence, claim 1 is not patent eligible.
Claims 9 and 17 also recite the abstract idea of “training a user”. In addition to the additional elements of claim 1, claims 9 and 17 recite a “non-transitory computer readable storage medium having embodied thereon a program, the program being executable by a processor to generate training data using a simulated user” and a “system”, “one or more servers, wherein each server includes a memory and a processor” and “one or more modules stored in the memory and executed by at least one of the one or more processors”, respectively. These additional elements represent the use of a computer, or computer technology, as a tool to implement the training of a user and/or generally link the abstract idea to a particular technological environment or field of use.
And, as they do no more than represent the use of a computer, or computer technology, as a tool to perform the training of a user and/or generally link the abstract idea to a particular technological environment or field of use, they do not improve computer functionality or provide an improvement to another technology or technological field.
Claims 9 and 17 are also patent ineligible.
9. Claim 2 recites “wherein a scenario includes one or more parameters set to test whether the … agent follows the control in response to a simulated user request”, which further describes the abstract idea of “training a user”. Claims 10 and 18 recite similar language.
The additional element of “automated” represents the use of a computer, or computer technology, as a tool to implement the training of a user and/or generally link the abstract idea to a particular technological environment or field of use. And, as the additional element does no more than represent the use of a computer, or computer technology, as a tool to perform the training of a user and/or generally link the abstract idea to a particular technological environment or field of use, it does not improve computer functionality or provide an improvement to another technology or technological field.
10. Claim 3 recites “wherein a scenario is generated by a … model in response to [a first input], [the first input] including the control, the role of the … agent, and a [request] to generate one or more scenarios based on the control and role”, which further describes the abstract idea of “training a user”. Claims 11 and 19 recite similar language.
The additional elements of “large language”, “prompt” and “automated” represent the use of a computer, or computer technology, as a tool to implement the training of a user and/or generally link the abstract idea to a particular technological environment or field of use. And, as the additional elements do no more than represent the use of a computer, or computer technology, as a tool to perform the training of a user and/or generally link the abstract idea to a particular technological environment or field of use, they do not improve computer functionality or provide an improvement to another technology or technological field.
11. Claim 4 recites “evaluating whether the … agent followed the control when responding to a simulated user request, the simulated user request generated based on the scenario”, which further describes the abstract idea of “training a user”. Claims 12 and 20 recite similar language.
The additional element of “automated” represents the use of a computer, or computer technology, as a tool to implement the training of a user and/or generally link the abstract idea to a particular technological environment or field of use. And, as the additional element does no more than represent the use of a computer, or computer technology, as a tool to perform the training of a user and/or generally link the abstract idea to a particular technological environment or field of use, it does not improve computer functionality or provide an improvement to another technology or technological field.
12. Claim 5 recites “wherein evaluating includes processing interaction data and the control by a… model”, which further describes the abstract idea of “training a user”. Claim 13 recites similar language.
The additional element of “machine learning” represents the use of a computer, or computer technology, as a tool to implement the training of a user and/or generally link the abstract idea to a particular technological environment or field of use. And, as the additional element does no more than represent the use of a computer, or computer technology, as a tool to perform the training of a user and/or generally link the abstract idea to a particular technological environment or field of use, it does not improve computer functionality or provide an improvement to another technology or technological field.
13. Claim 6 recites the additional element of “wherein the machine learning model includes a large language model”. Claim 14 recites similar language. The additional element represents the use of a computer, or computer technology, as a tool to implement the training of a user and/or generally link the abstract idea to a particular technological environment or field of use. And, as the additional element does no more than represent the use of a computer, or computer technology, as a tool to perform the training of a user and/or generally link the abstract idea to a particular technological environment or field of use, it does not improve computer functionality or provide an improvement to another technology or technological field.
14. Claim 7 recites “… agent action is evaluated during the conversation with the simulated user”, which further describes the abstract idea of “training a user”. Claim 15 recites similar language.
The additional element of “automated” represents the use of a computer, or computer technology, as a tool to implement the training of a user and/or generally link the abstract idea to a particular technological environment or field of use. And, as the additional element does no more than represent the use of a computer, or computer technology, as a tool to perform the training of a user and/or generally link the abstract idea to a particular technological environment or field of use, it does not improve computer functionality or provide an improvement to another technology or technological field.
15. Claim 8 recites “wherein the example is stored as training data in a validated pool”, which further describes the abstract idea of “training a user”. Claim 16 recites similar language.
35 U.S.C. 112(b)
16. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
17. Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Lack of antecedent basis
18. Claim 1 recites the limitation “the simulated users” in lines 5 and 6. There is insufficient antecedent basis for this limitation in the claim. Claims 9 and 17 are also rejected, as each recites similar language.
19. Claim 5 recites the limitation “wherein evaluating includes” in line 1. The term “evaluating” is not preceded by an article “an” and does not further describe a previous mention of “evaluating” in claim 1. There is insufficient antecedent basis for this limitation in the claim.
Claims 13 and 20 are also rejected, as each recites similar language.
20. Claim 9 recites the limitation “the method” in line 9. There is insufficient antecedent basis for this limitation in the claim.
More than one interpretation
21. Claim 9 recites “A non-transitory computer readable storage medium having embodied thereon a program, the program being executable by a processor to generate training data using a simulated user, the method comprising: …”. It is unclear whether the “method” is performed by the “processor” or by another device (MPEP 2173.02 I).
22. Claim 11 recites “wherein a scenario is generated by a large language model…” The “large language model”, however, as claimed, is neither part of, nor executed by, the “processor” of claim 9. Therefore, it is unclear whether the generation is performed by the “processor” or another device (MPEP 2173.02 I). Claim 19 is also rejected as it recites similar language and the lack of clarity is due to the “large language model” not being part of, nor executed by, the “one or more processors”.
23. Claim 17 recites “A system … comprising:…; and one or more modules… and executed by the one or more processors to generate… provide… access… and select…”. Therefore, to one of ordinary skill, the claim’s functions are attributed to the “one or more processors”. However, the claim also recites “to generate one or more scenarios by a first application on a first server” and “the simulated user provided by a simulated user application”. Therefore, it is unclear whether the claimed functions are performed by the “one or more processors” alone or additionally by the “server” and the device that runs the “simulated user application” (MPEP 2173.02 I).
Unattributed Functionality
24. Claim 12 recites “evaluating whether the automated agent…”. However, the claim does not identify what is performing the functionality of “evaluating”. Therefore, the scope of the claim is unclear (MPEP 2173.05(g) “Notwithstanding…”). Claim 20 is also rejected as it recites similar language.
25. Claim 15 recites “wherein the automated agent is evaluated…” However, the claim does not identify what is performing the evaluation. Therefore, the scope of the claim is unclear (MPEP 2173.05(g) “Notwithstanding…”).
26. Claim 16 recites “wherein the example is stored … in a validated pool”. However, the claim neither identifies what performs the storing nor where the “validated pool” is stored (MPEP 2173.05(g) “Notwithstanding…”).
Dependent claims
27. Claims 2-8, 10-16 and 18-20 are also rejected, as each depends from one of claims 1, 5, 9, 13 or 17.
35 U.S.C. 102
28. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
29. Claims 9-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Steedman Henderson et al., US 20200152184.
30. Claim 9 recites “A non-transitory computer readable storage medium having embodied thereon a program, the program being executable by a processor to generate training data using a simulated user”. However, “to generate…” is an intended use of a program that is “executable” and will not differentiate the claim from the prior art (MPEP 2103 I C). The claim’s preamble also recites “… the method comprising: generating…”. It too is intended use, as it is not a positive recitation, nor a further description, of a functional relationship between the storage medium and the “program” (MPEP 2103 I C, 2111.05 III, 2114). Therefore, as Steedman Henderson et al. teach a “non-transitory computer readable storage medium having embodied thereon a program, the program being executable by a processor” (paras 117, 118, and 350-354), the reference is sufficient as prior art.
Similarly, claims 10-16 do not recite limitations that provide a positive recitation of, or further describe, a functional relationship between the “non-transitory computer readable storage medium” and the “program” and, therefore, will not differentiate the claims from the prior art (‘184, paras 117, 118, and 350-354).
31. Claim 17 recites “A system for generating training data using a simulated user comprising: one or more servers, wherein each server includes a memory and a processor; and one or more modules stored in the memory and executed by the one or more processors to generate… provide… access… and select…”. The language “to generate… provide… access … and select…” represents the intended use of the execution of the “one or more modules” and will not differentiate the claim from the prior art (MPEP 2103 I C). Therefore, as Steedman Henderson et al. teach a system comprising “one or more servers, wherein each server includes a memory and a processor; and one or more modules stored in the memory and executed by the one or more processors” (paras 117, 118, and 350-354), it is sufficient as prior art.
32. Claim 18 is directed to a “scenario”, which describes data (“non-functional descriptive material”, MPEP 2111.05 III). While claims 19 and 20 recite “a scenario is generated…” and “evaluating…”, neither provides a positive recitation of a functional relationship between the modules and the memory on which the modules are stored (MPEP 2103 I C, 2111.05 III, 2114). Hence, claims 18-20 fail to recite limitations that will differentiate the claims from the prior art.
35 U.S.C. 103
33. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
34. Claims 1-4 and 7-20 are rejected under 35 U.S.C. 103 as being unpatentable over Steedman Henderson et al., US 20200152184 in view of Kumar et al., U.S. Patent No. 11,855,860.
As per claims 1, 8, 9 and 17, Steedman Henderson et al. teach a method for generating training data using a simulated user, comprising:
generating one or more scenarios (paras 360, 367-368 and 446) by a first application on a first server (paras 117, 118, and 350-354) based on a control (“booking”, paras 126, 129 and 357-358);
providing a simulated user based on the scenario, the simulated user provided by a simulated user application (paras 434, 440, 446, 510 and 511);
accessing an example of an interaction between an automated agent and the simulated users by the first application (para 434), wherein each example is associated with an action by the automated agent (paras 434, 436 and 438), wherein the action is associated with the control (paras 357-358), wherein the example including a subset of the interaction (para 434); and
selecting the example as training data for a subsequent learning process (paras 434 and 481)…
As per claim 8, Steedman Henderson et al. teach “wherein the example is stored in a validated pool” (paras 434 and 481).
As per claims 9 and 17, Steedman Henderson et al. also teach a “non-transitory computer readable storage medium having embodied thereon a program, the program executable by a processor” (paras 117, 118, and 350-354) and a “system for generating training data using a simulated user, comprising: one or more servers, wherein each server includes a memory and a processor; and one or more modules stored in the memory and executed by at least one of the one or more processors” (paras 117, 118, and 350-354).
Steedman Henderson et al. do not specifically disclose “selecting the example…” based on whether the automated agent action in the example is validated to be proper based on the control. However, Kumar et al. teach “selecting the example as training data for a subsequent learning process based on whether an agent action in the example is validated to be proper based on the control” (fig. 8, items 822, 824 and 828; col. 7, lines 57-63; col. 8, lines 24-41; claim 9). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Steedman Henderson et al. and Kumar et al. in order to improve the performance of a chatbot or other conversational application (‘184, para 3) by training the application (‘184, paras 355-358 and 502) using high quality training data (‘860, col. 3, lines 28-36) that represent the accomplishment of agent tasks such as booking a restaurant or purchasing a movie ticket (‘184, para 358).
35. As per claims 2, 10 and 18, Steedman Henderson et al. teach wherein a scenario includes one or more parameters set (paras 379-411, 433 and 446) … and in response to a simulated user request (paras 435-440) and an automated agent following a control (para 358), but not the purpose (“… to test…”) of the parameters. However, this is intended use and, according to the MPEP (MPEP 2103 I C), such language (“… to test…”) will not differentiate the claims from the prior art of Steedman Henderson et al.
Nonetheless, Kumar et al. teach “set to test whether the… agent follows the control…” (fig. 8, items 822, 824 and 828; col. 8, lines 24-41; claim 9). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Steedman Henderson et al. and Kumar et al. in order to improve the performance of a chatbot or other conversational application (‘184, paras 3 and 502) by training the chatbot (‘184, paras 355-358 and 502) using higher quality training data (‘860, col. 3, lines 28-36) such as simulated conversations (e.g. scenarios) between a user and an agent (‘184, paras 3, 360, 367-368 and 446) that produce the best outcomes (‘184, fig. 8(b)(e) “Ok, booking confirmed”; paras 297, 358, 365 and 411; ‘860, claim 9).
36. As per claims 3, 11 and 19, Steedman Henderson et al. teach wherein a scenario is generated by a … model (paras 355 and 441) in response to a prompt provided to the … model (paras 335 and 441), the prompt including the control, the role of the automated agent, and a prompt request to generate one or more scenarios based on the control and role (fig. 7, items S701 and S702; paras 357, 360, 507, 508 and 514). While Steedman Henderson et al. disclose generating scenarios using a model such as a “scenario generator” (para 378), they do not disclose the model’s underlying algorithm. Kumar et al. disclose utilizing a large language model to generate scenarios (col. 5, lines 23-30; col. 10, lines 40-44 and 55-58). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Steedman Henderson et al. and Kumar et al. as it is no more than substituting one known methodology for generating scenarios (‘184, para 378 “scenario generator”) with another (‘860, col. 5, lines 23-30, “domain-specific LLM”) (MPEP 2144.06).
37. As per claims 4, 12 and 20, Steedman Henderson et al. teach an automated agent responding to a simulated user request (paras 434, 436 and 438) where the simulated user request is generated based on the scenario (paras 360, 367-368 and 446). They also disclose “controls” (paras 126, 129 and 357-358). However, Steedman Henderson et al. do not explicitly disclose evaluating whether the agent followed the control when responding to a user request. Kumar et al. teach evaluating whether an agent followed the control when responding to a user request (col. 3, lines 1-20; col. 4, lines 8-17; col. 18, lines 28-50; claim 9). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Steedman Henderson et al. and Kumar et al. in order to improve the performance of a chatbot or other conversational application (‘184, para 3) by training the application (‘184, paras 355-358 and 502) using high quality training data (‘860, col. 3, lines 28-36) that represent the accomplishment of agent tasks such as booking a restaurant or purchasing a movie ticket (‘184, para 358).
38. As per claim 7, Steedman Henderson et al. teach an automated agent in conversation with a simulated user (paras 434-438). Steedman Henderson et al. do not teach evaluating the agent during the conversation. Kumar et al. teach evaluating an agent during a conversation with a user (col. 3, lines 1-20; col. 4, lines 8-17; col. 18, lines 28-50; claim 9). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Steedman Henderson et al. and Kumar et al. in order to improve the performance of a chatbot or other conversational application (‘184, para 3) by training the application (‘184, paras 355-358 and 502) using high quality training data (‘860, col. 3, lines 28-36) gathered during a conversation between an automated agent and a simulated user (‘184, paras 434-438) that represent the accomplishment of agent tasks such as booking a restaurant or purchasing a movie ticket (‘184, para 358).
39. Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Steedman Henderson et al., US 20200152184 in view of Kumar et al., U.S. Patent No. 11,855,860 and further in view of Bly et al., US 20230060252.
40. As per claims 5-6, Steedman Henderson et al. disclose training a chatbot or other dialogue system by simulating a conversation between a user and an agent (para 434). Kumar et al. disclose training a machine learning model (abstract) using high quality training data (fig. 8, items 822, 824, 828) such as data based on the evaluation of agent performance (claim 9). However, neither Steedman Henderson et al. nor Kumar et al. explicitly disclose such an evaluation being performed by a machine learning model or by a machine learning model including a large language model. Bly et al. teach evaluating data using a machine learning technique such as a large language model (para 150). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Steedman Henderson et al., Kumar et al. and Bly et al. as it is no more than substituting one known methodology for evaluating data (‘860, col. 18, lines 29-50) with another (‘252, para 150) (MPEP 2144.06).
Conclusion
41. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Akkiraju et al., U.S. Patent No. 11,190,464, teach customer service agent training using customer chatbots and scenarios.
Sullivan et al., U.S. Patent No. 10,554,817, teach training an automated service agent engine.
Lee et al., US 20230315856, teach augmenting training data.
42. Any inquiry concerning this communication or earlier communications from the Examiner should be directed to Calvin L. Hewitt II, whose telephone number is (571) 272-6709. The Examiner can normally be reached Monday-Friday, 10:00AM-6:00PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using
a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is
encouraged to use the USPTO Automated Interview Request (AIR) at
http://www.uspto.gov/interviewpractice.
Information regarding the status of an application may be obtained from the Patent
Application Information Retrieval (PAIR) system. Status information for published applications
may be obtained from either Private PAIR or Public PAIR. Status information for unpublished
applications is available through Private PAIR only. For more information about the PAIR
system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll free). If you would
like assistance from a USPTO Customer Service Representative or access to the automated
information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CALVIN L HEWITT II/
Supervisory Patent Examiner, Art Unit 3692