DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Step 1:
Under the first part of the analysis, claims 1-24 are directed to a computer-implemented method of auditing an agent-under-test (AUT). Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).
Regarding claim 1:
A computer-implemented method of auditing an agent-under-test (AUT), including:
inducing an agent-under-test (AUT) to disclose respective outputs in response to processing a target input probe;
analyzing the respective outputs and generating one or more analytics corresponding to the target input probe,
wherein the analytics identify a distribution pattern of features associated with the target input probe, wherein the distribution pattern includes frequencies at which the features occur, and wherein the frequencies are determined by percentages; and
causing a topic large language model (LLM) to identify topics by sampling the respective outputs, and storing the topics in memory for further use.
Step 2A Prong 1:
“inducing an agent-under-test (AUT) to disclose respective outputs in response to processing a target input probe” is directed to the mental step of sampling data (see specification, para. [0040]: “one or more methods may be used by the LLM determining topics 103 or the topic module 107 to sample the output for topics until it has enough data to stop sampling for the interval. ... In the DP or HDP, as you sample the outputs from the AI to be tested...”).
“analyzing the respective outputs and generating one or more analytics corresponding to the target input probe” is directed to the mental step of analyzing data.
“wherein the analytics identify a distribution pattern of features associated with the target input probe, wherein the distribution pattern includes frequencies at which the features occur, and wherein the frequencies are determined by percentages” is directed to math because a distribution pattern describes the possible values a variable can take and how often they occur; common examples include the normal distribution (bell curve) and the uniform distribution. Identifying these patterns requires mathematical models and formulas. Frequencies are raw counts, and percentages express these counts as a proportion of a whole (a form of relative frequency). Calculating and interpreting these involves fundamental arithmetic and statistical analysis. Associating specific data features with a target often employs mathematical techniques such as classification algorithms, pattern recognition, and machine learning models, all of which are built upon mathematical principles.
Each limitation recited in the claim is a process that, under the broadest reasonable interpretation (BRI), covers performance of the limitation in the mind but for the recitation of generic computer components, which are a mere indication of the field of use. Nothing in the claim elements precludes the steps from practically being performed in the mind. Thus, the claim recites a mental process.
Further, the claim recites the step of “wherein the analytics identify a distribution pattern of features associated with the target input probe, wherein the distribution pattern includes frequencies at which the features occur, and wherein the frequencies are determined by percentages,” which, as drafted and under BRI, recites a mathematical calculation. The grouping of “mathematical concepts” in the 2019 PEG includes “mathematical calculations” as an exemplar of an abstract idea. 2019 PEG Section I, 84 Fed. Reg. at 52. Thus, the recited limitation falls into the “mathematical concepts” grouping of abstract ideas. This limitation also falls into the “mental processes” grouping of abstract ideas, because the recited mathematical calculation is simple enough that it can be practically performed in the human mind; e.g., scientists and engineers have been solving the Arrhenius equation in their minds since it was first proposed in 1889.
Note that even if most humans would use a physical aid (e.g., pen and paper, a slide rule, or a calculator) to help them complete the recited calculation, the use of such physical aid does not negate the mental nature of this limitation. See October Update at Section I(C)(i) and (iii).
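For illustration only, the recited frequency-as-percentage calculation amounts to simple relative-frequency arithmetic. The following sketch uses hypothetical feature counts (not data from the application) to show the character of the computation at issue:

```python
# Hypothetical feature counts observed for a single target input probe
# (illustrative only; not data from the application).
counts = {"brand_a": 6, "brand_b": 3, "brand_c": 1}

total = sum(counts.values())

# Frequencies expressed as percentages (relative frequencies x 100).
percentages = {feature: 100.0 * n / total for feature, n in counts.items()}

print(percentages)  # brand_a -> 60.0, brand_b -> 30.0, brand_c -> 10.0
```

Each percentage is one count divided by the total and scaled by 100, i.e., ordinary arithmetic of the kind practically performable in the human mind, with or without a physical aid.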
Additional Elements:
Step 2A Prong 2:
“A computer-implemented method of auditing an agent-under-test (AUT)” recited in the preamble does not integrate the judicial exception into a practical application. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
“inducing an agent-under-test (AUT) to disclose respective outputs in response to processing a target input probe” does not integrate the judicial exception into a practical application. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
“analyzing the respective outputs and generating one or more analytics corresponding to the target input probe” does not integrate the judicial exception into a practical application. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
“causing a topic large language model (LLM) to identify topics by sampling the respective outputs, and storing the topics in memory for further use” is insignificant extra-solution activity and does not integrate the judicial exception into a practical application. See MPEP 2106.05(g).
The claim merely samples data, manipulates or analyzes the data using mathematical concepts and mental processes, and displays the results.
This is analogous to Electric Power Group: limiting the abstract idea of collecting information, analyzing it, and displaying certain results of the collection and analysis to data related to the electric power grid is simply an attempt to limit the use of the abstract idea to a particular technological environment. Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016); see MPEP 2106.05(h), example vi.
A relevant consideration is whether the claim invokes computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Similarly, "claiming the improved speed or efficiency inherent with applying the abstract idea on a computer" does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015). In contrast, a claim that purports to improve computer capabilities or to improve an existing technology may integrate a judicial exception into a practical application or provide significantly more. McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314-15, 120 USPQ2d 1091, 1101-02 (Fed. Cir. 2016); Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36, 118 USPQ2d 1684, 1688-89 (Fed. Cir. 2016). See MPEP §§ 2106.04(d)(1) and 2106.05(a) for a discussion of improvements to the functioning of a computer or to another technology or technical field.
The claim as a whole does not meet any of the following criteria to integrate the judicial exception into a practical application:
an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element that applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
Step 2B:
“A computer-implemented method of auditing an agent-under-test (AUT)” recited in the preamble does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
“inducing an agent-under-test (AUT) to disclose respective outputs in response to processing a target input probe” does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
“analyzing the respective outputs and generating one or more analytics corresponding to the target input probe” does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
“causing a topic large language model (LLM) to identify topics by sampling the respective outputs, and storing the topics in memory for further use” is insignificant extra-solution activity and does not amount to significantly more than the judicial exception in the claim. See MPEP 2106.05(g) and 2106.05(d)(II), third list, item (iv).
The claim is therefore ineligible under 35 USC 101.
Claims 20 and 21 are directed to a computer-implemented method of auditing an agent-under-test (AUT) including the steps as in claim 1. Therefore, claims 20 and 21 are directed to an abstract idea.
Regarding claim 2, “wherein the AUT is an artificial intelligence (AI) system” does not integrate the judicial exception into a practical application and does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
Regarding claim 3, “wherein the AUT is a large language model (LLM)” does not integrate the judicial exception into a practical application and does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
Regarding claim 4, “wherein the distribution pattern includes priminalities at which the respective outputs occur” is directed to math because the general concept of analyzing patterns of occurrence in outputs is a core area of study in statistics and number theory.
Regarding claim 5, “wherein the priminalities are determined by percentiles” is directed to math because the concept of percentiles is a fundamental concept in the field of mathematics, specifically within statistics.
Regarding claim 6, “wherein the analytics include a share of voice percentage of the features associated with the target input probe, a share of voice percentile of the features associated with the target input probe, and a share of voice proportion of the features associated with the target input probe” does not integrate the judicial exception into a practical application and does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
Regarding claim 7, “inducing the AUT at periodic intervals” does not integrate the judicial exception into a practical application and does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
Regarding claim 8, “the periodic intervals are second-wise, minute-wise, hour-wise, day-wise, week-wise, month-wise, and/or year-wise” does not integrate the judicial exception into a practical application and does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
Regarding claim 9, “wherein the periodic intervals are retrospective and apply to the respective outputs disclosed in prior time periods” does not integrate the judicial exception into a practical application and does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
Regarding claim 10, “wherein an inducing agent sends the target input probe to the AUT” does not integrate the judicial exception into a practical application and does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
Regarding claim 11, “wherein the target input probe is a prompt to the AUT” does not integrate the judicial exception into a practical application and does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
Regarding claim 12, “wherein the respective outputs are answers to the target input probe” does not integrate the judicial exception into a practical application and does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
Regarding claim 13, “including displaying the respective outputs for the target input probe” does not integrate the judicial exception into a practical application and does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
Regarding claim 14, “including displaying the analytics” does not integrate the judicial exception into a practical application and does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
Regarding claim 15, “wherein the topic LLM identifies the topics using at least one of Dirichlet Process (DP), Hierarchical Dirichlet Process (HDP), and Chinese Restaurant Process (CRP)” does not integrate the judicial exception into a practical application and does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
Regarding claim 16, “wherein the topic LLM identifies the topics using a mixture model” does not integrate the judicial exception into a practical application and does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
Regarding claim 17, “wherein the topic LLM identifies the topics using a Gibbs Sampling and/or Variational Inference process” does not integrate the judicial exception into a practical application and does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
Regarding claim 18, “wherein the topic LLM identifies the topics using a Generative Adversarial Network (GAN)” does not integrate the judicial exception into a practical application and does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
Regarding claim 19, “displaying the topics” does not integrate the judicial exception into a practical application and does not amount to significantly more than the judicial exception in the claim. This additional element is merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(h)).
Regarding claim 22, “wherein the analytics identify a distribution pattern of features associated with the target input probe” is directed to math because the analysis of data distributions relies heavily on statistical models, which are mathematical models that embody assumptions about how data is generated.
Regarding claim 23, “wherein the distribution pattern includes frequencies at which the features occur” is directed to math because a frequency distribution is a mathematical way to summarize data by showing the number of times each value or category appears, and this can be represented through tables or graphs like histograms. In mathematics, "frequency" is simply the count of how many times a specific data value occurs.
Regarding claim 24, “wherein the frequencies are determined by percentages” is directed to math because in mathematics and statistics, frequency is the count of how many times a specific value or event occurs within a dataset. A percentage is a way of expressing a proportion or ratio as a fraction of 100. It helps provide context to the frequency data.
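For context on the Chinese Restaurant Process (CRP) recited in claim 15, the following is a generic illustrative sketch of CRP topic assignment, not the applicant's disclosed implementation: each sampled output joins an existing topic with probability proportional to that topic's current size, or opens a new topic with probability proportional to a concentration parameter alpha.

```python
import random

def crp_assignments(num_outputs, alpha=1.0, seed=0):
    """Assign each of num_outputs sampled outputs to a topic via the
    Chinese Restaurant Process. Output i joins existing topic k with
    probability proportional to topic k's size, or a new topic with
    probability proportional to alpha."""
    rng = random.Random(seed)
    topic_sizes = []   # topic_sizes[k] = number of outputs assigned to topic k
    assignments = []
    for _ in range(num_outputs):
        # Candidate weights: each existing topic's size, plus alpha for a new topic.
        weights = topic_sizes + [alpha]
        r = rng.uniform(0, sum(weights))
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(topic_sizes):  # the "new table": open a new topic
            topic_sizes.append(1)
        else:
            topic_sizes[k] += 1
        assignments.append(k)
    return assignments

print(crp_assignments(10))
```

The first output always opens topic 0; larger alpha values tend to produce more topics. This sketch is offered only to illustrate the mathematical and algorithmic character of the recited topic-identification techniques.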
Hence, claims 1-24 are rejected as ineligible subject matter under 35 U.S.C. § 101.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. See In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent is shown to be commonly owned with this application. See 37 CFR 1.130(b).
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
Claims 1-5 and 20-24 are rejected under the judicially created doctrine of obviousness-type double patenting as being unpatentable over claims 1-10 of U.S. Patent No. 12,386,718. Although the conflicting claims are not identical, they are not patentably distinct from each other because the patented claims anticipate the corresponding claims of the instant application, as follows:
US application 19/300,329
1. A computer-implemented method of auditing an agent-under-test (AUT), including: inducing an agent-under-test (AUT) to disclose respective outputs in response to processing a target input probe; analyzing the respective outputs and generating one or more analytics corresponding to the target input probe, wherein the analytics identify a distribution pattern of features associated with the target input probe, wherein the distribution pattern includes frequencies at which the features occur, and wherein the frequencies are determined by percentages; and causing a topic large language model (LLM) to identify topics by sampling the respective outputs, and storing the topics in memory for further use.
2. The computer-implemented method of claim 1, wherein the AUT is an artificial intelligence (AI) system.
3. The computer-implemented method of claim 2, wherein the AUT is a large language model (LLM).
4. The computer-implemented method of claim 1, wherein the distribution pattern includes priminalities at which the respective outputs occur.
5. The computer-implemented method of claim 4, wherein the priminalities are determined by percentiles.
20. A computer-implemented method of auditing an agent-under-test (AUT), including: inducing an agent-under-test (AUT) to disclose respective outputs in response to processing a target input probe; and analyzing the respective outputs and generating one or more analytics corresponding to the target input probe, wherein the analytics identify a distribution pattern of features associated with the target input probe, wherein the distribution pattern includes frequencies at which the features occur, and wherein the frequencies are determined by percentages.
21. A computer-implemented method of auditing an agent-under-test (AUT), including: inducing an agent-under-test (AUT) to disclose respective outputs in response to processing a target input probe; and analyzing the respective outputs and generating one or more analytics corresponding to the target input probe.
22. The computer-implemented method of claim 21, wherein the analytics identify a distribution pattern of features associated with the target input probe.
23. The computer-implemented method of claim 22, wherein the distribution pattern includes frequencies at which the features occur.
24. The computer-implemented method of claim 23, wherein the frequencies are determined by percentages.
US Patent No. 12,386,718
1. A system for constructing a probeable output generative space of an agent-under-test (AUT) for a target input probe, comprising: an agent sampling logic configured to induce an agent-under-test (AUT) to disclose a plurality of outputs in response to processing a target input probe; a space construction logic, having access to the plurality of outputs, and configured to construct a probeable output generative space based on the plurality of outputs; a space probing logic, having access to the probeable output generative space, and configured to probe the probeable output generative space for a query, and to make available results of the probing for further analysis; wherein the AUT is a large language model (LLM), wherein the LLM can be trained with minimal queries; and wherein the agent sampling logic is further configured to induce the AUT to disclose the plurality of outputs in response to concurrently processing the target input probe over multiple parallel instances.
2. The system of claim 1, wherein the query requires identifying a distribution pattern of features associated with the target input probe.
3. The system of claim 2, wherein the features include mentions, in the probeable output generative space, of the target input probe.
4. The system of claim 2, wherein the features include mentions, in the probeable output generative space, of one or more variations of the target input probe.
5. The system of claim 4, wherein the agent sampling logic is further configured to induce the AUT to disclose the plurality of outputs in response to processing the target input probe and the variations of the target input probe.
6. The system of claim 2, wherein the features include mentions, in the probeable output generative space, of concepts related to the target input probe.
7. The system of claim 2, wherein the distribution pattern includes frequencies at which the features occur in the probeable output generative space.
8. The system of claim 7, wherein the frequencies are determined by percentages.
9. The system of claim 2, wherein the distribution pattern includes priminalities at which the features occur in the probeable output generative space.
10. The system of claim 9, wherein the priminalities are determined by percentiles.
Claims 1-3, 6-14, and 16-24 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-18 of copending Application No. 19/296,890 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because both recite substantially the same elements.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
US application 19/300,329
1. A computer-implemented method of auditing an agent-under-test (AUT), including: inducing an agent-under-test (AUT) to disclose respective outputs in response to processing a target input probe; analyzing the respective outputs and generating one or more analytics corresponding to the target input probe, wherein the analytics identify a distribution pattern of features associated with the target input probe, wherein the distribution pattern includes frequencies at which the features occur, and wherein the frequencies are determined by percentages; and causing a topic large language model (LLM) to identify topics by sampling the respective outputs, and storing the topics in memory for further use.
2. The computer-implemented method of claim 1, wherein the AUT is an artificial intelligence (AI) system.
3. The computer-implemented method of claim 2, wherein the AUT is a large language model (LLM).
6. The computer-implemented method of claim 1, wherein the analytics include a share of voice percentage of the features associated with the target input probe, a share of voice percentile of the features associated with the target input probe, and a share of voice proportion of the features associated with the target input probe.
7. The computer-implemented method of claim 1, further including inducing the AUT at periodic intervals.
8. The computer-implemented method of claim 7, wherein the periodic intervals are second-wise, minute-wise, hour-wise, day-wise, week-wise, month-wise, and/or year-wise.
9. The computer-implemented method of claim 7, wherein the periodic intervals are retrospective and apply to the respective outputs disclosed in prior time periods.
10. The computer-implemented method of claim 1, wherein an inducing agent sends the target input probe to the AUT.
11. The computer-implemented method of claim 1, wherein the target input probe is a prompt to the AUT.
12. The computer-implemented method of claim 11, wherein the respective outputs are answers to the target input probe.
13. The computer-implemented method of claim 1, further including displaying the respective outputs for the target input probe.
14. The computer-implemented method of claim 13, further including displaying the analytics.
16. The computer-implemented method of claim 1, wherein the topic LLM identifies the topics using a mixture model.
17. The computer-implemented method of claim 1, wherein the topic LLM identifies the topics using a Gibbs Sampling and/or Variational Inference process.
18. The computer-implemented method of claim 1, wherein the topic LLM identifies the topics using a Generative Adversarial Network (GAN).
19. The computer-implemented method of claim 1, further including displaying the topics.
20. A computer-implemented method of auditing an agent-under-test (AUT), including: inducing an agent-under-test (AUT) to disclose respective outputs in response to processing a target input probe; and analyzing the respective outputs and generating one or more analytics corresponding to the target input probe, wherein the analytics identify a distribution pattern of features associated with the target input probe, wherein the distribution pattern includes frequencies at which the features occur, and wherein the frequencies are determined by percentages.
21. A computer-implemented method of auditing an agent-under-test (AUT), including: inducing an agent-under-test (AUT) to disclose respective outputs in response to processing a target input probe; and analyzing the respective outputs and generating one or more analytics corresponding to the target input probe.
22. The computer-implemented method of claim 21, wherein the analytics identify a distribution pattern of features associated with the target input probe.
23. The computer-implemented method of claim 22, wherein the distribution pattern includes frequencies at which the features occur.
24. The computer-implemented method of claim 23, wherein the frequencies are determined by percentages.
US application 19/296,890
1. A computer-implemented method, including: inducing an agent-under-test (AUT) to disclose respective outputs in response to processing a target input probe; analyzing the respective outputs to determine whether the respective outputs represent a sample for the target input probe, and/or to determine a confidence level of the respective outputs being the sample for the target input probe; and analyzing the respective outputs to generate analytics for the sample.
2. The computer-implemented method of claim 1, wherein the AUT is an artificial intelligence (AI) system.
3. The computer-implemented method of claim 2, wherein the AUT is a large language model (LLM).
4. The computer-implemented method of claim 1, wherein the confidence level reflects a reliability index indicating distributional completeness of the respective outputs through one or more statistical models.
5. The computer-implemented method of claim 1, further including inducing the AUT at periodic intervals.
6. The computer-implemented method of claim 5, further including for each of the periodic intervals, determining whether the respective outputs represent the sample for the target input probe, and/or determining the confidence level of the respective outputs representing the sample for the target input probe.
7. The computer-implemented method of claim 5, wherein the periodic intervals are second-wise.
8. The computer-implemented method of claim 5, wherein the periodic intervals are minute-wise, hour-wise, day-wise, week-wise, month-wise, and/or year-wise.
9. The computer-implemented method of claim 5, wherein the periodic intervals are retrospective and apply to the respective outputs disclosed in prior time periods.
10. The computer-implemented method of claim 1, wherein an inducing agent sends the target input probe to the AUT.
11. The computer-implemented method of claim 1, wherein the target input probe is a prompt to the AUT.
12. The computer-implemented method of claim 11, wherein the respective outputs are answers to the target input probe.
13. The computer-implemented method of claim 1, wherein the analyzing the respective outputs to generate the analytics for the sample further includes determining a share of voice percentage of features associated with the target input probe, a share of voice percentile of the features associated with the target input probe, and a share of voice proportion of the features associated with the target input probe.
14. The computer-implemented method of claim 1, wherein the analyzing the respective outputs to generate the analytics for the sample further includes performing sentiment analysis, bias detection, topics identification, source identification, and/or inaccuracy detection.
15. The computer-implemented method of claim 1, further including displaying the respective outputs for the target input probe.
16. The computer-implemented method of claim 1, further including displaying the determined confidence level of the respective outputs being the sample for the target input probe.
17. The computer-implemented method of claim 1, further including determining the confidence level for the generated analytics.
18. The computer-implemented method of claim 17, further including displaying the determined confidence level for the generated analytics.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim 21 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Maly et al. (US 7,818,646).
Regarding claim 21, Maly et al. disclose a computer-implemented method of auditing an agent-under-test (AUT), including:
inducing an agent-under-test (AUT) to disclose respective outputs in response to processing a target input probe (e.g. Col.2, lines 35-53, Col.3, line 59-Col.4, line 2); and
analyzing the respective outputs and generating one or more analytics corresponding to the target input probe (Col.1, lines 39-53, claim 1).
Other Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Bell et al. (US 2024/0404687) disclose methods and systems for building and deploying agents that include large language models (LLMs), such as ChatGPT by OpenAI, which have become increasingly popular for answering a variety of questions using human language.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN H LE whose telephone number is (571)272-2275. The examiner can normally be reached on Monday-Friday from 7:00am – 3:30pm Eastern Time.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shelby A. Turner can be reached on (571) 272-6334. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHN H LE/Primary Examiner, Art Unit 2857