DETAILED ACTION
This is responsive to the application filed 16 April 2024.
Claims 1-20 are currently pending and considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Further, this judicial exception is not integrated into a practical application.
Independent claim 1 recites:
A method for producing generative artificial intelligence (AI) output using one or more generative AI models, the method comprising: receiving a natural language description of a requested output to be generated by the one or more generative AI models; generating, with at least one of the one or more generative AI models and based at least in part on the natural language description, a plurality of generation plans, each generation plan comprising a first plurality of instructions for generating the requested output; ranking, with at least one of the one or more generative AI models, the plurality of generation plans to select a candidate generation plan from the plurality of generation plans; generating, with at least one of the one or more generative AI models and in accordance with the first plurality of instructions comprised in the candidate generation plan, a plurality of outputs; ranking, with at least one of the one or more generative AI models, the plurality of outputs to select a candidate output from the plurality of outputs; and validating the candidate output in accordance with one or more validation parameters associated with the requested output.
In claims 1, 11 and 20, these limitations, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components.
That is, other than reciting “one or more generative AI models” (claims 1 and 20), an “apparatus for producing generative artificial intelligence (AI) output using one or more generative AI models, comprising: one or more memories storing processor-executable code; and one or more processors coupled with the one or more memories and individually or collectively operable to execute the code to cause the apparatus to” (claim 11) and a “non-transitory computer-readable medium storing code for producing generative artificial intelligence (AI) output using one or more generative AI models, the code comprising instructions executable by one or more processors to” (claim 20), nothing in the claims precludes the steps from practically being performed in the mind.
For example, a person may generate, based at least in part on a natural language description, a plurality of generation plans, each generation plan comprising a first plurality of instructions for generating the requested output (e.g. a human may generate multiple generation plans); rank the plurality of generation plans to select a candidate generation plan from the plurality of generation plans (e.g. a human may rank the plans to choose a particular plan); generate, in accordance with the first plurality of instructions comprised in the candidate generation plan, a plurality of outputs (e.g. a human may generate outputs based on the particular plan); rank the plurality of outputs to select a candidate output from the plurality of outputs (e.g. a human may rank the outputs to select a particular output); and validate the candidate output in accordance with one or more validation parameters associated with the requested output (e.g. a human may verify the particular output).
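For illustration of the recited pipeline only, the sequence of steps identified above (generate plans, rank plans, generate outputs, rank outputs, validate) can be sketched as follows. This is not Applicant's disclosed implementation; the function names and the callable-based structure are hypothetical placeholders standing in for calls to the one or more generative AI models.

```python
from typing import Callable, List


def produce_output(
    description: str,
    plan_generator: Callable[[str], List[str]],
    plan_ranker: Callable[[List[str]], str],
    output_generator: Callable[[str], List[str]],
    output_ranker: Callable[[List[str]], str],
    validator: Callable[[str], bool],
) -> str:
    """Sketch of the claimed plan -> rank -> generate -> rank -> validate flow."""
    plans = plan_generator(description)         # plurality of generation plans
    candidate_plan = plan_ranker(plans)         # select candidate generation plan
    outputs = output_generator(candidate_plan)  # plurality of outputs per the plan
    candidate = output_ranker(outputs)          # select candidate output
    if not validator(candidate):                # validate against validation parameters
        raise ValueError("candidate output failed validation")
    return candidate
```

Each callable here abstracts a step that, per the analysis above, a human could also perform (drafting plans, choosing among them, producing and checking outputs); the structure itself imposes no limit on how each step is carried out.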
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of “one or more generative AI models” (claims 1 and 20), an “apparatus for producing generative artificial intelligence (AI) output using one or more generative AI models, comprising: one or more memories storing processor-executable code; and one or more processors coupled with the one or more memories and individually or collectively operable to execute the code to cause the apparatus to” (claim 11) and a “non-transitory computer-readable medium storing code for producing generative artificial intelligence (AI) output using one or more generative AI models, the code comprising instructions executable by one or more processors to” (claim 20), which are recited at a high level of generality (i.e., as generic processors performing generic computer functions) such that they amount to no more than mere instructions to apply the exception using generic computer components.
The claims also recite the additional element of “receiving a natural language description of a requested output to be generated by the one or more generative AI models”. The claims do not impose any limits on how the natural language description is received. In other words, the claims recite only the idea of a solution or outcome, i.e., they fail to recite details of how a solution to a problem is accomplished. This limitation therefore represents extra-solution activity because it is a mere nominal or tangential addition to the claims. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are therefore directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. As stated above, the claims recite the additional limitations of “one or more generative AI models” (claims 1 and 20), an “apparatus for producing generative artificial intelligence (AI) output using one or more generative AI models, comprising: one or more memories storing processor-executable code; and one or more processors coupled with the one or more memories and individually or collectively operable to execute the code to cause the apparatus to” (claim 11) and a “non-transitory computer-readable medium storing code for producing generative artificial intelligence (AI) output using one or more generative AI models, the code comprising instructions executable by one or more processors to” (claim 20). However, these are recited at a high level of generality and as performing generic computer functions routinely used in computer applications (see Applicant’s specification [0143]-[0145]). Generic computer components recited as performing generic computer functions that are well-understood, routine and conventional activities amount to no more than implementing the abstract idea with a computerized system.
The claims also recite the additional element of “receiving a natural language description of a requested output to be generated by the one or more generative AI models”. The claims do not impose any limits on how the natural language description is received. In other words, the claims recite only the idea of a solution or outcome, i.e., they fail to recite details of how a solution to a problem is accomplished. This limitation represents the extra-solution activity of gathering data, which is a well-understood, routine and conventional activity. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology; their collective functions merely provide a conventional computer implementation. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.
The dependent claims, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea.
The dependent claims recite:
wherein generating the plurality of outputs further comprises: providing a previous successful generation plan, a previously-validated candidate output, or both to the one or more generative AI models.
wherein generating the plurality of outputs comprises: providing an unsuccessfully-validated output and an indication of an error associated with the unsuccessfully-validated output to the one or more generative AI models.
wherein each generation plan of the plurality of generation plans comprises natural language instructions for generating the plurality of outputs, context information for generating the plurality of outputs, machine instructions for generating the plurality of outputs, or any combination thereof.
wherein ranking the plurality of generation plans to select the candidate generation plan comprises: providing, to the one or more generative AI models, a ranking request comprising an indication of a ranking system based at least in part on one or more ranking criteria, a natural language request to rank the plurality of generation plans and provide reasoning for one or more generated rankings, and a requested output format for presenting generated rankings.
wherein ranking the plurality of outputs to select the candidate output comprises: providing, to the one or more generative AI models, a ranking request comprising an indication of a ranking system based at least in part on one or more ranking criteria, a natural language request to rank the plurality of outputs and provide reasoning for one or more generated rankings, and a requested output format for presenting generated rankings.
wherein generating the plurality of generation plans comprises: providing, to the one or more generative AI models, a natural language request to generate a second plurality of instructions for generating each generation plan and a requested output format for the plurality of generation plans.
wherein: generating the plurality of generation plans is based at least in part on a consensus of the one or more generative AI models; and generating the plurality of outputs is based at least in part on a consensus of the one or more generative AI models.
wherein: the requested output comprises a requested cloud substrate security policy; and the candidate output comprises a cloud substrate security policy.
wherein: the requested output comprises requested unit test code; and the candidate output comprises unit test code.
The additional recited limitations further narrow the steps of the independent claims without, however, providing “a practical application of” or “significantly more than” the underlying “Mental Processes” abstract idea. Therefore, the dependent claims are also not patent eligible.
Moreover, see Recentive Analytics, Inc. v. Fox Corp. (Fed. Cir. Apr. 18, 2025): “Machine learning is a burgeoning and increasingly important field and may lead to patent-eligible improvements in technology. Today, we hold only that patents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101.”
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
The closest prior art of record, Kotikalapudi et al. (US 2024/0394471), discloses a method for producing generative artificial intelligence (AI) output using one or more generative AI models (Abstract), the method comprising: receiving a natural language description of a requested output to be generated by the one or more generative AI models (“an LLM can be used to process an NL based input to generate a plurality of responses”, [0004], see also [0052]); generating, with at least one of the one or more generative AI models and based at least in part on the natural language description, a plurality of generation plans, each generation plan comprising a first plurality of instructions for generating the requested output (“instructions included in the NL based response. For instance, the NL based input can request that the LLM generate a response that includes 6 lines and in the style of a particular writer”, [0054], see also [0052]); generating, with at least one of the one or more generative AI models, a plurality of outputs (“generate a plurality of responses”, [0004], see also [0053]); ranking, with at least one of the one or more generative AI models, the plurality of outputs to select a candidate output from the plurality of outputs (“A response can thus be considered to be ‘high quality’ if it is determined that the response follows all (or at least above a certain threshold number) of the instructions in the NL based input”, [0004], see also [0060]); and validating the candidate output in accordance with one or more validation parameters associated with the requested output (“the most promising (e.g., the highest quality) candidate response is chosen to be refined”, [0005]).
However, Kotikalapudi, individually or in combination with the prior art, does not disclose ranking, with at least one of the one or more generative AI models, the plurality of generation plans to select a candidate generation plan from the plurality of generation plans; generating, with at least one of the one or more generative AI models and in accordance with the first plurality of instructions comprised in the candidate generation plan, a plurality of outputs.
Chen et al. (US 2024/0420239) discloses a method for plan generation that can include: determining a set of plans, selecting a plan, optionally determining a set of tasks for the selected plan, optionally performing the set of tasks, and optionally determining a set of explanations for a primary model.
Brenner et al. (US 2025/0165463) discloses obtaining training data for a retrieval-augmented generation (RAG) architecture having retriever and generative models. The retriever model is configured to identify information chunks relevant to input queries, and the generative model is configured to generate outputs based on the information chunks and the input queries. The method also includes generating a prompt for the generative model and generating multiple sets of queries for the retriever model. Each query in the multiple sets of queries is configured to cause the retriever model to select a set of information chunks associated with the prompt. The method further includes generating multiple responses to the prompt using the generative model and the sets of information chunks and determining rewards associated with the RAG architecture based on the responses.
Jiang et al. ("LLM-Blender: Ensembling Large Language Models with Pairwise Ranking and Generative Fusion." Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2023) presents LLM-BLENDER, an ensembling framework designed to attain consistently superior performance by leveraging the diverse strengths of multiple open-source large language models (LLMs). The framework consists of two modules, PAIRRANKER and GENFUSER, addressing the observation that the optimal LLMs for different examples can significantly vary. PAIRRANKER employs a specialized pairwise comparison method to distinguish subtle differences between candidate outputs.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMUEL G NEWAY whose telephone number is (571)270-1058. The examiner can normally be reached Monday-Friday 9:00am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn, can be reached at 571-272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAMUEL G NEWAY/ Primary Examiner, Art Unit 2657