DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In responding to this Office action, the Examiner respectfully requests that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or the drawing figure(s). This will assist the Examiner in prosecuting this application.
Claim Objections
Claims 1-10 are objected to because of the following informalities:
Claim 1 recites “provide the natural language response as output,” which should read -- provide the natural language response as an output -- because “output” as used herein is not plural. Claims 2-10 are objected to due to their dependency on claim 1.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites an application of “large language models (LLMs)” and a “logical reasoning engine,” wherein the “LLMs” are broadly claimed to perform “processing a representation” of input “statements” — which would be interpreted as text or a sentence in English from a document, or an utterance from a user — to generate “logical statements expressed in a logic specification language” (LSL); to perform “processing” of “a representation” of the “logical assessments” produced by “a logic reasoning engine corresponding to the LSL” as applied to the generated “logical statements”; and to generate and output “a natural language response,” wherein the “input statements” comprise “a representation of a query” and the “response” comprises at least one of “an answer to a question about” or “an explanation of” “a portion of the query,” etc.
The limitations of “processing” the “input statements,” “generating” the “representation” of the “logical assessments” of the generated “logical statements,” “processing” the “representation of … logical assessment,” and “providing” “the natural language output” for the “response,” as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind with the aid of pencil and paper, because nothing in the claim elements precludes the steps from practically being performed in the mind with the aid of pencil and paper. For example, the human mind can apply logical analysis to heard or read sentences to translate the inputs into logical statements such as “if-else” constructs or morphological forms such as “happy”/“unhappy” under certain conditions, i.e., the claimed “processing” of “statements” to generate “logical statements” expressed “in a logic specification language.” Further, the human mind can evaluate or verify the generated “logical statements” for soundness and reasonableness, i.e., the claimed “processing” “using a logical reasoning engine.” Further, the human mind can explain what has been done and “provide” an output in natural language by speaking or by writing on paper, i.e., the claimed “generate” “a natural language response” and “provide the natural language response as output.” If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, such as the “one or more processors” recited in the claim preamble, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim merely recites “provide the natural language response as output,” and this element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the use of generic computer components such as the claimed “one or more processors comprising processing circuitry” cannot provide an inventive concept. The claim is not patent eligible. See the sample rejection of claim 3 of Example 37 under the 2019 Revised Patent Subject Matter Eligibility Guidance (“2019 PEG”).
Claim 11 recites even broader subject matter, with no “large language models” but merely “a system comprising one or more processors” to perform the “generating” based on processing, by “a logical reasoning engine,” a “translated representation” of one or more “statements.” As discussed with respect to claim 1 above, a human can perform such an assessment of a “translated statement” based on knowledge, experience, and rules, and the further recitation of generic computer components such as “circuitry” and “processors” does not preclude the steps from practically being performed in the mind, as discussed with respect to claim 1 above. The additional elements “circuitry” and “processors” are recited at a high level of generality, i.e., as a generic processor and circuitry performing the abstract idea, such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Claim 19 recites method steps corresponding to the functions recited in claim 1 and has been analyzed and is rejected for the reasons set forth with respect to claims 1 and 11 above.
Claim 2 depends on claim 1 and further recites “inserting” the “input statements into” one or more “template prompts” that instruct the “LLMs” to “convert” the “input statements” into “the logic specification language,” which further limits the recited “generating,” with no additional technology recited; under its BRI, the limitation can be practiced in the mind, e.g., a human can remind or prompt himself or herself to perform a language conversion. This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception; thus, the claim does not rectify the § 101 deficiency of parent claim 1 and is not patent eligible.
Claim 3 depends on claim 1 and further recites “a first large language model” of “the LLMs” tuned to a specific “logic specification language,” which further limits the “generating” of the “logic statements expressed in the logic specification language” using the “first LLM,” with no additional technology recited; under its BRI, the limitation can be practiced in the mind, e.g., a human can use his or her language capability or experience as a “model” to perform the function recited in the parent claim, and that capability can be tuned or adapted to a “logic specification language.” This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception; thus, the claim does not rectify the § 101 deficiency of parent claim 1 and is not patent eligible.
Claim 4 depends on claim 1 and further recites one or more “solvers” implemented by “the logic reasoning engine,” which merely further characterizes the claimed “processing” itself, with no additional technology recited; under its BRI, the limitation can be practiced in the mind, e.g., a human can use logical reasoning to solve problems. This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception; thus, the claim does not rectify the § 101 deficiency of parent claim 1 and is not patent eligible.
Claim 5 depends on claim 1 and further recites possible data content of the “logical assessment,” e.g., “a proof or refutation” of “the statements,” “a deduced fact,” or a “consistency check,” which merely specifies data content itself, with no additional technology recited; under its BRI, the limitation can be practiced in the mind, e.g., as data written on paper. This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception; thus, the claim does not rectify the § 101 deficiency of parent claim 1 and is not patent eligible.
Claim 6 depends on claim 1 and further recites “prompts” generated “using retrieval augmented generation and augmented with a representation of content explaining the logic specification language,” with no technology recited as to how the “retrieval,” “augmentation,” and “explanation” are performed or what they are; under their BRIs, these limitations can be practiced in the mind, for example, a human can “retrieve,” “augment,” and “explain” the “statements” based on rules, knowledge, and experience. These additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception; thus, the claim does not rectify the § 101 deficiency of parent claim 1 and is not patent eligible.
Claim 7 depends on claim 1 and further recites what is included in the recited “natural language response,” such as “an explanation” of the “logical assessments” or “an answer to a question” based on the “logical assessments,” with no additional technology recited; under its BRI, the limitation can be practiced in the mind, because a human can speak or write in natural language, reason logically, analyze, conclude, and present. This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception; thus, the claim does not rectify the § 101 deficiency of parent claim 1 and is not patent eligible.
Claim 8 depends on claim 1 and further recites what the “input statements” represent and what the “response” comprises, with no additional technology recited; under its BRI, the limitation can be practiced in the mind, e.g., a human can perceive video images and sounds and, based on them, organize a response and present it in natural language. This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception; thus, the claim does not rectify the § 101 deficiency of parent claim 1 and is not patent eligible.
Claim 9 depends on claim 1 and further recites “an explanation” of “at least a portion of the time-series data” in the “natural language response,” with no additional technology recited; under its BRI, the limitation can be practiced in the mind, e.g., a human can select a portion of what he or she heard or read and provide a natural language explanation of it. This additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception; thus, the claim does not rectify the § 101 deficiency of parent claim 1 and is not patent eligible.
Claim 10 depends on claim 1 and further recites generic computer components, reciting the “system” comprising “the circuitry” in a Markush-style listing, e.g., the system “for an autonomous or semi-autonomous machine” or …, which does not take the claimed abstract idea beyond the use of generic computer components and cannot provide an inventive concept, as discussed with respect to claim 1 above. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception; thus, the claim does not rectify the § 101 deficiency of parent claim 1 and is not patent eligible.
Claim 12 has been analyzed and rejected according to claims 11, 2 above.
Claim 13 has been analyzed and rejected according to claims 11, 3 above.
Claim 14 has been analyzed and rejected according to claim 11 above.
Claim 15 has been analyzed and rejected according to claims 11, 5 above.
Claim 16 has been analyzed and rejected according to claims 11, 6 above.
Claim 17 has been analyzed and rejected according to claims 11, 7 above.
Claim 18 has been analyzed and rejected according to claims 11, 10 above.
Claim 20 has been analyzed and rejected according to claims 19, 10 above.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 11, 14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Allen et al. (US 20160196497 A1, hereinafter Allen).
Claim 11: Allen teaches a system (title and abstract, ln 1-14, one or more computing devices 104 in fig. 1, para 45, and the dataflows in figs. 3-4) comprising one or more processors (one or more processors in element 104, para 45) to generate, based at least on processing a translated representation of one or more natural language statements (question features 420 extracted from input question 410 and processed in a lookup operation against crowdsourcing reasoning data structure 430, para 88, or candidate answer 440 evaluated against supporting evidence by crowdsourcing reasoning engine 480 to determine and evaluate a final answer 470 in fig. 4, para 89-90, where the statements are in a natural language form such as text, para 38, e.g., generated by a content creator, decomposed into question-related topics via element 320 and queries applied to the corpora of data 345 via element 330, para 68-69, in fig. 3) using a logical reasoning engine (crowdsourcing reasoning engine 390 in fig. 3 and 480 in fig. 4, wherein at least the crowdsourcing reasoning data structure 430 of the crowdsourcing reasoning engine is accessed, para 88-89, and updated, para 69, 89), a representation of one or more logical assessments (evaluating, via crowdsourcing reasoning engine 480, with supporting evidence 450 and crowdsourcing reasoning data structure 430 in fig. 4, e.g., outputting “Austin is a nice place to live because it is clean, has good teachers, and has good schools”, para 21, 54, together with a confidence measure for the answer result, para 3) expressed in a logic specification language of the one or more natural language statements (in the example above, the logic specification language is represented by the natural language sentence in English, i.e., the same language used to represent the question “Is Austin a nice place to live?”, para 20-21).
Claim 14 has been analyzed and rejected according to claim 11 above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-10, 12-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Allen (above) in view of Nouri et al. (US 20240038226 A1, hereinafter Nouri).
Claim 19 has been analyzed and rejected according to claim 11 above, and Allen further teaches
obtaining, based at least on processing a translated representation of one or more statements (from input question 310 in a natural language form such as text, para 38, e.g., generated by a content creator, decomposed into question-related topics via element 320 and queries applied to the corpora of data 345 via element 330, para 68, in fig. 3, or question-related features/characteristics 520 in fig. 5, and then hypotheses, as the translated representation of the one or more natural language statements, generated by element 340 in fig. 3, para 69) in a particular language (e.g., a logic language having a grammar, etc., represented in natural language and used to represent the question “Is Austin a nice place to live?”, para 20, and the answer with reasoning “Austin is a nice place to live because it is clean, has good teachers, and has good schools”, para 21) using a logical reasoning engine (crowdsourcing reasoning engine 390 in fig. 3, at least accessing the reasoning criteria extraction logic 394, the reasoning rules 396, etc., of the crowdsourcing reasoning engine 390, para 69), a representation of one or more logical assessments of the one or more statements (including reasoning criteria specifying the reasoning behind the final answer, provided with the final answer, para 101); and
obtaining, based at least on processing the representation of the one or more logical assessments using language processing (performing natural language processing on the evidential statements and identifying phrases specifying reasoning criteria by generating a logic parse tree or knowledge graph, para 80), a response in the particular language or at least one other language for the one or more statements (presenting why the answer is considered to be a correct answer, e.g., “Austin is a nice place to live because it is clean, has good teachers, and has good schools” in a natural language form, para 21, 54). Allen does not explicitly teach that the language processing uses one or more large language models.
Nouri teaches, in an analogous field of endeavor, a method (title and abstract, ln 1-17, and the method steps in fig. 2) wherein
obtaining, based at least on processing a translated representation of one or more statements (received or revised answers from the user to questions and the generated question-answer pairs at step 214, para 41, where the question-answer pairs are initially generated by the language model via elements 202-206 in fig. 2) in a particular language (the received or revised answers in the generated question-answer pairs at step 214 are inputted to the language model for generating task-specific output at step 216 in fig. 2, and thus the question-answer pairs are inherently in a language that is recognized by the language model, para 36) using a logical reasoning engine (converting received keywords to a task-related question-answer list via a logic learning machine, para 21, 36-37, and presenting them to the user in steps 208-212 in fig. 2), a representation of one or more logical assessments of the one or more statements (prompting or not prompting the user to answer a question by evaluating whether the answers are validated based on answer relevance, via comparison against an expected type of information, para 40, or evaluating the generated task-specific question at step 312 in fig. 3A and the model-generation check at step 346 in fig. 3B); and
obtaining, based at least on processing the representation of the one or more logical assessments using one or more large language models (the language model 130, which is a large language model having a question-answer pair generator 132 and a task-specific output generator 134, para 24, and which may include different types of generative machine learning models, para 24), a response in the particular language or at least one other language for the one or more statements (repeating the prompting to the user until a relevant answer is entered by the user, para 40, inherently in a language the user can understand), for the benefit of improving the efficacy of using the machine learning model to execute tasks (for a variety of tasks, para 3, by improving the accuracy of generative tasks through matching one or more keywords, para 35, and by enhancing the text representation based on given answers, para 62).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the processing of the representation of the one or more logical assessments using the one or more large language models, as taught by Nouri, to the language processing of the representation of the one or more logical assessments in the method taught by Allen, for the benefits discussed above.
Claim 1 has been analyzed and rejected according to claim 19 above, and the combination of Allen and Nouri further teaches one or more processors (Allen, one or more processors in element 104, para 45, and Nouri, microprocessors contained in a single chip, para 65) comprising circuitry (Allen, electronic circuitry, programmable logic circuitry, etc., para 30, and Nouri, practiced in electrical circuitry containing chips, para 65) to:
generate, based at least on processing a representation of one or more input statements (Allen, question features 420 representing the question and subsequent questions inputted by a user, para 15, and step 420 in fig. 4, and Nouri, keywords of the task-specific request provided by the user, para 34-35) using one or more large language models (LLMs) (Allen, performing natural language processing on evidential statements and identifying phrases specifying reasoning criteria by generating a logic parse tree or knowledge graph, para 80, and performing a lookup operation by accessing the crowdsourcing reasoning data structure 430, and Nouri, the large language model (LLM) 130 in fig. 1, para 24), one or more logical statements expressed in a logic specification language (Allen, the extracted features or characteristics compared with an entry of the answer reasoning rules/logic for a match in fig. 4, para 18, i.e., a language that must be understood to access the crowdsourcing reasoning data structure 430 in fig. 4, e.g., via the synonyms and other equivalent terms/phrases data structures for similar question characteristics or features, para 88, and Nouri, the relevant question-answer pairs generated as a list at steps 204-206 in fig. 2 by processing the parameters or keywords inputted by the user, inherently understood by the language model in fig. 2) and representing the one or more input statements (Allen, the extracted question features or characteristics 420 in fig. 4, para 88, inherently representing the user’s question input, and Nouri, generating the list of question-answer pairs at 206 based on the received task request and contextual information derived from the task-related request using the machine learning model, para 36-37, i.e., representing the task-specific request above);
generate, using a logical reasoning engine corresponding to the logic specification language, a representation of one or more logical assessments of the one or more logical statements (Allen, evaluating, via crowdsourcing reasoning engine 480, with supporting evidence 450 and crowdsourcing reasoning data structure 430 in fig. 4, e.g., outputting “Austin is a nice place to live because it is clean, has good teachers, and has good schools”, para 21, 54, together with a confidence measure for the answer result, para 3, and the discussion of claim 11 above, and Nouri, prompting or not prompting the user to answer a question by evaluating whether the answers are validated based on answer relevance, via comparison against an expected type of information, para 40, or evaluating the generated task-specific question at step 312 in fig. 3A and the model-generation check at step 346 in fig. 3B, and the discussion of claim 19 above);
generate, based at least on processing the representation of the one or more logical assessments using the one or more LLMs, a natural language response for the one or more input statements (Allen, presenting why the answer is considered to be a correct answer, e.g., “Austin is a nice place to live because it is clean, has good teachers, and has good schools” in a natural language form, para 21, 54, and the discussion of claim 19 above, and Nouri, repeating the prompting to the user until a relevant answer is entered by the user, para 40, inherently in a language the user can understand); and
provide the natural language response as output (Allen, outputting the answer with the reasons it is the correct answer, para 21, 54, and Nouri, prompting in natural language, “Prompt to user: …” and “Please answer the following questions: …” in fig. 4A), wherein the one or more input statements comprise a representation of a query (Allen, the question 410 inputted for answering in fig. 4, para 88, and used for processing, para 90, and Nouri, receiving the task-specific request at 202 in fig. 2, para 34-35), and the natural language response comprises at least one of: an explanation of, or an answer to a question about, at least a portion of the query (Allen, the answer outputted with criteria information, para 101, or with a confidence measure, para 76, and Nouri, the returned task-specific output transmitted or displayed in text format at step 219, and displayed or transmitted at step 220, para 44).
Claim 2: the combination of Allen and Nouri further teaches, according to claim 1 above, wherein the processing circuitry is further to generate the representation of the one or more input statements (Allen, the question characteristics/features extracted from the input question at 420 in fig. 4 and step 520 in fig. 5, and Nouri, the task request with the keywords, from which the list of question-answer pairs is generated at steps 204-206 in fig. 2) based at least on inserting the one or more input statements into one or more template prompts (Allen, the extracted question features compared for a match against question features obtained from the crowdsourcing reasoning data structure, para 88, and Nouri, the requested task with the contextual information passed from the task processor to the language model in fig. 2, the question plus context serving as template prompts) that instruct the one or more LLMs to convert the one or more input statements into the logic specification language (Nouri, the language model is instructed and caused to perform the conversion, generating the list of relevant question-answer pairs from the task request with its parameters via steps 204-206 in fig. 2).
Claim 4: the combination of Allen and Nouri further teaches, according to claim 1 above, wherein the logical reasoning engine implements one or more solvers (Allen, reasoning term/phrase identification logic 392 for criteria extraction, para 78, reasoning criteria extraction logic 394 for identifying the reasoning criteria associated with the final answer, including natural language processing on the evidential statements, para 80, and reasoning rule generation logic 396 for generating reasoning rules based on the selected criteria and weights, para 84, i.e., one or more solvers).
Claim 5: the combination of Allen and Nouri further teaches, according to claim 1 above, wherein the one or more logical assessments of the one or more logical statements represent at least one of: a proof or refutation of at least one of the one or more input statements, a deduced fact based on at least one of the one or more input statements, or a consistency check of at least one of the one or more input statements (Note: Markush grouping applied, see MPEP 2117; Allen, the evidential statements supporting the final answer are retrieved from the supporting evidence at element 450, para 89-90).
Claim 6: the combination of Allen and Nouri further teaches, according to claim 1 above, wherein the processing circuitry is further to generate the representation of the one or more logical assessments (discussed in claim 1 above) using one or more prompts (Nouri, the prompts to the user in figs. 4A-4B) generated using retrieval augmented generation (Nouri, the generated questions augmented with the generated answers in figs. 4A-4B) and augmented with a representation of content explaining the logic specification language (Nouri, the prompts augmented with a person by “Who …”, “What is the name of …”, a time by “When …”, a location by “What is the location of …”, etc., in figs. 4A-4B, with certain logic, e.g., “bride” must accompany “groom”, “place” must accompany “time”, etc., in figs. 4A-4B, para 57-60).
Claim 7: the combination of Allen and Nouri further teaches, according to claim 1 above, wherein the natural language response comprises an explanation of the one or more logical assessments, or an answer to a question associated with the one or more input statements based at least on the one or more logical assessments (Allen, the return is an answer to the question with reasoning, e.g., “a good place to live because it has good weather conditions, low crime, and good schools” in response to the question “Is Austin a nice place to live?” in one return for one individual, or “a bad place to live” because of bad traffic conditions for a different individual, and Nouri, the revised task-specific output in fig. 4C in response to the wedding-invite task question in fig. 4A).
Claim 8: the combination of Allen and Nouri further teaches, according to claim 1 above, wherein the one or more input statements represent a sequence of image data of a video (Nouri, a video as content to be generated, used to represent the input task request at step 204 in fig. 2, para 36, or the video recorded via element 630 as input to the system, para 73), wherein the natural language response comprises at least one of: an explanation of, or an answer to a question about, at least a portion of the video (Allen, outputting the answer to the question with reasoning, discussed in claim 1 above, and Nouri, outputting the revised task-specific output at step 226 in response to the task request inputted by the user at step 202 in fig. 2).
Claim 9: the combination of Allen and Nouri further teaches, according to claim 1 above, wherein the one or more input statements represent a sequence of time-series data (Allen, the question “Is Austin a nice place to live?”, para 21, followed by the further question “Is NYC nice?”, para 22, i.e., time-series data, and Nouri, the input sequence from the task request with parameters at step 202 to the intermediate answers to the questions at steps 208-214), wherein the natural language response comprises at least one of: an explanation of, or an answer to a question about, at least a portion of the time-series data (Allen and Nouri, discussed in claims 1-8 above).
Claim 10: the combination of Allen and Nouri further teaches, according to claim 1 above, wherein the processing circuitry is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system for performing remote operations; a system for performing real-time streaming; a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system for generating synthetic data; a system for generating synthetic data using AI; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (Note: the Markush rule is applied, see MPEP 2117, and Allen, the data processing system in fig. 2, e.g., including controllers 202, 204, etc., para 56, for automation of the laptop computer, para 64, a remote computer or server for remote operations, para 30, a telephone for real-time streaming, para 64, with an AI application, para 38, and Nouri, a smartphone as an autonomous or semi-autonomous machine, para 69, and a perception system for an autonomous or semi-autonomous machine, e.g., temperature control for the model, maximizing output size for the model, etc., para 56).
Claim 12 has been analyzed and rejected according to claims 11, 2 above.
Claim 15 has been analyzed and rejected according to claims 11, 5 above.
Claim 16 has been analyzed and rejected according to claims 11, 6 above.
Claim 17 has been analyzed and rejected according to claims 11, 7 above.
Claim 18 has been analyzed and rejected according to claims 11, 10 above.
Claim 20 has been analyzed and rejected according to claims 19, 10 above.
Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Allen (above) in view of Nouri (above) and further in view of Yang et al. (CN 117076607 A, hereinafter Yang; its translation and original are attached herein, and the translation is applied in the paragraph citations under Yang below).
Claim 3: the combination of Allen and Nouri further teaches, according to claim 1 above, wherein the processing circuitry is further to generate the one or more logical statements using a first large language model of the one or more LLMs (Allen, language processing is performed at 320, para 66, and language processing is also performed within the crowdsourcing reason engine 390 for the criteria extraction logic, para 80, and Nouri, language processing to generate a question-answer list that is logically related to the task request with parameters via steps 204-206 through the language model in fig. 2, where the language model is a large language model, para 24), except explicitly teaching wherein the first large language model is tuned to the logic specification language.
Yang teaches an analogous field of endeavor by disclosing a method (title and abstract, ln 1-13, and method steps in figs. 1-2) wherein the first large language model (a large language model, abstract) is tuned to the logic specification language (when the large language model is not continuously trained, S103, the large language model is provided with an example of conversion from natural language text to a logical expression, together with the natural language text, in order to obtain the logic expression from the large language model, the last paragraph of page 7 and the 1st paragraph of page 8; i.e., the large language model is tuned to accept a conversion from natural language to a language representing the logic expression and to output the logic expression based on the input natural language text corresponding to the logic expression) for the benefits of improving the efficiency of the large language model (by saving computation power and adapting to diversified requirements, the last paragraph of page 7, with more accurate output of the logic expression corresponding to the natural language text from the large language model in an automatic, efficient, and convenient manner, para 3 of page 8).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the teaching wherein the first large language model is tuned to the logic specification language, as taught by Yang, to the first large language model implemented by the processing circuitry, as taught by the combination of Allen and Nouri, for the benefits discussed above.
Claim 13 has been analyzed and rejected according to claims 11, 3 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LESHUI ZHANG whose telephone number is (571) 270-5589. The examiner can normally be reached Monday-Friday, 6:30am-4:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivian Chin can be reached at 571-272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LESHUI ZHANG/
Primary Examiner,
Art Unit 2695