DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The instant application, Application No. 19/081,936, has a total of 20 claims pending: 3 independent claims and 17 dependent claims, all of which are ready for examination by the examiner.
Oath/Declaration
The applicant’s oath/declaration has been reviewed by the examiner and is found to conform to the requirements prescribed in 37 C.F.R. 1.63.
Drawings
The applicant’s drawings submitted are acceptable for examination purposes.
Specification
The applicant’s specification submitted is acceptable for examination purposes.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 as being drawn to nonstatutory subject matter.
The independent claims are rejected under 35 U.S.C. 101 because the claimed invention is drawn to an abstract idea without significantly more. Independent claims 1, 8 and 15 are drawn to receiving a user-generated query, obtaining multiple responses to the query, and sending a chosen one of the responses to the user. The query and answers could have been accomplished through a human mental process and conversation, and the addition of a prompt for the computer and the gathering of resources are held to be abstract; thus the limitations do not describe significantly more, and the claims as a whole do not provide integration into a practical application.
The claims fall within the “Mental Processes” grouping of abstract ideas. Specifically, the limitations discussed above, as claimed, describe a process that covers performance of the limitations in the mind, or with pen and paper, but for the recitation of generic computer components (e.g., computer, storage device), because a user can mentally, or with pen and paper, observe, evaluate, and make judgments to perform the claimed limitations. For example, a person can read documents and make judgments to find sections of documents that contain a problem, and provide a report with any findings. A person can further link substantially identical concepts mentally or by taking notes on paper.
This judicial exception is not integrated into a practical application. In particular, the claim only recites additional elements (e.g., computer, storage device) that are recited at a high level of generality (e.g., as a generic processor performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using a generic computer component. See MPEP 2106.05(d)(II). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component is not significantly more than the judicial exception.
The dependent claims depend from a rejected parent claim and do not cure its deficiencies. Similar to the discussion above, each of the dependent claims is drawn to an abstract idea. The claims are drawn to subject matter that covers performance of the claimed limitations in the mind, or with pen and paper, but for the recitation of generic computer components as discussed above. The claims are not integrated into a practical application. The claims only recite additional elements that are recited at a high level of generality (e.g., as a generic processor performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Dependent claims 2-7, 9-14 and 16-20 recite response time, response quality, confidence values, speech input with audio presentation, native language, a sensor, imager analysis, and context and conversation state, all with respect to providing responses to a querying user. The dependent claims also recite context window size, tokenization of the query, and database access, which are generic computing elements that do not amount to significantly more or integrate the abstract idea into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the exception using a generic computer component. Therefore, the claims are not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Reddy et al. (US 2024/0045893 A1) in view of Park et al. (US 2024/0420491 A1).
For claim 1, Reddy et al. teaches a method, comprising: obtaining a user-generated prompt from a user [user entry of query in natural language, 0009: Reddy]; opening a first session in a first foundation model with a first context window and a second session in a second foundation model with a second context window [commands from queries sent to multiple querying components of the heterogeneous infrastructure, 0009: Reddy]; selecting a single response from the first response and the second response for presentation to the user [option for single response to user based on best contextual response, 0065: Reddy]; and updating the first context window and the second context window based on the single response [infrastructure responses converted to response format, 0009: Reddy], but does not teach providing a first query based on the user-generated prompt to the first foundation model and a second query based on the user-generated prompt to the second foundation model; receiving a first response from the first foundation model and a second response from the second foundation model.
Park et al. teaches providing a first query based on the user-generated prompt to the first foundation model and a second query based on the user-generated prompt to the second foundation model [multiple modalities assessed for user prompt response using foundation models, 0071: Park]; receiving a first response from the first foundation model and a second response from the second foundation model [receiving response from foundation models based on different modalities of information, 0070-0071: Park].
Reddy et al. (US 2024/0045893 A1) and Park et al. (US 2024/0420491 A1) are analogous art because they are from the same field of AI assisted user query.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the query responses from a prompt as described by Reddy et al. with the foundation models with filtering as taught by Park et al.
The motivation for doing so would be to “generate coherent and contextually relevant text” [0006: Park].
Therefore, it would have been obvious to combine Reddy et al. (US 2024/0045893 A1) with Park et al. (US 2024/0420491 A1) for optimal query response.
For claim 2, Reddy et al. and Park et al. teach:
The method of claim 1, where the single response is selected based on response time [selection based on response time, 0300: Park].
For claim 3, Reddy et al. and Park et al. teach:
The method of claim 1, where the single response is selected based on response quality [selection based on response quality, 0300: Park].
For claim 4, Reddy et al. and Park et al. teach:
The method of claim 3, where the response quality is inferred from softmax values and confidence values [selection from confidence and softmax values, 0299: Park].
For claim 5, Reddy et al. and Park et al. teach:
The method of claim 1, where the user-generated prompt is based on a speech input and the single response is presented via audio presentation [speech input, 0038; interface having audio elements to respond through speaker in audible form, 0173: Park].
For claim 6, Reddy et al. and Park et al. teach:
The method of claim 1, where the first context window has a different size than the second context window [design constraints vary based on many limitations for each input specializer, 0246: Park].
For claim 7, Reddy et al. and Park et al. teach:
The method of claim 6, where the first query is constructed to fit a first set of relevant information based on the user-generated prompt within the first context window and the second query is constructed to fit a second set of relevant information based on the user-generated prompt within the second context window [input limits based on specialized constraints for two different inputs, 0246: Park].
For claim 8, Reddy et al. teaches an apparatus, comprising: a processor; and a non-transitory computer-readable medium comprising instructions that when executed by the processor, cause the processor to: obtain a user-generated prompt [user entry of query in natural language, 0009: Reddy]; select at least one destination resource from a plurality of destination resources based on the user-generated prompt [commands from queries sent to multiple querying components of the heterogeneous infrastructure, 0009: Reddy], but does not teach generate at least one query for the at least one destination resource; and transmit the at least one query to the at least one destination resource.
Park et al. teaches generate at least one query for the at least one destination resource [multiple modalities assessed for user prompt response using foundation models, 0071: Park]; and transmit the at least one query to the at least one destination resource [receiving response from foundation models based on different modalities of information, 0070-0071: Park].
Reddy et al. (US 2024/0045893 A1) and Park et al. (US 2024/0420491 A1) are analogous art because they are from the same field of AI assisted user query.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the query responses from a prompt as described by Reddy et al. with the foundation models with filtering as taught by Park et al.
The motivation for doing so would be to “generate coherent and contextually relevant text” [0006: Park].
Therefore, it would have been obvious to combine Reddy et al. (US 2024/0045893 A1) with Park et al. (US 2024/0420491 A1) for optimal query response.
For claim 9, Reddy et al. and Park et al. teach:
The apparatus of claim 8, where the at least one destination resource is selected based on a softmax score obtained from native language processing of the user-generated prompt [selection from softmax values from language, 0284: Park].
For claim 10, Reddy et al. and Park et al. teach:
The apparatus of claim 8, where the at least one destination resource comprises a first destination resource characterized by a first query constraint and a second destination resource characterized by a second query constraint [constraints based on LLM input specializer limits for two different inputs, 0246: Park].
For claim 11, Reddy et al. and Park et al. teach:
The apparatus of claim 10, where the at least one query comprises a first query based on the first query constraint and a second query based on the second query constraint [input limits based on specialized constraints for two different inputs, 0246: Park].
For claim 12, Reddy et al. and Park et al. teach:
The apparatus of claim 11, where the first query constraint is a first context window size and the second query constraint comprises a second context window size [design constraints for input can vary based on many limitations for each input specializer, 0246: Park].
For claim 13, Reddy et al. and Park et al. teach:
The apparatus of claim 11, where the first query constraint is a first tokenization set and the second query constraint comprises a second tokenization set [various tokenization based on each LLM used for query response, 0040: Park].
For claim 14, Reddy et al. and Park et al. teach:
The apparatus of claim 8, further comprising a network interface and where the at least one destination resource comprises a local private user-specific database and a public database accessible via the network interface [public and private database environments accessible through the network, 0025: Reddy].
For claim 15, Reddy et al. teaches a method, comprising: responsive to receiving a user-generated prompt [user entry of query in natural language, 0009: Reddy], obtaining user context [NLP engine to acquire context, 0041: Reddy]; and presenting the response to a user [option for single response to user based on best contextual response, 0065: Reddy], but does not teach selecting a foundation model from a plurality of foundation models based on a suitability score calculated from the user-generated prompt and the user context; generating a query for the foundation model based on the user-generated prompt and the user context; transmitting the query to the foundation model and receiving a response.
Park et al. teaches selecting a foundation model from a plurality of foundation models based on a suitability score calculated from the user-generated prompt and the user context [multiple modalities assessed for user prompt response using foundation models, 0071; selection of resource based on scoring, 0287: Park]; generating a query for the foundation model based on the user-generated prompt and the user context [query provided in terms for the foundation model, 0244; providing user-specific data LLM for context, 0053: Park]; transmitting the query to the foundation model and receiving a response [receiving response from foundation models based on different modalities of information, 0070-0071: Park].
Reddy et al. (US 2024/0045893 A1) and Park et al. (US 2024/0420491 A1) are analogous art because they are from the same field of AI assisted user query.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the query responses from a prompt as described by Reddy et al. with the foundation models with filtering as taught by Park et al.
The motivation for doing so would be to “generate coherent and contextually relevant text” [0006: Park].
Therefore, it would have been obvious to combine Reddy et al. (US 2024/0045893 A1) with Park et al. (US 2024/0420491 A1) for optimal query response.
For claim 16, Reddy et al. and Park et al. teach:
The method of claim 15, where the user context is obtained by capturing instantaneous user context via a sensor [instantaneous data with various sensors like for eye gaze, 0060: Park].
For claim 17, Reddy et al. and Park et al. teach:
The method of claim 16, where the instantaneous user context comprises labels identified from an image-to-text analysis of an image [conversion of image to text, 0066: Park].
For claim 18, Reddy et al. and Park et al. teach:
The method of claim 15, where the user context is obtained by retrieving persistent user context from a user-specific database [providing user-specific data LLM, 0053: Park].
For claim 19, Reddy et al. and Park et al. teach:
The method of claim 18, where the persistent user context comprises a conversation state and the method further comprises updating the conversation state based on the query and the response [conversational context for response, 0052: Park].
For claim 20, Reddy et al. and Park et al. teach:
The method of claim 15, where the user-generated prompt is based on a speech input and the response is presented via audio presentation [interface having audio elements to input speech and respond through speaker in audible form, 0173: Park].
Conclusion
The Examiner requests, in response to this Office action, that support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line no(s) in the specification and/or drawing figure(s). This will assist the Examiner in prosecuting the application.
When responding to this Office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AJITH M JACOB whose telephone number is (571)270-1763. The examiner can normally be reached on Monday-Friday: Flexible Hours.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Apu Mofiz can be reached on 571-272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AJITH JACOB/Primary Examiner, Art Unit 2161
11/15/2025