Prosecution Insights
Last updated: April 19, 2026
Application No. 18/745,462

NETWORK INFRASTRUCTURE FOR USER-SPECIFIC GENERATIVE INTELLIGENCE

Non-Final Office Action: §101, §103
Filed: Jun 17, 2024
Examiner: MASTERS, KRISTEN MICHELLE
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Softeye Inc.
OA Round: 1 (Non-Final)
Grant Probability: 62% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
Grant Probability with Interview: 87%

Examiner Intelligence

Career Allow Rate: 62% (grants 62% of resolved cases; 25 granted / 40 resolved; +0.5% vs TC avg)
Interview Lift: +24.7% (strong), comparing allowance outcomes for resolved cases with an interview vs. without
Typical Timeline: 3y 2m avg prosecution; 36 applications currently pending
Career History: 76 total applications across all art units

Statute-Specific Performance

§101: 35.2% (-4.8% vs TC avg)
§103: 46.9% (+6.9% vs TC avg)
§102: 8.0% (-32.0% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)

Based on career data from 40 resolved cases; comparisons are against the estimated Tech Center average.

Office Action

Grounds of rejection: §101, §103
Detailed Action

This communication is in response to the Application filed on 6/17/2024. Claims 1-20 are pending and have been examined. Claims 1-20 are rejected. Claims 1, 10, and 16 are independent and are method, apparatus, and apparatus claims, respectively. Apparent priority: 6/16/2023.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The information disclosure statements (IDS) submitted on 9/22/2024 and 11/11/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

The independent Claims are directed to statutory categories: Claim 1 is a method claim and is directed to the process category of patentable subject matter. Claims 10 and 16 are apparatus claims and are directed to the machine or manufacture category of patentable subject matter.

Regarding Independent Claim 1, Claim 1 recites: "1. A method for emulating session persistence, comprising: obtaining a prompt from a user; obtaining a user context based on the user; [This relates to a human obtaining context through observation in the human mind.] opening a current session with a first foundation model, where the current session is different than a previous session; [This relates to a human opening a session.] generating a personalization state based on the user context and a record of the previous session; [This relates to a human generating a personalization state using pen and paper.] initializing the current session with the personalization state; [This relates to a human initializing a session using pen and paper.] and transmitting the prompt to the first foundation model within the current session. [This relates to a human transmitting a prompt using pen and paper.]"

The Dependent Claims do not include additional limitations that could integrate the abstract idea into a practical application or cause the Claim as a whole to amount to significantly more than the underlying abstract idea.

Regarding Independent Claim 10, Claim 10 recites: "An apparatus, comprising: a processor; and a non-transitory computer-readable medium comprising instructions that when executed by the processor, cause the processor to: retrieve a conversation state; [This relates to a human retrieving a conversation state in the human mind.] create a first session state with a first foundation model, where the first session state is initialized based on the conversation state; [This relates to a human creating a session state using pen and paper.] and update the conversation state based on the first session state. [This relates to a human updating a conversation state using pen and paper.]"

The Dependent Claims do not include additional limitations that could integrate the abstract idea into a practical application or cause the Claim as a whole to amount to significantly more than the underlying abstract idea.
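For orientation, the following is a minimal, hypothetical sketch of the flow recited in Claims 1 and 10 above: a fresh, stateless foundation-model session is initialized with a personalization state distilled from the user context and the record of a previous session. All names, the message format, and the model-call interface are illustrative assumptions; they are not taken from the application or the cited references.

```python
# Hypothetical sketch of the Claim 1 / Claim 10 flow. Names and interfaces
# are assumptions for illustration only.
from dataclasses import dataclass, field


@dataclass
class PersonalizationState:
    """Seed material distilled from user context and a prior session record."""
    user_context: dict
    prior_transcript: list = field(default_factory=list)

    def as_system_prompt(self) -> str:
        # One way to "initialize" a stateless session: fold the state into
        # an opening instruction for the model.
        recent = " | ".join(self.prior_transcript[-5:])
        return f"User context: {self.user_context}. Recent history: {recent}"


class Session:
    """A current session with a foundation model, distinct from any prior one."""

    def __init__(self, model, state: PersonalizationState):
        self.model = model  # assumed: a callable taking a message list
        self.messages = [{"role": "system", "content": state.as_system_prompt()}]

    def send(self, prompt: str) -> str:
        # Transmit the prompt to the foundation model within this session.
        self.messages.append({"role": "user", "content": prompt})
        reply = self.model(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply


def handle_prompt(prompt, user_context, previous_record, model):
    state = PersonalizationState(user_context, previous_record)  # generate state
    session = Session(model, state)                              # open + initialize
    return session.send(prompt)                                  # transmit prompt
```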
Regarding Independent Claim 16, Claim 16 recites: "An apparatus, comprising: a processor; and a non-transitory computer-readable medium comprising instructions that when executed by the processor, cause the processor to: open a first session state with a first foundation model; [This relates to a human opening a session state using pen and paper or in the human mind.] close the first session state with the first foundation model; [This relates to a human closing a session state using pen and paper or in the human mind.] update a conversation state based on the first session state; [This relates to a human updating a conversation state using pen and paper.] open a second session state with a second foundation model; [This relates to a human opening a session state using pen and paper or in the human mind.] and initialize the second session state based on the conversation state. [This relates to a human initializing a session using pen and paper.]"

The Dependent Claims do not include additional limitations that could integrate the abstract idea into a practical application or cause the Claim as a whole to amount to significantly more than the underlying abstract idea.

This judicial exception is not integrated into a practical application. In particular, Claims 10 and 16 recite the additional elements of a "processor" and a "storage medium." For example, in [0183-0184] of the as-filed specification, there is description of using a processor, storage media, and a general-purpose CPU. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of a processor, a storage medium, and a general-purpose CPU are noted as a generic computer. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Further, the additional limitations in the claims noted above are directed towards insignificant extra-solution activity. The claims are not patent eligible.

Dependent claim 2 recites, "2. The method of claim 1, where the user context comprises at least one of an instantaneous user context, an accumulated user context, or a user profile." [This relates to a human creating user context using observation and pen and paper.] No additional limitations present.

Dependent claim 3 recites, "3. The method of claim 1, where the previous session was previously opened and closed by the first foundation model." [This relates to a human opening and closing a session.] A model is an additional limitation.

Dependent claim 4 recites, "4. The method of claim 1, where the previous session was previously opened and remains concurrently running on a second foundation model different than the first foundation model." [This relates to a human keeping an open session.] A model is an additional limitation.

Dependent claim 5 recites, "5. The method of claim 1, where the first foundation model is a large language model." A model is an additional limitation.

Dependent claim 6 recites, "6. The method of claim 5, where the user context comprises captions generated from computer-vision analysis of an image, and the record of the previous session comprises a text transcript." [This relates to a human generating captions using pen and paper.] No additional limitations present.

Dependent claim 7 recites, "7. The method of claim 5, where the record of the previous session defines a user-specific token and the prompt references the user-specific token." [This relates to a human generating a token for reference using pen and paper.] No additional limitations present.

Dependent claim 8 recites, "8. The method of claim 7, where the user-specific token is mapped to a combination of embedding vectors trained from a generic library." [This relates to a human mapping vectors to a library using pen and paper.] No additional limitations present.

Dependent claim 9 recites, "9. The method of claim 8, where the combination of embedding vectors encodes a relationship of the user-specific token to the user that is inferred from the user context." [This relates to a human encoding a vector of a relationship of a token to the user from the context using pen and paper.] No additional limitations present.

Dependent claim 11 recites, "11. The apparatus of claim 10, where the instructions further cause the processor to: create a second session state with a second foundation model, where the second session state is initialized based on the conversation state; and where the conversation state is updated based on a selected portion from the first session state or the second session state." [This relates to a human creating a session state using pen and paper.] A model is an additional limitation.

Dependent claim 12 recites, "12. The apparatus of claim 11, further comprising a user interface and where the instructions further cause the processor to: obtain a user prompt from a user; [This relates to a human obtaining a prompt from a user using pen and paper or voice.] generate a first query for the first foundation model and a second query for the second foundation model based on the user prompt; [This relates to a human generating a prompt using pen and paper.] receive a first response from the first foundation model and a second response from the second foundation model; [This relates to a human receiving a response using observation in the human mind.] and select a single response from the first response and the second response to present to the user. [This relates to a human selecting a response using pen and paper.]" A model is an additional limitation.

Dependent claim 13 recites, "13. The apparatus of claim 12, where the conversation state is updated based on the single response." [This relates to a human updating a conversation state on a single response using pen and paper.] No additional limitations present.

Dependent claim 14 recites, "14. The apparatus of claim 12, where at least one of the first session state and the second session state is closed based on the single response." [This relates to a human closing a session state on a single response using pen and paper.] No additional limitations present.

Dependent claim 15 recites, "15. The apparatus of claim 12, where both the first session state and the second session state are closed based on the single response." [This relates to a human closing a session state on a single response using pen and paper.] No additional limitations present.
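Before turning to the remaining dependent claims, a minimal sketch may help visualize the handoff recited in independent Claim 16 above: a session on one foundation model is closed, folded into a shared conversation state, and a session on a second model is seeded from that state. All names are illustrative assumptions rather than the application's actual design.

```python
# Hypothetical sketch of the Claim 16 handoff between two foundation models.
class ConversationState:
    """Model-agnostic record that outlives any single session."""

    def __init__(self):
        self.turns = []  # (speaker, text) pairs

    def update_from(self, session):
        self.turns.extend(session.transcript)


class ModelSession:
    def __init__(self, model, seed_turns=()):
        self.model = model
        self.transcript = list(seed_turns)  # initialize from conversation state
        self.is_open = True

    def close(self):
        self.is_open = False


def hand_off(state: ConversationState, first_model, second_model):
    first = ModelSession(first_model)     # open first session state
    # ... exchange turns with first_model, appending to first.transcript ...
    first.close()                         # close first session state
    state.update_from(first)              # update conversation state
    # Open second session state, initialized based on the conversation state.
    second = ModelSession(second_model, seed_turns=state.turns)
    return second
```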
Dependent claim 17 recites, "17. The apparatus of claim 16, where the instructions further cause the processor to: generate a first query for the first foundation model based on first user context; and present a first response to the first query." [This relates to a human generating a query using pen and paper.] A model is an additional limitation.

Dependent claim 18 recites, "18. The apparatus of claim 17, where the instructions further cause the processor to: update the conversation state based on the first response and the first query; [This relates to a human updating a state based on a response and first query using pen and paper.] generate a second query for the second foundation model based on second user context; and present a second response to the second query which references information from the first query." [This relates to a human generating a query based on a response and first query using pen and paper.] A model is an additional limitation.

Dependent claim 19 recites, "19. The apparatus of claim 17, further comprising a sensor and where the first query comprises instantaneous user context based on data captured by the sensor." [This relates to a human using sensory data to collect instantaneous context in the human mind.] No additional limitations present.

Dependent claim 20 recites, "20. The apparatus of claim 17, further comprising a user-specific database and where the first query comprises accumulated user context retrieved from the user-specific database." [This relates to a human retrieving context from a database using visual data.] No additional limitations present.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over Agarwal (U.S. Patent Application Publication No. US 2018/0067991 A1) in view of ANANTHANARAYANAN (U.S. Patent Application Publication No. US 2024/0419698 A1).

Regarding Claim 1, Agarwal teaches "1. A method for emulating session persistence, comprising: obtaining a prompt from a user; obtaining a user context based on the user; generating a personalization state based on the user context and a record of the previous session; initializing the current session with the personalization state;" (see Agarwal [0028]: "FIG. 3 illustrates the system architecture of a system 300 with the general stages involved in personalizing a digital assistant according to at least one embodiment. In FIG. 3, components having reference numbers 310, 320, 350 and 356 represent runtime components while the remaining components of FIG. 3 represent offline components. As part of the runtime flow, the user query of block 310 is received and then sent to the answer/result candidate generator of block 320. As part of the offline process the past conversational queries and action logs are depicted in cloud 330 and semantic parsing may be performed via an RDF adapter as shown in block 332. Thus, the user's past conversations along with the digital assistant's actions and the user's interactions to the digital assistant are used to build the knowledge base 340.")

Agarwal does not specifically teach "opening a current session with a first foundation model, where the current session is different than a previous session." However, ANANTHANARAYANAN does teach this limitation (see ANANTHANARAYANAN [0016]: "In some embodiments, the foundation model simultaneously reviews context profiles and selects a best response from the simultaneously reviewed responses. For example, the foundation model may identify multiple context profiles and simultaneously prepare responses based on these context profiles. The foundation model may prepare relevancy scores from the responses and select the context profile having a higher (or highest) relevancy score. In some examples, the foundation model may prepare the context profiles to be analyzed within a latency budget.")

ANANTHANARAYANAN also teaches "transmitting the prompt to the first foundation model within the current session." (see ANANTHANARAYANAN [0034]: "The context extraction system 110 may determine a context profile from the query. For example, the context extraction system 110 may generate a context profile based on content from the domain database 115 and text from the query. In some examples, the context extraction system 110 may select a context profile from a profile database of context profiles. The context extraction system 110 may generate a prompt for the foundation model 102 using the context information from the context profile and the query. For example, the context extraction system 110 may generate a text concatenation of the query and the context information from the context profile to generate the prompt. The foundation model 102 may prepare a response to the query based on the inputted prompt and send the response back to the user device 108. Additional information in connection with steps of the above-process will be discussed in further detail below."; see also ANANTHANARAYANAN [0036]: "In some examples, the context extraction system 110 may simultaneously analyze multiple context profiles in parallel (e.g., any number of context profiles). The context extraction system 110 may prepare relevancy scores for each of the context profiles. The context extraction system 110 may select the context profile having the higher relevancy score (or a relevancy score that exceeds a threshold and having a lower expected latency). The context extraction system 110 may then send the prompt based on the best context profile to the foundation model 102.")

Agarwal and ANANTHANARAYANAN are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Agarwal to incorporate opening a current session with a first foundation model, where the current session is different than a previous session, and transmitting the prompt to the first foundation model within the current session, as taught by ANANTHANARAYANAN. This allows improved relevance of the responses while reducing the cost of the foundation model's response, as recognized by ANANTHANARAYANAN [0014].

Regarding Independent Claim 10, Agarwal teaches "10. An apparatus, comprising: a processor; and a non-transitory computer-readable medium comprising instructions that when executed by the processor, cause the processor to:" (see Agarwal [0006]: "…a computer-readable storage medium including instructions for … The instructions executed by a processor") and "retrieve a conversation state; and update the conversation state based on the first session state." (see Agarwal [0028], reproduced above with respect to Claim 1.)

Agarwal does not specifically teach "create a first session state with a first foundation model, where the first session state is initialized based on the conversation state." However, ANANTHANARAYANAN does teach this limitation (see ANANTHANARAYANAN [0016], [0034], and [0036], reproduced above with respect to Claim 1). Agarwal and ANANTHANARAYANAN are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus of Agarwal to incorporate creating a first session state with a first foundation model, where the first session state is initialized based on the conversation state, as taught by ANANTHANARAYANAN. This allows improved relevance of the responses while reducing the cost of the foundation model's response, as recognized by ANANTHANARAYANAN [0014].

Regarding Independent Claim 16, Agarwal teaches "16. An apparatus, comprising: a processor; and a non-transitory computer-readable medium comprising instructions that when executed by the processor, cause the processor to:" (see Agarwal [0006], quoted above) and "update a conversation state based on the first session state; and initialize the second session state based on the conversation state." (see Agarwal [0028], reproduced above with respect to Claim 1.)

Agarwal does not specifically teach "open a first session state with a first foundation model; close the first session state with the first foundation model; open a second session state with a second foundation model." However, ANANTHANARAYANAN does teach these limitations (see ANANTHANARAYANAN [0016], [0034], and [0036], reproduced above with respect to Claim 1). Agarwal and ANANTHANARAYANAN are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus of Agarwal to incorporate opening a first session state with a first foundation model, closing the first session state with the first foundation model, and opening a second session state with a second foundation model, as taught by ANANTHANARAYANAN. This allows improved relevance of the responses while reducing the cost of the foundation model's response, as recognized by ANANTHANARAYANAN [0014].

As to Claim 2, Agarwal in view of ANANTHANARAYANAN teaches the method of claim 1. Furthermore, ANANTHANARAYANAN teaches "where the user context comprises at least one of an instantaneous user context, an accumulated user context, or a user profile." (see ANANTHANARAYANAN [0022]: "In some examples, the context generation system may pre-select one or more context profiles based on historical data collective over performing a number of previous queries. For example, the context generation system may maintain a profile database of historical relevancy scores of various context profiles. When the context generation system receives a query, the context generation system may compare the query and the associated context with the profile database. The context generation system may determine a context profile by pre-selecting one or more context profiles that is predicted to generate a relevant response within a threshold latency. This pre-selection of context can significantly reduce utilization of processing resources on the cloud and/or on client devices.") Agarwal and ANANTHANARAYANAN are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of the combination of Agarwal and ANANTHANARAYANAN to incorporate where the user context comprises at least one of an instantaneous user context, an accumulated user context, or a user profile, as taught by ANANTHANARAYANAN. This allows improved relevance of the responses while reducing the cost of the foundation model's response, as recognized by ANANTHANARAYANAN [0014].
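The ANANTHANARAYANAN passages quoted above ([0016], [0036], and the iterative flow in [0015] and [0064] quoted below) describe scoring responses per context profile, stopping once a response clears a relevancy threshold, and falling back to the best-scoring response if a latency budget is reached. A hypothetical sketch of that selection loop follows; the profile object, scoring function, and model interface are assumptions, not the reference's actual code.

```python
# Hypothetical sketch of context-profile selection under a relevancy
# threshold and latency budget, as described in the quoted passages.
import time


def respond_with_best_profile(query, profiles, model, score_relevancy,
                              threshold=0.8, latency_budget_s=2.0):
    start = time.monotonic()
    best = None  # (score, response) seen so far
    for profile in profiles:  # e.g., ordered cheapest-first
        # Concatenate context information and the query into a prompt.
        prompt = f"{profile.context_text}\n\n{query}"
        response = model(prompt)
        score = score_relevancy(query, response)
        if best is None or score > best[0]:
            best = (score, response)
        if score >= threshold:
            return response  # above the relevancy threshold: stop iterating
        if time.monotonic() - start > latency_budget_s:
            break  # latency budget reached: fall back to best seen
    return best[1] if best else None
```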
As to Claim 3, Agarwal in view of ANANTHANARAYANAN teaches the method of claim 1. Furthermore, ANANTHANARAYANAN teaches "where the previous session was previously opened and closed by the first foundation model." (see ANANTHANARAYANAN [0064]: "The response 442 may have an associated relevancy score. A response selector 444 may review the response 442 and determine 445 whether the responses 442 is above the response threshold. If the response 442 is above the response threshold, then the response selector 444 may submit the response 442 to the user. If the response 442 is not above the response threshold, then the response selector 444 may request that the context extractor 410 generate a new context profile. The context extractor 410 may generate a new context profile 438, the prompt generator 426 may generate a new prompt 440, and the foundation model 402 may generate a new response 442. In this manner, the context analysis system 434 may iteratively prepare responses to the query 436 until the response selector 444 identifies a response that is above the relevancy threshold. When the response selector 444 identifies a response that is above the relevancy threshold, the response selector 444 may select that response as a best response 446.") Agarwal and ANANTHANARAYANAN are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of the combination of Agarwal and ANANTHANARAYANAN to incorporate where the previous session was previously opened and closed by the first foundation model, as taught by ANANTHANARAYANAN. This allows improved relevance of the responses while reducing the cost of the foundation model's response, as recognized by ANANTHANARAYANAN [0014].

Regarding Claim 4, Agarwal in view of ANANTHANARAYANAN teaches the method of claim 1. Furthermore, ANANTHANARAYANAN teaches "where the previous session was previously opened and remains concurrently running on a second foundation model different than the first foundation model." (see ANANTHANARAYANAN [0015]: "In some embodiments, the foundation model is used in iteratively reviewing context profiles until a response is prepared that is within the relevancy threshold. For example, the foundation model may prepare a first context profile and prepare a first response based on the first context profile. If the first response is not within the relevancy threshold, then the foundation model may prepare a second context profile and prepare a second response based on the second context profile. If the second response is within the relevancy threshold, then the foundation model may prepare a final response based on the second context profile. If the second response is not within the relevancy threshold, then the foundation model may iteratively prepare further context profiles until one of the context profiles results in a response within the relevancy threshold. In some embodiments, subsequent context profiles have higher complexity with a higher associated cost. In some embodiments, subsequent context profiles are prepared based on a pre-determined pattern. In some embodiments, if the foundation model reaches a latency budget, then the foundation model selects the context profile associated with the response having the higher relevance.") Agarwal and ANANTHANARAYANAN are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of the combination of Agarwal and ANANTHANARAYANAN to incorporate where the previous session was previously opened and remains concurrently running on a second foundation model different than the first foundation model, as taught by ANANTHANARAYANAN. This allows improved relevance of the responses while reducing the cost of the foundation model's response, as recognized by ANANTHANARAYANAN [0014].

Regarding Claim 5, Agarwal in view of ANANTHANARAYANAN teaches the method of claim 1. Furthermore, ANANTHANARAYANAN teaches "where the first foundation model is a large language model." (see ANANTHANARAYANAN [0025]: "For example, as used herein, a 'foundation model' refers to an AI or ML model that is trained to generate an output in response to an input based on a large dataset. A foundation model may include a neural network having a significant number of parameters (e.g., billions of parameters) that the foundation model can consider in performing a task or otherwise generating an output based on an input. In one or more embodiments described herein, a foundation model is trained to generate a response to a query. In some implementations, a foundation model refers to a large language model (LLM). The foundation model be trained in pattern recognition and text prediction. For example, the foundation model may be trained to predict the next word of a particular sentence or phrase. In one or more implementations described herein, the foundation model refers specifically to an LLM, though other types of foundation models may be used in generating responses to input queries. Indeed, while one or more embodiments described herein refer to features associated with determining context for an LLM, similar features may apply to determining and/or generating context for other types of foundation models.") Agarwal and ANANTHANARAYANAN are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of the combination of Agarwal and ANANTHANARAYANAN to incorporate where the first foundation model is a large language model, as taught by ANANTHANARAYANAN. This allows improved relevance of the responses while reducing the cost of the foundation model's response, as recognized by ANANTHANARAYANAN [0014].

Regarding Claim 11, Agarwal in view of ANANTHANARAYANAN teaches the apparatus of claim 10. Furthermore, ANANTHANARAYANAN teaches "where the instructions further cause the processor to: create a second session state with a second foundation model, where the second session state is initialized based on the conversation state; and where the conversation state is updated based on a selected portion from the first session state or the second session state." (see ANANTHANARAYANAN [0017]: "In some embodiments, the foundation model maintains a set of different context profiles, their associated computation times, and likely relevance scores. For example, as the foundation model processes queries using different context profiles, the foundation model may record the particular context profile, the computation time, and the relevancy score. As the foundation model builds a database or collection of these context profiles, the foundation model may select a set of context profiles to analyze in response to a particular query. In this manner, the foundation model may generate responses using the context profiles that are most likely to fall within the relevancy threshold while reducing the cost (e.g., the latency and/or resource utilization) of the response. As used herein, 'cost of a response' refers to a metric of latency and/or utilization of resources. For example, a high cost may refer to a high quantity of latency and/or a high quantity of computing resources utilized by the system(s). Alternatively, a low cost may refer to a low metric of latency and/or low quantity of computing resources utilized by the system(s).") Agarwal and ANANTHANARAYANAN are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus of the combination of Agarwal and ANANTHANARAYANAN to incorporate where the instructions further cause the processor to create a second session state with a second foundation model, where the second session state is initialized based on the conversation state, and where the conversation state is updated based on a selected portion from the first session state or the second session state, as taught by ANANTHANARAYANAN. This allows improved relevance of the responses while reducing the cost of the foundation model's response, as recognized by ANANTHANARAYANAN [0014].

Regarding Claim 12, Agarwal in view of ANANTHANARAYANAN teaches the apparatus of claim 11. Furthermore, ANANTHANARAYANAN teaches "further comprising a user interface and where the instructions further cause the processor to: obtain a user prompt from a user; generate a first query for the first foundation model and a second query for the second foundation model based on the user prompt; receive a first response from the first foundation model and a second response from the second foundation model;" (see ANANTHANARAYANAN [0015], reproduced above with respect to Claim 4) "and select a single response from the first response and the second response to present to the user." (see ANANTHANARAYANAN [0034], reproduced above with respect to Claim 1.) Agarwal and ANANTHANARAYANAN are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus of the combination of Agarwal and ANANTHANARAYANAN to incorporate a user interface and where the instructions further cause the processor to obtain a user prompt from a user, generate a first query for the first foundation model and a second query for the second foundation model based on the user prompt, receive a first response from the first foundation model and a second response from the second foundation model, and select a single response from the first response and the second response to present to the user, as taught by ANANTHANARAYANAN. This allows improved relevance of the responses while reducing the cost of the foundation model's response, as recognized by ANANTHANARAYANAN [0014].
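Claims 11-13, analyzed above and below, recite fanning a user prompt out to two foundation models, presenting a single selected response, and updating the conversation state from that single response. A minimal illustrative sketch follows; the parallel execution, the summary()/record() helpers, and the scoring function are assumptions rather than the application's actual design.

```python
# Hypothetical sketch of the Claim 11-13 fan-out / select / update pattern.
from concurrent.futures import ThreadPoolExecutor


def fan_out(user_prompt, model_a, model_b, conversation_state, score):
    # Generate one query per foundation model from the shared prompt/state.
    context = conversation_state.summary()  # assumed helper
    query_a = f"{context}\n{user_prompt}"
    query_b = f"{context}\n{user_prompt}"
    with ThreadPoolExecutor(max_workers=2) as pool:
        future_a = pool.submit(model_a, query_a)
        future_b = pool.submit(model_b, query_b)
        responses = [future_a.result(), future_b.result()]
    # Select a single response to present to the user.
    single = max(responses, key=lambda r: score(user_prompt, r))
    conversation_state.record(user_prompt, single)  # update on the single response
    return single
```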
Regarding Claim 13, Agarwal in view of ANANTHANARAYANAN teaches the apparatus of claim 12. Furthermore, ANANTHANARAYANAN teaches "where the conversation state is updated based on the single response." (see ANANTHANARAYANAN [0015] and [0016], reproduced above with respect to Claims 4 and 1, respectively.) Agarwal and ANANTHANARAYANAN are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus of the combination of Agarwal and ANANTHANARAYANAN to incorporate where the conversation state is updated based on the single response, as taught by ANANTHANARAYANAN. This allows improved relevance of the responses while reducing the cost of the foundation model's response, as recognized by ANANTHANARAYANAN [0014].

Regarding Claim 14, Agarwal in view of ANANTHANARAYANAN teaches the apparatus of claim 12. Furthermore, ANANTHANARAYANAN teaches "where at least one of the first session state and the second session state is closed based on the single response." (see ANANTHANARAYANAN [0064], reproduced above with respect to Claim 3.) Agarwal and ANANTHANARAYANAN are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus of the combination of Agarwal and ANANTHANARAYANAN to incorporate where at least one of the first session state and the second session state is closed based on the single response, as taught by ANANTHANARAYANAN. This allows improved relevance of the responses while reducing the cost of the foundation model's response, as recognized by ANANTHANARAYANAN [0014].

Regarding Claim 15, Agarwal in view of ANANTHANARAYANAN teaches the apparatus of claim 12. Furthermore, ANANTHANARAYANAN teaches "where both the first session state and the second session state are closed based on the single response." (see ANANTHANARAYANAN [0064], reproduced above with respect to Claim 3.) Agarwal and ANANTHANARAYANAN are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the apparatus of the combination of Agarwal and ANANTHANARAYANAN to incorporate where both the first session state and the second session state are closed based on the single response, as taught by ANANTHANARAYANAN. This allows improved relevance of the responses while reducing the cost of the foundation model's response, as recognized by ANANTHANARAYANAN [0014].
The context generation system may select one of the first response or the second response based on the first relevancy score and the second relevancy score.”) Agarwal in view of ANANTHANARAYANAN are in the same field of endeavor of signal processing, therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the Apparatus of combination of Agarwal and ANANTHANARAYANAN to incorporate where both the first session state and the second session state are closed based on the single response of ANANTHANARAYANAN. This allows improved relevance of the responses while reducing the cost of the foundation model's response as recognized by ANANTHANARAYANAN [0014]. Regarding Claim 18, Agarwal in view of ANANTHANARAYANAN teaches 18. The apparatus of claim 17, Furthermore, ANANTHANARAYANAN teaches where the instructions further cause the processor to: update the conversation state based on the first response and the first query; generate a second query for the second foundation model based on second user context; and present a second response to the second query which references information from the first query. ((see ANANTHANARAYANAN, [0018] “In some embodiments, a method includes receiving a query for a foundation model. The foundation model may receive a query and extract a first context profile for the query using language of the query and a context database (or domain database). The first context profile may include first context content based on the language of the query. Using the first context profile, a context generation system may generate a first prompt for the foundation model. The first prompt includes a first concatenation of the query and the first context content. The context generation system may input the first prompt into the foundation model, resulting in a first response from the foundation model. The context generation system may generate a first relevancy score or relevancy score for the first response based on a relevance of the first response to the query. The context generation system may extract a second context profile for the query using language of the query and the context database. The second context profile may include second context content based on the language of the query. Using the second context profile, a context generation system may generate a second prompt for the foundation model. The second prompt includes a second concatenation of the query and the first context content. The context generation system may input the second prompt into the foundation model, resulting in a second response from the foundation model. The context generation system may generate a second relevancy score or relevancy score for the second response based on a relevance of the second response to the query. 
The context generation system may select one of the first response or the second response based on the first relevancy score and the second relevancy score.”) Agarwal in view of ANANTHANARAYANAN are in the same field of endeavor of signal processing, therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the Apparatus of combination of Agarwal and ANANTHANARAYANAN to incorporate the instructions further cause the processor to: update the conversation state based on the first response and the first query; generate a second query for the second foundation model based on second user context; and present a second response to the second query which references information from the first query of ANANTHANARAYANAN. This allows improved relevance of the responses while reducing the cost of the foundation model's response as recognized by ANANTHANARAYANAN [0014]. Regarding Claim 19, Agarwal in view of ANANTHANARAYANAN teaches 19. The apparatus of claim 17, Furthermore, Agarwal teaches further comprising a sensor and where the first query comprises instantaneous user context based on data captured by the sensor. (see Agarwal, [0072] “…the microphone may also serve as an audio sensor to facilitate control of notifications”) Regarding Claim 20, Agarwal in view of ANANTHANARAYANAN teaches 20. The apparatus of claim 17, Furthermore, Agarwal teaches further comprising a user-specific database and where the first query comprises accumulated user context retrieved from the user-specific database. (see Agarwal, [0006] According to yet another aspect disclosed herein, a computer-readable storage medium including instructions for enhancing a digital assistant on a user's device is disclosed. The instructions executed by a processor include accessing the digital assistant via a device, receiving a first query as input from the user via the digital assistant and, in response to the user's first query, generating search results via the digital assistant accessing an application on the device or accessing an online resource. The instructions also include defining and storing a session linking the first query from the user with the search results generated via the digital assistant, generating a knowledge base of information based on a plurality of sessions, and retrieving information about the session from the knowledge base in response to subsequently receiving a second query as input from the user, wherein the second query references the first query.”) Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over in view of Agarwal (U.S. Patent Number US 20180067991 A1) in view of ANANTHANARAYANAN (U.S. Patent Number US 20240419698 A1), and further in view of He (U.S. Patent Number US 20240362286 A1). Regarding Claim 6, Agarwal in view of ANANTHANARAYANAN teaches 6. The method of claim 5, Agarwal in view of ANANTHANARAYANAN do not specifically teach where the user context comprises captions generated from computer-vision analysis of an image, and the record of the previous session comprises a text transcript. However, He does teach this limitation (see He, [0084] Semi-supervised learning is a type of machine learning algorithm that combines both labeled and unlabeled data to improve the accuracy of predictions or classifications. In this approach, the algorithm is trained on a small amount of labeled data and a much larger amount of unlabeled data. 
The main idea behind semi-supervised learning is that labeled data is often scarce and expensive to obtain, whereas unlabeled data is abundant and easy to collect. By leveraging both types of data, semi-supervised learning can achieve higher accuracy and better generalization than either supervised or unsupervised learning alone. In semi-supervised learning, the algorithm first uses the labeled data to learn the underlying structure of the problem. It then uses this knowledge to identify patterns and relationships in the unlabeled data, and to make predictions or classifications based on these patterns. Semi-supervised learning has many applications, such as in speech recognition, natural language processing, and computer vision. It is particularly useful for tasks where labeled data is expensive or time-consuming to obtain, and where the goal is to improve the accuracy of predictions or classifications by leveraging large amounts of unlabeled data.”) (see He [0114] “Semi-structured text 616 is text information that does not fit neatly into the traditional categories of structured and unstructured data. It has some structure but does not conform to the rigid structure of a specific format or schema. Semi-structured data is characterized by the presence of context tags or metadata that provide some structure and context for the text information, such as a caption or description of a figure, name of a table, labels for equations, and so forth..”) Agarwal in view of ANANTHANARAYANAN and He are in the same field of endeavor of signal processing, therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of combination of Agarwal and ANANTHANARAYANAN to incorporate where the user context comprises captions generated from computer-vision analysis of an image, and the record of the previous session comprises a text transcript of He. This allows for information retrieval, and content management as recognized by He [0029]. Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over in view of Agarwal (U.S. Patent Number US 20180067991 A1) in view of ANANTHANARAYANAN (U.S. Patent Number US 20240419698 A1) and further in view of DUTTA (U.S. Patent Number US 20210192800 A1). Regarding Claim 7, Agarwal in view of ANANTHANARAYANAN teaches 7. The method of claim 5, Agarwal in view of ANANTHANARAYANAN do not specifically teach where the record of the previous session defines a user-specific token and the prompt references the user-specific token. However, Dutta does teach this limitation (see Dutta, [0115] The context determination module (114) is configured to determine context associated with the conversation. In an embodiment, the context determination module (114) is configured to extract the context of the conversation using a current conversation session between two or more users and user's behavior who are engaged in the current conversation session. In one embodiment, the context determination module (114) includes a query parser (202). The query parser is configured to parse the context and emotions associated with the current conversation session. Further, the context determination module (114) includes a speech to text converter (204) and a string tokenizer (208). The speech to text converter (204) is configured to convert audio/video form of conversation into text conversation, and is further configured to determine context associated with the conversation. 
Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Agarwal (U.S. Patent Application Publication No. US 2018/0067991 A1) in view of ANANTHANARAYANAN (U.S. Patent Application Publication No. US 2024/0419698 A1), and further in view of Dutta (U.S. Patent Application Publication No. US 2021/0192800 A1).

Regarding Claim 7, Agarwal in view of ANANTHANARAYANAN teaches the method of claim 5. The combination of Agarwal and ANANTHANARAYANAN does not specifically teach where the record of the previous session defines a user-specific token and the prompt references the user-specific token. However, Dutta does teach this limitation (see Dutta, [0115]: "The context determination module (114) is configured to determine context associated with the conversation. In an embodiment, the context determination module (114) is configured to extract the context of the conversation using a current conversation session between two or more users and user's behavior who are engaged in the current conversation session. In one embodiment, the context determination module (114) includes a query parser (202). The query parser is configured to parse the context and emotions associated with the current conversation session. Further, the context determination module (114) includes a speech to text converter (204) and a string tokenizer (208). The speech to text converter (204) is configured to convert audio/video form of conversation into text conversation, and is further configured to determine context associated with the conversation. The string tokenizer (208) is configured to break a string of the conversation into tokens. In another embodiment, the context determination module (114) is configured to tokenize the text/audio/video conversation into segments, and is further configured to create an attention module (210) for creating dependencies with previous form of text/audio/video conversation. The context determination module (114) is further configured to use a position encoder (212) and a recurrence mechanism (214) to extract and determine the context of the text/audio/video conversation. In an embodiment, the position encoder (212) is a relative position encoder that encodes a relative position of relevant segment tokens (206) or strings for extracting and determining the context of the conversation. The recurrence mechanism (214) is a segment level recurrence mechanism that uses previous history of the conversation for extracting and determining the context of the conversation. In an embodiment, the context includes purpose of the conversation, usage data, current activities of a user, pre-stored user data, time of the conversation, location of the user, real time conversation and current conversation details, previous history of conversation, relationship with other user who is engaged in the conversation, and frequency of the conversation between users.")

Agarwal in view of ANANTHANARAYANAN and Dutta are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of the combination of Agarwal and ANANTHANARAYANAN to incorporate where the record of the previous session defines a user-specific token and the prompt references the user-specific token, as taught by Dutta. This allows diversified sentiments, as recognized by Dutta [0186].
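Practitioner note: read literally, claim 7's limitation is a small substitution mechanism: the record of the previous session defines user-specific tokens, and the new prompt references them. A hypothetical sketch follows; the "@" token syntax, the record entries, and the expansion step are assumptions for illustration and are not drawn from Dutta.

```python
import re

# Record of the previous session defines user-specific tokens
# (hypothetical entries, not from any cited reference).
session_record = {
    "@my_dog": "Biscuit, a three-year-old beagle",
    "@my_city": "Portland",
}

def expand(prompt: str, record: dict) -> str:
    """Resolve each user-specific token to the meaning the previous session assigned it."""
    return re.sub(r"@\w+", lambda m: record.get(m.group(), m.group()), prompt)

# The new prompt references the user-specific tokens...
prompt = "Find a vet for @my_dog near @my_city"
# ...and is expanded before being sent to the foundation model.
print(expand(prompt, session_record))
```

Dutta's quoted passage, by contrast, describes generic conversation tokenization, which arguably does not define a token in one session and reference it from a later prompt.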
Regarding Claim 8, Agarwal in view of ANANTHANARAYANAN and further in view of Dutta teaches the method of claim 7. Furthermore, ANANTHANARAYANAN teaches where the user-specific token is mapped to a combination of embedding vectors trained from a generic library (see ANANTHANARAYANAN, [0041]: "Furthermore, the components of the computing system 200 may, for example, be implemented as one or more operating systems, as one or more stand-alone applications, as one or more modules of an application, as one or more plug-ins, as one or more library functions or functions that may be called by other applications, and/or as a cloud-computing model. Thus, the components may be implemented as a stand-alone application, such as a desktop or mobile application. Furthermore, the components may be implemented as one or more web-based applications hosted on a remote server. The components may also be implemented in a suite of mobile device applications or 'apps.'") (see ANANTHANARAYANAN, [0042]: "In accordance with at least one embodiment of the present disclosure, the computing system 200 includes a context extractor 210, a domain database 215 in communication with the context extractor 210, and a foundation model 202 in communication with the domain database 215 and the context extractor 210. The context extractor 210 includes a context extraction system 214. The context extraction system 214 may extract context from the domain database 215 based on the query using one or more of a variety of context extraction techniques. For example, the context extraction system 214 may utilize text similarity metrics 216, vector embeddings 218, plugins 220, any other context extraction technique, and combinations thereof to generate context for the query. In some embodiments, the context extraction system 214 extracts a context profile that includes context information from the text similarity metrics 216, the vector embeddings 218, the plugins 220, any other context extraction technique, and combinations thereof.")

Agarwal in view of ANANTHANARAYANAN and further in view of Dutta are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of the combination of Agarwal and ANANTHANARAYANAN to incorporate where the user-specific token is mapped to a combination of embedding vectors trained from a generic library, as taught by ANANTHANARAYANAN. This allows improved relevance of the responses while reducing the cost of the foundation model's response, as recognized by ANANTHANARAYANAN [0014].
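Practitioner note: claim 8's limitation can be pictured as initializing an out-of-vocabulary user token as a weighted combination of vectors from a generic, pretrained embedding library; claim 9 then reads on inferring those weights from user context. A toy numpy sketch under those assumptions is below; the vocabulary, dimensions, and hand-picked weights are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a generic, pretrained embedding library (5 words, 8 dims).
generic_vocab = ["dog", "beagle", "pet", "city", "rain"]
generic_embeddings = rng.normal(size=(5, 8))

# The user-specific token "@my_dog" is absent from the generic library,
# so it is mapped to a weighted combination of generic vectors. The
# weights here are hand-picked stand-ins for weights that, per claim 9,
# would be inferred from user context ("my dog is a beagle").
weights = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
user_token_vector = weights @ generic_embeddings

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The derived vector sits near the context-relevant generic words.
for word, vec in zip(generic_vocab, generic_embeddings):
    print(f"{word:>6}: cosine = {cosine(user_token_vector, vec):+.2f}")
```

Note that the cited [0041] passage is about software packaging ("library functions"), not embedding libraries, so the rejection's mapping for this limitation appears vulnerable.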
Regarding Claim 9, Agarwal in view of ANANTHANARAYANAN and further in view of Dutta teaches the method of claim 8. Furthermore, Dutta teaches where the combination of embedding vectors encodes a relationship of the user-specific token to the user that is inferred from the user context (see Dutta, [0115], reproduced in full in the rejection of Claim 7 above).

Agarwal in view of ANANTHANARAYANAN and further in view of Dutta are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of the combination of Agarwal, ANANTHANARAYANAN, and Dutta to incorporate where the combination of embedding vectors encodes a relationship of the user-specific token to the user that is inferred from the user context, as taught by Dutta. This allows diversified sentiments, as recognized by Dutta [0186].

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KRISTEN MICHELLE MASTERS, whose telephone number is (703) 756-1274. The examiner can normally be reached M-F, 8:30 AM - 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Louis Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KRISTEN MICHELLE MASTERS/
Examiner, Art Unit 2659

/PIERRE LOUIS DESIR/
Supervisory Patent Examiner, Art Unit 2659

Prosecution Timeline

Jun 17, 2024
Application Filed
Feb 21, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592219
Hearing Device User Communicating With a Wireless Communication Device
2y 5m to grant • Granted Mar 31, 2026
Patent 12548569
METHOD AND SYSTEM OF DETECTING AND IMPROVING REAL-TIME MISPRONUNCIATION OF WORDS
2y 5m to grant • Granted Feb 10, 2026
Patent 12548564
SYSTEM AND METHOD FOR CONTROLLING A PLURALITY OF DEVICES
2y 5m to grant • Granted Feb 10, 2026
Patent 12547894
ENTROPY-BASED ANTI-MODELING FOR MACHINE LEARNING APPLICATIONS
2y 5m to grant • Granted Feb 10, 2026
Patent 12547840
MULTI-STAGE PROCESSING FOR LARGE LANGUAGE MODEL TO ANSWER MATH QUESTIONS MORE ACCURATELY
2y 5m to grant • Granted Feb 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants above.


Prosecution Projections

1-2
Expected OA Rounds
62%
Grant Probability
87%
With Interview (+24.7%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 40 resolved cases by this examiner. Grant probability derived from career allow rate.
