Prosecution Insights
Last updated: April 19, 2026
Application No. 18/218,680

INTELLIGENT PEOPLE ANALYTICS FROM GENERATIVE ARTIFICIAL INTELLIGENCE

Status: Final Rejection (§103)
Filed: Jul 06, 2023
Examiner: SPOONER, LAMONT M
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: Praisidio Inc.
OA Round: 6 (Final)
Grant Probability: 74% (Favorable)
OA Rounds: 7-8
To Grant: 3y 4m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 74% (above average); 445 granted / 603 resolved; +11.8% vs TC avg
Interview Lift: +11.8% (moderate lift, computed over resolved cases with an interview)
Avg Prosecution: 3y 4m (typical timeline)
Currently Pending: 22 applications
Total Applications: 625 (career history, across all art units)
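The headline figures above are simple ratios over the examiner's resolved cases. As a minimal sketch of the arithmetic: the granted/resolved counts are taken from this page, while back-deriving the Tech Center average from the stated "+11.8% vs TC avg" delta is an assumption about how the dashboard computes it.

```python
# Derive the headline examiner statistics from the raw counts shown above.
# granted/resolved come from this page; the TC-average figure is back-derived
# from the stated "+11.8% vs TC avg" delta (an assumption, not the vendor's
# documented methodology).

granted = 445
resolved = 603

career_allow_rate = granted / resolved          # ~0.738, displayed as 74%
implied_tc_average = career_allow_rate - 0.118  # implied by "+11.8% vs TC avg"

print(f"Career allowance rate: {career_allow_rate:.1%}")
print(f"Implied TC 2600 average: {implied_tc_average:.1%}")
```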

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 50.1% (+10.1% vs TC avg)
§102: 19.7% (-20.3% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 603 resolved cases.

Office Action

§103
DETAILED ACTION

Introduction

This Office action is in response to applicant's amendment filed 11/21/2025. Claims 1-12, 14-16, 19, 21-22, 24 and 27-33 are currently pending and have been examined. There is no claim to foreign priority.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, see remarks filed 11/21/2025, with respect to the rejection of pending claims 1-12, 14-16, 19, 21-22, 24 and 27-33, have been fully considered and are persuasive based on the current amendments to the claims. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of the new combination of references set forth below. The Examiner notes the plurality of cited references that are combined to make the rejection, and further notes the construction of the claims that warrants the cited prior art. It appears the applicant has added, by modular construction, different elements to the claims (e.g., training an LLM on data comprising human resources data in the context of people analytics; another modular addition could be training on data comprising financial markets in the context of retail investors, options, etc.). These elements do not present any new and novel detail; the variation of training data, of language model, or of the description of the neural network layers, and the use of a second LLM (or as many agents/models as required or desired: one to extract context, determine intent, perform NLP or reasoning; a second to generate a response; a third to evaluate, verify, or detect hallucinations; and so on), as presented, only drill down into well-known elements in the art. The number of references used in rejecting these added elements over the relevant cited prior art is inconsequential for the reasons above.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 12, 14, 16, 19, 21-22, 24, 27-28 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Ogawa et al. (Ogawa, US 2021/0134295) in view of Mostafazadeh et al. (Mostafazadeh, US 2022/0343903), further in view of Gutta et al. (Gutta, US 11,520,815), further in view of Gupta et al. (Gupta, US 2023/0021797), in view of Bierner et al. (Bierner, US 2024/0134865), in view of Ahmed et al. (Ahmed, US 2025/0005269), and further in view of Poirier et al. (Poirier, US 2024/0202464).
As per claim 1, Ogawa teaches a method, comprising: [training one or more large language models of a generative AI system on domain-specific data comprising human resources data, wherein the trained one or more large language models generate embeddings in the context of people analytics]; [providing, as input to the one or more large language models of the generative AI system, table definitions relating to tables in a data warehouse, wherein the table definitions comprise a table definition for employee records]; receiving a prompt from a client device associated with a user, wherein the prompt is related to a text input or a voice input from the client device, and wherein the prompt is in natural language format (paragraphs [0202-0209, 0013]-as his natural language user query voice/text input, Figs. 1, 6-including his client device and artificial intelligence system via machine learning); providing a form of the received prompt and contextual information to a generative AI system comprising one or more generative AI models (ibid, paragraphs [0057, 0073-0075, 0244]-his machine learning model to analyze the voice data for text, meaning, and other features of the user); obtaining, from the generative AI system, an embedding representation of the received prompt (ibid-paragraphs [0075, 0168-0170]-his word2vec representation, (AI), machine learning with deep neural networks to process the query data) [by tokenizing the received prompt into subword or word-level tokens, that are processed through multiple layers of the first large language model, wherein each layer of the first large language model extracts different levels of linguistic information, wherein the embedding representation of the received prompt is a vector representation that encodes semantics and syntactic structure of the received prompt]; obtaining, from the generative AI system, an [executable] expression for responding to the received prompt (paragraphs [0120-0142], Fig. 6, items 606-610, Fig. 9-his information exchange based on query language based on user query/input), wherein the [executable] expression is in the form of a query that is described in a query language (ibid, see his intelligent devices, querying and retrieving from the Data Lake), [and wherein the one or more large language models of the generative AI system matches the form of the received prompt to one or more tables of the table definitions that were provided as an input to the one or more large language models to be used for the query]; executing the obtained executable expression to query a data warehouse comprising one or more data sources to obtain data for a response to the received prompt (ibid-his pulling of the data from apps and DBs to generate output); determining, by inputting the received prompt into the generative AI system, and generating a type of response to be generated based on the received prompt (ibid-see above: factual/numerical, statistical, predictive, analytical response types, based on user input and AI, paragraphs [0203-0209]; see also Fig. 6, items 901-911, including AI and output response, paragraphs [0303, 0053, 0194]-his response type based on the user query, completion-type responses, prompts for further information, feedback, etc.), [wherein the type of response to be generated is classified, via the generative AI system, as either a visual graphic output or a textual output, wherein the visual graphic output comprises a chart, a graph or a diagram]; determining a context of the user that submitted the received prompt, wherein the context comprises a user's department (paragraphs [0343, 0194]-his contextual data, used by the conversational bot to determine response data, the contextual data comprising the user's account and company data, the user's prompting, that account as the user's department, via the user account identity information); generating, by a [second] large language model of the generative AI system, based on the determined context of the user, a response output corresponding to the determined type of response to be generated (ibid-see the corresponding response output to the user, AI discussion, based on the query/prompt, based on the identified user's account, representative, and/or company information); and providing the response output to the client device associated with the user (ibid, Fig. 5, paragraph [0053]).
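To make the claim 1 limitations concrete, the claimed pipeline (prompt in; embedding; executable SQL matched to a table definition; response-type classification; context-aware output) can be sketched as follows. This is purely editorial illustration: every function body is an invented stand-in, and none of it is taken from the application or the cited references.

```python
import sqlite3

# Table definitions provided as input to the (hypothetical) language model,
# mirroring the claimed "table definition for employee records".
TABLE_DEFINITIONS = {
    "employee_records": ["employee_id", "department", "tenure_years"],
}

def embed(prompt: str) -> list[float]:
    """Stand-in for the claimed LLM embedding (tokenize -> layers -> vector)."""
    return [ord(c) / 1000.0 for c in prompt[:8]]

def to_sql(prompt: str) -> str:
    """Stand-in for text-to-SQL: "match" the prompt to a defined table."""
    table = next(iter(TABLE_DEFINITIONS))  # trivial matching step
    return f"SELECT department, COUNT(*) FROM {table} GROUP BY department"

def classify_response_type(prompt: str) -> str:
    """Claimed classification: visual graphic output vs. textual output."""
    visual_cues = ("chart", "trend", "compare", "graph")
    return "chart" if any(w in prompt.lower() for w in visual_cues) else "text"

def answer(prompt: str, user_department: str) -> tuple[str, list]:
    # user_department mirrors the claimed user-context limitation
    # (unused in this toy). In-memory DB stands in for the data warehouse.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employee_records (employee_id, department, tenure_years)")
    conn.executemany(
        "INSERT INTO employee_records VALUES (?, ?, ?)",
        [(1, "Sales", 2.0), (2, "Sales", 5.5), (3, "R&D", 1.0)],
    )
    # "Executing the obtained executable expression" against the warehouse.
    rows = conn.execute(to_sql(prompt)).fetchall()
    return classify_response_type(prompt), rows

kind, rows = answer("Compare headcount by department", user_department="HR")
print(kind, dict(rows))
```

The point of the sketch is the division of labor the claim recites: query generation, query execution, and output-format selection are separate steps, each of which the Office Action maps to a different reference.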
Ogawa lacks explicitly teaching that which Mostafazadeh teaches: obtaining, from the generative AI system, an executable expression for responding to the received prompt (paragraphs [0051, 0050, 0009, 0040, 0041]-as his natural language AI system, executable code for responding to the query, in his query answer system), wherein the executable expression is in the form of a query that is described in a query language (ibid-such that his query is generated in a specific query language, capable of interfacing with the AI and data records of the system); executing the obtained executable expression to query the data warehouse comprising one or more data sources to obtain a response to the received prompt, [the one or more data sources including employee data] (ibid, see above, executable code, his explicit "executing" thereof, and execution results as his response to the received prompt). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Ogawa and Mostafazadeh, to combine the prior-art element of a query and searching sources including database(s) as taught by Ogawa with obtaining an executable expression in the form of a query that is described in a query language and executing the executable expression to generate a response (the executable hereinafter being noted based on, and as taught by, Mostafazadeh), as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. --, 82 USPQ2d 1385 (2007), wherein the predictable result would be generating a query-and-response system able to communicate with a data store/warehouse based on the underlying schema of the structured datasets (ibid-Mostafazadeh).

Ogawa with Mostafazadeh lack explicitly teaching that which Gutta teaches: providing, as input to the one or more large language models of the generative AI system (C.12 lines 38-52-his deep learning model and corresponding input into his generative AI system, C.3 lines 41-46-as his large language model generative AI system for generating a query, C.3 lines 44-59-see his NLP models, neural network model, and generated query discussion), table definitions relating to tables in a data warehouse (ibid, C.12 lines 38-52-his table schemas received by the deep learning model), wherein the generative AI system comprises at least one large language model, and wherein the table definitions comprise a table definition for employee records (ibid-see above deep learning model as the AI system, comprising NLP and language generation, C.4 lines 49-59, C.8 lines 25-35-his table and corresponding definitions/schema for his employee records); obtaining, from the generative AI system, an executable expression for responding to the received prompt (C.8 lines 45-47), wherein the executable expression is in the form of a query that is described in a query language (ibid), and wherein the one or more large language models of the generative AI system matches the form of the received prompt to one or more tables of the table definitions that were provided as an input to the one or more large language models to be used for the query (C.11 lines 1-21-his query language including table name/identification from the table schema/definitions, "employees"; see the cited C.11 and C.12 sections, his AI system matching the natural language input; C.7 line 31-C.8 line 13-his received prompt and corresponding features, context and encoding indicating the form of the prompt, matched to a specific table based on the table schema/features of the selected database); executing the obtained executable expression to query the data warehouse comprising one or more data sources to obtain a response to the received prompt, the one or more data sources including employee data (C.6 lines 20-26, C.8 lines 29-32-as his data sources including employee data). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Ogawa, Mostafazadeh and Gutta, to combine the prior-art element of a query and searching sources including database(s) as taught by Ogawa, with obtaining an executable expression in the form of a query that is described in a query language and executing the executable expression to generate a response as taught by Mostafazadeh, with providing, as an input to a generative AI system, table definitions and obtaining an executable expression in the form of a query, wherein the query includes a table from the table definitions for employee records, the AI system matching the prompt to the tables of the table definitions in order to generate the query, from a database or data warehouse comprising data sources including employee data, as taught by Gutta, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. --, 82 USPQ2d 1385 (2007), wherein the predictable result would be generating a query-and-response system able to communicate with a data store/warehouse including employee data based on the underlying schema relating to a table of employee records of the structured datasets (ibid-Mostafazadeh; ibid-Gutta, C.8 lines 28-47).

The above combination lacks teaching that which Gupta teaches: determining, via the generative AI system, a type of response to be generated based on the received prompt, wherein the type of response to be generated is classified, via the generative AI system, as either a visual graphic output or a textual output, wherein the visual graphic output comprises a chart, a graph or a diagram (paragraphs [0042-0052, 0103-0106]-his response rendition type, from charts, text, etc., using AI as his neural network, with keywords and intent determined based on the received prompt/query and on the data obtained for the response, paragraph [0066], wherein his neural network selects and classifies the intent to a particular configuration/format for displaying the response, including text and visual graphic outputs).
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Ogawa, Mostafazadeh, Gutta and Gupta, to combine the prior-art element of a query and searching sources including database(s) as taught by Ogawa, with obtaining an executable expression in the form of a query that is described in a query language and executing the executable expression to generate a response as taught by Mostafazadeh, with providing, as an input to a generative AI system, table definitions and obtaining an executable expression in the form of a query, wherein the query includes a table from the table definitions, the AI system matching the prompt to the tables of the table definitions in order to generate the query, as taught by Gutta, with the response type to be generated as a visual graphic output or a text output, the visual output comprising a chart, graph or diagram, as taught by Gupta, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. --, 82 USPQ2d 1385 (2007), wherein the predictable result would be generating a query-and-response system able to communicate with a data store/warehouse based on the underlying schema of the structured datasets (ibid-Mostafazadeh; ibid-Gutta, C.8 lines 28-47; ibid-Gupta, see his rendition type discussion, paragraphs [0042, 0043]).
The above combination lacks teaching that which Bierner teaches: by tokenizing the received prompt into subword or word-level tokens, that are processed through multiple layers of the first large language model, wherein each layer of the first large language model extracts different levels of linguistic information (paragraphs [0056, 0057, 0136, 0056-0060]-his prompt tokenization into input tokens, as word, etc. units, and the corresponding layers of his language model, each extracting different levels of linguistic information from the text, with respect to each embedding representation), wherein the embedding representation of the received prompt is a vector representation that encodes semantics and syntactic structure of the received prompt (ibid, paragraphs [0134-0136]-his vectorized embedding representations from the prompt, encoding syntax and semantics). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Ogawa, Mostafazadeh, Gutta, Gupta and Bierner, to combine the prior-art element of a query and searching sources including database(s) as taught by Ogawa, with obtaining an executable expression in the form of a query that is described in a query language and executing the executable expression to generate a response as taught by Mostafazadeh, with providing, as an input to a generative AI system, table definitions and obtaining an executable expression in the form of a query, wherein the query includes a table from the table definitions, the AI system matching the prompt to the tables of the table definitions in order to generate the query, as taught by Gutta, with the response type to be generated as a visual graphic output or a text output, the visual output comprising a chart, graph or diagram, as taught by Gupta, with the particulars of embedding a prompt by an LLM in order to extract linguistic information, including semantic and syntactic information, utilizing layers of a neural network, as taught by Bierner, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. --, 82 USPQ2d 1385 (2007), wherein the predictable result would be generating a query, by prompt using a neural network model, and a response system able to communicate with a data store/warehouse based on the underlying schema of the structured datasets (ibid-Mostafazadeh; ibid-Gutta, C.8 lines 28-47; ibid-Gupta, see his rendition type discussion, paragraphs [0042, 0043]; ibid-see Bierner's neural network, prompt and LLM discussion).

The above combination lacks teaching that which Ahmed teaches: training one or more large language models of a generative AI system on domain-specific data comprising human resources data, wherein the trained one or more large language models generate embeddings in the context of people analytics (paragraphs [0025, 0056-0059]-his AI model, Human Resources, domain-specific training, and corresponding HR data, as people analytics and corresponding embeddings). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Ogawa, Mostafazadeh, Gutta, Gupta and Bierner with Ahmed, to combine the prior-art element of a query and searching sources including database(s) as taught by Ogawa, with obtaining an executable expression in the form of a query that is described in a query language and executing the executable expression to generate a response as taught by Mostafazadeh, with providing, as an input to a generative AI system, table definitions and obtaining an executable expression in the form of a query, wherein the query includes a table from the table definitions, the AI system matching the prompt to the tables of the table definitions in order to generate the query, as taught by Gutta, with the response type to be generated as a visual graphic output or a text output, the visual output comprising a chart, graph or diagram, as taught by Gupta, with the particulars of embedding a prompt by an LLM in order to extract linguistic information, including semantic and syntactic information, utilizing layers of a neural network, as taught by Bierner, with training an LLM with domain-specific HR data as taught by Ahmed, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. --, 82 USPQ2d 1385 (2007), wherein the predictable result would be generating a query, by prompt using a neural network model trained on HR data, and a response system able to communicate with a data store/warehouse based on the underlying schema of the structured datasets (ibid-Mostafazadeh; ibid-Gutta, C.8 lines 28-47; ibid-Gupta, see his rendition type discussion, paragraphs [0042, 0043]; ibid-see Bierner's neural network, prompt and LLM discussion; ibid-Ahmed, abstract, prompt-based AI in the HR domain with people analytics).

The above combination lacks teaching that which Poirier teaches: generating, by a second large language model of the generative AI system, based on the determined context of the user, a response output corresponding to the determined type of response to be generated (paragraphs [0080, 0120-0124, 0139]-see his second LLM of a generative AI system, which, based on the context of a user's input, generates a response). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Ogawa, Mostafazadeh, Gutta, Gupta and Bierner with Ahmed with Poirier, to combine the prior-art element of a query and searching sources including database(s) as taught by Ogawa, with obtaining an executable expression in the form of a query that is described in a query language and executing the executable expression to generate a response as taught by Mostafazadeh, with providing, as an input to a generative AI system, table definitions and obtaining an executable expression in the form of a query, wherein the query includes a table from the table definitions, the AI system matching the prompt to the tables of the table definitions in order to generate the query, as taught by Gutta, with the response type to be generated as a visual graphic output or a text output, the visual output comprising a chart, graph or diagram, as taught by Gupta, with the particulars of embedding a prompt by an LLM in order to extract linguistic information, including semantic and syntactic information, utilizing layers of a neural network, as taught by Bierner, with training an LLM with domain-specific HR data as taught by Ahmed, with using a second large language model to generate a response as taught by Poirier, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. --, 82 USPQ2d 1385 (2007), wherein the predictable result would be generating a query, by prompt using a neural network model trained on HR data, and a response system, the response by a second LLM for receiving a prompt and generating a response, able to communicate with a data store/warehouse based on the underlying schema of the structured datasets (ibid-Mostafazadeh; ibid-Gutta, C.8 lines 28-47; ibid-Gupta, see his rendition type discussion, paragraphs [0042, 0043]; ibid-see Bierner's neural network, prompt and LLM discussion; ibid-Ahmed, abstract, prompt-based AI in the HR domain with people analytics; ibid-see Poirier's second-LLM response, utilizing a plurality of agents, wherein a second agent/LLM is used to generate a response, abstract).
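The tokenization-and-layers limitation mapped to Bierner can be illustrated with a toy sketch. The hashing "embeddings" and averaging "layers" below are invented stand-ins for a real transformer and come from neither Bierner nor the application; the sketch only shows the claimed shape of the computation (tokens in, one vector refined layer by layer).

```python
# Toy illustration of "tokenizing the received prompt into subword or
# word-level tokens, that are processed through multiple layers": each
# "layer" here just mixes neighboring token features, standing in for the
# claim's "different levels of linguistic information" per layer.

def tokenize(prompt: str) -> list[str]:
    """Crude word-level tokenization (the claim allows word or subword units)."""
    return prompt.lower().split()

def embed(prompt: str, num_layers: int = 3) -> list[float]:
    tokens = tokenize(prompt)
    # Initial token vector from a toy hash; real models use learned embeddings.
    vec = [float(hash(t) % 997) for t in tokens]
    for _ in range(num_layers):
        # Mix each position with its neighbor, one pass per "layer".
        vec = [(vec[i] + vec[(i + 1) % len(vec)]) / 2 for i in range(len(vec))]
    return vec

emb = embed("attrition risk by department")
print(len(emb))  # one dimension per token in this toy setup
```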
As per claim 12, Ogawa further makes obvious the method of claim 1, wherein at least a subset of the generative AI models is trained using an initial training dataset generated by a general language model, and the generative AI model is subsequently specialized to generate executable expressions in response to textual prompts from users (ibid-see claim 4, general/specific types of data or decision-science-based intelligence, based on user queries; paragraphs [0138-0170]-detailing each intelligent device performing general or specific types, the general using a general data set, based on intelligence, and the specific based on specific commands/prompts/queries from the user; see paragraphs [0072, 0073]). As per claim 14, Ogawa with Mostafazadeh with Gutta with Gupta further make obvious the method of claim 1, wherein the embedding representation is generated by encoding the received prompt using the generative AI model (ibid, Ogawa, paragraphs [0169, 0170]-his NLP, word2vec representation learning, NLP query, etc.; see also Nair, paragraphs [0081-0084]-his conversion of the query into a vector, with respect to the query, and similarity prompt/query determination).
As per claim 16, claim 16 sets forth limitations similar to claim 1 and is thus rejected under similar reasons and rationale, wherein the system is deemed to embody the method, such that Ogawa with Mostafazadeh with Gupta with Gutta with Bierner with Ahmed with Poirier make obvious a communication system comprising one or more processors configured to perform the operations of (Ogawa, Figs 2, 5, paragraphs [0348, 0366]-see his system, processors instructions discussion): training one or more large language models of a generative AI system on domain-specific data comprising human resources data, wherein the trained one or more large language models generate embeddings in the context of people analytics (ibid-see claim 1, corresponding and similar limitation); providing, as an input to the one or more large language models of the generative AI system, table definitions relating to tables in a data warehouse, wherein the table definitions comprise a table definition for employee records (ibid-see claim 1, corresponding and similar limitation); receiving a prompt from a client device associated with a user, wherein the prompt is related to a text input or a voice input from the client device, and wherein the prompt is in natural language format (ibid-see claim 1, corresponding and similar limitation); providing a form of the received prompt and contextual information to a first large language model of the generative AI system (ibid); obtaining, from the generative Al system, an embedding representation of the received prompt by tokenizing the received prompt into subword units or word-level tokens, that are processed through multiple layers of the first large language model, wherein each of a layer of the first large language model extracts different levels of linguistic information, wherein the embedding representation of the received prompt is a vector representation that encodes semantics and syntactic structure of the received prompt (ibid); obtaining, from the 
generative AI system, an executable expression for responding to the received prompt (ibid), wherein the executable expression is in the form of a query that is described in a query language (ibid), and wherein the one or more large language models of the generative AI system matches the form of the received prompt to one or more tables of the table definitions that were provided as an input to the one or more large language models to be used for the query (ibid); executing the obtained executable expression to query the data warehouse comprising one or more data sources to obtain data for a response to the received prompt, the one or more data sources including employee data (ibid); determining, by inputting the received prompt into the generative Al system, and generating a type of response to be generated based on the received prompt (ibid), wherein the type of response to be generated is classified via the generative AI system (ibid), as either a visual graphic output or a textual output (ibid), wherein the visual graphic output comprises a chart, a graph or a diagram (ibid); determining a context of the user that submitted the received prompt, wherein the context comprises a user’s department (ibid); generating, by a second large language model of the generative AI system, based on the determined context of the user (ibid), a response output corresponding to the determined type of response to be generated (ibid); and providing the response output to the client device associated with the user (ibid). As per claim 19, Ogawa with Mostafazadeh with Gutta with Gupta with Bierner with Ahmed with Poirier further make obvious the communication system of claim 16, wherein receiving the prompt comprises: displaying, within a user interface presented to the client device, a plurality of selectable generated prompts representing suggested prompts the user may wish to submit (ibid, Ogawa, paragraph [0170, 0200-0209]-his “related query suggestions”, “recommendations”, Figs. 
12-18-including suggested prompts, and selection thereof); and receiving a selected one of the selectable generated prompts (ibid). As per claim 21, Ogawa further makes obvious the method of claim 1, further comprising: categorizing prompts into different sets based on their topics or intents (ibid-paragraphs [0168-0170]-his intent prediction and topic classification); updating the categorized prompts based on user input to improve the accuracy and relevance of the categorization (ibid-his neural networks and deep machine learning, for improving accuracy with respect the topic/intent); and allowing users to use a category as a guide to quickly generate prompts related to specific topics (ibid, Figs. 12-19, 17-19, including prompt category guides related to specific topics). As per claim 22, Ogawa further makes obvious the method of claim 1, further comprising: maintaining a conversation context by tracking the history of user interactions and response outputs by storing prompts submitted by the user and corresponding generated response outputs to the stored prompts (paragraph [0223, 0303, 0304, 0324]-his user historical track record, including full conversations, and conversation libraries); and generating follow-up responses to multiple follow-up prompts based on the conversation context and utilizing the generative AI system to generate appropriate response outputs to the follow-up prompts (ibid-his autonomous prediction, using machine learning, and corresponding responses to user follow-up prompts (ibid, see also paragraphs [0093, 0094, 0158-0160]-see his history, user interactions, learning over time based on the conversation context and AI generated predications and recommendations, follow-up prompts, for smarter as increased accuracy and appropriate responses over time) receiving a selection, via the user interface, of a follow-up prompt (paragraph [0165, 0166, 0182-0194]-his prompt provided to the user, and selection of the prompt); and generating, via the 
generative AI system, an additional response output based on the selected suggested prompt (ibid-and corresponding AI agent conversation based on the selected presented information, i.e. suggested meeting data, etc.). As per claim 24, Ogawa with Mostafazadeh with Gutta further makes obvious the method of claim 1, wherein the executable expression includes one or more structured query language statements where information relating to certain types of data are requested corresponding to tables of the table definitions (ibid-see Mostafazadeh executable expression discussion, his SQL and various types of data stored and accessed, based on query, see also paragraphs [0024, 0029], and claim 1, see Gutta, SQL with table name corresponding to table definitions used in generating the query, see claim 1 corresponding and similar motivation). As per claim 27, Ogawa with Mostafazadeh with Gutta with Gupta with Bierner with Ahmed with Poirier further makes obvious the method of claim 1, further comprising: determining a type of query to generate and which table to query from the data warehouse (ibid, see Mostafazadeh also, paragraph [0029, 0049]-his AI and corresponding referenced tables as defined, “data warehouse” and previous executable expression discussion, for specific data types, text, images/graphics, etc. see claim 1, corresponding and similar motivation), where the type of query is in a first query language or in a second query language, the first query language comprising structured query language statements (ibid-paragraph [0029]-his various types of data in the data warehouse for retrieval by AI, using corresponding SQL query, amongst his plurality of query languages, see claim 1, corresponding and similar motivation). As per claim 28, Ogawa with Mostafazadeh with Gutta with Gupta with Bierner with Ahmed with Poirier further makes obvious the method of claim 1, wherein the type of response is a textual type or a graphical type (Ogawa, Figs. 
12-19, including text and graphical responses to queries, paragraphs [0201-0209]-his queries and corresponding response types, including text, and/or graphics). As per claim 31, Ogawa with Mostafazadeh with Gutta with Gupta with Bierner with Ahmed with Poirier further makes obvious the method of claim 1, further comprising: prioritizing generating, by the generative AI system, a response output that includes charts or infographics (ibid, paragraphs [0042-0051]-Gupta-his selection of the chart from the display rendition types, as prioritized over the other types, via the neural network used to select the types, based on intent, and output data, Figs. 2, 3 and 5, items 208, 312 and 504-508, as similarly motivated for response type rendered, see claim 1). Claim(s) 2, 3, 4, 8-9 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Ogawa et al. (Ogawa, US 2021/0134295) in view of Mostafazadeh et al. (Mostafazadeh, US 2022/0343903) in view of Gutta in view of Gupta in view of Bierner in view of Ahmed in view of Poirier, as applied to claim 1, and further in view of Nair et al. (Nair, US 2022/0358295). As per claim 2, Ogawa with Mostafazadeh with Gutta with Gupta make obvious the method of claim 1, wherein obtaining an executable expression for responding to the received prompt comprises: [storing in a vector database previously submitted prompts and their corresponding embeddings]; [performing a similarity search using the obtained embedding representation to identify whether the received prompt is similar to one of previously submitted prompts]; [selecting a similar previously submitted prompt based on a similarity score]; and [when a similar previously submitted prompt is selected via the similarity search, then retrieving for execution as the obtained executable expression the selected similar previously submitted prompt]. 
Ogawa with Mostafazadeh lack explicit teaching of that which Nair teaches: storing in a vector database previously submitted prompts and their corresponding embeddings (Nair, see also paragraphs [0054-0059, 0082-0084]-his questions already used to train the model, Neural Query Similarity, and corresponding index with vector embeddings); performing a similarity search using the obtained embedding representation to identify whether the received prompt is similar to one of previously submitted prompts (ibid, paragraph [0084, 0083]-his similarity search, based on embeddings, using AI, and corresponding historical index for query/question, based on similarity); selecting a similar previously submitted prompt based on a similarity score (Nair, see also paragraphs [0054-0059, 0083]-his question/query and identified prompt/query based on similarity); and when a similar previously submitted prompt is selected via the similarity search, then retrieving for execution as the obtained executable expression the selected similar previously submitted prompt (ibid, paragraphs [0080-0085]-his FAISS similarity, prompt determined, and the similar prompt selected as the executable expression as it is executed and corresponding response is returned, see his quick response). 
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Ogawa with Mostafazadeh and Nair to combine the prior art element of a query and searching sources including database(s) as taught by Ogawa with executables for execution as taught by Mostafazadeh with providing a similarity search for a query, in order to generate a response as taught by Nair as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be executing and generating a quick response based on the indexed similarity queries and corresponding responses (ibid-Nair, Mostafazadeh). 
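For illustration only, and not drawn from any cited reference or from the claims as filed: the claim-2 workflow mapped above (store previously submitted prompts with their embeddings, run a similarity search against the new prompt's embedding, and reuse the cached executable expression when a similarity threshold is met) can be sketched as follows. Every name, vector, threshold, and SQL string below is hypothetical.

```python
import math

# Hypothetical cache: previously submitted prompts mapped to
# (embedding vector, previously generated executable SQL expression).
PROMPT_CACHE = {
    "headcount by department": ([0.9, 0.1, 0.2],
                                "SELECT dept, COUNT(*) FROM employees GROUP BY dept"),
    "average tenure by team":  ([0.1, 0.8, 0.3],
                                "SELECT team, AVG(tenure) FROM employees GROUP BY team"),
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def lookup(embedding, threshold=0.95):
    """Return the cached executable for the most similar stored prompt,
    or None if no stored prompt clears the similarity threshold."""
    best_prompt, best_score = None, -1.0
    for prompt, (vec, _) in PROMPT_CACHE.items():
        score = cosine(embedding, vec)
        if score > best_score:
            best_prompt, best_score = prompt, score
    if best_score >= threshold:
        return PROMPT_CACHE[best_prompt][1]  # reuse the cached SQL
    return None  # no sufficiently similar prompt; fall through to generation
```

A production system of the kind the references describe would use an approximate-nearest-neighbor index (e.g., the FAISS library Nair is cited for) rather than this linear scan; the threshold step corresponds to the claimed exclusion of prompts scoring below a certain value.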
As per claim 3, Ogawa with Mostafazadeh with Gutta with Gupta with Bierner with Ahmed with Poirier with Nair make obvious the method of claim 1, wherein obtaining an executable expression for responding to the received prompt comprises: storing in a vector database previously submitted prompts and their corresponding embeddings (ibid-see claim 1, corresponding and similar limitation); performing a similarity search using the obtained embedding representation to identify whether the received prompt is similar to one of the previously submitted prompts or to a predefined prompt used to generate a particular response (ibid, Nair, see also paragraphs [0054-0059, 0082-0084]-his questions already used to train the model, Neural Query Similarity, and corresponding index with vector embeddings); retrieving, from the vector database, a set of similar prompts based on a similarity score, and excluding prompts having a similarity score below a certain threshold (ibid-his most similar questions, as the threshold, wherein only the most similar questions are chosen, the others are excluded); based on the performed similarity search (ibid), determining that a similar prompt has been identified (ibid-see claim 2, similar prompt discussion, and corresponding and similar limitation and rejection and rationale, Nair, similarity discussion, paragraphs [0083, 0084]); and receiving an executable expression for responding to the received prompt, the executable expression corresponding to the identified similar prompt (ibid-Nair, paragraphs [0051-0058], Figs. 
3-5, see also above claim 1, executable discussion, and answer extraction, his request intent executable sent to web service, executed and response returned, with corresponding and similar motivation), wherein the executable expression comprises a markup language, a scripting language, or a query language and comprises instructions for performing calculations (ibid-see claim 2, corresponding and similar limitation, reasons and rationale). As per claim 4, Ogawa further makes obvious the method of claim 1, wherein the one or more large language models are trained on a labeled dataset comprising various prompt types and corresponding response types (paragraph [0057-0065, 0138-0149]-his general or specific types of data science, and corresponding machine learning, including his different types of machine learning to include the general and specific types, paragraphs [0122-0145, 0155-0160, 0169, 0170]-his machine learning, specific to localized business, people and not specific to others, based on query and corresponding learning, see also paragraphs [0072, 0073]-query, and as specific learning regarding specialized people analytics, see queries, paragraphs [0203-0209]). As per claim 8, Ogawa with Mostafazadeh with Gutta with Gupta with Bierner with Ahmed with Poirier further makes obvious the method of claim 1, but lacks explicit teaching of that which Nair teaches, wherein a similarity search is performed by comparing the generated embedding representation with pre-existing embedding representations of previously submitted prompts stored in a vector database (paragraph [0081-0084]-his similarity search, based on embeddings, using AI, and corresponding historical index for query/question, based on similarity in vector database, see claim 2, similar rationale). 
As per claim 9, Ogawa with Mostafazadeh with Gutta with Gupta with Bierner with Ahmed with Poirier with Nair further makes obvious the method of claim 2, wherein Nair teaches that which Ogawa lacks, wherein the response output is provided through an API endpoint that allows users to access at least part of the generative AI system, wherein the API endpoint receives the prompt from a client device associated with the user, and the API endpoint provides the response output in a structured data format comprising JSON or XML (ibid-Nair, paragraphs [0029-0030, 0048, 0049]-as his endpoint access for training his AI/learner, JSON format for conversational engine response). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Ogawa and Nair to combine the prior art element of a query and searching sources including database(s) as taught by Ogawa with providing a similarity search for a query, and endpoint access for training, and communication format, in order to generate a response as taught by Nair as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be generating a quick response based on the indexed similarity queries and corresponding responses, with endpoint access for training his service on specific topics (ibid-Nair, paragraph [0029]). 
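For illustration only, and not taken from Nair or from the claims: a structured JSON response body of the kind recited in claim 9 (an API endpoint receiving a prompt and returning the response output in a structured format) might be assembled as in this minimal sketch. All field names are hypothetical.

```python
import json

def build_response(prompt: str, response_type: str, payload) -> str:
    """Serialize a generated response into a structured JSON body that an
    API endpoint could return to the client device (illustrative fields)."""
    return json.dumps({
        "prompt": prompt,        # the prompt as received from the client device
        "type": response_type,   # e.g. "text" or "chart"
        "payload": payload,      # the generated response content
    })

# Hypothetical usage: a chart-type response with aggregated employee data.
body = build_response("headcount by department", "chart",
                      {"Sales": 42, "Eng": 17})
```

The same structure could equally be emitted as XML; the claim recites only that the endpoint provide the output in one such structured data format.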
As per claim 15, Ogawa with Mostafazadeh with Gutta with Gupta with Nair further makes obvious the method of claim 1, wherein Nair further teaches that which Ogawa lacks, the pre-existing executable expression is retrieved from a database that associates prompts with corresponding executable expressions (ibid-Nair, paragraphs [0051-0059, 0081-0084]-his indexed database of pre-existing and trained executable expressions via query language, and corresponding queries for extracting the data from the database, see claim 1, similar rationale). Claim(s) 5 and 32 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ogawa et al. (Ogawa, US 2021/0134295) in view of Mostafazadeh et al. (Mostafazadeh, US 2022/0343903) in view of Gutta in view of Gupta in view of Nair, as applied to claim 2 above, and further in view of Hailpern et al. (Hailpern, US 2018/0253414). As per claim 5, Ogawa further makes obvious the method of claim 2, but lacks that which Hailpern teaches, further comprising: determining that the type of response is comparative or involves presenting statistical data (paragraph [0042]-his analysis and presentation type, based on characteristics evaluation, see statistics, and type discussion); based on the determined type of response being comparative or involving presenting statistical data, generating the response output with one or more visual representations of data comprising charts, graphs or diagrams (ibid-his graph, chart, mathematical result output based on the determination). 
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Ogawa and Hailpern to combine the prior art element of a query and searching sources including database(s) as taught by Ogawa with determining a type of response as taught by Hailpern as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be generating a type of output presentation based on the information to be presented (ibid-Hailpern). As per claim 32, Ogawa further makes obvious the method of claim 31, but lacks that which Hailpern teaches, wherein the user indicated a preference to tailor a response output format for visual representations (paragraph [0042]-his analysis and presentation type, based on characteristics evaluation, see statistics, and type discussion, and user preference for output type, as similarly motivated as seen in claim 5). Claim(s) 7 is rejected under 35 U.S.C. 103 as being unpatentable over Ogawa et al. (Ogawa, US 2021/0134295) in view of Mostafazadeh et al. (Mostafazadeh, US 2022/0343903) in view of Gutta in view of Gupta in view of Bierner in view of Ahmed in view of Poirier in view of Nair et al. (Nair, US 2022/0358295), as applied to claim 2, and further in view of Tremblay et al. (Tremblay, US 2022/0343250). 
As per claim 7, Ogawa with Mostafazadeh with Gutta with Gupta with Nair make obvious the method of claim 2, wherein Tremblay teaches that which Ogawa lacks, further comprising: determining that a response type involves recommending actions (paragraph [0207, 0213, 0226, 0304, 0560, 0561]-his response to user submission, his recommendation including actions, generate service system, etc.); and generating the response output comprising interactive elements comprising clickable links, buttons or menus (ibid-his suggestion, generation of menu, other selectable/clickable service features, selecting from a menu, AI generation of menu, button, text box, or the like, as interactive elements). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Ogawa and Tremblay to combine the prior art element of a query and searching sources including database(s) as taught by Ogawa with determining a response type includes actions and generating actionable interactive items, such as menus, buttons, etc. as taught by Tremblay as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be generating an interactive actionable response item (ibid-Tremblay-paragraph [0226]-see his actionable service features, including a multitude of action types and features, determined and generated by the AI system). Claim(s) 33 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ogawa et al. (Ogawa, US 2021/0134295) in view of Mostafazadeh et al. 
(Mostafazadeh, US 2022/0343903) in view of Gutta in view of Gupta in view of Bierner in view of Ahmed in view of Poirier, as applied to claim 1 above, and further in view of Chu et al. (Chu, US 2009/0063541). As per claim 33, Ogawa further makes obvious the method of claim 1, but lacks that which Chu teaches, further comprising: determining a level of intricacy of the received prompt (paragraph [0003]-his automatic parallelism, for complex queries); and based on the determined level of intricacy, performing parallel processing to optimize execution of the executable expression (ibid). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Ogawa and Chu to combine the prior art element of a query and searching sources including database(s) as taught by Ogawa with automatic parallelism for processing complex queries as taught by Chu as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be reducing time and expense in complex computing (ibid-Chu). Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ogawa et al. (Ogawa, US 2021/0134295) in view of Mostafazadeh in view of Gutta in view of Gupta in view of Nair, as applied to claim 2 above, and further in view of Cole et al. (Cole, US 11,765,267). 
As per claim 6, Ogawa with Mostafazadeh with Gutta with Gupta with Nair make obvious the method of claim 2 but lack teaching of that which Cole teaches, further comprising: collecting user feedback on the provided response output (Fig. 6, C.7 lines 9-59-as his user feedback on a provided response); and retraining and updating at least a subset of the generative AI models using the collected user feedback in order to improve the generative AI system over time, wherein the user feedback is used as available data used as sources for the generative AI system (ibid-see above figures, retraining cycle, as his updating his ML model, improving an AI system, based on the user feedback). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Ogawa and Mostafazadeh and Cole to combine the prior art element of a query and searching sources including database(s) as taught by Ogawa with user feedback to improve AI as taught by Cole as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be generating an improved response based on the user feedback and improved accuracy of the model(s) (ibid-Nair). Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ogawa et al. (Ogawa, US 2021/0134295) in view of Mostafazadeh in view of Gutta in view of Gupta in view of Bierner in view of Ahmed in view of Poirier in view of Nair, as applied to claim 2, and further in view of Burns, Sr. et al. (Burns, US 11,232,383), and further in view of Stowell et al. 
(Stowell, US 2024/0288381). As per claim 10, Ogawa with Mostafazadeh with Gutta with Gupta with Bierner with Ahmed with Poirier with Nair further makes obvious the method of claim 2, but lacks teaching of that which Burns teaches, further comprising: monitoring the performance of the generative AI system using a centralized server serving AI models (C.14 lines 25-40-his monitoring performance of all data received by AI system, using server platform); [collecting data on performance metrics of one or more generative AI models, the data comprising accuracy, response time, and resource utilization]; and refining at least a subset of the generative AI system based on the results of the monitoring, wherein the refining comprises retraining the one or more generative AI models (ibid-his updating/improving, thus retraining, the AI engine based on the modeled data resulting from the monitoring). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Ogawa and Nair and Burns to combine the prior art element of a query and searching sources including database(s) as taught by Ogawa with monitoring the performance of AI as taught by Burns as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be generating an improved response based on the monitored and resultant data, for improved accuracy of the model(s) (ibid-Nair, Burns). 
The above combination lacks teaching of that which Stowell teaches, collecting data on performance metrics of one or more generative AI models, the data comprising accuracy, response time, and resource utilization (paragraphs [0672-0674]-his monitoring his AI classification system for accuracy, response time, and resource utilization). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Ogawa and Nair and Burns and Stowell to combine the prior art element of a query and searching sources including database(s) as taught by Ogawa with monitoring the performance of AI as taught by Burns with monitoring to include collection of performance metrics as taught by Stowell, as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be generating an improved response based on the monitored and resultant data, for improved accuracy of the model(s) (ibid-Nair, Burns, Stowell). Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ogawa et al. (Ogawa, US 2021/0134295) in view of Mostafazadeh in view of Gutta in view of Gupta in view of Bierner in view of Ahmed in view of Poirier, as applied to claim 2, and further in view of Brown (US 10,877,964) in view of Gomes (US 8,375,008). 
As per claim 11, Ogawa with Mostafazadeh with Gutta with Gupta with Bierner with Ahmed with Poirier with Nair further makes obvious the method of claim 2, but lacks that which Brown teaches, further comprising: implementing one or more access controls configured such that AI-generated executable expressions have read-only access permissions to the data warehouse, wherein the one or more access controls comprise access control policies and role-based access controls that define and enforce the permissions of the AI-generated executable instructions (C.7 lines 16-48, C.24 lines 28-55-as his AI generated executable expressions, and corresponding read-only databases, C.8 lines 1-39-his adaptive AI intelligence, and particular access of databases, and roles of the user, wherein the AI expressions are executed based on the defined parameters for fulfilling the request). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Ogawa and Nair and Brown to combine the prior art element of a query and searching sources including database(s) as taught by Ogawa with an AI-generated executable expression with a read-only database as taught by Brown as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be extracting data using a query format/expression from a memory file, such as a Read-Only file, wherein the file may not be written or manipulated, such that users submitting queries are not granted write access privileges (ibid-Brown, Gomes-C.12 lines 46-67-his read-only access to a data warehouse or dataset, based on allowed/permission). Claim(s) 29 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Ogawa et al. (Ogawa, US 2021/0134295) in view of Mostafazadeh et al. (Mostafazadeh, US 2022/0343903) and further in view of Gutta et al. (Gutta, US 11,520,815) in view of Gupta in view of Bierner in view of Ahmed in view of Poirier, as applied to claim 1 above, and further in view of Rathod (US 2022/0179665). As per claim 29, Ogawa with Mostafazadeh with Gutta with Gupta make obvious the method of claim 1, but lack that which is taught by Rathod, the method further comprising: providing for display a user interface comprising a text input field and a listing of a plurality of categories (paragraph [0934]-Fig. 83, his text input field, and corresponding “categories suggesting list”, on his device interface); receiving a selection of a category (ibid-his selection of categories); categorizing the prompt according to the selected category (ibid-his structured query language query generated based on the input text prompt/query by the user); and wherein the form of the received prompt includes the selected category (ibid-his SQL form of the prompt, including the category as selected by the user); and [determining and multiplexing one of the one or more large language models to deploy based on the selected category]. 
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Ogawa and Mostafazadeh and Gutta to combine the prior art element of a query and searching sources including database(s) as taught by Ogawa with obtaining an executable expression in the form of a query that is described in query language and executing the executable expression to generate a response, wherein the executable hereinafter is noted based on Mostafazadeh and as taught by Mostafazadeh with the query interface with category filters as taught by Rathod as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be generating a query and response system interface based on categories, able to communicate with a data store/warehouse based on the underlying schema of the structured datasets (ibid-Mostafazadeh, ibid-Rathod paragraph [0934, 0790]). Poirier teaches that which the others lack, determining and multiplexing one of the one or more large language models to deploy based on the selected category (paragraph [0112-0114]-his prompt, his selected category, and determining multiple agents of LLMs, selection of and managing of the multiple agents, as multiplexing and deployed for appropriate model response retrieval). 
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Ogawa and Mostafazadeh and Gutta and Poirier to combine the prior art element of a query and searching sources including database(s) as taught by Ogawa with obtaining an executable expression in the form of a query that is described in query language and executing the executable expression to generate a response, wherein the executable hereinafter is noted based on Mostafazadeh and as taught by Mostafazadeh with the query interface with category filters, and selecting a category as taught by Rathod with determining and multiplexing large language models based on a selected category, as taught by Poirier, as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be generating a query and response system interface based on categories, able to communicate with a data store/warehouse based on the underlying schema of the structured datasets, and determining, selecting and managing appropriate language models for execution of the prompt, based on corresponding categories and agents that are capable of fulfilling the prompt request (ibid-Mostafazadeh, ibid-Rathod paragraph [0934, 0790], ibid-Poirier-see his supervisory orchestrator discussion, with respect to the multiple AI models/agents). 
As per claim 30, Ogawa with Mostafazadeh with Gutta with Gupta with Bierner with Ahmed with Poirier with Rathod make obvious the method of claim 29, further comprising: providing for display, via the user interface, an updated listing of a plurality of categories based on input received via the text input field (ibid-Rathod, paragraph [0934, 0935, 0949-0952]-his presenting the category list(s), and updating thereof, based on user interactions, including text field inputs). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Ogawa and Mostafazadeh and Gutta to combine the prior art element of a query and searching sources including database(s) as taught by Ogawa with obtaining an executable expression in the form of a query that is described in query language and executing the executable expression to generate a response, wherein the executable hereinafter is noted based on Mostafazadeh and as taught by Mostafazadeh with the query interface with category filters as taught by Rathod as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be generating a query and response system interface based on updated categories with respect to user text input (ibid-Rathod paragraph [0934, 0949-0951, 0790]). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (See PTO-892). Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. 
Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAMONT M SPOONER, whose telephone number is (571) 272-7613. The examiner can normally be reached 8:00 AM to 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Washburn, can be reached at (571) 272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LAMONT M SPOONER/
Primary Examiner, Art Unit 2657
1/29/2026
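The reply-period rules recited in the final-action paragraph above reduce to simple date arithmetic: a three-month shortened statutory period, extendable under 37 CFR 1.136(a), capped at six months from mailing. The sketch below is illustrative only and makes assumptions the action does not (it omits the advisory-action wrinkle, and `add_months` naively assumes the day-of-month exists in the target month):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # Naive month arithmetic; assumes the target month has the same day-of-month.
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

def reply_deadline(mailed: date, extension_months: int = 0) -> date:
    """Shortened statutory period: 3 months from mailing; extensions under
    37 CFR 1.136(a) may push it out, but never past the 6-month statutory cap."""
    ssp = add_months(mailed, 3)
    return min(add_months(ssp, extension_months), add_months(mailed, 6))

# Mailing date of this final action:
print(reply_deadline(date(2026, 1, 29)))     # 2026-04-29 (shortened period)
print(reply_deadline(date(2026, 1, 29), 3))  # 2026-07-29 (statutory maximum)
```

With a January 29, 2026 mailing date, the shortened period runs to April 29, 2026, and no extension can carry the reply past July 29, 2026.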

Prosecution Timeline

Jul 06, 2023
Application Filed
Sep 22, 2023
Non-Final Rejection — §103
Nov 27, 2023
Response Filed
Dec 07, 2023
Final Rejection — §103
Mar 07, 2024
Examiner Interview Summary
Mar 07, 2024
Applicant Interview (Telephonic)
Apr 05, 2024
Request for Continued Examination
Apr 09, 2024
Response after Non-Final Action
Apr 20, 2024
Non-Final Rejection — §103
Aug 20, 2024
Examiner Interview Summary
Aug 20, 2024
Applicant Interview (Telephonic)
Aug 21, 2024
Response Filed
Nov 11, 2024
Final Rejection — §103
May 15, 2025
Request for Continued Examination
May 16, 2025
Response after Non-Final Action
May 17, 2025
Non-Final Rejection — §103
Nov 21, 2025
Response Filed
Jan 29, 2026
Final Rejection — §103 (current)
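The timeline above supports some quick arithmetic. A minimal sketch, with the filing date and the six rejection events transcribed from the entries above, counting rejections as OA rounds:

```python
from datetime import date

# Filing date and rejection events transcribed from the prosecution timeline above.
events = [
    (date(2023, 7, 6), "Application Filed"),
    (date(2023, 9, 22), "Non-Final Rejection"),
    (date(2023, 12, 7), "Final Rejection"),
    (date(2024, 4, 20), "Non-Final Rejection"),
    (date(2024, 11, 11), "Final Rejection"),
    (date(2025, 5, 17), "Non-Final Rejection"),
    (date(2026, 1, 29), "Final Rejection"),
]

oa_rounds = sum(1 for _, label in events if "Rejection" in label)
elapsed_days = (events[-1][0] - events[0][0]).days
print(oa_rounds, elapsed_days // 30)  # 6 31  (six OA rounds over roughly 31 months)
```

That is about 2 years 7 months of prosecution so far, consistent with a 3y 4m median time to grant and the projected 7-8 total OA rounds.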

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602542
Text Analysis System, and Characteristic Evaluation System for Message Exchange Using the Same
2y 5m to grant Granted Apr 14, 2026
Patent 12596881
COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA
2y 5m to grant Granted Apr 07, 2026
Patent 12591737
Systems and Methods for Word Offensiveness Detection and Processing Using Weighted Dictionaries and Normalization
2y 5m to grant Granted Mar 31, 2026
Patent 12572744
Generative Systems and Methods of Feature Extraction for Enhancing Entity Resolution for Watchlist Screening
2y 5m to grant Granted Mar 10, 2026
Patent 12518107
COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
74%
Grant Probability
86%
With Interview (+11.8%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 603 resolved cases by this examiner. Grant probability derived from career allow rate.
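As a rough sketch of how these figures relate: 445 grants over 603 resolved cases gives the 74% career allow rate, and adding the 11.8% interview lift gives the 86% with-interview figure. The additive-lift model below is an assumption made for illustration, not the tool's documented methodology:

```python
def grant_probability(granted: int, resolved: int, interview_lift: float = 0.0) -> float:
    """Career allow rate, optionally shifted by an additive interview lift.
    (The additive model is an illustrative assumption.)"""
    return min(granted / resolved + interview_lift, 1.0)

base = grant_probability(445, 603)                                  # ~0.738 -> 74%
with_interview = grant_probability(445, 603, interview_lift=0.118)  # ~0.856 -> 86%
```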
