Prosecution Insights
Last updated: April 19, 2026
Application No. 18/628,215

DATA TRANSFORMATION FOR WEB SEARCH USING PROPRIETARY DATA

Final Rejection (§103)
Filed: Apr 05, 2024
Examiner: MARI VALCARCEL, FERNANDO MARIANO
Art Unit: 2159
Tech Center: 2100 — Computer Architecture & Software
Assignee: Zagmo Corporation
OA Round: 4 (Final)
Grant Probability: 49% (Moderate)
OA Rounds: 5-6
To Grant: 3y 10m
With Interview: 71%

Examiner Intelligence

Career Allow Rate: 49% (grants 49% of resolved cases; 71 granted / 145 resolved; -6.0% vs TC avg)
Interview Lift: +22.0% (strong lift for resolved cases with interview)
Avg Prosecution: 3y 10m typical timeline; 40 applications currently pending
Total Applications: 185 across all art units (career history)
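The headline examiner metrics above are internally consistent, and the relationship between them can be sketched in a few lines. This is an illustrative calculation only (the function names are not from any actual analytics API): the allow rate is granted over resolved cases, and the interview lift is the with-interview allowance rate minus that baseline.

```python
# Illustrative sketch (not an actual API) of how the examiner metrics
# above relate: allow rate = granted / resolved, and interview lift =
# allowance rate with an interview minus the overall baseline.

def allow_rate(granted: int, resolved: int) -> float:
    """Share of resolved cases that ended in a grant."""
    return granted / resolved

def interview_lift(rate_with_interview: float, baseline: float) -> float:
    """Percentage-point gain attributable to conducting an interview."""
    return rate_with_interview - baseline

baseline = allow_rate(71, 145)        # 71 granted of 145 resolved
lift = interview_lift(0.71, baseline) # 71% allowance with interview
print(f"{baseline:.0%}")              # -> 49%
print(f"{lift:+.1%}")                 # -> +22.0%
```

This confirms the figures shown: 71/145 rounds to the 49% career allow rate, and the 71% with-interview rate implies the reported +22.0% lift.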

Statute-Specific Performance

§101: 13.5% (-26.5% vs TC avg)
§103: 66.1% (+26.1% vs TC avg)
§102: 13.2% (-26.8% vs TC avg)
§112: 5.1% (-34.9% vs TC avg)
Deltas are vs. the Tech Center average estimate • Based on career data from 145 resolved cases
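As a quick consistency check on the table above, each "vs TC avg" delta should equal the examiner's rate minus the Tech Center average. Backing the average out of each row (the values are copied from the table; the implied ~40% TC average is derived here, not stated in the source) shows the four rows share a common baseline:

```python
# Consistency check for the statute table: each "vs TC avg" delta should
# equal the examiner's rate minus the Tech Center average. Backing the
# average out of each row implies a TC average near 40% for every statute.
rates = {
    "§101": (13.5, -26.5),  # (examiner rate %, delta vs TC avg %)
    "§103": (66.1, +26.1),
    "§102": (13.2, -26.8),
    "§112": (5.1, -34.9),
}

for statute, (examiner_pct, delta_pct) in rates.items():
    implied_tc_avg = examiner_pct - delta_pct
    print(f"{statute}: implied TC avg {implied_tc_avg:.1f}%")
```

Every row backs out to a 40.0% Tech Center average, so the deltas are arithmetically consistent with a single baseline estimate.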

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is in response to applicant’s arguments and amendments filed 12/19/2025, which are in response to the USPTO Office Action mailed 10/01/2025. Applicant’s arguments have been considered with the results that follow: THIS ACTION IS MADE FINAL.

Status of Claims

Claims 1-20 are currently pending in the present application.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 7-8, 13, 15, 17 and 19-23 are rejected under 35 U.S.C. 103 as being unpatentable over Samdani et al. (US Patent No. 11,803,556; Date of Patent: Oct. 31, 2023) in view of Jain et al. (US Patent No. 11,947,923; Date of Patent: Apr. 2, 2024), and further in view of ZHUANG et al. (China Invention Application Publication No. CN 117494786 A; Pub. Date: Feb. 2, 2024).

Regarding independent claim 1, Samdani discloses a system, comprising: a communication interface; See FIG. 8, (Disclosing a system for creating and searching knowledge base articles inside an organization by supporting operation as a Software-as-a-Service (SaaS) across a plurality of organizations. FIG.
8 illustrates a computing device 80-2 embodied as user device(s) 104 or query server 108 including a communication interface(s) 810 and processor(s) 806, i.e. a communication interface; and a processor coupled to the communication interface.) and a processor coupled to the communication interface and configured to: receive a user input data via the communication interface; See Col. 3, lines 9-11 & 17-20, (A user may provide a query to the system such as "What is ACME's recruiting policy?", i.e. receive a user input data via the communication interface.) use the user input data as an input to a knowledge retrieval engine configured to generate in response to the input a generated response that is derived at least in part from a set of proprietary data; See Col. 3, lines 9-14, (An augmented query may be generated in response to receiving a user query such as via natural language processing techniques. The augmented query may be directed to a private corpus of articles stored in a knowledge base available only to users from a particular organization, i.e. use the user input data as an input to a knowledge retrieval engine (e.g. the user query is directed to articles of a knowledge base of private data) configured to generate in response to the input a generated response that is derived at least in part from a set of proprietary data (e.g. the input query is used to build an augmented query used to retrieve data from the knowledge base).) wherein proprietary data comprises any data unique to an organization or individual, comprising at least one of the following: word processor documents, spreadsheets, slide presentations, natural language documents, tabular data, or knowledge bases; See Col. 3, lines 35-40, (The augmented query may be directed to a private corpus of articles stored in a knowledge base available only to users from a particular organization. Note Col. 7 lines 52-53 wherein an article may comprise a document, web page or other information, i.e. 
wherein proprietary data comprises any data unique to an organization or individual, comprising at least one of the following: word processor documents (e.g. the articles may comprise text), knowledge bases (e.g. the articles represent a private corpus of documents of a knowledge base).) and use the generated response to generate a set of web search results. See Col. 3, lines 52-55, (The system may determine an appropriate response to the query by comparing the augmented query to existing data entries in a knowledge base. The system may provide search results corresponding to previously received queries based on a correspondence between the augmented query and previously received queries.)

Samdani does not disclose the step wherein the knowledge retrieval engine is a system that provides a response to a natural language chat prompt and/or a search query by leveraging proprietary data to improve an input to a web search engine; wherein leveraging proprietary data to improve the input to the web search engine comprises adding a structural context and/or a contextual hint to the generated response;

Jain discloses the step wherein the knowledge retrieval engine is a system that provides a response to a natural language chat prompt and/or a search query by leveraging proprietary data to improve an input to a web search engine; See FIG. 3 & Col. 17, lines 7-13, (Disclosing a system for managing multimedia content obtained by large language models and/or generated by other generative models. FIG. 3 illustrates method 300 comprising step 352 of receiving a natural language input from a client device of a user, which is then processed using an LLM at step 354 to generate an LLM output including at least the input of step 352. The LLM output is generated using an explicitation LLM that structures the user input and/or other content such as a context or other prompts separate from the user input for processing by the LLM. Note Col.
6, lines 26-49, wherein the system may supplement or re-write a user input to generate an implied input, i.e. wherein the knowledge retrieval engine is a system that provides a response to a natural language chat prompt and/or a search query (e.g. See FIG. 3 wherein the user input is given to an LLM in order to generate a request to retrieve multimedia content) by leveraging proprietary data (e.g. the LLM uses a user input and additional context data to generate an LLM output) to improve an input to a web search engine (e.g. the LLM output is used to search a plurality of search systems).) wherein leveraging proprietary data to improve the input to the web search engine comprises adding a structural context and/or a contextual hint to the generated response; See FIG. 3 & Col. 17, lines 7-13, (FIG. 3 illustrates method 300 comprising step 352 of receiving a natural language input from a client device of a user, which is then processed using an LLM at step 354 to generate an LLM output including at least the input of step 352. The LLM output is generated using an explicitation LLM that structures the user input and/or other content such as a context or other prompts separate from the user input for processing by the LLM.) See Col. 6, lines 26-49, (The system comprises context engine 113 which may determine a current context based on a current state of a dialog session, profile data, and/or a current location of a client device 110. The system may supplement or re-write a user input to generate an implied input. Note Col. 6, lines 60-66 wherein implied input engine 114 may generate an implied NL-based input using one or more past or current contexts from context engine 113 and determining to submit the implied NL-based input, i.e. wherein leveraging proprietary data to improve the input to the web search engine comprises adding a structural context and/or a contextual hint to the generated response (e.g.
the implied NL-based input is based on one or more past or current contexts and is generated independent of any explicit NL-based input provided by a user).)

Samdani and Jain are analogous art because they are in the same field of endeavor, AI-based search systems. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Samdani to include the method of re-writing user queries via LLMs as disclosed by Jain. Col. 11, lines 5-18 of Jain disclose that the multimedia content management system 120 is rendered at client device 110 to ensure data security for a user is maintained in order to mitigate and/or eliminate occurrences of nefarious activity. The system may also process requests via parallel processing which results in a reduction in latency in causing responses to be rendered at the client device.

Samdani-Jain does not disclose the step wherein the knowledge retrieval engine comprises a large language model (LLM) configured for generating web search engine input based at least in part on proprietary data, in part by using the proprietary data to fine-tune the LLM or using the proprietary data to create a searchable index to return extractions to be input to the LLM; and use the generated response comprising the structural context and/or the contextual hint to generate a set of web search results.

ZHUANG discloses the step wherein the knowledge retrieval engine comprises a large language model (LLM) configured for generating web search engine input based at least in part on proprietary data, in part by using the proprietary data to fine-tune the LLM or using the proprietary data to create a searchable index to return extractions to be input to the LLM; See Pg. 2, Paragraph 4, (Disclosing a system for generating a hot search of a large language model based on fine tuning.
The system comprises performing hot event extraction based on preprocessed data of a microblog including a microblog text event. Input and output of a large language model (LLM) is determined based on the extracted data which selects the microblog with a highest heat metric as input and outputting a search term related to the microblog having the highest heat as an expected output. Note Pg. 2, Paragraph 8 wherein the system provides a trim-based LLM hot search generating system comprising an adjustment module for generating a fine-tuned LLM.) and use the generated response comprising the structural context and/or the contextual hint to generate a set of web search results. See Pg. 6, Paragraphs 5-6, (An extraction module extracts each element in the microblog text event according to the microblog text under the hot search and selects a microblog with a highest correlation degree with the hot search term, i.e. a structural context.) See Pg. 6, Paragraph 10, (The trimmed LLM is packaged into an interface form to perform heat supply search to generate a product call, i.e. use the generated response (e.g. the fine-tuned LLM output) comprising the structural context (e.g. the selected microblog text data having a highest correlation degree with a hot search term) and/or the contextual hint to generate a set of web search results (e.g. LLM output is used to generate a product call).)

Samdani, Jain and ZHUANG are analogous art because they are in the same field of endeavor, search operations using large language models. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Samdani-Jain to include the method of fine-tuning an LLM using microblog text data as disclosed by ZHUANG. Pg.
3, Paragraph 3 of ZHUANG discloses that the system may fine-tune a model in order to improve the process of generating a hot search term which reduces the influence of irrelevant information, which effectively improves user experience and avoids public opinion risks.

Regarding dependent claim 2, As discussed above with claim 1, Samdani-Jain-ZHUANG discloses all of the limitations. Jain further discloses the step wherein the knowledge retrieval engine is a fine-tuned LLM. See Col. 7, lines 50-60, (Multimedia content management system 120 comprises fine-tuning engine 130.) See Col. 9, lines 9-17, (An LLM may be fine-tuned to obtain multimedia content via training instances, i.e. wherein the knowledge retrieval engine is a fine-tuned LLM.)

Regarding dependent claim 3, As discussed above with claim 2, Samdani-Jain-ZHUANG discloses all of the limitations. Jain further discloses the step wherein the generated response is a hypothetical answer of the fine-tuned LLM. See Col. 17, lines 7-20, (FIG. 3 illustrates method 300 comprising step 354 wherein the LLM input is used to generate an LLM output used to retrieve multimedia content. Note that the LLM may be fine-tuned to retrieve multimedia content, i.e. wherein the generated response is a hypothetical answer of the fine-tuned LLM.) Paragraph [0041] of Applicant's Specification defines a "hypothetical answer" as "an answer that provides an answer format and/or structure with contextual content associated with the proprietary data". The multimedia content of Jain is retrieved and presented to the user based on the LLM output and is therefore identical in function to Applicant’s description of a hypothetical answer.

Regarding dependent claim 7, As discussed above with claim 1, Samdani-Jain-ZHUANG discloses all of the limitations. Samdani further discloses the step wherein the proprietary data comprises customer-owned data. See Col.
35, lines 35-40, (The system is configured to search a private corpus containing articles available only to users affiliated with the organization that controls the knowledge base, i.e. wherein the proprietary data comprises customer-owned data.)

Regarding dependent claim 8, As discussed above with claim 7, Samdani-Jain-ZHUANG discloses all of the limitations. Samdani further discloses the step wherein the customer-owned data is associated with a first customer included in a plurality of customers associated with the system, and the customer-owned data is used as the proprietary data only for a subset of the plurality of customers that includes the first customer. See Col. 35, lines 35-40, (The system is configured to search a private corpus containing articles available only to users affiliated with the organization that controls the knowledge base, i.e. wherein the proprietary data comprises customer-owned data (e.g. the knowledge base is associated with an organization comprising users that have access to the private corpus of a knowledge base), and the customer-owned data is used as the proprietary data only for a subset of the plurality of customers that includes the first customer (e.g. the system may provide a SaaS to multiple organizations where users may request information from an organization they are affiliated with).)

Regarding dependent claim 13, As discussed above with claim 1, Samdani-Jain-ZHUANG discloses all of the limitations. Jain further discloses the step wherein the user input data comprises a natural language prompt or a search query for a chatbot or a search query. See FIG. 3 & Col. 16, lines 62-64, (Method 300 comprises step 352 wherein the system receives a natural language (NL) based input associated with a client device, i.e. wherein the user input data comprises a natural language prompt or a search query for a chatbot or a search query.)
Regarding dependent claim 15, As discussed above with claim 1, Samdani-Jain-ZHUANG discloses all of the limitations. Jain further discloses the step wherein the set of web search results is presented to a user via a chatbot or via a search engine results page or via a link to a search engine results page. See FIG. 6A & Col. 25, lines 21-31, (FIG. 6A illustrates a graphical user interface wherein a user may provide an NL based input to the system via a conversational LLM in order to generate or retrieve multimedia content items. The generative multimedia content item can be rendered on client device 110, i.e. wherein the set of web search results is presented to a user via a chatbot.)

Regarding independent claim 17, Samdani discloses a system, comprising: a communication interface; and a processor coupled to the communication interface, See FIG. 8, (Disclosing a system for creating and searching knowledge base articles inside an organization by supporting operation as a Software-as-a-Service (SaaS) across a plurality of organizations. FIG. 8 illustrates a computing device 80-2 embodied as user device(s) 104 or query server 108 including a communication interface(s) 810 and processor(s) 806, i.e. a communication interface; and a processor coupled to the communication interface.) and configured to: receive a user input data via the communication interface; See Col. 3, lines 9-11 & 17-20, (A user may provide a query to the system such as "What is ACME's recruiting policy?", i.e. receive a user input data via the communication interface.) use the user input data as a first input to a first knowledge retrieval engine configured to generate in response to the first input an intermediate response that is derived at least in part from a set of proprietary data; See FIG. 8, (Disclosing a system for creating and searching knowledge base articles inside an organization by supporting operation as a Software-as-a-Service (SaaS) across a plurality of organizations. FIG.
8 illustrates a computing device 80-2 embodied as user device(s) 104 or query server 108 including a communication interface(s) 810 and processor(s) 806, i.e. a communication interface; and a processor coupled to the communication interface.) wherein proprietary data comprises any data unique to an organization or individual, comprising at least one of the following: word processor documents, spreadsheets, slide presentations, natural language documents, tabular data, or knowledge bases; See Col. 3, lines 9-14, (An augmented query may be generated in response to receiving a user query such as via natural language processing techniques. The augmented query may be directed to a private corpus of articles stored in a knowledge base available only to users from a particular organization, i.e. use the user input data as an input to a knowledge retrieval engine (e.g. the user query is directed to articles of a knowledge base of private data) configured to generate in response to the input a generated response that is derived at least in part from a set of proprietary data (e.g. the input query is used to build an augmented query used to retrieve data from the knowledge base).) 

Samdani does not disclose the step wherein the knowledge retrieval engine is a system that provides a response to a natural language chat prompt and/or a search query by leveraging proprietary data to improve an input to a web search engine; wherein leveraging proprietary data to improve the input to the web search engine comprises adding a structural context and/or a contextual hint to the intermediate response at least in part to generate web search engine input; wherein the knowledge retrieval engine comprises a large language model (LLM) configured for generating web search engine input based at least in part on proprietary data; and use the intermediate response as a second input to a second knowledge retrieval engine configured to generate in response to the second input a generated response that is derived at least in part from the set of proprietary data.

Jain discloses the step wherein the knowledge retrieval engine is a system that provides a response to a natural language chat prompt and/or a search query by leveraging proprietary data to improve an input to a web search engine; See FIG. 3 & Col. 17, lines 7-13, (Disclosing a system for managing multimedia content obtained by large language models and/or generated by other generative models. FIG. 3 illustrates method 300 comprising step 352 of receiving a natural language input from a client device of a user, which is then processed using an LLM at step 354 to generate an LLM output including at least the input of step 352. The LLM output is generated using an explicitation LLM that structures the user input and/or other content such as a context or other prompts separate from the user input for processing by the LLM. Note Col. 6, lines 26-49, wherein the system may supplement or re-write a user input to generate an implied input, i.e. wherein the knowledge retrieval engine is a system that provides a response to a natural language chat prompt and/or a search query (e.g. See FIG.
3 wherein the user input is given to an LLM in order to generate a request to retrieve multimedia content) by leveraging proprietary data (e.g. the LLM uses a user input and additional context data to generate an LLM output) to improve an input to a web search engine (e.g. the LLM output is used to search a plurality of search systems).) wherein leveraging proprietary data to improve the input to the web search engine comprises adding a structural context and/or a contextual hint to the intermediate response at least in part to generate web search engine input; See FIG. 3 & Col. 17, lines 7-13, (FIG. 3 illustrates method 300 comprising step 352 of receiving a natural language input from a client device of a user, which is then processed using an LLM at step 354 to generate an LLM output including at least the input of step 352. The LLM output is generated using an explicitation LLM that structures the user input and/or other content such as a context or other prompts separate from the user input for processing by the LLM.) See Col. 6, lines 26-49, (The system comprises context engine 113 which may determine a current context based on a current state of a dialog session, profile data, and/or a current location of a client device 110. The system may supplement or re-write a user input to generate an implied input. Note Col. 6, lines 60-66 wherein implied input engine 114 may generate an implied NL-based input using one or more past or current contexts from context engine 113 and determining to submit the implied NL-based input, i.e. wherein leveraging proprietary data to improve the input to the web search engine comprises adding a structural context and/or a contextual hint to the intermediate response at least in part to generate web search engine input (e.g.
the implied NL-based input is based on one or more past or current contexts and is generated independent of any explicit NL-based input provided by a user);) wherein the knowledge retrieval engine comprises a large language model (LLM) configured for generating web search engine input based at least in part on proprietary data; See Col. 6, lines 26-49, (The system comprises context engine 113 which may determine a current context based on a current state of a dialog session, profile data, and/or a current location of a client device 110. The system may supplement or re-write a user input to generate an implied input.) See Col. 11, lines 58-67, (Explicitation LLM engine 141 may generate one or more queries based on a natural language user input (NL based input 201) and submit the query to one or more search systems, i.e. wherein the knowledge retrieval engine comprises a large language model (LLM) configured for generating web search engine input based at least in part on proprietary data (e.g. the one or more queries generated by the LLM engine include user input data and context data).)

Samdani and Jain are analogous art because they are in the same field of endeavor, AI-based search systems. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Samdani to include the method of re-writing user queries via LLMs as disclosed by Jain. Col. 11, lines 5-18 of Jain disclose that the multimedia content management system 120 is rendered at client device 110 to ensure data security for a user is maintained in order to mitigate and/or eliminate occurrences of nefarious activity. The system may also process requests via parallel processing which results in a reduction in latency in causing responses to be rendered at the client device.

Samdani-Jain does not disclose the step wherein the knowledge retrieval engine comprises an LLM configured for generating web search engine input based at least in part on proprietary data, in part by using the proprietary data to fine-tune the LLM or using the proprietary data to create a searchable index to return extractions to be input to the LLM; and use the intermediate response comprising the structural context and/or the contextual hint as a second input to a second knowledge retrieval engine configured to generate in response to the second input a generated response that is derived at least in part from the set of proprietary data.

ZHUANG discloses the step wherein the knowledge retrieval engine comprises an LLM configured for generating web search engine input based at least in part on proprietary data, in part by using the proprietary data to fine-tune the LLM or using the proprietary data to create a searchable index to return extractions to be input to the LLM; See Pg. 2, Paragraph 4, (Disclosing a system for generating a hot search of a large language model based on fine tuning. The system comprises performing hot event extraction based on preprocessed data of a microblog including a microblog text event. Input and output of a large language model (LLM) is determined based on the extracted data which selects the microblog with a highest heat metric as input and outputting a search term related to the microblog having the highest heat as an expected output. Note Pg. 2, Paragraph 8 wherein the system provides a trim-based LLM hot search generating system comprising an adjustment module for generating a fine-tuned LLM.) and use the intermediate response comprising the structural context and/or the contextual hint as a second input to a second knowledge retrieval engine configured to generate in response to the second input a generated response that is derived at least in part from the set of proprietary data. See Pg.
6, Paragraphs 5-6, (An extraction module extracts each element in the microblog text event according to the microblog text under the hot search and selects a microblog with a highest correlation degree with the hot search term, i.e. a structural context.) See Pg. 6, Paragraph 10, (The trimmed LLM is packaged into an interface form to perform heat supply search to generate a product call, i.e. use the intermediate response (e.g. the fine-tuned LLM output) comprising the structural context (e.g. the selected microblog text data having a highest correlation degree with a hot search term) and/or the contextual hint as a second input to a second knowledge retrieval engine configured to generate in response to the second input a generated response that is derived at least in part from the set of proprietary data (e.g. LLM output is used to generate a product call as part of an interface platform loaded into a video memory. The product call is distinct from the microblog text event used to perform the hot search).)

Samdani, Jain and ZHUANG are analogous art because they are in the same field of endeavor, search operations using large language models. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Samdani-Jain to include the method of fine-tuning an LLM using microblog text data as disclosed by ZHUANG. Pg. 3, Paragraph 3 of ZHUANG discloses that the system may fine-tune a model in order to improve the process of generating a hot search term which reduces the influence of irrelevant information, which effectively improves user experience and avoids public opinion risks.

Regarding dependent claim 19, As discussed above with claim 17, Samdani-Jain-ZHUANG discloses all of the limitations. Jain further discloses the step wherein the processor is further configured to use the generated response to generate a set of web search results. See FIG. 3, (FIG.
3 illustrates method 300 comprising step 372 comprising causing multimedia content to be rendered at the client device wherein the response corresponds to search results, i.e. wherein the processor is further configured to use the generated response to generate a set of web search results.)

Regarding independent claim 20, The claim is analogous to the subject matter of independent claim 1 directed to a method or process and is rejected under similar rationale.

Regarding independent claim 21, The claim is analogous to the subject matter of independent claim 17 directed to a method or process and is rejected under similar rationale.

Regarding independent claim 22, The claim is analogous to the subject matter of independent claim 1 directed to a non-transitory, computer readable medium and is rejected under similar rationale.

Regarding independent claim 23, The claim is analogous to the subject matter of independent claim 17 directed to a computer system and is rejected under similar rationale.

Claims 4-6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Samdani in view of Jain and ZHUANG as applied to claim 1 above, and further in view of Hosseini et al. (US PGPUB No. 2023/0297860; Pub. Date: Sep. 21, 2023).

Regarding dependent claim 4, As discussed above with claim 1, Samdani-Jain-ZHUANG discloses all of the limitations. Samdani-Jain-ZHUANG does not disclose the step wherein the knowledge retrieval engine is a searchable index which returns extractions to an LLM.

Hosseini discloses the step wherein the knowledge retrieval engine is a searchable index which returns extractions to an LLM. See Paragraph [0069], (Disclosing a system for generating and transmitting insight according to machine learning models. A service request to perform an action may be received from a client agent, wherein a service request includes a plurality of attributes specified by a user using a graphical user interface of a client device.)
See Paragraph [0070], (A service request may be associated with an objective which may be represented as one or more vectors, such that it may be expressed using multiple sets of values or dimensions representing specific aspects or characteristics of the objective. The one or more vectors may be stored and managed in a vector database. A Large Language Model (LLM) is used to embed the context of the service request into a vector, which is then used to query the vector database to find the list of vectors that have a similarity score above a threshold. The list of vectors is then used to select the vectors that best match the criteria for fulfilling the service request, i.e. wherein the knowledge retrieval engine is a searchable index which returns extractions to an LLM (e.g. the large language model is used in conjunction with the vector database to process data contained in service requests).)

Samdani, Jain, ZHUANG and Hosseini are analogous art because they are in the same field of endeavor, machine learning models for search systems. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Samdani-Jain-ZHUANG to include the method of processing vector data via a large language model in order to process service requests as disclosed by Hosseini. Paragraph [0070] of Hosseini discloses that the use of a large language model allows the system to determine vectors most similar to an input service request such that the ML model selects the best matches for fulfilling said requests. This represents an improvement in the process of servicing user requests.

Regarding dependent claim 5, As discussed above with claim 4, Samdani-Jain-ZHUANG-Hosseini discloses all of the limitations. Jain further discloses the step wherein the generated response is a hypothetical answer of the LLM to the user input data. See Col. 17, lines 7-20, (FIG.
3 illustrates method 300 comprising step 354, wherein the LLM input is used to generate an LLM output used to retrieve multimedia content. Note that the LLM may be fine-tuned to retrieve multimedia content, i.e. wherein the generated response is a hypothetical answer of the LLM to the user input data (e.g. the LLM output is generated based on an LLM input including the NL based user input and context data).) Paragraph [0041] of Applicant's Specification defines a "hypothetical answer" as "an answer that provides an answer format and/or structure with contextual content associated with the proprietary data". The multimedia content of Jain is retrieved and presented to the user based on the LLM output.

Regarding dependent claim 6, as discussed above with claim 4, Samdani-Jain-ZHUANG-Hosseini discloses all of the limitations. Jain further discloses the step wherein the generated response is a summary of returned extractions to the LLM. See FIG. 6A & Col. 25, lines 32-37, (FIG. 6A illustrates a graphical user interface wherein a user may provide an NL based input to the system via a conversational LLM in order to generate or retrieve multimedia content items. The system may present the generative multimedia content prompt used to generate an image to a user, i.e. wherein the generated response is a summary of returned extractions to the LLM.)

Regarding dependent claim 18, as discussed above with claim 17, Samdani-Jain-ZHUANG discloses all of the limitations. Samdani further discloses the step wherein the first knowledge retrieval engine is a fine-tuned LLM. See Col. 7, lines 50-60, (Multimedia content management system 120 comprises fine-tuning engine 130.) See Col. 9, lines 9-17, (An LLM may be fine-tuned to obtain multimedia content via training instances, i.e. wherein the knowledge retrieval engine is a fine-tuned LLM.)
Samdani-Jain-ZHUANG does not disclose the step wherein the second knowledge retrieval engine is a searchable index which returns extractions to an LLM.

Hosseini discloses the step wherein the second knowledge retrieval engine is a searchable index which returns extractions to an LLM. See Paragraph [0069], (Disclosing a system for generating and transmitting insight according to machine learning models. A service request to perform an action may be received from a client agent, wherein a service request includes a plurality of attributes specified by a user using a graphical user interface of a client device.) See Paragraph [0070], (A service request may be associated with an objective which may be represented as one or more vectors, such that it may be expressed using multiple sets of values or dimensions representing specific aspects or characteristics of the objective. The one or more vectors may be stored and managed in a vector database. A Large Language Model (LLM) is used to embed the context of the service request into a vector, which is then used to query the vector database to find the list of vectors that have a similarity score above a threshold. The list of vectors is then used to select the vectors that best match the criteria for fulfilling the service request, i.e. wherein the knowledge retrieval engine is a searchable index which returns extractions to an LLM (e.g. the large language model is used in conjunction with the vector database to process data contained in service requests).)

Samdani, Jain, ZHUANG and Hosseini are analogous art because they are in the same field of endeavor, machine learning models for search systems. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Samdani-Jain-ZHUANG to include the method of processing vector data via a large language model in order to process service requests as disclosed by Hosseini.
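For readers mapping the claim language to the cited art: the retrieval pattern Hosseini's paragraph [0070] describes (embed the request context, score it against stored vectors, keep matches above a similarity threshold, return the matching extractions to the LLM) can be sketched roughly as below. This is an illustrative sketch, not Hosseini's actual implementation; the function names, threshold value, and in-memory list standing in for the vector database are all assumptions.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def retrieve(query_vec, index, threshold=0.75, top_k=3):
    """Searchable index returning extractions to an LLM: `index` is a list of
    (vector, extraction) pairs; keep the extractions whose stored vectors score
    above `threshold` against the query embedding, best matches first."""
    scored = [(cosine(query_vec, vec), text) for vec, text in index]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for score, text in scored if score >= threshold][:top_k]
```

In Hosseini's framing the query vector comes from an LLM embedding of the service-request context and the selected vectors live in a vector database; plain Python lists stand in for both here.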
Paragraph [0070] of Hosseini discloses that the use of a large language model allows the system to determine vectors most similar to an input service request such that the ML model selects the best matches for fulfilling said requests. This represents an improvement in the process of servicing user requests.

Claim(s) 9-12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Samdani in view of Jain and ZHUANG as applied to claim 1 above, and further in view of Bloom (US PGPUB No. 2024/0020771; Pub. Date: Jan. 18, 2024).

Regarding dependent claim 9, as discussed above with claim 1, Samdani-Jain-ZHUANG discloses all of the limitations. Samdani-Jain-ZHUANG does not disclose the step wherein the processor is further configured to concatenate the generated response with a second input.

Bloom discloses the step wherein the processor is further configured to concatenate the generated response with a second input. See FIG. 6, (Disclosing a system for generating a pecuniary program using machine learning processes. FIG. 6 illustrates method 600 comprising step 605 of receiving a user input relating to a user. Step 635 additionally comprises a step of receiving further feedback.) See Paragraph [0040], (The method may include generating an updated pecuniary program as a function of user feedback by combining all the feedback from a user, i.e. wherein the processor is further configured to concatenate the generated response with a second input (e.g. the combination of first and second feedback used to generate the updated pecuniary program).)

Samdani, Jain, ZHUANG and Bloom are analogous art because they are in the same field of endeavor, optimization and usage of generative machine models.
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Samdani-Jain-ZHUANG to include the method of generating an updated second machine learning model according to a combination of various stages of feedback as disclosed by Bloom. Paragraph [0042] of Bloom discloses that the user feedback is used to indicate what parts of the generated pecuniary program were good, bad and/or how to improve target datasets in order to improve the quality of the generated machine learning models over time.

Regarding dependent claim 10, as discussed above with claim 9, Samdani-Jain-ZHUANG-Bloom discloses all of the limitations. Bloom further discloses the step wherein the second input is a second generated response of a second knowledge retrieval engine that uses the user input data as input. See Paragraph [0040], (Computing device 104 may train a second machine learning model 156 wherein pecuniary program 144 and user feedback 148 are used as inputs to output updated pecuniary program 152, i.e. wherein the second input is a second generated response of a second knowledge retrieval engine that uses the user input data as input.)

Regarding dependent claim 11, as discussed above with claim 9, Samdani-Jain-ZHUANG-Bloom discloses all of the limitations. Bloom further discloses the step wherein the second input is the user input data. See FIG. 6 & Paragraph [0064], (FIG. 6 illustrates method 600 comprising step 605 of receiving a user input relating to a user. Step 635 additionally comprises a step of receiving further feedback wherein user feedback may be uploaded and received by the computing device through a user database, i.e. wherein the second input is the user input data.)

Regarding dependent claim 12, as discussed above with claim 9, Samdani-Jain-ZHUANG-Bloom discloses all of the limitations.
Jain further discloses the step wherein the second input is a reformulated user input data that reflects context of a user input data history. See FIG. 3 & Col. 17, lines 7-14, (Method 300 comprises step 354 of generating an LLM input from the NL based input and including a context for processing by the LLM. Note Col. 6, lines 12-25, wherein context data may be determined by a context engine 113 and may include user interaction data that characterizes current or recent interactions of client device 110 and/or a user of the client device 110, i.e. wherein the second input (e.g. the LLM input includes the NL based input and context data) is a reformulated user input data that reflects context of a user input data history (e.g. context data includes current or recent interaction data).)

Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Samdani in view of Jain and ZHUANG as applied to claim 1 above, and further in view of PELED (US PGPUB No. 2024/0037170; Pub. Date: Feb. 1, 2024).

Regarding dependent claim 14, as discussed above with claim 1, Samdani-Jain-ZHUANG discloses all of the limitations. Samdani-Jain-ZHUANG does not disclose the step wherein the set of proprietary data is at least one of: a set of customer defined data and a set of customer owned data.

PELED discloses the step wherein the set of proprietary data is at least one of: a set of customer defined data and a set of customer owned data. See Paragraph [0202], (The value-based search may derive benefit parameters that could be useful or of interest to a user 202 based on one or more data points identified and/or assumed for the user according to trends, correlation, past usage, user data, etc., i.e. wherein the set of proprietary data is at least a set of customer owned data (e.g. Note [0182] wherein user attribute data is stored in client device 200).)

Samdani, Jain, ZHUANG and PELED are analogous art because they are in the same field of endeavor, knowledge base data retrieval.
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Samdani-Jain-ZHUANG to include the method of using a large language model to retrieve data as disclosed by PELED. Paragraph [0201] of PELED discloses that the system may identify a particular user's interests, which allows the search engine to identify and select consumable items that are more appropriate and/or fitting for the identified user in terms of knowledge, language, style and/or needs specific to the respective user.

Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Samdani in view of Jain and ZHUANG as applied to claim 1 above, and further in view of DeLuca et al. (US PGPUB No. 2018/0067912; Pub. Date: Mar. 8, 2018).

Regarding dependent claim 16, as discussed above with claim 1, Samdani-Jain-ZHUANG discloses all of the limitations. Samdani-Jain-ZHUANG does not disclose the step wherein the processor is further configured to provide query compression in an event the generated response is larger than a specified limit.

DeLuca discloses the step wherein the processor is further configured to provide query compression in an event the generated response is larger than a specified limit. See Paragraph [0003], (Disclosing a system for enabling reduction of characters in a character-limited scenario by minimally editing a text to remain within a character limit. A user may enter text into a character-limited field wherein the system may shorten the text entered by the user to bring the entered text within the character limit of the character-limited field. Note [0070] wherein machine learning models may be used to derive emotion scores from text in order to improve the accuracy of the character reduction alternatives, i.e. wherein the processor is further configured to provide query compression in an event the generated response is larger than a specified limit.)
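For orientation, the character-limit reduction DeLuca describes (shorten a text minimally so it fits within a prescribed limit) can be sketched as below. This is a rough illustration only: the filler-word list and the drop-then-truncate strategy are assumptions for the sketch, not DeLuca's actual machine-learned character-reduction alternatives.

```python
# Low-value words to drop first; a hypothetical stand-in for the learned
# character-reduction alternatives DeLuca describes.
FILLERS = {"very", "really", "just", "quite", "that"}

def compress(text, limit):
    """Bring `text` within `limit` characters by removing as little material
    as possible: drop filler words first, then trailing words as a last resort."""
    if len(text) <= limit:
        return text
    words = [w for w in text.split() if w.lower() not in FILLERS]
    out = " ".join(words)
    while len(out) > limit and words:
        words.pop()  # truncate at a word boundary only if fillers were not enough
        out = " ".join(words)
    return out
```

The design goal mirrors DeLuca's stated advantage: content is removed in order of least value, so as much of the original text as possible survives the limit.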
Samdani, Jain, ZHUANG and DeLuca are analogous art because they are in the same field of endeavor, optimization and usage of generative machine models. It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the system of Samdani-Jain-ZHUANG to include the method of editing a user-input text to maintain a particular character limit as disclosed by DeLuca. Paragraph [0060] of DeLuca discloses that the process presents the following advantages: maintaining an author's original tone, adjusting character reduction as an author writes, and enhancing and optimizing user experiences by not requiring a user to consciously limit characters while typing. Embodiments of DeLuca reduce the characters in a message to fit within a prescribed length by removing as little material as required from the original content.

Response to Arguments

Applicant's arguments with respect to claim(s) 1, 17 and 20-23 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant's amendments necessitated the new grounds of rejection presented in this Office Action.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Fernando M. Mari, whose telephone number is (571) 272-2498. The examiner can normally be reached Monday-Friday, 7am-4pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ann J. Lo, can be reached at (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FMMV/
Examiner, Art Unit 2159

/ANN J LO/
Supervisory Patent Examiner, Art Unit 2159

Prosecution Timeline

Apr 05, 2024
Application Filed
Dec 13, 2024
Non-Final Rejection — §103
Mar 09, 2025
Interview Requested
Mar 18, 2025
Response Filed
May 08, 2025
Final Rejection — §103
Jul 27, 2025
Interview Requested
Aug 14, 2025
Applicant Interview (Telephonic)
Aug 14, 2025
Request for Continued Examination
Aug 14, 2025
Examiner Interview Summary
Aug 22, 2025
Response after Non-Final Action
Sep 30, 2025
Non-Final Rejection — §103
Nov 29, 2025
Interview Requested
Dec 19, 2025
Response Filed
Jan 22, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591588
CATEGORICAL SEARCH USING VISUAL CUES AND HEURISTICS
2y 5m to grant Granted Mar 31, 2026
Patent 12547593
METHOD AND APPARATUS FOR SHARING FAVORITE
2y 5m to grant Granted Feb 10, 2026
Patent 12505129
Distributed Database System
2y 5m to grant Granted Dec 23, 2025
Patent 12499123
ACTOR-BASED INFORMATION SYSTEM
2y 5m to grant Granted Dec 16, 2025
Patent 12499121
REAL-TIME MONITORING AND REPORTING SYSTEMS AND METHODS FOR INFORMATION ACCESS PLATFORM
2y 5m to grant Granted Dec 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
49%
Grant Probability
71%
With Interview (+22.0%)
3y 10m
Median Time to Grant
High
PTA Risk
Based on 145 resolved cases by this examiner. Grant probability derived from career allow rate.
