DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1, 2, 8, 9, and 15 have been amended by Applicant. No claims have been cancelled or added. Claims 1-20 are currently pending.
Information Disclosure Statement
The Information Disclosure Statement (IDS) submitted on 09/30/2025 by Applicant has been considered.
Response to Arguments
Claim Rejections under 35 U.S.C. 101
Claim rejections under 35 U.S.C. 101 have been withdrawn in view of Applicant’s amendments to independent claims 1, 8, and 15.
Claim Rejections under 35 U.S.C. 103
The rejection of claims 1-20 under 35 U.S.C. 103 has been withdrawn in view of Applicant’s amendments to independent claims 1, 8, and 15. However, upon further consideration and in view of said amendments, a new ground of rejection has been made herein.
Applicant’s arguments with respect to claim(s) 1, 8, and 15 (as amended) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-9, 11-16, and 18-20 (as amended) are rejected under 35 U.S.C. 103 as being unpatentable over Gupta et al. (US 20230021797 A1, filed Jul. 22, 2021 and published Jan. 26, 2023) in view of Gautam et al. (US 10,657,125 B1, issued May 19, 2020), and Nakano et al., “WebGPT: Browser-assisted question-answering with human feedback” (June 1, 2022).
Regarding Claim 1 (as amended),
Gupta teaches a processor-implemented method of neural network based text generation server using…search results, the method comprising (Gupta, Fig. 6F teaches using a cross-platform search system to generate textual responses using search results; pg. 5 [0046]-[0047] teaches neural network based text generation; pg. 16 [0136] teaches processor-implemented; Gupta, Abstract, teaches the present disclosure relates to systems, non-transitory computer-readable media, and methods that generate a dynamic cross-platform ask interface; Fig. 3 teaches answer providing servers 102b, 102c):
receiving, at the neural network based text generation server comprising a generative neural network model and from a user interface on a user device, a user input inquiring on a topic relating to real-time information that is not contained in prior training data the generative neural network model has been trained on (Gupta, pg. 7 [0067]: “the cross-platform search system 108 generates platform-specific requests in response to receiving a digital text query from a client device (e.g., the client device 110)… Additionally or alternatively, the cross-platform search system 108 receives the digital text query 302 requesting information specific to the software platform system 114a, or information specific to a general help topic.…the cross-platform search system 108 receives a digital text query 302…the cross-platform search system 108 receives the digital text query 302 from the software platform application 112 a (e.g., the native application installed on the client device 110 that coordinates with the software platform system 114 a), where the digital text query 302 requests information specific to the software platform system 114 b in natural language” teaches the cross-platform search system receives a digital text query in natural language (corresponds to natural language input) from a software platform application (corresponds to user interface) on a client device (corresponds to user device); pg. 6 [0061]: “Indeed, in one or more implementations, the server(s) 102 a includes all, or a portion of, the cross-platform search system 108” teaches the cross-platform search system can be implemented at a server; Gupta [0091]: teaching the response to the digital text query 312 causes the client device 110 to generate results in real-time; see also Figs. 
6F; Note: As stated further below, Gupta, [0022] teaches the cross-platform search system builds the cross-platform language processing model utilizing the previous digital text queries and corresponding ground truth intents included in the training data [previous digital text queries as in not real-time information]; Gupta, Paragraph [0047] teaches a machine learning model can include generative adversarial neural networks.);
generating, by a query processing module at the neural network based text generation server, one or more search queries based on the user input (Gupta, pg. 8 [0073]: “in response to a digital text query requesting an update on the performance of a particular campaign over the last week, the cross-platform search system 108 extracts one or more parameter values from the digital text query including the name of the campaign and the date range including the last week…the cross-platform language processing model 304 extracts parameter values along with a registered intent from the digital text query 302…the cross-platform search system 108 utilizes the already-extracted parameter values to generate the platform-specific request” teaches the cross-platform search system generates a platform-specific request (corresponds to search query) based on the digital text query in natural language (corresponds to natural language input; Gupta, Paragraph [0008] teaches the disclosed systems offer a wide range of actions, insights, and content/links, all accessible from a single, dynamic user interface; Gupta, Paragraph [0025] further teaches the cross-platform search system further receives run-time digital text queries from a dynamic user interface.); pg. 6 [0061]: “Indeed, in one or more implementations, the server(s) 102 a includes all, or a portion of, the cross-platform search system 108” teaches the cross-platform search system can be implemented at a server; Gupta, Paragraph [0007] teaches in response to receiving a digital text query on a first software platform, the disclosed systems utilize a cross-platform language processing model (trained to recognize and understand terminology across a variety of specific software platforms) to extract at least one intent from the digital text query and identify a platform-specific configuration corresponding to the extracted intent.; Fig. 3 teaches answer providing servers 102b, 102c);
generating, by the generative neural network model that is deployed at the neural network based text generation server pretrained to process a variety of natural language processing tasks (Gupta, Paragraph [0007] teaches in response to receiving a digital text query on a first software platform, the disclosed systems utilize a cross-platform language processing model [trained to recognize and understand terminology across a variety of specific software platforms]), an output text describing the topic by feeding a natural language input combining texts from the one or more search results to the generative artificial intelligence AI neural network (Gupta, Paragraph [0046] teaches the machine learning model learns to approximate complex functions and generate outputs based on inputs provided by the model; Gupta, Paragraph [0048] further teaches an artificial neural network uses sequential information associated with words in a text input (e.g., a sentence) and in which an output of a current word is dependent on computations for previous words; Gupta, Abstract, teaches the cross-platform processing model provides platform-specific, contextually based responses to natural language digital text queries; pg. 13 [0110]: “the cross-platform search server system 104 is operable on the server(s) 102 a…the cross-platform search server system 104 (or mirrored cross-platform search system 108) includes…the cross-platform language processing model 304” teaches the server implements the cross-platform language processing model; pg. 5 [0046]-[0047] teaches the cross-platform language processing model can be a generative adversarial neural networks,…; pg. 
8 [0075]: “In response to receiving a response to the platform-specific request from the answer provider (e.g., from the answer provider server(s) 102 c), the cross-platform search system 108 generates a response to the digital text query 312” teaches generating a response (corresponds to output) based on an answer (corresponds to search result; see also [0099], which recites “search result”) from an answer provider; pg. 12 [0105]: “the cross-platform search system 108 generates the query response items 624a, 624b including one or more of a media player associated with a digital video, a document preview (e.g., a PDF document preview, a WORD document preview), a link to a digital content item, and/or a digital image preview or link” teaches the response (corresponds to output) includes a link to a digital content item (corresponds to reference to a data source server) an output text describing the topic, wherein the generating comprises transforming a natural language input combining the user input and the one or more search results from the real-time Internet search into output text through layers of weights associated with neurons and non-linear activation functions in the generative neural network model (Gupta [0091]: teaching in one or more embodiments, the cross-platform search system 108 provides the response to the digital text query 312 [i.e., user input] to the client device 110 to cause the client device 110 to display the response to the digital text query 312 according to the display instructions. In at least one embodiment, for example, the cross-platform search system 108 does not render the digital text query response at the server-level. Instead, the cross-platform search system 108 generates the response to the digital text query 312 including raw data received from the software platform system 114b and the display instructions associated with at least one rendition type. 
Thus, when provided to the client device 110, the response to the digital text query 312 causes the client device 110 to generate a display of the raw data according to the display instructions. In this way, the response to the digital text query 312 causes the client device 110 to generate results in real-time, rather than providing a pre-rendered display.; see also Figs. 6F illustrating the user query – i.e., user input 612b concurrently displayed [combined] with the real-time Internet search results – 616b comprising - 624a and 624b; Gupta, Paragraph [0080] teaches the cross-platform language processing model 304 utilizes the knowledge graph as part of a graph neural network (e.g., with one or more neural network layers that generate an intent prediction based on weights between nodes within the knowledge graph neural network). Moreover, the cross-platform search system 108 performs the act 410 by modifying internal parameters and/or the weighted edges in the knowledge graph to reflect the terms or phrases of the previous digital text queries that correspond with the ground truth intents.; Gupta, Paragraph [0046] further teaches the cross-platform language processing model is a machine learning model with one or more parameters that can be built or tuned (e.g., trained) based on inputs to approximate unknown functions. 
In particular, the term machine learning model learns to approximate complex functions and generate outputs based on inputs provided to the model.), wherein the AI neural network has not been trained on any training data containing the real time information (Gupta, [0022]: the cross-platform search system builds the cross-platform language processing model utilizing the previous digital text queries and corresponding ground truth intents included in the training data [previous digital text queries as in not real-time information].; Gupta, [0059]: a user of the administrator client device 106 utilizes one or more user interfaces of the cross-platform search system 108 on the administrator client device 106 to submit platform-specific configurations associated with the software platform systems 114a, 114b along with previous digital text queries for information specific to the software platform system 114a, 114b, respectively, and ground truth intents for the previous digital text queries. Furthermore, the cross-platform search system 108 trains a cross-platform language processing model with this training data,);
However, Gupta does not distinctly disclose:
performing by a search engine at the neural network based text generation server real-time Internet search through web search on web indexing of one or more Internet data source servers based on the generated one or more search queries;
obtaining one or more search result through the real-time Internet search from the one or more Internet data source servers;
and wherein the output text includes one or more sentences that are different from the user input; and
and inserting, by the generative neural network model, a citation in a form of a user-clickable user interface (UI) element that redirects to a webpage of at least one search result from at least one Internet data source server that provides reference authority to the one or more sentences in the output text; and
transmitting, from the server to the user device, the output text comprising the one or more sentences and the user-clickable UI element to be displayed at the user interface.
Nevertheless, Gautam teaches text generation using real-time search results… obtaining one or more search results through a real-time search at one or more data source servers based on the one or more search queries, as provided below.
obtaining one or more search result through the real-time Internet search from the one or more Internet data source servers; (Gautam et al., Col. 8 lines 54-65: “the analytics engine module 209…transforms the analytics query into a search query and sends a signal representing the search query to the search engine controller module 205. The search engine controller module 205 can perform a parallel distribution of the search query to one or more search engine server(s) (shown as 109 in FIG. 1), for example via an output signal 223 (shown as 123 in FIG. 1). The search engine(s) 109 performs search on various distributed search index nodes 215 a-215 m and sends a signal (not shown) representing the search results to the analytics engine module 209” [Note: Gautam teaches the real-time business intelligence platform performs real-time search of data source servers to produce search results; Fig. 6 teaches the output graph includes text])
performing by a search engine at the neural network based text generation server real-time Internet search through web search on web indexing of one or more Internet data source servers based on the generated one or more search queries; (Gautam, Abstract, teaches providing real-time business intelligence using natural language queries facilitates a user to search within a data warehouse using a natural language question. Such a business intelligence platform may receive a natural language based question, extract one or more key words from the natural language based question; Gautam, Col. 2, lines 48-51 teaches a real time discovery and business intelligence platform using natural language queries allows a user to search within a data warehouse and other data sources using a natural language question; Gautam, Col. 3, lines 9-11, teaches in one implementation, business intelligence may include the scope of analysis in Internet of Things (IoT) and/or Internet of Everything (IoE); Fig. 1 teaches Communication Network 105, and Col. 5, lines 42-47 teaches Communication Network 105 can be any communication network, such as the Internet, configurable to allow the one or more UEs 101, the one or more search servers 109, and business intelligence platform 103 to communicate with communication network 105; Fig. 3 teaches Search Engine Indexer 315; Gautam, Col. 3, lines 63-6 and Col. 4, lines 1-3 teach the business intelligence platform indexes data from disparate sources into a computation search engine designed for real-time ad-hoc multi-dimensional analysis. The search engine can be used as an underlying data storage mechanism that enables fast multi-dimensional lookups in real-time, which enables real-time processing of a natural language question using cross functional-dependency algorithms without a time lapse; Gautam, Col. 5, lines 4-10 teaches the business intelligence platform can use index files based on computational data search engine technology.
Such a platform can provide natural language and search-based interfaces to analyze data and generate reports substantially in real-time without requiring a user to write queries in a query language (e.g., SQL) or use software configurations for generating reports.; Gautam, Col. 13, lines 50-56, teaches a search engine indexer 315 (similar to the search engine control module 205) can use the data sets 313 to define distributed search index nodes 317. The distributed search index nodes 317 can be similar to search index nodes 215a-215m. The search index nodes 317 can include data extracted, transformed, and loaded by the ETL layer 319.; Gautam et al., Col. 8 lines 54-65 further teach: “the analytics engine module 209…transforms the analytics query into a search query and sends a signal representing the search query to the search engine controller module 205. The search engine controller module 205 can perform a parallel distribution of the search query to one or more search engine server(s) (shown as 109 in FIG. 1)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the limitation(s) above, as taught by Gautam et al., into the disclosed invention of Gupta et al.
One of ordinary skill in the art would have been motivated to make this modification to leverage “real-time data analysis, reporting and business intelligence related to data stored in various sources, and more particularly, to providing real-time business intelligence to users, irrespective of users' technical knowledge, using natural language interfaces” (Gautam et al., Col. 1 lines 59-63).
However, the combination of Gupta in view of Gautam does not distinctly disclose:
and wherein the output text includes one or more sentences that are different from the user input; and
and inserting, by the generative neural network model, a citation in a form of a user-clickable user interface (UI) element that redirects to a webpage of at least one search result from at least one Internet data source server that provides reference authority to the one or more sentences in the output text; and
transmitting, from the server to the user device, the output text comprising the one or more sentences and the user-clickable UI element to be displayed at the user interface.
Nevertheless, Nakano teaches:
and wherein the output text includes one or more sentences that are different from the user input (see the user interface illustrated in Nakano Figure 1, where the user input is “how to train crows to bring…”, the output text is shown below it, and to the right of those results there is a user-clickable citation/reference);
and inserting, by the generative neural network model, a citation in a form of a user-clickable user interface (UI) element that redirects to a webpage of at least one search result from at least one Internet data source server that provides reference authority to the one or more sentences in the output text (Nakano, Figure 1, teaches an observation from a text-based web-browser environment [from fine-tuned GPT-3, a generative neural network model, see Abstract] as shown to human demonstrators (left) and models (right). Figure 1, as shown, teaches a link that redirects to the webpage <www.birdsoutsidemywindow.org>, shown as a citation/reference to the right of the search results; Nakano, pg. 2, further teaches a text-based web-browsing environment that a fine-tuned language model can interact with, which allows improving both retrieval and synthesis in an end-to-end fashion using general methods such as imitation learning and reinforcement learning. Nakano further teaches generating answers with references: passages extracted by the model from webpages while browsing. This is crucial for allowing judgment of the factual accuracy of answers without engaging in a difficult and subjective process of independent search.); and
transmitting, from the server to the user device, the output text comprising the one or more sentences and the user-clickable UI element to be displayed at the user interface (Nakano, Figure 1, teaches an observation from a text-based web-browser environment [from fine-tuned GPT-3, a generative neural network model, see Abstract] as shown to human demonstrators (left) and models (right). Figure 1, as shown, teaches a link that redirects to the webpage <www.birdsoutsidemywindow.org>, shown as a citation/reference to the right of the search results.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the dynamic cross-platform ask interface and natural language processing model, as taught by Gupta in view of Gautam, to further include the browser-assisted generative neural network model, as taught by Nakano, in order to improve both retrieval and synthesis in an end-to-end fashion using general methods such as imitation learning and reinforcement learning, and to further generate answers with references: passages extracted by the model from webpages while browsing. This is crucial for allowing judgment of the factual accuracy of answers without engaging in a difficult and subjective process of independent search. (Nakano, pg. 2)
Regarding Claim 2 (as amended),
The combination of Gupta in view of Gautam and Nakano teaches all of the limitations of claim 1, and the combination further teaches wherein the output text is generated by the generative AI neural network model at the server, and the generative AI neural network model is a language model (Nakano, Abstract, Figure 1, and pg. 2 teach fine-tuned GPT-3, a browser-assisted generative neural network model that is a large language model, wherein Figure 1 illustrates the generated output text).
Regarding Claim 4,
The combination of Gupta in view of Gautam and Nakano teaches all of the limitations of claim 1, and Gupta further teaches wherein the output text is generated further based on user configured parameters including one or more of: a type of the output text; an intended audience of the output text; a tone of the output text; a format of the output text; and a length of the output text (Gupta, pg. 16 [0131]: “For example, the act 850 can include transmitting the response to the digital text query to the client device such that the client device renders the response to the digital text query by expanding the query interface into a response rendition canvas and rendering the response to the digital text query withing the response rendition canvas according to at least one of a table response rendition type, a reporting chart response rendition type, or a list response rendition type” teaches the response (corresponds to output) is generated based on response rendition types (correspond to user configured parameters including type or format of the natural language output)).
Regarding Claim 5,
The combination of Gupta in view of Gautam and Nakano teaches all of the limitations of claim 1, and Gautam further teaches wherein the obtaining the one or more search results through the real-time search comprises obtaining content from a web file via a link in the one or more search results (Gautam, Col. 8 lines 54-65: “the analytics engine module 209…transforms the analytics query into a search query and sends a signal representing the search query to the search engine controller module 205. The search engine controller module 205 can perform a parallel distribution of the search query to one or more search engine server(s) (shown as 109 in FIG. 1), for example via an output signal 223 (shown as 123 in FIG. 1). The search engine(s) 109 performs search on various distributed search index nodes 215 a-215 m and sends a signal (not shown) representing the search results to the analytics engine module 209” teaches the real-time business intelligence platform performs real-time search of data source servers to produce search results; Col. 6 line 45 to Col. 7 line 17: “The search engine server(s) 109 each can be, for example, a web server configured to provide search capabilities to electronic devices, such as UEs 101…The UEs 101 each can include a web browser configured to access a webpage or website hosted on or accessible via the business intelligence platform 103 over communication network 105…a user of a UE 101 can access a search engine server 109 via a URL designated for the search engine server 109 ” teaches obtaining search results includes obtaining content from web files through a URL (link)).
Motivation to combine same as stated for claim 1.
Regarding Claim 6 (as amended),
The combination of Gupta in view of Gautam and Nakano teaches all of the limitations of claim 1, and the combination further teaches wherein the output text is a summary of the one or more search results, and the user-clickable citation comprises a reference to the at least one search result from at least one Internet data source server that indicates the one or more sentences relate to the at least one Internet data source server (Gupta, Fig. 6F and pg. 12 [0105]: “the cross-platform search system 108 generates the query response items 624a, 624b including one or more of a media player associated with a digital video, a document preview (e.g., a PDF document preview, a WORD document preview), a link to a digital content item, and/or a digital image preview or link” teaches the response (corresponds to output) includes a link to a digital content item (corresponds to reference to a data source server), wherein Fig. 6F teaches the output (as a reference to at least one data source server) indicates a portion of text that is a summary of the search result).
Motivation to combine same as stated above for claim 1.
Regarding Claim 7,
The combination of Gupta in view of Gautam and Nakano teaches all of the limitations of claim 1, and Gupta further teaches wherein the user input comprises one or more of a text input, an audio input, an image input, and a video input (Gupta pg. 4 [0039]: “As used herein, a "digital text query" refers to one or more words and/or phrases that form a request for information. The cross-platform search system can identify a digital text query based on user input via a search interface, such as text input via a search bar or audio input device” teaches the digital text query in natural language (corresponds to natural language input) can be a text input or audio input; Gupta, Paragraph [0008] teaches the disclosed systems offer a wide range of actions, insights, and content/links, all accessible from a single, dynamic user interface; Gupta, Paragraph [0025] further teaches the cross-platform search system further receives run-time digital text queries from a dynamic user interface.).
Regarding Claim 8 (as amended), the claim recites the same and/or analogous limitations to claim 1 (as amended). Therefore, claim 8 is rejected based on the same rationale and motivation as claim 1.
Gupta also teaches a system…the system comprising: a communication interface (Gupta Fig. 3 and pg. 7 [0067]: “the cross-platform search system 108 generates platform-specific requests in response to receiving a digital text query from a client device (e.g., the client device 110)…the cross-platform search system 108 receives a digital text query 302…the cross-platform search system 108 receives the digital text query 302 from the software platform application 112 a (e.g., the native application installed on the client device 110 that coordinates with the software platform system 114 a), where the digital text query 302 requests information specific to the software platform system 114 b in natural language” teach the answer provider has a “gateway” (corresponds to a communication interface) that receives a digital text query in natural language (corresponds to natural language input) from a software platform application (corresponds to user interface) on a client device (corresponds to user device); pg. 6 [0061]: “Indeed, in one or more implementations, the server(s) 102 a includes all, or a portion of, the cross-platform search system 108” teaches the cross-platform search system can be implemented at a server; Gupta, Abstract, teaches the present disclosure relates to systems, non-transitory computer-readable media, and methods that generate a dynamic cross-platform ask interface; Fig. 3 teaches answer providing servers 102b, 102c.; Paragraph [0047] teaches machine learning model can be a generative adversarial neural network model.);
a server implementing a generative artificial intelligence (AI) neural network and a plurality of processor-executable instructions; and one or more processors executing the instructions to perform operations comprising (Gupta, pg. 13 [0110]: “the cross-platform search server system 104 is operable on the server(s) 102 a…the cross-platform search server system 104 (or mirrored cross-platform search system 108) includes…the cross-platform language processing model 304” teaches the server implements the cross-platform language processing model; pg. 5 [0046]-[0047] teaches the cross-platform language processing model can be a generative adversarial neural networks (corresponds to generative artificial intelligence (AI) neural network); pg. 16 [0136] teaches processors executing the instructions).
Regarding Claim 9 (as amended),
Claim 9 recites the same and/or analogous limitations to claim 2. Therefore, claim 9 is rejected based on the same rationale as claim 2.
Regarding Claim 11,
Claim 11 recites the same and/or analogous limitations to claim 4. Therefore, claim 11 is rejected based on the same rationale as claim 4.
Regarding Claim 12,
Claim 12 recites the same and/or analogous limitations to claim 5. Therefore, claim 12 is rejected based on the same rationale as claim 5.
Regarding Claim 13,
Claim 13 recites the same and/or analogous limitations to claim 6. Therefore, claim 13 is rejected based on the same rationale as claim 6.
Regarding Claim 14,
Claim 14 recites the same and/or analogous limitations to claim 7. Therefore, claim 14 is rejected based on the same rationale as claim 7.
Regarding Claim 15 (as amended),
Claim 15 (as amended) recites the same and/or analogous limitations to claim 1 (as amended). Therefore, claim 15 is rejected based on the same rationale and motivation as claim 1.
Gupta also teaches a processor-readable non-transitory storage medium storing a plurality of processor-executable instructions…the instructions being executed by one or more processors to perform operations comprising (Gupta, pg. 16 [0136] teaches a processor-readable non-transitory storage medium storing a plurality of processor-executable instructions).
Regarding Claim 16,
Claim 16 recites the same and/or analogous limitations to claim 2. Therefore, claim 16 is rejected based on the same rationale as claim 2.
Regarding Claim 18,
Claim 18 recites the same and/or analogous limitations to claim 4. Therefore, claim 18 is rejected based on the same rationale as claim 4.
Regarding Claim 19,
Claim 19 recites the same and/or analogous limitations to claim 5. Therefore, claim 19 is rejected based on the same rationale as claim 5.
Regarding Claim 20,
Claim 20 recites the same and/or analogous limitations to claim 6. Therefore, claim 20 is rejected based on the same rationale as claim 6.
Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Gupta et al. in view of Gautam et al. and Nakano et al. as applied to claims 1, 8, and 15 above, and further in view of Chai et al. (US 2023/0205832 A1, filed Dec. 29, 2021 and published Jun. 29, 2023).
Regarding Claim 3,
The combination of Gupta in view of Gautam and Nakano teaches all of the limitations of claim 1; however, the combination does not distinctly disclose further comprising: transmitting a text generation input comprising the one or more search results to an external server hosting a language model; and obtaining the output text from the external server.
Nevertheless, Chai teaches further comprising: transmitting a text generation input comprising the one or more search results to an external server hosting a language model (Chai, Fig. 1 and pg. 3 [0026]: “The asset generator system 106 includes a document understanding module 114 that obtains the webpage 110 and recognizes portions of the webpage 110. The webpage 110 includes content 115, where the content 115 can include a title, section headers, body text, etc. The document understanding module 114 detects a language of the content 115 in the webpage 110, extracts a title from HTML of the webpage 110…the document understanding module 114 can employ natural language processing (NLP) technologies, image analysis technologies, and the like in connection with recognizing and extracting text from the content 115 of the webpage 110” teaches transmitting the webpages (correspond to search results) as input to an external computing system (server) that hosts the generator model (corresponds to language model)); and
obtaining the output text from the external server (Chai, Fig. 1 teaches the external computing system (server) provides asset outputs; see also Fig. 5, Step 512; Chai, Paragraph [0021]: “Electronic summary documents are conventionally presented on search engine results pages (SERPs) in response to receipt of user queries. In a non-limiting example, an electronic summary document is a text advertisement.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the limitation(s) above, as taught by Chai et al., into the disclosed invention of Gupta et al. in view of Gautam et al. and Nakano.
One of ordinary skill in the art would have been motivated to make this modification to leverage the “generation of multiple, diverse electronic summary documents for a webpage” (Chai et al., pg. 1 [0005]).
Regarding Claim 10,
Claim 10 recites the same and/or analogous limitations to claim 3. Therefore, claim 10 is rejected based on the same rationale and motivation as claim 3.
Regarding Claim 17,
Claim 17 recites the same and/or analogous limitations to claim 3. Therefore, claim 17 is rejected based on the same rationale and motivation as claim 3.
Conclusion
The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Chalasani et al. (US 2018/0040032 A1), teaching that neural networks can be trained and implemented to identify predictive models for incrementality. In some embodiments, the determination of the causal effect (for example, the ATT, ATL, INC) can be used to redirect a customer to a network resource, such as a webpage for a product sale or a mobile function, such that the consumer is retargeted to a particular direction.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEATRIZ RAMIREZ BRAVO, whose telephone number is 571-272-2156. The examiner can normally be reached Mon.-Fri., 7:30 a.m.-5:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, USMAAN SAEED can be reached at 571-272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.R.B./Examiner, Art Unit 2146
/USMAAN SAEED/Supervisory Patent Examiner, Art Unit 2146