DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 3/18/2026 has been entered. Claims 1, 8, 10, 11, 17, and 19-20 stand amended. Claims 1-20 are currently pending.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 2, 4, 7-9, 11, 12, 16-18, and 20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-8 of copending Application No. 18/734,488 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the scope of the instant claims is anticipated by the scope of the reference claims.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
In regard to instant claim 1, the reference claims recite a method comprising:
scraping, by a first computing system, one or more first data sources of the first computing system, and one or more second data sources of one or more external computing systems, to compile a standardized dataset (“scraping, by a first computing system, one or more first data sources of the first computing system, and one or more second data sources of one or more external computing systems, to compile a first dataset;” wherein “standardizing, by the first computing system, the first dataset to generate a standardized dataset;” reference claim 1) comprising a plurality of data entries sourced from the one or more first data sources and the one or more second data sources;
establishing, by the first computing system in response to authenticating one or more credentials associated with a user, a session between the first computing system and a computing device associated with the user (“establishing, by the first computing system in response to authenticating the one or more credentials associated with the user, a session between the first computing system and a computing device associated with the user” reference claim 1), the one or more credentials comprising an identifier associated with one or more data access rules for the user (“determining, by the first computing system in response to authenticating one or more credentials associated with a user, access rights using an identifier within the one or more credentials;” reference claim 1);
receiving, by an AI interface of the first computing system during the session, a query from the computing device (“receiving, by the first computing system via the AI interface during the session, a query from the computing device” reference claim 1);
receiving, by the first computing system, an output of an AI model comprising a context associated with the query (“applying, by the first computing system, the encoded tokens to an AI model, to determine a context associated with the query;” reference claim 4), wherein the AI model is provided a plurality of encoded tokens representing the query to determine the context (“identifying, by the first computing system, a plurality of tokens representing the query, wherein the plurality of tokens are generated by the tokenizing of the one or more words included in the query; encoding, by the first computing system, each token into a corresponding encoded token;” reference claim 4);
querying, by the first computing system, the standardized dataset based on the determined context and the one or more credentials associated with the user (the authentication of the credentials as prerequisite for the establishment of the session, reference claim 1, “determining, by the first computing system in response to authenticating one or more credentials associated with a user, access rights using an identifier within the one or more credentials; establishing, by the first computing system in response to authenticating the one or more credentials associated with [[a]]the user, a session between the first computing system and a computing device associated with the user;”) to obtain a plurality of retrieved data entries of the plurality of data entries (“requesting, by the first computing system, one or more data entries from the database and/or from the one or more first data sources or the one or more second data sources, the one or more data entries requested according to the determined context;”, reference claim 4; “the first computing system generates the response to the query using at least a portion of the data from the database populated with the standardized dataset.” Reference claim 6) sourced from the one or more first data sources and the one or more second data sources (“requesting, by the first computing system, one or more data entries from the database and/or from the one or more first data sources or the one or more second data sources, the one or more data entries requested according to the determined context;” reference claim 4);
generating, by the first computing system, a response to the query for delivering via the AI interface to the computing device during the session (“generating, by the first computing system during the session, a response to the query for delivering via the AI interface to the computing device.” reference claim 1), the response comprising a portion of information relating to the query to form a presentation (i.e. display of results of query);
and generating, by the first computing system, the presentation for display at the computing device in the AI interface according to the portion of information (“generating, by the first computing system, a response to the query for delivering via the AI interface to the computing device.” reference claim 1).
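For illustration only, the tokenizing and encoding steps recited in reference claim 4 (identifying a plurality of tokens representing the query, then encoding each token into a corresponding encoded token) can be sketched as follows. The whitespace tokenizer and integer vocabulary below are hypothetical stand-ins chosen for illustration; neither appears in the instant claims or the reference claims.

```python
def tokenize(query):
    # Naive whitespace tokenizer; a hypothetical stand-in for the
    # "tokenizing of the one or more words included in the query"
    # recited in reference claim 4.
    return query.lower().split()

def encode(tokens, vocab):
    # Map each token to a corresponding "encoded token" (here, an
    # integer id); unknown tokens fall back to id 0.
    return [vocab.get(tok, 0) for tok in tokens]

# Hypothetical vocabulary used only for this sketch.
vocab = {"revenue": 1, "for": 2, "q3": 3}
encoded = encode(tokenize("Revenue for Q3"), vocab)
```

Under this sketch, the encoded tokens would then be applied to an AI model to determine a context associated with the query, as recited in reference claim 4.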
In regard to claims 11 and 20, they are substantially similar to claim 1 and accordingly are rejected under similar reasoning.
In regard to instant claim 2, the reference claims further recite that the presentation and the response are displayed at the computing device within the AI interface (“generating, by the first computing system, a response to the query for delivering via the AI interface to the computing device.” reference claim 1).
In regard to claim 12, it is substantially similar to claim 2 and accordingly is rejected under similar reasoning.
In regard to instant claim 4, the reference claims further recite: identifying, by the first computing system from the response, at least a portion of the response corresponding to a plurality of presentations linked according to the query (i.e. plurality of data in response, “wherein the query comprises an inquiry for information relating to an enterprise, and wherein the response includes values for a plurality of fields relating to the enterprise.” Reference claim 2).
In regard to claim 14, it is substantially similar to claim 4 and accordingly is rejected under similar reasoning.
In regard to instant claim 7, the reference claims further recite: applying, by the first computing system, a first artificial intelligence (AI) algorithm to assign labels to data entries of the standardized dataset (“applying, by the first computing system, a first artificial intelligence (AI) algorithm to assign labels to data entries of the standardized dataset;” reference claim 1);
and compiling, by the first computing system, a database populated with the standardized dataset having the labels assigned to the respective data entries (“compiling, by the first computing system, the standardized dataset having the labels assigned to the respective data entries in a database;” reference claim 1).
In regard to claim 16, it is substantially similar to claim 7 and accordingly is rejected under similar reasoning.
In regard to instant claim 8, the reference claims further recite that information corresponding to the response is retrieved from the database using the labels assigned to the respective data entries and according to the request (“the first computing system generates the response to the query using at least a portion of the data from the database populated with the standardized dataset.” reference claim 6, wherein “applying, by the first computing system, a first artificial intelligence (AI) algorithm to assign labels to data entries of the standardized dataset; compiling, by the first computing system, the standardized dataset having the labels assigned to the respective data entries in a database;” reference claim 1).
In regard to claim 17, it is substantially similar to claim 8 and accordingly is rejected under similar reasoning.
In regard to instant claim 9, the reference claims further recite that the one or more first data sources comprise a customer relationship management (CRM) platform and a document database (“the one or more first data sources comprise a customer relationship management (CRM) platform and a document database.” Reference claim 3).
In regard to claim 18, it is substantially similar to claim 9 and accordingly is rejected under similar reasoning.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hudetz et al. in US Patent Application Publication No. 2024/0370479, hereinafter called Hudetz, in combination with Gentilcore et al. in US Patent Application Publication No. 2022/0269703, hereinafter called Gentilcore.
In regard to claim 1, Hudetz teaches a method comprising:
scraping, by a first computing system, one or more first data sources of the first computing system, and one or more second data sources of one or more external computing systems, to compile a standardized dataset (“Examples of data sources 302 include without limitation databases, web scraping, sensors and Internet of Things (IoT) devices, image and video cameras, audio devices, text generators, publicly available databases, private databases, and many other data sources 302. The data sources 302 may be remote from the artificial intelligence architecture 300 and accessed via a network, local to the artificial intelligence architecture 300 an accessed via a network interface, or may be a combination of local and remote data sources 302.” Paragraph 0098, wherein “The document manager 120 may process a document container 128 to generate a document image 140. The document image 140 is a unified or standard file format for an electronic document used by a given EDMP implemented by the system 100.” paragraph 0072; alternatively or additionally, a corpus as described in paragraph 0120) comprising a plurality of data entries sourced from the one or more first data sources and the one or more second data sources ("A corpus can include a variety of document types such as web pages, books, news articles, social media posts, scientific papers, and more. The corpus may be created for a specific domain or purpose, and it may be annotated with metadata or labels to facilitate analysis. Document corpora are commonly used in research and industry to train machine learning models and to develop NLP applications." Paragraph 0120);
establishing, by the first computing system in response to one or more credentials associated with a user (“The client device 112 may have utilized various work flows to identify the signers and associated network addresses (e.g., email address, short message service, multimedia message service, chat message, social message, etc.). For example, the client 134 may utilize workflows to identify multiple parties to the lease including bankers, landlord, and tenant. Further, the client 134 may utilize workflows to identify network addresses (e.g., email address) for each of the signers.” Paragraph 0079), a session between the first computing system and a computing device associated with the user (i.e. workflow, “For example, the signature manager 122 may utilize a workflow to configure communication of the document image 140 in series to obtain the signature of the first party before communicating the document image 140, including the signature of the first party, to a second party to obtain the signature of the second party before communicating the document image 140, including the signature of the first and second party to a third party, and so forth.” Paragraph 0079), the one or more credentials comprising an identifier associated with one or more data access rules for the user;
receiving, by an AI interface of the first computing system during the session, a query from the computing device (“The search manager 124 may receive a search query 144, encode it to a contextualized embedding in real-time, and leverage vector search to retrieve search results 146 with semantically similar document content within an electronic document 706.” Paragraph 0144);
receiving, by the first computing system, an output of an AI model comprising a context associated with the query (“The search process may produce a set of search results 146. The search results 146 may include a set of candidate document vectors that are semantically similar to the search vector of the search query 144.” Paragraph 0083), wherein the AI model is provided a plurality of encoded tokens representing the query to determine the context (i.e. contextual embedding, “The search manager 124 may generate a contextualized embedding for the search query 144 to form a search vector. A contextualized embedding may comprise a vector representation of a sequence of words in the search query 144 that includes contextual information for the sequence of words. The search manager 124 may search a document index of contextualized embeddings for the electronic document 142 with the search vector. Each contextualized embedding may comprise a vector representation of a sequence of words in the electronic document that includes contextual information for the sequence of words.” Paragraph 0083);
querying, by the first computing system, the standardized dataset based on the determined context and the one or more credentials associated with the user (“The context information 734 may also comprise metadata for the electronic document 706 (e.g., signatures, STME, marker elements, document length, document type, etc.), the user generating the search query 144 (e.g., demographics, location, interests, business entity, etc.),” paragraph 0147; note further that an identity supported by received information is within the broadest reasonable interpretation of a credential, paragraph 0049) to obtain a plurality of retrieved data entries of the plurality of data entries sourced from the one or more first data sources and the one or more second data sources (“In some embodiments, as with the document vectors 726, the candidate document vectors 718 may include or make reference to text components 606 for an electronic document 706. Alternatively, the text components 606 may be encoded into a different format other than a vector, such as text strings, for example.” Paragraph 0149, wherein “The search model 704 can then aggregate the embeddings of the document tokens using an attention mechanism to weight the importance of each token based on its relevance to the query. Specifically, the search model 704 can compute the attention scores between the query embedding and each document token embedding using the dot product or the cosine similarity” paragraph 0150; further note that multiple documents are contemplated, as in paragraphs 0144 and 0120);
generating, by the first computing system, a response to the query for delivering via the AI interface to the computing device during the session, the response generated based on a second output of the AI model and comprising a portion of information relating to the query to form a presentation (i.e. summary, “Once a set of search results 146 are obtained, the search manager 124 may summarize one or more of the candidate document vectors as an abstractive summary. The search manager 124 may implement or access a generative artificial intelligence (AI) platform that uses a large language module (LLM) to assist in summarizing the search results 146 to produce an Abstractive summary 148.” Paragraph 0084); wherein the AI model is provided the plurality of retrieved data entries from the standardized dataset according to the determined context and the plurality of encoded tokens to generate the second output (“The generative AI may provide an Abstractive summary 148 of the search results 146 relevant to a given search query 144.” Paragraph 0084, wherein “Additionally, or alternatively, the search query 144 may be modified or expanded using context information 734. The context information 734 may be any information that provides some context for the search query 144. For example, the context information 734 may comprise a previous search query 144 by the same user, a search query 144 submitted by other users, or prior search results 146 from a previous search query 144.” Paragraph 0147; alternatively or additionally, note that the contextualized embeddings taught in paragraph 0141 use a set of documents and are used to create the needed search vectors, i.e. a standardized dataset);
and generating, by the first computing system during the session, the presentation for display at the computing device in the AI interface according to the portion of information (“The search process may produce a set of search results 146.” Paragraph 0148).
However, while Hudetz does teach establishing, by the first computing system in response to one or more credentials associated with a user, a session between the first computing system and a computing device associated with the user (paragraph 0079), Hudetz fails to expressly teach that the one or more credentials comprise an identifier associated with one or more data access rules for the user.
Gentilcore teaches establishing, by the first computing system in response to authenticating one or more credentials associated with a user, a session between the first computing system and a computing device associated with the user, the one or more credentials comprising an identifier associated with one or more data access rules for the user ("The networked computing environment 100 may provide access to protected resources (e.g., networks, servers, storage devices, files, and computing applications) based on access rights (e.g., read, write, create, delete, or execute rights) that are tailored to particular users of the computing environment (e.g., a particular employee or a group of users that are identified as belonging to a particular group or classification). An access control system may perform various functions for managing access to resources including authentication, authorization, and auditing. Authentication may refer to the process of verifying that credentials provided by a user or entity are valid or to the process of confirming the identity associated with a user or entity (e.g., confirming that a correct password has been entered for a given username). Authorization may refer to the granting of a right or permission to access a protected resource or to the process of determining whether an authenticated user is authorized to access a protected resource." Paragraph 0043).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to modify the AI document search system taught by Hudetz to include authentication of credentials to determine access rights for access via a web browser, as taught by Gentilcore. It would have been obvious because it represents the application of a known technique (i.e. the authentication of user credentials to determine access rights, especially to a search system web interface, as taught by Gentilcore in at least paragraph 0043) to a known system (i.e. the AI-based document search system, which includes a web page search GUI, as taught by Hudetz in at least paragraph 0219) ready for improvement to yield predictable results (i.e. the web page search system will use authentication of user credentials to determine access privileges). One would have been motivated to do so in order to ensure compliance with regulations, as taught by Gentilcore ("In some cases, a particular set of data may be associated with an ACL that determines which users within an organization may access the particular set of data. In one example, to ensure compliance with data security and retention regulations, the particular set of data may comprise sensitive or confidential information that is restricted to viewing by only a first group of users. In another example, the particular set of data may comprise source code and technical documentation for a particular product that is restricted to viewing by only a second group of users." paragraph 0056).
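For context only, the vector-search mechanism quoted above from Hudetz (contextualized embeddings compared by dot product or cosine similarity, paragraphs 0145-0150) can be illustrated with a minimal sketch. The three-dimensional vectors, document identifiers, and ranking function below are hypothetical stand-ins for illustration and are not drawn from either reference.

```python
import math

def cosine_similarity(a, b):
    # Dot product normalized by vector magnitudes, one of the
    # similarity measures named in Hudetz paragraph 0150.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def vector_search(query_vec, doc_index, top_k=2):
    # Rank candidate document vectors by similarity to the search
    # vector and return the identifiers of the top-k candidates.
    ranked = sorted(doc_index.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]

# Hypothetical three-dimensional "contextualized embeddings".
index = {"doc_a": [0.9, 0.1, 0.0],
         "doc_b": [0.1, 0.9, 0.0],
         "doc_c": [0.7, 0.3, 0.1]}
results = vector_search([1.0, 0.0, 0.0], index)
```

In this sketch, the returned candidates would correspond to the "set of candidate document vectors that are semantically similar to the search vector" described in Hudetz paragraph 0083.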
In regard to claims 11 and 20, they are substantially similar to claim 1 and accordingly are rejected under similar reasoning.
In regard to claim 2, Hudetz further teaches that the presentation and the response are displayed at the computing device within the AI interface (“The search process may produce a set of search results 146.” Paragraph 0148).
In regard to claim 12, it is substantially similar to claim 2 and accordingly is rejected under similar reasoning.
In regard to claim 3, Hudetz further teaches that generating the presentation comprises generating a plurality of presentations including the presentation, each presentation corresponding to a different type of presentation, from a first type including a plot presentation, a second type including a visual presentation, and a third type comprising a comparative presentation (“The search manager 124 may prepare a prompt with both the search query 144 and some or all of the search results 146 (e.g., the top k sections) from the electronic document 706, and send it to the generative AI model 728 to create an Abstractive summary 148. The server device 102 may surface the abstractive summary 148 and/or the search results 146 in a graphical user interface (GUI) of a client device, such as client devices 112 or client devices 116.” Paragraph 0134).
In regard to claim 13, it is substantially similar to claim 3 and accordingly is rejected under similar reasoning.
In regard to claim 4, Hudetz further teaches: identifying, by the first computing system from the response, at least a portion of the response corresponding to a plurality of presentations linked according to the query (“The search manager 124 may prepare a prompt with both the search query 144 and some or all of the search results 146 (e.g., the top k sections) from the electronic document 706, and send it to the generative AI model 728 to create an Abstractive summary 148.” Paragraph 0134).
In regard to claim 5, Hudetz further teaches that generating the presentation comprises: generating, by the first computing system, a comparative presentation based on at least a portion of data corresponding to the plurality of presentations linked according to the query (i.e. summary of results, “The search manager 124 may prepare a prompt with both the search query 144 and some or all of the search results 146 (e.g., the top k sections) from the electronic document 706, and send it to the generative AI model 728 to create an Abstractive summary 148. The server device 102 may surface the abstractive summary 148 and/or the search results 146 in a graphical user interface (GUI) of a client device, such as client devices 112 or client devices 116.” Paragraph 0134).
In regard to claim 6, Hudetz further teaches: receiving, by the first computing system via the AI interface from the computing device, responsive to the AI interface displaying the response and the presentation, a prompt relating to the presentation (i.e. a query, “Additionally, or alternatively, the search query 144 may be modified or expanded using context information 734. The context information 734 may be any information that provides some context for the search query 144. For example, the context information 734 may comprise a previous search query 144 by the same user, a search query 144 submitted by other users, or prior search results 146 from a previous search query 144. The context information 734 may allow the user to build search queries in an iterative manner, drilling down on more specific search questions in follow-up to reviewing previous search results 146. The context information 734 may also comprise metadata for the electronic document 706 (e.g., signatures, STME, marker elements, document length, document type, etc.),” paragraph 0147);
and determining, by the first computing system, based on the standardized dataset, information responsive to the prompt (i.e. STME information, “The document manager 120 may generate the visual elements based on separate and distinct input including the STME information 130 and the STME 132 contained in the document container” paragraph 0077, note that “Accordingly, the PDF and the STME 132 are separate and distinct input as they are generated by different workflows provided by different providers.” paragraph 0077);
and generating, by the first computing system, an overlay applied to the presentation, providing the responsive information, for displaying via the AI interface (“In addition to the electronic document 142, the document container 128 may also include metadata for the electronic document 142. In one embodiment, the metadata may comprise signature tag marker element (STME) information 132 for the electronic document 142. The STME information 130 may comprise one or more STME 132, which are graphical user interface (GUI) elements superimposed on the electronic document 142. The GUI elements may comprise textual elements, visual elements, auditory elements, tactile elements, and so forth” paragraph 0070).
In regard to claim 15, it is substantially similar to claim 6 and accordingly is rejected under similar reasoning.
In regard to claim 7, Hudetz further teaches: applying, by the first computing system, a first artificial intelligence (AI) algorithm to assign labels to data entries of the standardized dataset (“This can be useful for tasks such as document content classification or sentiment analysis, where the search model 704 assigns a label or score to a portion of a document or the entire document based on its content” paragraph 0140; “One or more of the information blocks 710 and/or the document vectors 726 may optionally include block labels assigned using a machine learning model, such as a classifier.” paragraph 0164);
and compiling, by the first computing system, a database populated with the standardized dataset having the labels assigned to the respective data entries (“A corpus can include a variety of document types such as web pages, books, news articles, social media posts, scientific papers, and more. The corpus may be created for a specific domain or purpose, and it may be annotated with metadata or labels to facilitate analysis. Document corpora are commonly used in research and industry to train machine learning models and to develop NLP applications.” Paragraph 0120).
In regard to claim 16, it is substantially similar to claim 7 and accordingly is rejected under similar reasoning.
In regard to claim 8, Hudetz further teaches that information corresponding to the response is retrieved from the database using the labels assigned to the respective data entries and according to the request (“The search manager 124 may receive a search query 144, encode it to a contextualized embedding in real-time, and leverage vector search to retrieve search results 146 with semantically similar document content within an electronic document 706.” Paragraph 0144).
In regard to claim 17, it is substantially similar to claim 8 and accordingly is rejected under similar reasoning.
In regard to claim 9, Hudetz further teaches that the one or more first data sources comprise a customer relationship management (CRM) platform and a document database (“In some cases, the document corpus may be associated with a particular entity, such as a customer or client of the electronic document management company, and may therefore contain proprietary, strategic and valuable business information.” Paragraph 0046).
In regard to claim 18, it is substantially similar to claim 9 and accordingly is rejected under similar reasoning.
In regard to claim 10, Hudetz further teaches that generating the response comprises: generating, by the first computing system, a plurality of tokens representing the query (i.e., each value in an embedding token, “The search manager 124 may receive a search query 144, encode it to a contextualized embedding in real-time, and leverage vector search to retrieve search results 146 with semantically similar document content within an electronic document 706.” paragraph 0144);
encoding, by the first computing system, each token into a corresponding encoded token of the plurality of encoded tokens representing the query (“The search manager 124 may use the search model 704 to generate a contextualized embedding for the search query 144 to form a search vector. As previously discussed, a contextualized embedding may comprise a vector representation of a sequence of words in the search query 144 that includes contextual information for the sequence of words.” paragraph 0145);
applying, by the first computing system, the plurality of encoded tokens to an AI model, to generate the output comprising the context associated with the query (“Additionally, or alternatively, the search query 144 may be modified or expanded using context information 734. The context information 734 may be any information that provides some context for the search query 144. For example, the context information 734 may comprise a previous search query 144 by the same user, a search query 144 submitted by other users, or prior search results 146 from a previous search query 144.” paragraph 0147);
requesting, by the first computing system, the plurality of retrieved data entries from the database populated with the standardized dataset and/or from the one or more first data sources or the one or more second data sources, the plurality of retrieved data entries requested according to the determined context (“The search manager 124 may search a document index 730 of contextualized embeddings for the electronic document 706 with the search vector, which is itself a contextualized embedding of the same type as those stored in the document index 730. Each contextualized embedding may comprise a vector representation of a sequence of words in the electronic document that includes contextual information for the sequence of words.” paragraph 0148);
applying, by the first computing system, data corresponding to the plurality of retrieved data entries and the plurality of encoded tokens to the AI model to generate the second output (“The search manager 124 may search a document index 730 of contextualized embeddings for the electronic document 706 with the search vector, which is itself a contextualized embedding of the same type as those stored in the document index 730.” paragraph 0148, wherein “…semantically similar document content within an electronic document 706. The search manager 124 may prepare a prompt with both the search query 144 and some or all of the search results 146 (e.g., the top k sections) from the electronic document 706, and send it to the generative AI model 728 to create an Abstractive summary 148. The server device 102 may surface the abstractive summary 148 and/or the search results 146 in a graphical user interface (GUI) of a client device, such as client devices 112 or client devices 116.” paragraph 0134);
and generating, by the first computing system, the response based on the second output from the AI model (“The search process may produce a set of search results 146. The search results 146 may include a set of P candidate document” paragraph 0148, wherein a summary is generated as in paragraph 0134).
In regard to claim 19, it is substantially similar to claim 10 and accordingly is rejected under similar reasoning.
In regard to claim 14, Hudetz further teaches that the one or more processors are configured to: identify, from the response, at least a portion of the response corresponding to a plurality of presentations linked according to the query, and wherein to generate the presentation, the one or more processors are configured to generate a comparative presentation based on at least a portion of data corresponding to the plurality of presentations linked according to the query (“The search manager 124 may prepare a prompt with both the search query 144 and some or all of the search results 146 (e.g., the top k sections) from the electronic document 706, and send it to the generative AI model 728 to create an Abstractive summary 148. The server device 102 may surface the abstractive summary 148 and/or the search results 146 in a graphical user interface (GUI) of a client device, such as client devices 112 or client devices 116.” paragraph 0134).
Response to Arguments
Applicant’s arguments, see page 9, filed 2/18/2026, with respect to the double-patenting rejection(s) of claim(s) 1, 2, 4, 7-9, 11, 12, 16-18, and 20 under copending Application No. 18/734,488, and specifically that the previous rejection is no longer applicable because the claims have been amended, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of copending Application No. 18/734,488. In particular, the claims of copending Application No. 18/734,488 as presently amended anticipate the instant claims. For more information, please refer to the relevant section above.
Applicant’s arguments, see pages 9-12, filed 2/18/2026, with respect to the rejection(s) of claim(s) 1-20 under 35 U.S.C. 102(a)(2) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Hudetz and Gentilcore. For more information, please refer to the relevant sections above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Lauren Z Ganger whose telephone number is (571) 272-0270. The examiner can normally be reached 10:00 AM - 7:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ajay Bhatia can be reached at (571) 272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AJAY M BHATIA/ Supervisory Patent Examiner, Art Unit 2156