Prosecution Insights
Last updated: April 19, 2026
Application No. 18/652,615

UTILIZING A LARGE LANGUAGE MODEL TO PERFORM A QUERY

Non-Final OA: §103, §DP (Double Patenting)
Filed
May 01, 2024
Examiner
SPOONER, LAMONT M
Art Unit
2657
Tech Center
2600 — Communications
Assignee
Tiny Fish Inc.
OA Round
1 (Non-Final)
Grant Probability: 74% (Favorable)
OA Rounds: 1-2
To Grant: 3y 4m
With Interview: 86%

Examiner Intelligence

Grants 74% — above average
Career Allow Rate: 74% (445 granted / 603 resolved; +11.8% vs TC avg)
Interview Lift: +11.8% for resolved cases with interview (moderate, ~+12% lift)
Typical Timeline: 3y 4m avg prosecution; 22 currently pending
Career History: 625 total applications across all art units
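The headline figures above are internally consistent; a quick sketch that just redoes the arithmetic from the raw counts shown on this page:

```python
# Counts taken from the examiner stats above.
granted, resolved = 445, 603
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # -> 73.8%, shown as 74%

# The 86% "with interview" figure is the base rate plus the +11.8% lift.
interview_lift = 0.118
print(f"With interview: {allow_rate + interview_lift:.1%}")  # -> 85.6%, shown as 86%
```
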

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 50.1% (+10.1% vs TC avg)
§102: 19.7% (-20.3% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 603 resolved cases

Office Action

§103 §DP
DETAILED ACTION

Introduction

This office action is in response to applicant’s claims filed 5/1/2024. Claims 1-23 are currently pending and have been examined. There is no claim to foreign priority.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 2, and 5-23 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,019,663 (hereinafter referred to as ‘663).
Regarding claim 1, ’663 teaches a method, comprising: receiving from a client device a query (claim 1, column 8 line 38, see corresponding and similar limitation); providing to a large language model a prompt to generate a plurality of subtopics on the query and to generate a corresponding plurality of keywords for each of the plurality of subtopics (claim 1 column 8, lines 39-42, see corresponding and similar limitation); utilizing one or more search engines to perform a plurality of searches utilizing the plurality of subtopics and the corresponding plurality of keywords received from the large language model (claim 1, column 8 lines 49-52, see corresponding and similar limitation); receiving from the one or more search engines a plurality of responses corresponding to the plurality of subtopics and the corresponding plurality of keywords (claim 1, column 8 lines 53-55, see corresponding and similar limitation); evaluating the plurality of responses based on the corresponding plurality of keywords (claim 1, column 8 lines 56-57, see corresponding and similar limitation); providing to the large language model the received query and a corresponding subset of sentences associated with each of the plurality of subtopics selected from a subset of the plurality of responses (claim 1, column 8 lines 63-65, see corresponding and similar limitation); receiving a query response from the large language model that is generated based on the received query and the corresponding subset of sentences associated with each of the plurality of subtopics selected from the subset of the plurality of responses (claim 1, column 8, lines 66-, see corresponding and similar limitation); and providing to the client device the query response that includes a plurality of links to sources that were used by the large language model to generate the large language model query response (see claim 14, corresponding and similar limitation). 
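Read as an algorithm, the claim 1 method mapped above is a retrieval-augmented query pipeline. A minimal sketch of that flow, assuming hypothetical `llm_plan`, `search`, and `llm_answer` callables (these names, and the whole structure below, are illustrative stand-ins, not taken from the application or the ‘663 patent):

```python
from dataclasses import dataclass

@dataclass
class Result:
    text: str
    url: str

def keyword_score(text, keywords):
    # Evaluate a search response by counting keyword occurrences.
    return sum(text.lower().count(k.lower()) for k in keywords)

def answer_query(query, llm_plan, search, llm_answer, top_n=2):
    """Sketch of the claimed flow: subtopics -> per-subtopic searches ->
    keyword-based evaluation -> sentence selection -> response with links."""
    plan = llm_plan(query)                    # {subtopic: [keywords, ...]}
    selected, links = {}, []
    for subtopic, keywords in plan.items():
        results = search(subtopic, keywords)  # one search per subtopic
        ranked = sorted(results,
                        key=lambda r: keyword_score(r.text, keywords),
                        reverse=True)
        top = ranked[:top_n]                  # subset of the responses
        # Per-subtopic subset of sentences handed back to the model.
        selected[subtopic] = [s.strip() for r in top
                              for s in r.text.split(".") if s.strip()]
        links += [r.url for r in top]
    return {"answer": llm_answer(query, selected), "links": links}
```

Here `links` plays the role of the claimed "plurality of links to sources," and `selected` carries the per-subtopic sentence subsets provided to the model with the original query.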
Regarding claim 2, ‘663 further makes obvious the method of claim 1, further comprising rephrasing the query (see claim 2, corresponding and similar limitation). Regarding claim 5, ‘663 further makes obvious the method of claim 1, wherein a prompt to generate the plurality of subtopics on the query includes a corresponding number of the subtopics and a corresponding number of the keywords for each of the plurality of subtopics (claim 1, lines 43-45, see corresponding and similar limitation). Regarding claim 6, ‘663 further makes obvious the method of claim 1, further comprising receiving from the large language model the plurality of subtopics and the corresponding plurality of keywords for each of the plurality of subtopics (claim 1, lines 46-48, see corresponding and similar limitation). Regarding claim 7, ‘663 further makes obvious the method of claim 1, wherein evaluating the plurality of responses includes counting, for each response included in the plurality of responses, a number of times the corresponding plurality of keywords appears in the plurality of responses (see claim 3, corresponding and similar limitation). Regarding claim 8, ‘663 further makes obvious the method of claim 7, wherein the plurality of responses includes a plurality of search result snippets (see claim 4, corresponding and similar limitation). Regarding claim 9, ‘663 further makes obvious the method of claim 7, wherein evaluating the plurality of responses includes ranking the plurality of responses (see claim 5, corresponding and similar limitation). Regarding claim 10, ‘663 further makes obvious the method of claim 9, wherein the plurality of responses is ranked based on the number of times the corresponding plurality of keywords appears in a corresponding response (see claim 6, corresponding and similar limitation). 
Regarding claim 11, ‘663 further makes obvious the method of claim 9, wherein the plurality of responses is ranked based on a corresponding domain associated with the plurality of responses (see claim 7, corresponding and similar limitation). Regarding claim 12, ‘663 further makes obvious the method of claim 9, wherein evaluating the plurality of responses includes generating a corresponding subset of responses for each of the plurality of subtopics based on the plurality of ranked responses (see claim 8, corresponding and similar limitation). Regarding claim 13, ‘663 further makes obvious the method of claim 12, wherein a top number or a top percentage of the ranked responses is included in the corresponding subset of responses (see claim 9, corresponding and similar limitation). Regarding claim 14, ‘663 further makes obvious the method of claim 12, further comprising parsing text included in pages linked to the corresponding subset of responses (see claim 10, corresponding and similar limitation). Regarding claim 15, ‘663 further makes obvious the method of claim 14, further comprising ranking sentences included in the parsed text included in the pages linked to the corresponding subset of responses based on the number of times the corresponding plurality of keywords appears in the sentences (see claim 11, corresponding and similar limitation). Regarding claim 16, ‘663 further makes obvious the method of claim 15, wherein each of the plurality of subtopics is associated with a corresponding subset of sentences that are selected from the ranked sentences (see claim 12, corresponding and similar limitation). Regarding claim 17, ‘663 further makes obvious the method of claim 16, wherein each of the plurality of subtopics is associated with a word limit (see claim 13, corresponding and similar limitation). 
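Claims 14-17 describe a sentence-level step: parse the linked pages, rank sentences by keyword frequency, and keep a subset under a per-subtopic word limit. A hedged sketch of just that step (function and parameter names are illustrative, not from the application or the ‘663 patent):

```python
def select_sentences(page_text, keywords, word_limit=50):
    """Rank parsed sentences by keyword frequency and keep as many of the
    top-ranked sentences as fit within the subtopic's word limit."""
    sentences = [s.strip() for s in page_text.split(".") if s.strip()]
    ranked = sorted(sentences,
                    key=lambda s: sum(s.lower().count(k.lower())
                                      for k in keywords),
                    reverse=True)
    chosen, used = [], 0
    for s in ranked:
        words = len(s.split())
        if used + words <= word_limit:  # enforce the per-subtopic limit
            chosen.append(s)
            used += words
    return chosen
```

Because the sort is stable, ties between equally scored sentences preserve document order, and sentences that would overflow the limit are simply skipped.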
Regarding claim 18, ‘663 further makes obvious the method of claim 1, further comprising utilizing one or more web agents to parse text associated with one or more online sources (see claim 15, corresponding and similar limitation). Regarding claim 19, ‘663 further makes obvious the method of claim 18, wherein at least one of the one or more web agents utilizes login credentials associated with a user of the client device to access the one of the one or more online sources (see claim 16, corresponding and similar limitation). Regarding claim 20, ‘663 further makes obvious the method of claim 1, further comprising: receiving from the client device one or more subsequent queries related to the query (see claim 17, corresponding and similar limitation); and storing the query and the one or more subsequent queries related to the query as a branched query topic (ibid). Regarding claim 21, ‘663 further makes obvious the method of claim 20, further comprising providing access to the branched query topic via a query topic board (see claim 18, corresponding and similar limitation). Regarding claims 22 and 23, claims 22 and 23 set forth limitations similar to claim 1 and are thus rejected under similar reasons and rationale, wherein ‘663 teaches the system and computer program product embodied in a non-transitory computer readable medium (see ‘663 claims 19 and 20, respectively).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6 and 20-23 are rejected under 35 U.S.C. 103 as being unpatentable over Mashiach et al. (Mashiach, US 2016/0147893), in view of Sarukkai (US 8,412,698), in view of Cantu et al. (Cantu, US 2025/0028746). As per claim 1, Mashiach teaches a method, comprising: receiving from a client device a query (Fig. 1, his client system, paragraph [0050]-his user search query); providing to a large language model a prompt to generate a plurality of subtopics on the query and to generate a corresponding plurality of keywords for each of the plurality of subtopics (ibid-paragraphs [0050, 0051]-his social-networking system as the large language model (LLM) hereinafter, and generating topics/as subtopics hereinafter, and populating each subtopic with keywords, his list of topics having generated and associated keywords corresponding therewith); utilizing one or more search engines to perform a plurality of searches utilizing the plurality of subtopics and the corresponding plurality of keywords received from the large language model (ibid-paragraphs [0050, 0051, 0058, 0062]-as his search engine, retrieving objects which match the search query, the search query comprising the topics and associated keywords, Figs. 3, 4, Fig. 
4 including search query, multiple subtopics, and multiple keywords corresponding to the search topics); receiving from the one or more search engines a plurality of responses corresponding to the plurality of subtopics and the corresponding plurality of keywords (ibid, Fig. 4, see also paragraph [0066]-his search results page, as plurality of responses); evaluating the plurality of responses based on the corresponding plurality of keywords (ibid, paragraph [0063]-his search query result, scored based on keyword match frequency); providing to the large language model the received query and a corresponding subset of sentences associated with each of the plurality of subtopics [selected from a subset of the plurality of responses] (ibid-Mashiach, paragraph [0042, 0049, 0050]-his query and pages associated with the subtopics, received by the language model and corresponding response generated and output to the user, See Figs. 1, 6-8-pages, search query supplied to the LLM, analysis and output to the user); receiving a query response from the large language model that is generated based on the received query and the corresponding subset of sentences associated with each of the plurality of subtopics selected from the subset of the plurality of responses (ibid-Mashiach, paragraph [0042, 0049, 0050]-his query and pages associated with the subtopics, received by the language model and corresponding response generated and output to the user, See Figs. 1, 6-8-pages, search query supplied to the LLM, analysis and output to the user); and [providing to the client device the query response that includes a plurality of links to sources that were used by the large language model to generate the large language model query response]. 
Mashiach lacks teaching that which Sarukkai teaches providing to the large language model the received query and a corresponding subset of sentences associated with each of the plurality of subtopics selected from a subset of the plurality of responses (Sarukkai, Figs. 5, 7, 8-his plurality of subtopics, professional basketball, amateur basketball, all ranked responses, and subset of responses in his list for each and every subtopic, C.8 lines 16-58-his query and subset of filter records associated with each subtopic). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Mashiach and Sarukkai to combine the prior art element of a prompt to generate a plurality of subtopics and keywords as taught by Mashiach with having a search query based on a query and subset of subtopics, including ranking search results based on subtopics and generated results for all subtopics as taught by Sarukkai as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 US. -- 82 USPQ2nd 1385 (2007), wherein the predictable result would be filtering/enhancing queries to rank search results (ibid, Sarukkai). Mashiach with Sarukkai lack teaching that which Cantu teaches, providing to the client device the query response that includes a plurality of links to sources that were used by the large language model to generate the large language model query response (paragraphs [0165, 0161-0165, 0140]-his response and corresponding link to source document, used by his LLM to generate the response). 
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Mashiach with Sarukkai with Cantu to combine the prior art element of generating a response as taught by Mashiach with providing a link to a source of a generated response (including knowledge bases, articles, documents, etc.) as taught by Cantu as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 US. -- 82 USPQ2nd 1385 (2007), wherein the predictable result would be providing a user a response that includes interactive data elements, such as a link to a data source for the response (ibid, Cantu, paragraph [0161-0165]). As per claim 2, Mashiach further makes obvious the method of claim 1, further comprising rephrasing the query (paragraph [0046]-the unstructured query, rephrased into a formal query syntax). As per claim 3, Mashiach further makes obvious the method of claim 2, wherein the query is rephrased to fix grammatical and/or spelling errors (ibid-paragraph [0046]-his query rephrased to conform to standard grammar rules). As per claim 4, Mashiach further makes obvious the method of claim 2, wherein the query is rephrased to present the query in an improved format for the large language model prompt (ibid-paragraph [0045-0046]-his query rephrased to conform to standard grammar rules, and corresponding structured query, as an improved format for execution by the language model as described above). 
As per claim 6, Mashiach further makes obvious the method of claim 1, further comprising receiving from the large language model the plurality of subtopics and the corresponding plurality of keywords for each of the plurality of subtopics (ibid-see claim 1, LLM discussion, Mashiach paragraph [0050]). As per claim 20, Mashiach with Sarukkai with Cantu further make obvious the method of claim 1, Sarukkai teaches that which the others lack, further comprising: receiving from the client device one or more subsequent queries related to the query (C.8 lines 16-58-his stored query history including related keywords, topic history, Fig. 5); and storing the query and the one or more subsequent queries related to the query as a branched query topic (ibid-his filter records, based on related topic, “basketball” and each branched query topic therefrom, “professional basketball” and “amateur basketball” record history). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Mashiach and Sarukkai to combine the prior art element of a prompt to generate a plurality of subtopics and keywords as taught by Mashiach with including a topic dashboard with branched query topic history as taught by Sarukkai as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 US. -- 82 USPQ2nd 1385 (2007), wherein the predictable result would be providing selectable search topic history for filtering (ibid, Sarukkai). 
As per claim 21, Mashiach with Sarukkai with Cantu further make obvious the method of claim 20, further comprising, as lacked by the others, and taught by Sarukkai, providing access to the branched query topic via a query topic board (ibid, Sarukkai-Fig. 5 as his query topic board, including his filter records, based on related topic, “basketball” and each branched query topic therefrom, “professional basketball” and “amateur basketball” record history, see also Fig. 6-including query topic history, as similarly combined and motivated.). As per claim 22, claim 22 sets forth limitations similar to claim 1 and is thus rejected under similar reasons and rationale, wherein the system is deemed to embody the method, such that Mashiach with Sarukkai with Cantu make obvious a system (Mashiach, Figs. 1, 7-his system and processor and coupled memory, paragraphs [0079-0086]-see his system discussion), comprising: a processor configured to (ibid): receive from a client device a query (ibid-see claim 1, corresponding and similar limitation); provide to a large language model a prompt to generate a plurality of subtopics on the query and to generate a corresponding plurality of keywords for each of the plurality of subtopics (ibid); utilize one or more search engines to perform a plurality of searches utilizing the plurality of subtopics and the corresponding plurality of keywords received from the large language model (ibid); receive from the one or more search engines a plurality of responses corresponding to the plurality of subtopics and the corresponding plurality of keywords (ibid); evaluate the plurality of responses based on the corresponding plurality of keywords (ibid); provide to the large language model the received query and a corresponding subset of sentences associated with each of the plurality of subtopics selected from a subset of the plurality of responses (ibid); receive a query response from the large language model that is generated based on the received 
query and the corresponding subset of sentences associated with each of the plurality of subtopics selected from the subset of the plurality of responses (ibid); and provide to the client device the query response that includes a plurality of links to sources that were used by the large language model to generate the large language model query response (ibid); and a memory coupled to the processor and configured to provide the processor with instructions (Figs. 1, 7-his system and processor and coupled memory, paragraphs [0079-0086]-see his system and instructions discussion). As per claim 23, claim 23 sets forth limitations similar to claim 1 and is thus rejected under similar reasons and rationale, wherein the computer readable medium is deemed to embody the method, such that Mashiach with Sarukkai with Cantu make obvious, a computer program product embodied in a non-transitory computer readable medium and comprising computer instructions for (Mashiach, Figs. 1, 7-see his computer-readable non-transitory medium and instructions discussion): receiving from a client device a query (ibid-see claim 1, corresponding and similar limitation); providing to a large language model a prompt to generate a plurality of subtopics on the query and to generate a corresponding plurality of keywords for each of the plurality of subtopics (ibid); utilizing one or more search engines to perform a plurality of searches utilizing the plurality of subtopics and the corresponding plurality of keywords received from the large language model (ibid); receiving from the one or more search engines a plurality of responses corresponding to the plurality of subtopics and the corresponding plurality of keywords (ibid); evaluating the plurality of responses based on the corresponding plurality of keywords (ibid); providing to the large language model the received query and a corresponding subset of sentences associated with each of the plurality of subtopics selected from a subset of the 
plurality of responses (ibid); receiving a query response from the large language model that is generated based on the received query and the corresponding subset of sentences associated with each of the plurality of subtopics selected from the subset of the plurality of responses (ibid); and providing to the client device the query response that includes a plurality of links to sources that were used by the large language model to generate the large language model query response (ibid). Claims 7-9 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Mashiach in view of Sarukkai in view of Cantu, as applied to claim 1, and further in view of Sekine (US 2020/0125594). As per claim 7, Mashiach further makes obvious the method of claim 1, but lacks teaching that which Sekine teaches wherein evaluating the plurality of responses includes counting, for each response included in the plurality of responses, a number of times the corresponding plurality of keywords appears in the plurality of responses (paragraph [0062, 0066]-his page of presentation, corresponding ranked pages based on search term, the ranking based on frequency of the term in the page). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Mashiach and Sekine to combine the prior art element of a prompt to generate a plurality of subtopics and keywords as taught by Mashiach with ranking search results based on frequency of a search term found in the page as taught by Sekine as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 US. 
-- 82 USPQ2nd 1385 (2007), wherein the predictable result would be providing ranked results (ibid, Sekine). As per claim 8, Mashiach further makes obvious the method of claim 7, wherein the plurality of responses includes a plurality of search result snippets (ibid-Mashiach, paragraph [0066], Fig. 5, items 530A-530d, as selectable search result snippets, linking to the full object page). As per claim 9, Mashiach further makes obvious the method of claim 7, wherein evaluating the plurality of responses includes ranking the plurality of responses (ibid-Mashiach, paragraph [0074]-his ranked search results). As per claim 11, Mashiach further makes obvious the method of claim 9, but lacks teaching that which Sarukkai teaches wherein the plurality of responses is ranked based on a corresponding domain associated with the plurality of responses (C.10 lines 19-35-his ranking based on domain names, Figs. 6-8, including ranked results). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Mashiach and Sarukkai to combine the prior art element of a prompt to generate a plurality of subtopics and keywords as taught by Mashiach with ranking search results based on domain information as taught by Sarukkai as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 US. -- 82 USPQ2nd 1385 (2007), wherein the predictable result would be ranking search results (ibid, Sarukkai). 
As per claim 12, Mashiach further makes obvious the method of claim 9, but lacks teaching that which Sarukkai teaches wherein evaluating the plurality of responses includes generating a corresponding subset of responses for each of the plurality of subtopics based on the plurality of ranked responses (Sarukkai, Figs. 5, 7, 8-his plurality of subtopics, professional basketball, amateur basketball, all ranked responses, and subset of responses in his list for each and every subtopic). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Mashiach and Sarukkai to combine the prior art element of a prompt to generate a plurality of subtopics and keywords as taught by Mashiach with ranking search results based on subtopics and generated results for all subtopics as taught by Sarukkai as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 US. -- 82 USPQ2nd 1385 (2007), wherein the predictable result would be ranking search results (ibid, Sarukkai). As per claim 13, Mashiach with Sarukkai with Sekine further make obvious the method of claim 12, Sarukkai teaching that which the others lack, wherein a top number or a top percentage of the ranked responses is included in the corresponding subset of responses (ibid-see above ranked responses discussion, Sarukkai, C.10 lines 14-18-his configured select number of ranked documents to be included in the subset of responses, Figs. 6-8). 
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Mashiach and Sarukkai to combine the prior art element of a prompt to generate a plurality of subtopics and keywords as taught by Mashiach with including a top number of ranked results as taught by Sarukkai as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 US. -- 82 USPQ2nd 1385 (2007), wherein the predictable result would be ranking search results (ibid, Sarukkai). Claims 10, and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Mashiach, in view of Sarukkai, in view of Cantu, in view of Sekine (US 2020/0125594), as applied to claim 9 above, and further in view of Bitan et al. (Bitan, US 2014/0280174). As per claim 10, Mashiach further makes obvious the method of claim 9, but lacks explicitly teaching that which Bitan teaches wherein the plurality of responses is ranked based on the number of times the corresponding plurality of keywords appears in a corresponding response (paragraphs [0042, 0283, 0284]-his ranking of document responses based on keywords included, Fig. 8). 
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Mashiach and Bitan, to combine the prior art element of a prompt to generate a plurality of subtopics and keywords as taught by Mashiach with ranking search results based on keyword information as taught by Bitan, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be ranking search results (ibid, Bitan).

As per claim 14, Mashiach with Sarukkai with Cantu with Sekine with Bitan further make obvious the method of claim 12, Bitan teaching that which the others lack, further comprising parsing text included in pages linked to the corresponding subset of responses (Bitan, paragraphs [0089, 0283, 0284], Fig. 8: his linked search result pages, parsed and keyword frequencies counted for ranking of search results).
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Mashiach and Bitan, to combine the prior art element of a prompt to generate a plurality of subtopics and keywords as taught by Mashiach with ranking search results based on search results parsed for keyword information as taught by Bitan, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be ranking search results (ibid, Bitan).

As per claim 15, Mashiach with Sarukkai with Cantu with Sekine with Bitan further make obvious the method of claim 14, further comprising, as taught by Bitan and lacking from the others, ranking sentences included in the parsed text included in the pages linked to the corresponding subset of responses based on the number of times the corresponding plurality of keywords appears in the sentences (ibid, Bitan, paragraph [0042]: his keyword frequency in the body of text as sentences on the page).
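The parse-then-rank steps recited in claims 14 and 15 above (parse text from linked pages, then rank the sentences by keyword occurrences) can be sketched together. The page contents below are stand-ins for fetched documents; nothing here is taken from the cited references:

```python
# Illustrative sketch: split parsed page text into sentences and rank
# the sentences by keyword occurrence count. Data is hypothetical.
import re

def rank_sentences(pages, keywords):
    sentences = []
    for page_text in pages:
        # Naive sentence split on terminal punctuation.
        sentences.extend(s.strip() for s in re.split(r"[.!?]", page_text) if s.strip())

    def score(sentence):
        words = sentence.lower().split()
        return sum(words.count(kw.lower()) for kw in keywords)

    return sorted(sentences, key=score, reverse=True)

pages = [
    "The league expanded. Basketball attendance rose in the league.",
    "Ticket prices fell. Fans of basketball cheered.",
]
print(rank_sentences(pages, ["basketball", "league"])[0])
```

The sentence containing the most keyword hits ranks first, mirroring the keyword-frequency rationale applied to claim 15.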
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Mashiach and Bitan, to combine the prior art element of a prompt to generate a plurality of subtopics and keywords (Mashiach, paragraph [0064]: his sentence/title object for matching keywords) as taught by Mashiach with ranking search results based on search results parsed for keyword information as taught by Bitan, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be ranking search results (ibid, Bitan).

As per claim 16, Mashiach with Sarukkai with Cantu with Sekine with Bitan further make obvious the method of claim 15, Sarukkai teaching that which the others lack, wherein each of the plurality of subtopics is associated with a corresponding subset of sentences that are selected from the ranked sentences (ibid, Sarukkai, ranking discussion, Figs. 7-8: see all his subtopics and corresponding ranked sentences, associated with each subtopic, displayed to the user, based on keywords in the displayed sentences, as similarly combined and motivated).
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Mashiach and Sarukkai with Bitan, to combine the prior art element of a prompt to generate a plurality of subtopics and keywords (Mashiach, paragraph [0064]: his sentence/title object for matching keywords) as taught by Mashiach with selecting a subset of sentences from ranked sentences as taught by Sarukkai, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be ranking search results (ibid, Bitan).

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Mashiach in view of Sarukkai, in view of Cantu, in view of Sekine (US 2020/0125594), in view of Bitan, as applied to claim 16 above, and further in view of McLeod (US 2021/0141822).

As per claim 17, Mashiach with Sarukkai with Cantu with Sekine with Bitan make obvious the method of claim 16, but lack teaching that which McLeod teaches, wherein each of the plurality of subtopics is associated with a word limit (McLeod, paragraph [0026]: his every topic having a limited amount of words, Fig. 5A).
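The per-subtopic word limit at issue in claim 17 above can be sketched as a simple word-budget filter. The limits, subtopic names, and sentences below are invented for illustration and are not drawn from McLeod:

```python
# Illustrative sketch: enforce a per-subtopic word limit on the sentences
# kept for each subtopic. Limits and text are hypothetical.
def apply_word_limit(subtopic_sentences, word_limits):
    trimmed = {}
    for subtopic, sentences in subtopic_sentences.items():
        limit = word_limits[subtopic]
        kept, used = [], 0
        for sentence in sentences:
            n = len(sentence.split())
            if used + n > limit:
                break  # stop before exceeding the subtopic's word budget
            kept.append(sentence)
            used += n
        trimmed[subtopic] = kept
    return trimmed

data = {"pro": ["Team wins title", "Coach retires after long run"],
        "amateur": ["Local club expands"]}
limits = {"pro": 4, "amateur": 10}
print(apply_word_limit(data, limits))
```

With a four-word budget for "pro", only the three-word first sentence survives; the second sentence would push the total past the limit and is dropped.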
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Mashiach and McLeod, to combine the prior art element of a prompt to generate a plurality of subtopics and keywords as taught by Mashiach with having a limit on the words for a topic as taught by McLeod, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be setting parameters and limits for the topic generations (ibid, McLeod).

Claims 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Mashiach in view of Sarukkai in view of Cantu, as applied to claim 1, and further in view of Bitan.

As per claim 18, Mashiach with Sarukkai with Cantu further make obvious the method of claim 1, but lack teaching that which Bitan teaches, further comprising utilizing one or more web agents to parse text associated with one or more online sources (Bitan, paragraph [0358]: his web search engine and parsing the text associated with the web source data).
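The web-agent text parsing recited in claim 18 above can be sketched with Python's standard-library HTML parser. The HTML string is a stand-in for a fetched page; a real agent would retrieve pages over the network (and, per claim 19, might first authenticate with a user's credentials). Class and variable names are hypothetical:

```python
# Illustrative sketch: a minimal "web agent" that extracts visible text
# from an HTML document for downstream keyword analysis.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0  # depth inside <script>/<style> elements

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        # Keep only non-empty visible text outside script/style blocks.
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

html = "<html><body><h1>Scores</h1><script>x=1</script><p>League news today.</p></body></html>"
agent = TextExtractor()
agent.feed(html)
print(" ".join(agent.chunks))  # script content is excluded
```

The extracted text could then feed the keyword-frequency ranking discussed for the earlier claims.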
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Mashiach and Bitan, to combine the prior art element of a prompt to generate a plurality of subtopics and keywords as taught by Mashiach with ranking search results based on search result sources parsed for keyword information as taught by Bitan, as each element performs the same function as it does separately and the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007), wherein the predictable result would be ranking search results (ibid, Bitan).

As per claim 19, Mashiach with Sarukkai with Cantu with Bitan make obvious the method of claim 18, wherein at least one of the one or more web agents utilizes login credentials associated with a user of the client device to access the one of the one or more online sources (ibid, Mashiach, paragraphs [0028-0030]: his registered user and user-profile authorization/privacy server).

Allowable Subject Matter

Claim 5 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (see PTO-892). Kimber et al. (US 2023/0126266) teaches a user selection identifying a specific number of topics to be selected. Sundaram et al. (US 12,505,302) teaches selecting a predetermined number of keyphrases as candidate topics. Kunjithapatham et al. (US 2011/0040767) teaches that a predetermined number of keywords are selected to be sub-topics.
The selected number of highest-ranked keywords are selected as sub-topics.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAMONT M SPOONER, whose telephone number is (571) 272-7613. The examiner can normally be reached 8:00 AM - 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Washburn, can be reached at (571) 272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LAMONT M SPOONER/
Primary Examiner, Art Unit 2657
2/19/2026

Prosecution Timeline

May 01, 2024
Application Filed
Feb 20, 2026
Non-Final Rejection — §103, §DP
Apr 09, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602542
Text Analysis System, and Characteristic Evaluation System for Message Exchange Using the Same
2y 5m to grant Granted Apr 14, 2026
Patent 12596881
COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA
2y 5m to grant Granted Apr 07, 2026
Patent 12591737
Systems and Methods for Word Offensiveness Detection and Processing Using Weighted Dictionaries and Normalization
2y 5m to grant Granted Mar 31, 2026
Patent 12572744
Generative Systems and Methods of Feature Extraction for Enhancing Entity Resolution for Watchlist Screening
2y 5m to grant Granted Mar 10, 2026
Patent 12518107
COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA
2y 5m to grant Granted Jan 06, 2026
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
86%
With Interview (+11.8%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 603 resolved cases by this examiner. Grant probability derived from career allow rate.
