Prosecution Insights
Last updated: April 19, 2026
Application No. 18/665,978

METHODS FOR IMPROVING LISTWISE RANKING IN LARGE LANGUAGE MODELS

Non-Final OA: §101, §102, §103, §112
Filed: May 16, 2024
Examiner: CASTILLO-TORRES, KEISHA Y
Art Unit: 2659
Tech Center: 2600 (Communications)
Assignee: Comcast Cable Communications LLC
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 0m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% (80 granted / 108 resolved); +12.1% vs Tech Center average (above average)
Interview Lift: +30.5% across resolved cases with an interview (a strong lift)
Typical Timeline: 3y 0m average prosecution; 32 applications currently pending
Career History: 140 total applications across all art units
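The headline figures above are simple ratios, and the arithmetic can be checked directly; the variable names below are illustrative, not from any analytics tool:

```python
# Career allow rate: granted / resolved, from the figures shown above.
granted, resolved = 80, 108
allow_rate = 100 * granted / resolved
print(round(allow_rate, 1))  # 74.1 -> displayed as 74%

# The "+12.1% vs TC avg" delta implies a Tech Center average near 62%.
tc_avg = allow_rate - 12.1
print(round(tc_avg, 1))  # 62.0
```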

Statute-Specific Performance

§101: 26.2% (-13.8% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§103: 42.9% (+2.9% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 108 resolved cases
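Each delta above is measured against the Tech Center average estimate for that statute; subtracting the delta from the rate recovers that baseline, and all four rows are consistent with a single ~40% estimate (a sketch with hypothetical variable names):

```python
# Per-statute rate and delta vs the Tech Center average, as shown above.
rates = {"101": (26.2, -13.8), "102": (15.1, -24.9),
         "103": (42.9, +2.9), "112": (8.8, -31.2)}
for statute, (rate, delta) in rates.items():
    implied_tc_avg = rate - delta  # rate = TC avg + delta
    print(f"§{statute}: implied TC avg = {implied_tc_avg:.1f}%")  # 40.0% for each row
```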

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 06/25/2025 has been received. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 6 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 6 recites the limitation "6. The method of claim 1, wherein the differing orders of the plurality of the list of items are determined randomly". There is insufficient antecedent basis for this limitation in the claim. The Examiner notes that the limitation should read "…the different orders…"

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more; more specifically, to the abstract idea groupings of a mental process and/or a mathematical concept. The independent claims recite:

1. A method comprising: receiving, by a device, a large language model (LLM) input prompt comprising a list of items in a first order; generating a plurality of the list of items in different orders; generating a plurality of LLM outputs from a plurality LLM inputs comprising one of the plurality of the list of items; determining a final LLM output based on aggregating the plurality of LLM outputs; and causing a response to the original LLM input using the final LLM.
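For reference, the pipeline recited in claim 1 (permute the list, query the model once per ordering, aggregate the outputs) can be sketched briefly; the `llm` callable is a stand-in for an actual model, and the Borda-style average-position vote is only one possible reading of the claim's otherwise unspecified "aggregating" step:

```python
import itertools
from collections import defaultdict

def rank_with_permutations(llm, items, n_orders=6):
    """Query the model on several orderings of the same list, then
    aggregate the returned rankings by total position (Borda-style)."""
    scores = defaultdict(float)
    orders = list(itertools.permutations(items))[:n_orders]  # differing input orders
    for order in orders:
        ranking = llm(list(order))          # one LLM output per permuted input
        for pos, item in enumerate(ranking):
            scores[item] += pos             # lower total position = ranked higher
    # Final LLM output: items sorted by aggregated position across runs
    return sorted(items, key=lambda it: scores[it])

# Stand-in "LLM" that ranks alphabetically regardless of input order:
final = rank_with_permutations(lambda xs: sorted(xs), ["b", "c", "a"])
print(final)  # ['a', 'b', 'c']
```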
This reads on a human (e.g., mentally and/or using pen and paper): receiving a request (e.g., written) with a list of items in a first order; re-ordering said items in a different order; using a predetermined set of steps/rules (i.e., the LLM) to generate or write down a response to the received request comprising a list of items; using a predetermined set of steps/rules (i.e., aggregating (e.g., a mathematical concept)) to select a final response; and writing down said response.

9. A method comprising: receiving, by a first device, an original large language model (LLM) input comprising instructions and a list of items having a first order; generating a plurality of the list of items reordered differently; generating a plurality of LLM inputs each comprising the instructions and one of the plurality of the list of items; generating a final LLM output by aggregating a plurality of LLM outputs from the plurality of LLM inputs; and sending, to a second device, the final LLM output.

This reads on a human (e.g., mentally and/or using pen and paper): receiving a request (e.g., written) with instructions and a list of items in a first order; re-ordering said items in a different order; using a predetermined set of steps/rules (i.e., the LLM) to generate or write down a response to the received request comprising the instructions and a list of items; using a predetermined set of steps/rules (i.e., aggregating (e.g., a mathematical concept)) to select a final response; and writing down said response.

16. A method comprising: receiving, by a first device, a first large language model (LLM) input comprising a list in an original order; generating a plurality of lists each comprising the list in random different orders; sending, to a second device, a plurality of LLM inputs each comprising one of the plurality of lists; receiving a plurality of LLM outputs based on the plurality of LLM inputs; generating a final LLM output based on an aggregation of the plurality of LLM outputs; and causing a response to the first LLM input using the final LLM.

This reads on a human (e.g., mentally and/or using pen and paper): receiving a request (e.g., written) with a list of items in a first order; re-ordering said items in a different order; using a predetermined set of steps/rules (i.e., the LLM) to generate or write down a response to the received request comprising a list of items; using a predetermined set of steps/rules (i.e., aggregating (e.g., a mathematical concept)) to select a final response; and writing down said response.

This judicial exception is not integrated into a practical application because, for example, claims 1, 9, and 16 recite "a device," "a first device" and/or "a second device". As an example, ¶¶ [0020] and [0022] of the as-filed specification disclose: "[0020] The gateway 111 may also comprise one or more local network interfaces to communicate, via one or more local networks, with devices in the premises 102a. Such devices may comprise, e.g., display devices 112 (e.g., televisions), other devices 113 (e.g., a DVR or STB), personal computers 114, laptop computers 115, wireless devices 116 (e.g., wireless routers, wireless laptops, notebooks, tablets and netbooks, cordless phones (e.g., Digital Enhanced Cordless Telephone—DECT phones), mobile phones, mobile televisions, personal digital assistants (PDA)), landline phones 117 (e.g., Voice over Internet Protocol—VoIP phones), and any other desired devices… [0022] FIG. 2 shows hardware elements of a computing device 200 that may be used to implement any of the computing devices shown in FIG. 1 (e.g., the mobile devices 125, any of the devices shown in the premises 102a, any of the devices shown in the local office 103, any of the wireless access points 127, any devices with the external network 109) and any other computing devices discussed herein (e.g., a content server 106, an LLM server 122, a mobile device 125, a wireless device 116, a personal computer 114, a laptop computer 115, etc.)." Therefore, a general-purpose computer or computing device is described and is merely used as a tool to apply the abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using a computer is a general-purpose computing device, as noted. The claims are not patent eligible.

With respect to claims 2 and 17, the claims recite:

2. The method of claim 1, further comprising sending the final LLM output to a second device.
17. The method of claim 16, further comprising sending the final LLM output to a third device.

This reads on a human (e.g., mentally and/or using pen and paper): writing down the response. The additional limitations of a "second/third device" are present; the analysis is as described for the independent claims, above.

With respect to claim 3, the claim recites:

3. The method of claim 1, wherein the LLM input prompt further comprises instructions; and wherein the plurality of the LLM inputs further comprise the instructions.
This reads on a human (e.g., mentally and/or using pen and paper): the request (e.g., written) comprising instructions. No additional limitations are present.

With respect to claims 4, 11, and 19, the claims recite:

4. The method of claim 1, wherein aggregating the plurality of LLM outputs comprises determining a Kendall tau distance between each of the plurality of LLM outputs; and the final LLM output is determined based on the Kendall tau distance.
11. The method of claim 9, wherein aggregating the plurality of LLM outputs comprises determining a distance between each of the plurality of LLM outputs.
19. The method of claim 16, wherein generating the final LLM output comprises determining a distance between each of the plurality of LLM outputs, wherein the distance is determined based on the Kendall tau distance; and wherein the aggregation of the plurality of LLM outputs is based on the distances.

This reads on a human (e.g., mentally and/or using pen and paper): using a predetermined set of steps/rules (i.e., aggregating according to a distance such as the Kendall tau distance (e.g., a mathematical concept)) to select a final response. No additional limitations are present.

With respect to claims 5, 12, and 18, the claims recite:

5. The method of claim 1, wherein determining the final LLM output further comprises determining a similarity between each of the plurality of LLM outputs.
12. The method of claim 9, wherein determining the final LLM output of the plurality of LLM outputs comprises determining a similarity between each of the plurality of LLM outputs.
18. The method of claim 16, further comprising determining a similarity between each of the plurality of LLM outputs; and wherein the aggregation of the plurality of LLM outputs is based on the similarity between the LLM outputs.
This reads on a human (e.g., mentally and/or using pen and paper): using a predetermined set of steps/rules (i.e., aggregating / similarity (e.g., a mathematical concept)) to select a final response. No additional limitations are present.

With respect to claims 6 and 13, the claims recite:

6. The method of claim 1, wherein the differing orders of the plurality of the list of items are determined randomly.
13. The method of claim 9, wherein, for each of the plurality of the list of items, the order of the list of items is determined randomly.

This reads on a human (e.g., mentally and/or using pen and paper): re-ordering said items in a different order in a random manner. No additional limitations are present.

With respect to claim 7, the claim recites:

7. The method of claim 1, wherein aggregating the plurality of LLM outputs comprises determining a number of swaps between the plurality of LLM outputs; and wherein determining the final LLM output is based on the number of swaps.

This reads on a human (e.g., mentally and/or using pen and paper): using a predetermined set of steps/rules (i.e., swapping) to select a final response. No additional limitations are present.

With respect to claims 8, 15, and 20, the claims recite:

8. The method of claim 1, wherein the device is a server.
15. The method of claim 9, wherein the first device is a server and the second device is mobile device or a server.
20. The method of claim 16, wherein the first device comprises a wireless device and the second device comprises a server.

This reads on a human (e.g., mentally and/or using pen and paper): writing down the response. The additional limitations of a "server", "first device", or "wireless device" follow the same discussion as applied to the independent claims, above.

With respect to claim 10, the claim recites:

10. The method of claim 9, wherein the instructions are to sort the list.
This reads on a human (e.g., mentally and/or using pen and paper): following instructions for sorting a list. No additional limitations are present.

With respect to claim 14, the claim recites:

14. The method of claim 9, wherein a number of the plurality of inputs is based on the number of items in the list.

This reads on a human (e.g., mentally and/or using pen and paper): identifying inputs based on a number of items in a list. No additional limitations are present.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3, and 6-8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Qin et al. (US 20250124067 A1).

As to independent claim 1, Qin et al. teaches:

1. A method (see ¶ [0007]: "According to another example embodiment of the present disclosure, a computer-implemented method for prompt-based ranking can be performed by one or more computing devices and can include generating a prompt comprising a query, a first set of text associated with a first candidate result, and a second set of text associated with a second candidate result…") comprising:

receiving, by a device, a large language model (LLM) input prompt comprising a list of items in a first order (see ¶ [0007] citation as in preamble above and further: "…generating a prompt comprising a query, a first set of text associated with a first candidate result, and a second set of text associated with a second candidate result. The computer-implemented method can further include prompting a generative sequence processing model with the prompt…" and ¶ [0041]: "FIG. 2 depicts a block diagram of an example machine-learned model 200 according to example embodiments of the present disclosure. In some implementations, a ranking system 214 (e.g., software or a component of a computing device) can receive a set of input data 204 comprising a query and input data 212 comprising a plurality of sets of text (e.g., passage 1 through passage N), such as documents, and, as a result of receipt of the input data 204 and 212, initiate the machine-learned model 200.
Thus, in some implementations, the machine-learned 200 can include a generative sequence processing model 202 (e.g., a large language model) that is operable to be prompted with a query (e.g., input data 204) and pairs of sets of text of the plurality of sets of text (e.g., input data 212), each set of text associated with a candidate result of the generative sequence processing model 202, and provide output data 216 comprising generated text and/or output data 218 comprising a score…");

generating a plurality of the list of items in different orders (see ¶ [0007 and 0041] citations as in limitation(s) above and further Fig. 3B (300: machine-learned model, 302: LLM, 308: ordered list, 310: final ranking) and ¶ [0049]: "FIG. 3B depicts a block diagram of an example machine-learned model 300 according to example embodiments of the present disclosure. The machine-learned model 300 is similar to the machine-learned model 200 of FIG. 2 except that machine-learned model 300 further includes pairwise ranking prompting with the machine-learned model 300. Thus, in some implementations, a ranking system 314 (e.g., software or a component of a computing device) can receive a query and a plurality of sets of text (e.g., passage 1 through passage N), such as documents, and the machine-learned model 300 can include a generative sequence processing model 302 (e.g., a large language model) that is operable to obtain an ordered list 308 of the plurality of sets of text and compare the entries by starting at the bottom of the ordered list 308 (e.g., the passage on the right side) and comparing and swapping the entry to the entry above it on the list (e.g., the passage to its left) with a stride of 1, so one pass requires O(N) complexity where N is the number of documents or passages.
For instance, the final entry (e.g., passage 1 on the right side) in the ordered list is compared to the entry above the final entry in the list (e.g., passage 1 is compared with passage 5, which is to the left of passage 1). Next, the entry above the final entry (e.g., passage 1 after the swap) can be compared and swapped with the entry above it in the list (e.g., passage 1 is compared with passage 4, which is to the left of passage 1) with a stride of 1. The comparing and swapping can be performed for each entry in the ordered list until the first entry of the list is compared and swapped to generate a final ranking 310.");

generating a plurality of LLM outputs from a plurality LLM inputs comprising one of the plurality of the list of items (see Fig. 3B and ¶ [0007, 0041, and 0049] citations as in limitation(s) above and further Fig. 2 (212: plurality of passages (i.e., Passage 1 through Passage N) and plurality of outputs (304, 306 and 308, 310)) and ¶ [0047]: "FIG. 3A depicts a block diagram of an example machine-learned model 300 according to example embodiments of the present disclosure. The machine-learned model 300 is similar to the machine-learned model 200 of FIG. 2 except that machine-learned model 300 further includes pairwise ranking prompting with the machine-learned model 300. Thus, in some implementations, the machine-learned model 300 can include a generative sequence processing model 302 (e.g., a large language model) that is operable to perform the one or more pairwise comparisons between the first set of text (e.g., passage 1) and the second set of text (e.g., passage 2) based on the query by obtaining an initial ranking 304 of the sets of text (e.g., input data 206), such as a local ordering, in the form of a list.
For example, the first entry in the list may be the second set of data (e.g., passage 2) which is to the left and the second entry in the list may be the first set of data (e.g., passage 1) which is to the right and is also the final entry in the list in this example because there are two passages input into the generative sequence processing model 202.");

determining a final LLM output based on aggregating the plurality of LLM outputs (see Figs. 2 and 3A-B and ¶ [0007, 0041, 0047, and 0049] citations as in limitation(s) above and further Fig. 4 and ¶ [0057]: "At 408, the computing system generates, by the generative sequence processing model based on the one or more pairwise comparisons, an output comprising generated text identifying the first set of text or the second set of text as a higher ranked set of text in response to the query. In some examples, the computing system generates an output comprising a first score for the first set of text and a second score for the second set of text in response to the query and determines, based on the first score and the second score, that the first set of text or the second set of text is a higher ranked set of text in response to the query, and the first score identifies a probability of the generative sequence processing model generating the first set of text in response to the query and the second score identifies a probability of the generative sequence processing model generating the second set of text in response to the query.");

and causing a response to the original LLM input using the final LLM (see Figs. 2, 3A-B, and 4 and ¶ [0007, 0041, 0047, 0049, and 0057] citations as in limitation(s) above and further ¶ [0057]: "At 408, the computing system generates, by the generative sequence processing model based on the one or more pairwise comparisons, an output comprising generated text identifying the first set of text or the second set of text as a higher ranked set of text in response to the query.
In some examples, the computing system generates an output comprising a first score for the first set of text and a second score for the second set of text in response to the query and determines, based on the first score and the second score, that the first set of text or the second set of text is a higher ranked set of text in response to the query, and the first score identifies a probability of the generative sequence processing model generating the first set of text in response to the query and the second score identifies a probability of the generative sequence processing model generating the second set of text in response to the query.").

Regarding claim 3, Qin et al. further teaches:

3. The method of claim 1, wherein the LLM input prompt further comprises instructions (see ¶ [0007] citation as in claim 1 above and further: "…generating a prompt comprising a query, a first set of text associated with a first candidate result, and a second set of text associated with a second candidate result. The computer-implemented method can further include prompting a generative sequence processing model with the prompt…" and ¶ [0041]: "FIG. 2 depicts a block diagram of an example machine-learned model 200 according to example embodiments of the present disclosure. In some implementations, a ranking system 214 (e.g., software or a component of a computing device) can receive a set of input data 204 comprising a query and input data 212 comprising a plurality of sets of text (e.g., passage 1 through passage N),…");

and wherein the plurality of the LLM inputs further comprise the instructions (see ¶ [0007 and 0041] citations as in claim 1 and/or limitation above and further: ¶ [0074]: "FIG. 7 is a block diagram of an example processing flow for using machine-learned model(s) 1 to process input(s) 2 to generate output(s) 3." ¶ [0078]: "Input(s) 2 can generally include or otherwise represent various types of data.
Input(s) 2 can include one type or many different types of data…" ¶ [0079]: "Example data types for input(s) 2 or output(s) 3 include natural language text data, … Data can be raw or processed and can be in any format or schema." ¶ [0144]: "Model host 31 can execute machine-learned model(s) 1 to perform inference for various tasks using various types of data. For example, various different input(s) 2 and output(s) 3 can be used for various different tasks..." and ¶ [0146]: "In some implementations, input(s) 2 can be or otherwise represent natural language data…").

Regarding claim 6, Qin et al. further teaches:

6. The method of claim 1, wherein the differing orders of the plurality of the list of items are determined randomly (see ¶ [0007, 0041, and 0049] citations as in claim 1 above and further Fig. 3B (300: machine-learned model, 302: LLM, 308: ordered list, 310: final ranking)).

Regarding claim 7, Qin et al. further teaches:

7. The method of claim 1, wherein aggregating the plurality of LLM outputs comprises determining a number of swaps between the plurality of LLM outputs (see Figs. 2 and 3A-B and ¶ [0007, 0041, 0047, and 0049] citations as in claim 1 above. More specifically: "[0049]… Thus, in some implementations, a ranking system 314 (e.g., software or a component of a computing device) can receive a query and a plurality of sets of text (e.g., passage 1 through passage N), such as documents, and the machine-learned model 300 can include a generative sequence processing model 302 (e.g., a large language model) that is operable to obtain an ordered list 308 of the plurality of sets of text and compare the entries by starting at the bottom of the ordered list 308 (e.g., the passage on the right side) and comparing and swapping the entry to the entry above it on the list (e.g., the passage to its left) with a stride of 1, so one pass requires O(N) complexity where N is the number of documents or passages.
For instance, the final entry (e.g., passage 1 on the right side) in the ordered list is compared to the entry above the final entry in the list (e.g., passage 1 is compared with passage 5, which is to the left of passage 1). Next, the entry above the final entry (e.g., passage 1 after the swap) can be compared and swapped with the entry above it in the list (e.g., passage 1 is compared with passage 4, which is to the left of passage 1) with a stride of 1. The comparing and swapping can be performed for each entry in the ordered list until the first entry of the list is compared and swapped to generate a final ranking 310." and further Fig. 4 and ¶ [0057]: "At 408, the computing system generates, by the generative sequence processing model based on the one or more pairwise comparisons, an output comprising generated text identifying the first set of text or the second set of text as a higher ranked set of text in response to the query. In some examples, the computing system generates an output comprising a first score for the first set of text and a second score for the second set of text in response to the query and determines, based on the first score and the second score, that the first set of text or the second set of text is a higher ranked set of text in response to the query, and the first score identifies a probability of the generative sequence processing model generating the first set of text in response to the query and the second score identifies a probability of the generative sequence processing model generating the second set of text in response to the query.");

and wherein determining the final LLM output is based on the number of swaps (see Figs. 2 and 3A-B and ¶ [0007, 0041, 0047, and 0049] citations as in claim 1 above.
More specifically: [0049]: "…the machine-learned model 300 can include a generative sequence processing model 302 (e.g., a large language model) that is operable to obtain an ordered list 308 of the plurality of sets of text and compare the entries by starting at the bottom of the ordered list 308 (e.g., the passage on the right side) and comparing and swapping the entry to the entry above it on the list (e.g., the passage to its left) with a stride of 1, so one pass requires O(N) complexity where N is the number of documents or passages. …The comparing and swapping can be performed for each entry in the ordered list until the first entry of the list is compared and swapped to generate a final ranking 310.").

Regarding claim 8, Qin et al. further teaches:

8. The method of claim 1, wherein the device is a server (see ¶ [0136]: "For example, model host 31 can operate on a server system that provides a machine-learning service to client device(s) that operate client(s) 32 (e.g., over a local or wide-area network). Client device(s) can be end-user devices used by individuals. Client device(s) can be server systems that operate client(s) 32 to provide various functionality as a service to downstream end-user devices." and ¶ [0164]: "…Machine-learned model(s) 55 can be received from server computing system(s) 60, model development platform system 70, third party system(s) 80 (e.g., an application distribution platform), or developed locally on computing device 50…").

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Qin et al. (US 20250124067 A1) as applied to claim 1 above, and further in view of Sharpe et al. (US 20250259012 A1).

Regarding claim 2, Qin et al. teaches the limitations as in claim 1, above. However, Qin et al.
does not explicitly teach, but Sharpe et al. does teach: 2. The method of claim 1, further comprising sending the final LLM output to a second device (see ¶ [0174-0177]: “[0174] Embodiment #3: The method of embodiment #1 further comprising: [0175] determining, by the one or more computing devices, a second sequential data token for the first event based on inserting the first data value and the second data value into a natural language template for the first event data; [0176] providing, by the one or more computing devices, the second sequential data token as input to a second language model that outputs generative text; and [0177] based on the second sequential data token, receiving, by the one or more computing devices, second generative text from the second language model.”). Qin et al. and Sharpe et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor in large language models. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Qin et al. to incorporate the teachings of Sharpe et al. of sending the final LLM output to a second device which provides the benefit of improving the quality of output provided by the large language model ([0039] of Sharpe et al.). Claim 4-5 is/are rejected under 35 U.S.C. 103 as being unpatentable Qin et al. (US 20250124067 A1) as applied to claim 1 above, and further in view of Nitsure et al. ("Risk aware benchmarking of large language models." arXiv preprint arXiv:2310.07132 (2023)). Regarding claim 4, Qin et al. teaches the limitations as in claim 1, above. However, Qin et al. does not explicitly teach, but Nitsure et al. does teach: 4. The method of claim 1, wherein aggregating the plurality of LLM outputs comprises determining a Kendall tau distance between each of the plurality of LLM outputs (see ¶ 5 of 1. Introduction: “Our main contributions are: 1. 
Interpretable Metrics-Portfolio (Section 4). Drawing inspiration from econometrics and mathematical finance, we define a metrics-portfolio for aggregating metrics. This portfolio normalizes and aggregates metrics, yielding a single interpretable number assessing each output of a LLM…” ¶ A. Ablation Studies Metrics Aggregation Versus Portfolio: “For portoflio, computing ranking using FSD and SSD including the portfolio computation on 5K samples for 5 bootstrap samples , we have mean execution time of 32.01 ± 4.51 s. For FSd and SSD ranking computation for all metrics, followed by rank using pearson distance the execution time is of 254.99 ± 16.76 s. On the other hand, we observe on the mix-instruct dataset a consistency of ranks between these two approaches (FSD or SDD on portfolio & FSD or SSD on all metrics followed by rank aggregation) as quantified by the kendall-tau similarity between the ranks: 1. Kendall Tau(R-SSD@P, RA(R-SSD@M)) = 0.848, 2. Kendall Tau(R-FSD@P, RA(R-FSD@M)) = 0.878. We see that these two approaches lead to similar ranks while portfolio approach leads to 7x speedups.” ¶ F.3 Rank Aggregation: “Given N ranks πi, i = 1 . . . N represented as permutations in Sk, the rank aggregation in [Pihur et al., 2009] solves the following problem: min over σ ∈ Sk of Σ_{i=1}^N αi d(σ, πi), where αi ≥ 0, Σ_{i=1}^N αi = 1 represent importance of each ranking and d is a distance between permutations. [Pihur et al., 2009] have multiple choices of distance such as Pearson or Kendall’s-Tau…”); and the final LLM output is determined based on the Kendall tau distance (see ¶ 5 Intro, ¶ A. Ablation Studies Metrics Aggregation Versus Portfolio, and ¶ F.3 Rank Aggregation citations as in limitation above. More specifically, ¶ 5 of 1. Introduction: “…This portfolio normalizes and aggregates metrics, yielding a single interpretable number assessing each output of a LLM…”). Qin et al. and Nitsure et al.
are considered to be analogous to the claimed invention because they are in the same field of endeavor of large language models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Qin et al. to incorporate the teachings of Nitsure et al. of wherein aggregating the plurality of LLM outputs comprises determining a Kendall tau distance between each of the plurality of LLM outputs; and the final LLM output is determined based on the Kendall tau distance, which provides the benefit of yielding a single interpretable number assessing each output of an LLM (¶ 5 of 1. Introduction of Nitsure et al.).

Regarding claim 5, Qin et al. teaches the limitations as in claim 1, above. However, Qin et al. does not explicitly teach, but Nitsure et al. does teach: 5. The method of claim 1, wherein determining the final LLM output further comprises determining a similarity between each of the plurality of LLM outputs (see ¶ 5 Intro, ¶ A. Ablation Studies Metrics Aggregation Versus Portfolio, and ¶ F.3 Rank Aggregation citations as in limitation above. “kendall-tau similarity”).

Qin et al. and Nitsure et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor of large language models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Qin et al. to incorporate the teachings of Nitsure et al. of wherein determining the final LLM output further comprises determining a similarity between each of the plurality of LLM outputs, which provides the benefit of yielding a single interpretable number assessing each output of an LLM (¶ 5 of 1. Introduction of Nitsure et al.).

Claims 9-10, 13-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Qin et al. (US 20250124067 A1) in view of Sharpe et al. (US 20250259012 A1).
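For context on the rank-aggregation concept Nitsure et al. is cited for against claims 4-5, the following is a minimal sketch of choosing a consensus ranking by Kendall tau distance. This is illustrative only, not the applicant's claimed implementation or Nitsure's code; the function names are hypothetical.

```python
from itertools import combinations

def kendall_tau_distance(rank_a, rank_b):
    """Count pairwise disagreements between two rankings of the same items."""
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    # A pair is discordant when its relative order differs between rankings.
    return sum(
        1
        for x, y in combinations(rank_a, 2)
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0
    )

def aggregate_rankings(rankings):
    """Pick the candidate ranking with the smallest total Kendall tau
    distance to all others (a simple medoid-style rank aggregation)."""
    return min(
        rankings,
        key=lambda cand: sum(kendall_tau_distance(cand, r) for r in rankings),
    )

# Three LLM outputs, each a ranking of the same documents.
outputs = [
    ["doc2", "doc1", "doc3"],
    ["doc2", "doc3", "doc1"],
    ["doc2", "doc1", "doc3"],
]
final = aggregate_rankings(outputs)  # ["doc2", "doc1", "doc3"]
```

A medoid over pairwise distances is only one aggregation choice; Nitsure et al.'s portfolio approach normalizes and aggregates metrics before ranking, which is a different mechanism.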
As to independent claim 9, Qin et al. teaches: 9. A method (see ¶ [0007] as in claim 1, above.) comprising: receiving, by a first device, an original large language model (LLM) input comprising instructions and a list of items having a first order (see ¶ [0007] citation as in preamble above and further: “…generating a prompt comprising a query, a first set of text associated with a first candidate result, and a second set of text associated with a second candidate result. The computer-implemented method can further include prompting a generative sequence processing model with the prompt…” and ¶ [0041]: “FIG. 2 depicts a block diagram of an example machine-learned model 200 according to example embodiments of the present disclosure. In some implementations, a ranking system 214 (e.g., software or a component of a computing device) can receive a set of input data 204 comprising a query and input data 212 comprising a plurality of sets of text (e.g., passage 1 through passage N),…”); generating a plurality of the list of items reordered differently (see ¶ [0007 and 0041] citations as in claim 1 above and further Fig. 3B (300: machine-learned model, 302: LLM, 308: ordered list, 310: final ranking) and ¶ [0049]); generating a plurality of LLM inputs each comprising the instructions and one of the plurality of the list of items (see ¶ [0007] citation as in preamble above and further: “…generating a prompt comprising a query, a first set of text associated with a first candidate result, and a second set of text associated with a second candidate result…”); generating a final LLM output by aggregating a plurality of LLM outputs from the plurality of LLM inputs (see Fig. 3B and ¶ [0007, 0041, and 0049] citations as in limitation(s) above and further Fig. 2 (212: plurality of passages (i.e., Passage 1 → Passage N) and plurality of outputs (304 306 and 308, 310)) and ¶ [0047]: “FIG.
3A depicts a block diagram of an example machine-learned model 300 according to example embodiments of the present disclosure. The machine-learned model 300 is similar to the machine-learned model 200 of FIG. 2 except that machine-learned model 300 further includes pairwise ranking prompting with the machine-learned model 300. Thus, in some implementations, the machine-learned model 300 can include a generative sequence processing model 302 (e.g., a large language model) that is operable to perform the one or more pairwise comparisons between the first set of text (e.g., passage 1) and the second set of text (e.g., passage 2) based on the query by obtaining an initial ranking 304 of the sets of text (e.g., input data 206), such as a local ordering, in the form of a list. For example, the first entry in the list may be the second set of data (e.g., passage 2) which is to the left and the second entry in the list may be the first set of data (e.g., passage 1) which is to the right and is also the final entry in the list in this example because there are two passages input into the generative sequence processing model 202.” and Fig. 4 and ¶ [0057]: “At 408, the computing system generates, by the generative sequence processing model based on the one or more pairwise comparisons, an output comprising generated text identifying the first set of text or the second set of text as a higher ranked set of text in response to the query. 
In some examples, the computing system generates an output comprising a first score for the first set of text and a second score for the second set of text in response to the query and determines, based on the first score and the second score, that the first set of text or the second set of text is a higher ranked set of text in response to the query, and the first score identifies a probability of the generative sequence processing model generating the first set of text in response to the query and the second score identifies a probability of the generative sequence processing model generating the second set of text in response to the query.”); and However, Qin et al. does not explicitly teach, but Sharpe et al. does teach: sending, to a second device, the final LLM output (see ¶ [0174-0177]: “[0174] Embodiment #3: The method of embodiment #1 further comprising: [0175] determining, by the one or more computing devices, a second sequential data token for the first event based on inserting the first data value and the second data value into a natural language template for the first event data; [0176] providing, by the one or more computing devices, the second sequential data token as input to a second language model that outputs generative text; and [0177] based on the second sequential data token, receiving, by the one or more computing devices, second generative text from the second language model.”). Qin et al. and Sharpe et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor in large language models. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Qin et al. to incorporate the teachings of Sharpe et al. of sending, to a second device, the final LLM output which provides the benefit of improving the quality of output provided by the large language model ([0039] of Sharpe et al.). 
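For context on claim 9's step of generating a plurality of LLM inputs with the list of items reordered differently, the prompt-construction step could be sketched as follows. This is a hypothetical illustration under the assumption that each variant simply shuffles the candidate items; it is not the applicant's or Qin's implementation, and all names are invented.

```python
import random

def build_shuffled_prompts(instructions, items, n_variants, seed=0):
    """Build several copies of the same listwise-ranking prompt, each
    presenting the candidate items in a different random order."""
    rng = random.Random(seed)  # seeded for reproducibility
    prompts = []
    for _ in range(n_variants):
        shuffled = items[:]          # copy so the original order survives
        rng.shuffle(shuffled)
        numbered = "\n".join(f"[{i + 1}] {it}" for i, it in enumerate(shuffled))
        prompts.append((f"{instructions}\n{numbered}", shuffled))
    return prompts

variants = build_shuffled_prompts(
    "Rank the passages below by relevance to the query.",
    ["passage A", "passage B", "passage C"],
    n_variants=3,
)
```

Each returned pair keeps the shuffled order alongside the prompt text, since the aggregation step needs to map the model's output back to the original items.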
As to independent claim 16, Qin et al. teaches: 16. A method (see ¶ [0007] as in claim 1, above.) comprising: receiving, by a first device, a first large language model (LLM) input comprising a list in an original order (see ¶ [0007] citation as in preamble above and further: “…generating a prompt comprising a query, a first set of text associated with a first candidate result, and a second set of text associated with a second candidate result. The computer-implemented method can further include prompting a generative sequence processing model with the prompt…” and ¶ [0041]: “FIG. 2 depicts a block diagram of an example machine-learned model 200 according to example embodiments of the present disclosure. In some implementations, a ranking system 214 (e.g., software or a component of a computing device) can receive a set of input data 204 comprising a query and input data 212 comprising a plurality of sets of text (e.g., passage 1 through passage N),…”); generating a plurality of lists each comprising the list in random different orders (see ¶ [0007, 0041, and 0049] citations as in claim 1 above and further Fig. 3B (300: machine-learned model, 302: LLM, 308: ordered list, 310: final ranking).); receiving a plurality of LLM outputs based on the plurality of LLM inputs (see Fig. 3B and ¶ [0007, 0041, 0047, and 0049] citations as in claim 1 above and further Fig. 2 (212: plurality of passages (i.e., Passage 1 → Passage N) and plurality of outputs (304 306 and 308, 310))); generating a final LLM output based on an aggregation of the plurality of LLM outputs (see Figs. 2 and 3A-B and ¶ [0007, 0041, 0047, and 0049] citations as in claim 1 above and further Fig.
4 and ¶ [0057]: “At 408, the computing system generates, by the generative sequence processing model based on the one or more pairwise comparisons, an output comprising generated text identifying the first set of text or the second set of text as a higher ranked set of text in response to the query…”); and causing a response to the first LLM input using the final LLM (see Figs. 2, 3A-B, and 4 and ¶ [0007, 0041, 0047, 0049, and 0057] citations as in claim 1 above.). However, Qin et al. does not explicitly teach, but Sharpe et al. does teach: sending, to a second device, a plurality of LLM inputs each comprising one of the plurality of lists (see ¶ [0174-0177]: “[0174] Embodiment #3: The method of embodiment #1 further comprising: [0175] determining, by the one or more computing devices, a second sequential data token for the first event based on inserting the first data value and the second data value into a natural language template for the first event data; [0176] providing, by the one or more computing devices, the second sequential data token as input to a second language model that outputs generative text; and [0177] based on the second sequential data token, receiving, by the one or more computing devices, second generative text from the second language model.”) Qin et al. and Sharpe et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor in large language models. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Qin et al. to incorporate the teachings of Sharpe et al. of sending, to a second device, a plurality of LLM inputs each comprising one of the plurality of lists which provides the benefit of improving the quality of output provided by the large language model ([0039] of Sharpe et al.). Regarding claim 10, Qin et al. further teaches: 10. 
The method of claim 9, wherein the instructions are to sort the list (see ¶ [0056]: “In some examples, the computing system performs, by the generative sequence processing model, the one or more pairwise comparisons between the first set of text and the second set of text based on the query by initiating a sorting algorithm with the first set of text and the second set of text and receiving an output of the sorting algorithm comprising an ordered list, and the higher ranked set of text in response to the query can include the first set of text when the first set of text is first in the ordered list and include the second set of text when the second set of text is first in the ordered list. The sorting algorithm may be a heapsort algorithm in some examples.”). Regarding claim 13, Qin et al. in combination with Sharpe et al. teach the limitations as in claim 9, above. Qin et al. further teaches: 13. The method of claim 9, wherein, for each of the plurality of the list of items, the order of the list of items is determined randomly (see ¶ [0007, 0041, and 0049] citations as in claim 1 above and further Fig. 3B (300: machine-learned model, 302: LLM, 308: ordered list, 310: final ranking)). Regarding claim 14, Qin et al. in combination with Sharpe et al. teach the limitations as in claim 9, above. Qin et al. further teaches: 14. The method of claim 9, wherein a number of the plurality of inputs is based on the number of items in the list (Figs. 2 and 3A-B and ¶ [0007, 0041, 0047, and 0049] citations as in claim 1 above. 
More specifically: “[0049]… Thus, in some implementations, a ranking system 314 (e.g., software or a component of a computing device) can receive a query and a plurality of sets of text (e.g., passage 1 through passage N), such as documents, and the machine-learned model 300 can include a generative sequence processing model 302 (e.g., a large language model) that is operable to obtain an ordered list 308 of the plurality of sets of text and compare the entries by starting at the bottom of the ordered list 308 (e.g., the passage on the right side) and comparing and swapping the entry to the entry above it on the list (e.g., the passage to its left) with a stride of 1, so one pass requires O(N) complexity where N is the number of documents or passages. For instance, the final entry (e.g., passage 1 on the right side) in the ordered list is compared to the entry above the final entry in the list (e.g., passage 1 is compared with passage 5, which is to the left of passage 1). Next, the entry above the final entry (e.g., passage 1 after the swap) can be compared and swapped with the entry above it in the list (e.g., passage 1 is compared with passage 4, which is to the left of passage 1) with a stride of 1. The comparing and swapping can be performed for each entry in the ordered list until the first entry of the list is compared and swapped to generate a final ranking 310.” and further Fig. 4 and ¶ [0057]: “At 408, the computing system generates, by the generative sequence processing model based on the one or more pairwise comparisons, an output comprising generated text identifying the first set of text or the second set of text as a higher ranked set of text in response to the query. 
In some examples, the computing system generates an output comprising a first score for the first set of text and a second score for the second set of text in response to the query and determines, based on the first score and the second score, that the first set of text or the second set of text is a higher ranked set of text in response to the query, and the first score identifies a probability of the generative sequence processing model generating the first set of text in response to the query and the second score identifies a probability of the generative sequence processing model generating the second set of text in response to the query.”). Regarding claim 15, Qin et al. in combination with Sharpe et al. teach the limitations as in claim 9, above. Qin et al. further teaches: 15. The method of claim 9, wherein the first device is a server and the second device is mobile device or a server (see ¶ [0136]: “For example, model host 31 can operate on a server system that provides a machine-learning service to client device(s) that operate client(s) 32 (e.g., over a local or wide-area network). Client device(s) can be end-user devices used by individuals. Client device(s) can be server systems that operate client(s) 32 to provide various functionality
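The bottom-up, stride-1 compare-and-swap pass that Qin's ¶ [0049] describes can be sketched as below. A toy scoring comparator stands in for the LLM's pairwise comparison, and the function name is hypothetical; note that a single O(N) pass only guarantees the top-ranked entry, as the cited passage's complexity discussion implies.

```python
def bubble_pass(ordered, prefers_first):
    """One bottom-up pass over an ordered list: compare each entry with the
    entry above it (stride 1) and swap when the lower entry is preferred.
    A single pass costs O(N) comparisons for N entries."""
    ranked = list(ordered)
    for i in range(len(ranked) - 1, 0, -1):
        # prefers_first(a, b) stands in for the LLM's pairwise judgment
        # that passage a should rank at or above passage b.
        if not prefers_first(ranked[i - 1], ranked[i]):
            ranked[i - 1], ranked[i] = ranked[i], ranked[i - 1]
    return ranked

# Toy comparator: higher relevance score ranks higher.
scores = {"p1": 0.9, "p2": 0.4, "p3": 0.7}
result = bubble_pass(["p2", "p3", "p1"], lambda a, b: scores[a] >= scores[b])
# The single pass bubbles the most relevant passage, p1, to the top.
```

Repeating the pass N times yields a full bubble sort; Qin's ¶ [0056] alternatively mentions heapsort as the sorting algorithm driving the pairwise comparisons.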

Prosecution Timeline

May 16, 2024
Application Filed
Dec 11, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573402
GENERATING AND/OR UTILIZING UNINTENTIONAL MEMORIZATION MEASURE(S) FOR AUTOMATIC SPEECH RECOGNITION MODEL(S)
2y 5m to grant Granted Mar 10, 2026
Patent 12536989
Language-agnostic Multilingual Modeling Using Effective Script Normalization
2y 5m to grant Granted Jan 27, 2026
Patent 12531050
VOICE DATA CREATION DEVICE
2y 5m to grant Granted Jan 20, 2026
Patent 12499332
TRANSLATING TEXT USING GENERATED VISUAL REPRESENTATIONS AND ARTIFICIAL INTELLIGENCE
2y 5m to grant Granted Dec 16, 2025
Patent 12488180
SYSTEMS AND METHODS FOR GENERATING DIALOG TREES
2y 5m to grant Granted Dec 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
99%
With Interview (+30.5%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 108 resolved cases by this examiner. Grant probability derived from career allow rate.
