Prosecution Insights
Last updated: April 19, 2026
Application No. 18/784,900

METHOD OF GENERATING TRAINING DATA, READABLE MEDIUM, AND ELECTRONIC DEVICE

Non-Final OA: §101, §102
Filed: Jul 25, 2024
Examiner: COLUCCI, MICHAEL C
Art Unit: 2655
Tech Center: 2600 (Communications)
Assignee: BEIJING YOUZHUJU NETWORK TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 76%, above average (749 granted / 990 resolved; +13.7% vs TC avg)
Interview Lift: strong, +15.3% for resolved cases with interview
Typical Timeline: 3y 1m avg prosecution; 41 currently pending
Career History: 1031 total applications across all art units

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§103: 59.2% (+19.2% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 6.0% (-34.0% vs TC avg)
Comparisons are against a Tech Center average estimate • Based on career data from 990 resolved cases

Office Action

§101 §102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (an abstract idea, natural phenomenon, or law of nature) without significantly more, specifically because the claims: 1) do not integrate the judicial exception into a practical application (see explanation below), and 2) do not recite elements that would amount to significantly more than the judicial exception (see explanation below). Accordingly, claims 1-20 are directed to patent-ineligible subject matter under 35 U.S.C. 101.

The independent claims: Taken as a whole, the claim limitations are directed to creating a table of data using a selection of templates, where a user asks another user questions to produce a custom table or spreadsheet. Regarding the limitations of claims 1, 9, and 15 as recited:

1. A method of generating training data, comprising: acquiring sample data; determining a data generation template according to the sample data; generating question data according to first data in the sample data and the data generation template, and determining answer data according to second data other than the first data in the sample data; and combining the question data and the answer data into the training data.

Step 1: IS THE CLAIM DIRECTED TO A PROCESS, MACHINE, MANUFACTURE OR COMPOSITION OF MATTER? Yes.
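As an aside for readers tracking what claim 1 covers in implementation terms, the four recited steps can be sketched as a short program. This is a minimal sketch assuming a dict-based sample and string templates; all names are illustrative, and the application as reproduced here discloses no code.

```python
# A minimal sketch of the four recited steps of claim 1 (all names
# are illustrative assumptions; the application discloses no code).

def determine_template(sample: dict) -> str:
    # "determining a data generation template according to the
    # sample data": a trivial rule keyed on the sample's shape.
    if "rows" in sample:
        return "What does '{first}' contain?"
    return "What is '{first}'?"

def generate_training_pair(sample: dict) -> dict:
    template = determine_template(sample)
    first = sample["title"]                    # "first data"
    second = {k: v for k, v in sample.items()  # "second data": everything
              if k != "title"}                 # other than the first data
    question = template.format(first=first)    # "generating question data"
    answer = str(second)                       # "determining answer data"
    # "combining the question data and the answer data into the
    # training data"
    return {"question": question, "answer": answer}

pair = generate_training_pair(
    {"title": "2023 Sales", "rows": [["bikes", 120], ["racks", 45]]}
)
```

The point of the sketch is only to show the claim's breadth: any question/answer pairing derived from splitting the sample data falls within the recited steps, which bears on the eligibility analysis that follows.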
Step 2A.1: IS THE CLAIM DIRECTED TO A LAW OF NATURE, A NATURAL PHENOMENON (PRODUCT OF NATURE) OR AN ABSTRACT IDEA? Yes.

Step 2A.2: DOES THE CLAIM RECITE ADDITIONAL ELEMENTS THAT INTEGRATE THE JUDICIAL EXCEPTION INTO A PRACTICAL APPLICATION? No.

Regarding the independent claims: No. Analogous to Solutran, Inc. v. Elavon, Inc., 931 F.3d 1161, 2019 USPQ2d 281076 (Fed. Cir. 2019), the claims are directed to creating a table of data using a selection of templates, where a user asks another user questions to produce a custom table or spreadsheet, and they lack a clear improvement to function or technology in the realm of AI or LLM prompting. In Solutran, the claims were directed to methods for electronically processing paper checks, all of which contained limitations setting forth receiving merchant transaction data from a merchant, crediting a merchant's account, and receiving and scanning paper checks after the merchant's account is credited. In part one of the Alice/Mayo test, the Federal Circuit determined that the claims were directed to the abstract idea of crediting the merchant's account before the paper check is scanned. The court first determined that the recited limitation of "crediting a merchant's account as early as possible while electronically processing a check" is a "long-standing commercial practice" like those in Alice and Bilski. 931 F.3d at 1167, 2019 USPQ2d 281076, at *5. The Federal Circuit then continued its analysis under part one of the Alice/Mayo test, finding that the claims are not directed to an improvement in the functioning of a computer or an improvement to another technology. In particular, the court determined that the claims "did not improve the technical capture of information from a check to create a digital file or the technical step of electronically crediting a bank account," nor did the claims "improve how a check is scanned." Id.

Consider also the December 5, 2025 Memo issued in light of the September 26, 2025 Appeals Review Panel decision in Ex parte Desjardins, Appeal 2024-000567 (Application 16/319,040), which addresses whether a recited abstract idea directs the entire claim to an abstract idea when the claim is considered as a whole. The claim there, which demonstrated improvements to technology and/or function, recites: "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task." The decision states: "We are persuaded that constitutes an improvement to how the machine learning model itself operates, and not, for example, the identified mathematical calculation." Considering the limitation decided upon, there are clear improvements to machine learning that are not rudimentary or long-standing practice; adjusting parameters for optimization while protecting prior performance, as claimed, improves the machine learning model's operations. It is not simply a general mathematical or generic recitation, but an improvement to function. Specifically, Ex parte Desjardins explained the following:

Enfish ranks among the Federal Circuit's leading cases on the eligibility of technological improvements. In particular, Enfish recognized that "[m]uch of the advancement made in computer technology consists of improvements to software that, by their very nature, may not be defined by particular physical features but rather by logical structures and processes." 822 F.3d at 1339.
Moreover, because "[s]oftware can make non-abstract improvements to computer technology, just as hardware improvements can," the Federal Circuit held that eligibility determinations should turn on whether "the claims are directed to an improvement to computer functionality versus being directed to an abstract idea." Id. at 1336. (Desjardins, page 8.)

Further, specifically: "Paragraph 21 of the Specification, which the Appellant cites, identifies improvements in training the machine learning model itself. Of course, such an assertion in the Specification alone is insufficient to support a patent eligibility determination, absent a subsequent determination that the claim itself reflects the disclosed improvement. See MPEP § 2106.05(a) (citing Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1316 (Fed. Cir. 2016)). Here, however, we are persuaded that the claims reflect such an improvement. For example, one improvement identified in the Specification is to 'effectively learn new tasks in succession whilst protecting knowledge about previous tasks.' Spec. ¶ 21. The Specification also recites that the claimed improvement allows artificial intelligence (AI) systems to 'us[e] less of their storage capacity' and enables 'reduced system complexity.' Id. When evaluating the claim as a whole, we discern at least the following limitation of independent claim 1 that reflects the improvement: 'adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task.' We are persuaded that constitutes an improvement to how the machine learning model itself operates, and not, for example, the identified mathematical calculation.
Under a charitable view, the overbroad reasoning of the original panel below is perhaps understandable given the confusing nature of existing § 101 jurisprudence, but troubling, because this case highlights what is at stake. Categorically excluding AI innovations from patent protection in the United States jeopardizes America's leadership in this critical emerging technology. Yet, under the panel's reasoning, many AI innovations are potentially unpatentable (even if they are adequately described and nonobvious) because the panel essentially equated any machine learning with an unpatentable 'algorithm' and the remaining additional elements with 'generic computer components,' without adequate explanation. Dec. 24. Examiners and panels should not evaluate claims at such a high level of generality."

Further, in Ex parte Desjardins, Appeal No. 2024-000567 (PTAB Sept. 26, 2025, Appeals Review Panel decision) (precedential), the claimed invention was a method of training a machine learning model on a series of tasks. The Appeals Review Panel (ARP) credited benefits including reduced storage, reduced system complexity and streamlining, and preservation of performance on earlier tasks during subsequent computational tasks as technological improvements disclosed in the application's specification. Specifically, the ARP upheld the Step 2A Prong One finding that the claims recited an abstract idea (i.e., a mathematical concept). At Step 2A Prong Two, the ARP then determined that the specification identified improvements in how the machine learning model itself operates, including training the model to learn new tasks while protecting knowledge about previous tasks to overcome the problem of "catastrophic forgetting" encountered in continual learning systems.
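For context on the Desjardins limitation quoted above, optimizing on a second task while protecting performance on a first task is the standard continual-learning pattern. The sketch below is a generic quadratic-penalty illustration of that pattern, not the method actually claimed in Desjardins; the targets, importance weights, and hyperparameters are invented for the example.

```python
import numpy as np

# Generic continual-learning sketch (an illustration of the pattern,
# not the specific method claimed in Desjardins): take gradient steps
# on a second task's loss while a quadratic penalty "protects"
# parameters that were important for the first task.

T2_TARGET = np.array([1.0, -2.0, 0.5])   # hypothetical task-2 optimum

def penalized_step(theta, theta_task1, importance, lr=0.01, lam=10.0):
    # One gradient step on:
    #   ||theta - T2_TARGET||^2 + lam * sum(importance * (theta - theta_task1)^2)
    grad_task2 = 2.0 * (theta - T2_TARGET)
    grad_protect = 2.0 * lam * importance * (theta - theta_task1)
    return theta - lr * (grad_task2 + grad_protect)

theta_task1 = np.array([0.9, 0.1, 0.4])  # values learned on task 1
importance = np.array([2.0, 0.0, 0.0])   # only parameter 0 matters for task 1

theta = theta_task1.copy()
for _ in range(500):
    theta = penalized_step(theta, theta_task1, importance)

# theta[0] stays pulled toward its task-1 value (protection), while the
# parameters unimportant to task 1 move to the task-2 optimum.
```

The design point mirrors the claim language: the update simultaneously optimizes the second task's objective and protects first-task performance, rather than treating the two in isolation.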
Importantly, the ARP evaluated the claims as a whole in discerning that at least the limitation "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task" reflected the improvement disclosed in the specification. Accordingly, the claims as a whole integrated what would otherwise be a judicial exception into a practical application at Step 2A Prong Two, and therefore the claims were patent eligible.

The claim itself does not need to explicitly recite the improvement described in the specification (e.g., "thereby increasing the bandwidth of the channel"). See, e.g., Ex parte Desjardins, Appeal No. 2024-000567 (PTAB Sept. 26, 2025, Appeals Review Panel decision) (precedential), in which the specification identified the improvement to machine learning technology by explaining how the machine learning model is trained to learn new tasks while protecting knowledge about previous tasks to overcome the problem of "catastrophic forgetting," and the claims reflected the improvement identified in the specification. Indeed, the improvements enumerated in the Desjardins specification included effective learning of new tasks in succession while specifically protecting knowledge concerning previously accomplished tasks; reduced use of storage capacity; and reduced system complexity. Such improvements went to how the machine learning model itself functions in operation and were therefore not subsumed in the identified mathematical calculation.

The second paragraph of MPEP § 2106.05(a), subsection I, is revised to add new examples xiii and xiv to the list of examples that may show an improvement in computer functionality:

xiii. An improved way of training a machine learning model that protected the model's knowledge about previous tasks while allowing it to effectively learn new tasks; Ex parte Desjardins, Appeal No. 2024-000567 (PTAB Sept. 26, 2025, Appeals Review Panel decision) (precedential); and

xiv. Improvements to computer component or system performance based upon adjustments to parameters of a machine learning model associated with tasks or workstreams; Ex parte Desjardins, Appeal No. 2024-000567 (PTAB Sept. 26, 2025, Appeals Review Panel decision) (precedential).

Step 2B: DOES THE CLAIM RECITE ADDITIONAL ELEMENTS THAT AMOUNT TO SIGNIFICANTLY MORE THAN THE JUDICIAL EXCEPTION? No. The claims amount to creating a table of data using a selection of templates, where a user asks another user questions to provide a custom table or spreadsheet. At best, the task can be accomplished with the use of a computer by the second user to assist the first user, amounting to extra-solution activity. Courts have found similar concepts abstract:

• Collecting and comparing known information (Classen)
• Collecting, displaying, and manipulating data (Intellectual Ventures v. Capital One Financial)
• Collecting information, analyzing it, and displaying certain results of the collection and analysis (Electric Power Group; West View)
• Comparing new and stored information and using rules to identify options (SmartGene)
• Delivering user-selected media content to portable devices (Affinity Labs v. Amazon.com)

Assistance for Applicant in amending to overcome 101: Limitations that the courts have found to qualify as "significantly more" when recited in a claim with a judicial exception include: i. Improvements to the functioning of a computer, e.g., a modification of conventional Internet hyperlink protocol to dynamically produce a dual-source hybrid webpage, as discussed in DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258-59, 113 USPQ2d 1097, 1106-07 (Fed. Cir. 2014) (see MPEP § 2106.05(a)); ii.
Improvements to any other technology or technical field, e.g., a modification of conventional rubber-molding processes to utilize a thermocouple inside the mold to constantly monitor the temperature and thus reduce under- and over-curing problems common in the art, as discussed in Diamond v. Diehr, 450 U.S. 175, 191-92, 209 USPQ 1, 10 (1981) (see MPEP § 2106.05(a)); iii. Applying the judicial exception with, or by use of, a particular machine, e.g., a Fourdrinier machine (which is understood in the art to have a specific structure comprising a headbox, a paper-making wire, and a series of rolls) that is arranged in a particular way to optimize the speed of the machine while maintaining quality of the formed paper web, as discussed in Eibel Process Co. v. Minn. & Ont. Paper Co., 261 U.S. 45, 64-65 (1923) (see MPEP § 2106.05(b)); iv. Effecting a transformation or reduction of a particular article to a different state or thing, e.g., a process that transforms raw, uncured synthetic rubber into precision-molded synthetic rubber products, as discussed in Diehr, 450 U.S. at 184, 209 USPQ at 21 (see MPEP § 2106.05(c)); v. Adding a specific limitation other than what is well-understood, routine, conventional activity in the field, or adding unconventional steps that confine the claim to a particular useful application, e.g., a non-conventional and non-generic arrangement of various computer components for filtering Internet content, as discussed in BASCOM Global Internet v. AT&T Mobility LLC, 827 F.3d 1341, 1350-51, 119 USPQ2d 1236, 1243 (Fed. Cir. 2016) (see MPEP § 2106.05(d)); or vi. Other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment, e.g., an immunization step that integrates an abstract idea of data comparison into a specific process of immunizing that lowers the risk that immunized patients will later develop chronic immune-mediated diseases, as discussed in Classen Immunotherapies Inc. v. Biogen IDEC, 659 F.3d 1057, 1066-68, 100 USPQ2d 1492, 1499-1502 (Fed. Cir. 2011) (see MPEP § 2106.05(e)).

To help in amending the claims and for analysis purposes, example claims 3 and 4 from the courts are listed below; such example amendment possibilities are not limited to the provided examples, and alternative amendments are possible using items i-vi above. The examples show the differences between an eligible claim (court claim 4) and an ineligible claim (court claim 3), illustrating "significantly more" tied to hardware that is not generically recited in the art: the general resizing of textual information in claim 3 versus significant steps of conditionally scaling and relocating that information, tied to the display, in claim 4. The examples below are based on the MPEP and not on the current claim set, and are provided to help amend to overcome the 101 rejections.

Ineligible:

3. A computer-implemented method of resizing textual information within a window displayed in a graphical user interface, the method comprising: (not significant) generating first data for describing the area of a first graphical element; (not significant) generating second data for describing the area of a second graphical element containing textual information; (not significant) calculating, by the computer, a scaling factor for the textual information which is proportional to the difference between the first data and second data.

The claim recites that the step of calculating a scaling factor is performed by "the computer" (referencing the computer recited in the preamble). Such a limitation gives "life, meaning and vitality" to the preamble and, therefore, the preamble is construed to further limit the claim. (See MPEP 2111.02.) However, the mere recitation of "computer-implemented" is akin to adding the words "apply it" in conjunction with the abstract idea.
Such a limitation is not enough to qualify as significantly more. With regard to the graphical user interface limitation, the courts have found that simply limiting the use of the abstract idea to a particular technological environment is not significantly more. (See, e.g., Flook.) Compare similar claim 4:

Eligible:

4. A computer-implemented method for dynamically relocating textual information within an underlying window displayed in a graphical user interface, the method comprising: displaying a first window containing textual information in a first format within a graphical user interface on a computer screen; displaying a second window within the graphical user interface; constantly monitoring the boundaries of the first window and the second window to detect an overlap condition where the second window overlaps the first window such that the textual information in the first window is obscured from a user's view; determining the textual information would not be completely viewable if relocated to an unobstructed portion of the first window; calculating a first measure of the area of the first window and a second measure of the area of the unobstructed portion of the first window; calculating a scaling factor which is proportional to the difference between the first measure and the second measure; scaling the textual information based upon the scaling factor; (significant step) automatically relocating the scaled textual information, by a processor, to the unobscured portion of the first window in a second format during an overlap condition so that the entire scaled textual information is viewable on the computer screen by the user; (significant step) automatically returning the relocated scaled textual information, by the processor, to the first format within the first window when the overlap condition no longer exists.

These limitations are not merely attempting to limit the mathematical algorithm to a particular technological environment.
Instead, these claim limitations recite a specific application of the mathematical algorithm that improves the functioning of the basic display function of the computer itself. As discussed above, scaling and relocating the textual information in overlapping windows improves the ability of the computer to display information and interact with the user.

The dependent claims are rejected for the same reasoning, as being directed to patent-ineligible subject matter under 35 U.S.C. 101 and not adding eligible subject matter to the respective parent claim. Claims 2-8, 10-14, and 16-20 amount to the same scope as claims 1, 9, and 15 without either adding a practical application or showing significantly more than a judicial exception. For instance, in addition to creating a table of data using a selection of templates where a user asks another user questions to provide a custom table or spreadsheet, said dependent claims amount to further specified selection of templates, identifying entities (e.g., data names in a spreadsheet), identifying titles in a spreadsheet, recognizing meaning, and sorting tables by entity, with an extra-solution step of using a model analogous to the human mind to identify terms or entities such as a product or category (e.g., bikes, mountain bikes, bike racks, etc.).

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C.
102(a)(1) as being anticipated by US 12147758 B1 (Thomas, Olivia Jean, et al.; hereinafter Thomas).

Re claim 1, Thomas teaches: 1. A method of generating training data, comprising: (col 4 line 56 – col 4 line 7: an LLM is trained via learning)

acquiring sample data; (a sample of a spreadsheet or table in the input, col 4 lines 43-55)

determining a data generation template according to the sample data; (as in figs. 6a-6c, tabular data with columns and rows, with further illustrations in figs. 7a-7d; multiple templates to select from in fig. 8; col 13 line 43 – col 14 line 57, under the premise of an LLM utilizing semantic tags to identify natural language elements including terms, entities, descriptors, titles, subjects, etc.)

generating question data according to first data in the sample data and the data generation template, and determining answer data according to second data other than the first data in the sample data; and (the prompt is the question and the system response is the answer, utilizing samples of a spreadsheet or table in the input, col 4 lines 43-55, as in figs. 6a-6c and 7a-7d, with multiple templates to select from in fig. 8 and col 13 line 43 – col 14 line 57, per the same semantic-tag premise)

combining the question data and the answer data into the training data. (the LLM is trained and learns based on the prompt/response; the prompt is the question and the system response is the answer, utilizing samples of a spreadsheet or table in the input, col 4 lines 43-55, as in figs. 6a-6c and 7a-7d, using multiple templates to select in fig.
8 with col 13 line 43 – col 14 line 57, under the premise of an LLM utilizing semantic tags to identify natural language elements including terms, entities, descriptors, titles, subjects, etc.)

Re claim 9, this claim is rejected on the same basis as claim 1; it presents a broader or narrower version of claim 1 differing only in the general inclusion or omission of hardware (e.g., processor, memory, instructions), otherwise amounting to a virtually identical scope.

Re claim 15, this claim is likewise rejected on the same basis as claim 1; it presents a broader or narrower version of claim 1 differing only in the general inclusion or omission of hardware (e.g., processor, memory, instructions), otherwise amounting to a virtually identical scope.

Re claims 2, 10, and 16, Thomas teaches: 2. The method according to claim 1, wherein the determining a data generation template according to the sample data, comprises: when the sample data comprises tabular data, determining the data generation template according to arrangement information of data in the tabular data; (the user inputs a partial table and the system helps arrange it; the prompt is the question and the system response is the answer, utilizing samples of a spreadsheet or table in the input, col 4 lines 43-55, as in figs. 6a-6c tabular data with columns and rows, with further illustrations in figs. 7a-7d, using multiple templates to select in fig. 8 with col 13 line 43 – col 14 line 57, under the premise of an LLM utilizing semantic tags to identify natural language elements including terms, entities, descriptors, titles, subjects, etc.)
when the sample data comprises text data, determining the data generation template by at least one selected from the group consisting of: performing semantic recognition on the sample data to obtain a semantic recognition result, and determining the data generation template according to the semantic recognition result; and (the meaning of the input; the user inputs a partial table and the system helps arrange it; the prompt is the question and the system response is the answer, utilizing samples of a spreadsheet or table in the input, col 4 lines 43-55, as in figs. 6a-6c and 7a-7d, using multiple templates to select in fig. 8 with col 13 line 43 – col 14 line 57, under the premise of an LLM utilizing semantic tags to identify natural language elements including terms, entities, descriptors, titles, subjects, etc.)

performing entity recognition on the sample data through a preset entity recognition model to obtain an entity recognition result, and determining the data generation template according to the entity recognition result. (after the meaning is established, terms such as "sales" or "product" qualify as entities per se, col 11 lines 4-16; generalized input as meaning/semantics includes the initial prompt of fig. 7c, and entity recognition per se includes the modified semantics/meaning in fig. 7d, e.g., sorting by year or by sale or altering the table as specified in the user prompt, where the title also changes, as well as rows, columns, order, etc.; the user inputs a partial table and the system helps arrange it; the prompt is the question and the system response is the answer, utilizing samples of a spreadsheet or table in the input, col 4 lines 43-55, as in figs. 6a-6c and 7a-7d, using multiple templates to select in fig.
8 with col 13 line 43 – col 14 line 57, under the premise of an LLM utilizing semantic tags to identify natural language elements including terms, entities, descriptors, titles, subjects, etc.)

Re claims 3, 11, and 17, Thomas teaches: 3. The method according to claim 2, wherein the determining the data generation template according to the semantic recognition result, comprises: searching for at least one data generation template carrying a first template identifier in a preset template library according to the semantic recognition result, wherein the preset template library pre-stores a plurality of the data generation templates, and the first template identifier is a preset template identifier matching the semantic recognition result; and (using the variety of templates per scenario shown in fig. 8; after the meaning is established, terms such as "sales" or "product" qualify as entities per se, col 11 lines 4-16; generalized input as meaning/semantics includes the initial prompt of fig. 7c, and entity recognition per se includes the modified semantics/meaning in fig. 7d, e.g., sorting by year or by sale or altering the table as specified in the user prompt, where the title also changes, as well as rows, columns, order, etc.; the user inputs a partial table and the system helps arrange it; the prompt is the question and the system response is the answer, utilizing samples of a spreadsheet or table in the input, col 4 lines 43-55, as in figs. 6a-6c and 7a-7d, using multiple templates to select in fig. 8 with col 13 line 43 – col 14 line 57, under the premise of an LLM utilizing semantic tags to identify natural language elements including terms, entities, descriptors, titles, subjects, etc.)
the determining the data generation template according to the entity recognition result, comprises: searching for at least one data generation template carrying a second template identifier in the preset template library according to the entity recognition result, wherein the second template identifier is a preset template identifier matching the entity recognition result. (using the variety of templates per scenario shown in fig. 8; after the meaning is established, terms such as "sales" or "product" qualify as entities per se, col 11 lines 4-16; generalized input as meaning/semantics includes the initial prompt of fig. 7c, and entity recognition per se includes the modified semantics/meaning in fig. 7d, e.g., sorting by year or by sale or altering the table as specified in the user prompt, where the title also changes, as well as rows, columns, order, etc.; the user inputs a partial table and the system helps arrange it; the prompt is the question and the system response is the answer, utilizing samples of a spreadsheet or table in the input, col 4 lines 43-55, as in figs. 6a-6c and 7a-7d, using multiple templates to select in fig. 8 with col 13 line 43 – col 14 line 57, under the premise of an LLM utilizing semantic tags to identify natural language elements including terms, entities, descriptors, titles, subjects, etc.)

Re claims 4, 12, and 18, Thomas teaches: 4. The method according to claim 2, wherein the determining the data generation template according to arrangement information of data in the tabular data, comprises: determining a question template for acquiring a concept explanation content as the data generation template with respect to any of following data of the tabular data selected from the group consisting of: a table name, first row data, and first column data; and (generalized input as meaning/semantics includes the initial prompt of fig.
7c, and entity recognition per se includes the modified semantics/meaning in fig. 7d, e.g., sorting by year or by sale or altering the table as specified in the user prompt, where the title also changes, as well as rows, columns, order, etc.; the user inputs a partial table and the system helps arrange it; the prompt is the question and the system response is the answer, utilizing samples of a spreadsheet or table in the input, col 4 lines 43-55, as in figs. 6a-6c and 7a-7d, using multiple templates to select in fig. 8 with col 13 line 43 – col 14 line 57, under the premise of an LLM utilizing semantic tags to identify natural language elements including terms, entities, descriptors, titles, subjects, etc.)

determining a question template with a condition as the data generation template with respect to non-first row data other than the table name or non-first column data other than the table name in the tabular data. (sorting alters the rows and columns by user request, via a prompt to modify or the user agreeing to a system suggestion; generalized input as meaning/semantics includes the initial prompt of fig. 7c, and entity recognition per se includes the modified semantics/meaning in fig. 7d, e.g., sorting by year or by sale or altering the table as specified in the user prompt, where the title also changes, as well as rows, columns, order, etc.; the user inputs a partial table and the system helps arrange it; the prompt is the question and the system response is the answer, utilizing samples of a spreadsheet or table in the input, col 4 lines 43-55, as in figs. 6a-6c and 7a-7d, using multiple templates to select in fig. 8 with col 13 line 43 – col 14 line 57, under the premise of an LLM utilizing semantic tags to identify natural language elements including terms, entities, descriptors, titles, subjects, etc.)

Re claims 5, 13, and 19, Thomas teaches: 5.
The method according to claim 1, wherein the determining a data generation template according to the sample data, comprises: when the first data in the sample data is a title, determining a first data generation template, wherein the first data generation template is a question template for acquiring an explanation content of the first data; and (the initial prompt of fig. 7c and entity recognition per se includes the modified semantics/meaning in fig. 7d, e.g. sorting by year or by sale or altering the table as specified in the user prompt, the title also changes, as well as rows, columns, order, etc.… user inputs partial table, system helps arrange it, the prompt is the question and the system response is the answer, utilizing samples of a spreadsheet or table in the input col 4 lines 43-55 as in fig. 6a-6c tabular data with columns and rows with further illustrations in fig. 7a-d, using multiple templates to select in fig. 8 with col 13 line 43 – col 14 line 57 under the premise of an LLM utilizing semantic tags to identify natural language elements including terms, entities, descriptors, titles, subjects, etc.) the determining answer data according to second data other than the first data in the sample data, comprises: taking a directory content corresponding to the sample data or an abstract content located after the first data in the sample data as the answer data. (directory as in ordered data, or abstract content such as if the user asks “what are the sales for each product” without referencing specific data… generalized input as meaning/semantics includes the initial prompt of fig. 7c and entity recognition per se includes the modified semantics/meaning in fig. 7d, e.g.
sorting by year or by sale or altering the table as specified in the user prompt, the title also changes, as well as rows, columns, order, etc.… user inputs partial table, system helps arrange it, the prompt is the question and the system response is the answer, utilizing samples of a spreadsheet or table in the input col 4 lines 43-55 as in fig. 6a-6c tabular data with columns and rows with further illustrations in fig. 7a-d, using multiple templates to select in fig. 8 with col 13 line 43 – col 14 line 57 under the premise of an LLM utilizing semantic tags to identify natural language elements including terms, entities, descriptors, titles, subjects, etc.)

Re claims 6, 14, and 20, Thomas teaches 6. The method according to claim 1, wherein the determining a data generation template according to the sample data, comprises: when the first data in the sample data is a concept entity, determining a second data generation template, wherein the second data generation template is a question template for acquiring an explanation content of the first data; and (explanation is the response of the system and suggestions… generalized input as meaning/semantics includes the initial prompt of fig. 7c and entity recognition per se includes the modified semantics/meaning in fig. 7d, e.g. sorting by year or by sale or altering the table as specified in the user prompt, the title also changes, as well as rows, columns, order, etc.… user inputs partial table, system helps arrange it, the prompt is the question and the system response is the answer, utilizing samples of a spreadsheet or table in the input col 4 lines 43-55 as in fig. 6a-6c tabular data with columns and rows with further illustrations in fig. 7a-d, using multiple templates to select in fig. 8 with col 13 line 43 – col 14 line 57 under the premise of an LLM utilizing semantic tags to identify natural language elements including terms, entities, descriptors, titles, subjects, etc.)
the determining answer data according to second data other than the first data in the sample data, comprises: determining the answer data according to a context content of the first data in the sample data. (in context of any data the user submitted, the system can extrapolate intent when the user asks a vague question with minimal context e.g. “sort sales”… generalized input as meaning/semantics includes the initial prompt of fig. 7c and entity recognition per se includes the modified semantics/meaning in fig. 7d, e.g. sorting by year or by sale or altering the table as specified in the user prompt, the title also changes, as well as rows, columns, order, etc.… user inputs partial table, system helps arrange it, the prompt is the question and the system response is the answer, utilizing samples of a spreadsheet or table in the input col 4 lines 43-55 as in fig. 6a-6c tabular data with columns and rows with further illustrations in fig. 7a-d, using multiple templates to select in fig. 8 with col 13 line 43 – col 14 line 57 under the premise of an LLM utilizing semantic tags to identify natural language elements including terms, entities, descriptors, titles, subjects, etc.)

Re claim 7, Thomas teaches 7. The method according to claim 1, wherein the determining a data generation template according to the sample data, comprises: when the second data in the sample data is arrayed data, determining a third data generation template, (the table is an array with nth data progressing or reducing as user modifies the table… generalized input as meaning/semantics includes the initial prompt of fig. 7c and entity recognition per se includes the modified semantics/meaning in fig. 7d, e.g.
sorting by year or by sale or altering the table as specified in the user prompt, the title also changes, as well as rows, columns, order, etc.… user inputs partial table, system helps arrange it, the prompt is the question and the system response is the answer, utilizing samples of a spreadsheet or table in the input col 4 lines 43-55 as in fig. 6a-6c tabular data with columns and rows with further illustrations in fig. 7a-d, using multiple templates to select in fig. 8 with col 13 line 43 – col 14 line 57 under the premise of an LLM utilizing semantic tags to identify natural language elements including terms, entities, descriptors, titles, subjects, etc.) wherein the third data generation template is a question template for acquiring the arrayed data, and the first data is correspondingly data located before the second data in the sample data. (the table is an array in any order and sorted according to user instructions, with nth data progressing or reducing as user modifies the table… generalized input as meaning/semantics includes the initial prompt of fig. 7c and entity recognition per se includes the modified semantics/meaning in fig. 7d, e.g. sorting by year or by sale or altering the table as specified in the user prompt, the title also changes, as well as rows, columns, order, etc.… user inputs partial table, system helps arrange it, the prompt is the question and the system response is the answer, utilizing samples of a spreadsheet or table in the input col 4 lines 43-55 as in fig. 6a-6c tabular data with columns and rows with further illustrations in fig. 7a-d, using multiple templates to select in fig. 8 with col 13 line 43 – col 14 line 57 under the premise of an LLM utilizing semantic tags to identify natural language elements including terms, entities, descriptors, titles, subjects, etc.)

Re claim 8, Thomas teaches 8.
The method according to claim 3, wherein the determining the data generation template according to arrangement information of data in the tabular data, comprises: determining a question template for acquiring a concept explanation content as the data generation template with respect to any of following data of the tabular data selected from the group consisting of: a table name, first row data, and first column data; and (the table includes column, row, title, etc. and is an array with nth data progressing or reducing as user modifies the table… generalized input as meaning/semantics includes the initial prompt of fig. 7c and entity recognition per se includes the modified semantics/meaning in fig. 7d, e.g. sorting by year or by sale or altering the table as specified in the user prompt, the title also changes, as well as rows, columns, order, etc.… user inputs partial table, system helps arrange it, the prompt is the question and the system response is the answer, utilizing samples of a spreadsheet or table in the input col 4 lines 43-55 as in fig. 6a-6c tabular data with columns and rows with further illustrations in fig. 7a-d, using multiple templates to select in fig. 8 with col 13 line 43 – col 14 line 57 under the premise of an LLM utilizing semantic tags to identify natural language elements including terms, entities, descriptors, titles, subjects, etc.) determining a question template with a condition as the data generation template with respect to non-first row data other than the table name or non-first column data other than the table name in the tabular data. (the table can be sorted via user instruction/prompt, and includes column, row, title, etc. and is an array with nth data progressing or reducing as user modifies the table… generalized input as meaning/semantics includes the initial prompt of fig. 7c and entity recognition per se includes the modified semantics/meaning in fig. 7d, e.g.
sorting by year or by sale or altering the table as specified in the user prompt, the title also changes, as well as rows, columns, order, etc.… user inputs partial table, system helps arrange it, the prompt is the question and the system response is the answer, utilizing samples of a spreadsheet or table in the input col 4 lines 43-55 as in fig. 6a-6c tabular data with columns and rows with further illustrations in fig. 7a-d, using multiple templates to select in fig. 8 with col 13 line 43 – col 14 line 57 under the premise of an LLM utilizing semantic tags to identify natural language elements including terms, entities, descriptors, titles, subjects, etc.)

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 12524214 B1 Liguori; Clare E. et al. Templates for prompt/response AI or LLM systems

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C COLUCCI whose telephone number is (571)270-1847. The examiner can normally be reached on M-F 9 AM - 5 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Flanders, can be reached at (571)272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL COLUCCI/
Primary Examiner, Art Unit 2655
(571)-270-1847
Examiner FAX: (571)-270-2847
Michael.Colucci@uspto.gov
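For readers less familiar with claim language, the rejected method can be pictured concretely. The sketch below is one illustrative reading of independent claim 1 (question data from "first data" plus a template, answer data from the remaining "second data") combined with the arrangement rule of claims 4 and 8 (table name / first-row / first-column data get a concept-explanation template; other cells get a question template "with a condition"). It is not the applicant's actual implementation; all identifiers and template strings here are hypothetical.

```python
# Hypothetical sketch of the claimed training-data generation method.
TEMPLATES = {
    # Claims 4/8: header-like positions -> concept-explanation question.
    "concept": "What does '{first}' mean in this table?",
    # Other cells -> a question template "with a condition".
    "conditional": "Given the table, which entry corresponds to '{first}'?",
}

def select_template(row_idx, col_idx, is_table_name=False):
    """Arrangement rule: table name, first row, or first column -> concept."""
    if is_table_name or row_idx == 0 or col_idx == 0:
        return TEMPLATES["concept"]
    return TEMPLATES["conditional"]

def generate_training_pair(table, row_idx, col_idx):
    """Claim 1 pipeline: question from first data + template, answer from
    second data other than the first data, combined into one datum."""
    first = table[row_idx][col_idx]
    question = select_template(row_idx, col_idx).format(first=first)
    # "Second data": here, the rest of the same row.
    second = [c for j, c in enumerate(table[row_idx]) if j != col_idx]
    answer = ", ".join(str(c) for c in second)
    return {"question": question, "answer": answer}

sales = [
    ["Product", "2023", "2024"],
    ["Widgets", 120, 150],
]
print(generate_training_pair(sales, 1, 0))
# {'question': "What does 'Widgets' mean in this table?", 'answer': '120, 150'}
```

On this reading, the §102 dispute turns on whether Thomas's interactive table-editing prompts and responses disclose generating such question/answer pairs from templates selected by data arrangement, as the examiner's parentheticals assert.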

Prosecution Timeline

Jul 25, 2024
Application Filed
Jan 23, 2026
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592240
ENCODING AND DECODING OF ACOUSTIC ENVIRONMENT
2y 5m to grant Granted Mar 31, 2026
Patent 12586570
CHUNK-WISE ATTENTION FOR LONGFORM ASR
2y 5m to grant Granted Mar 24, 2026
Patent 12573405
WORD CORRECTION USING AUTOMATIC SPEECH RECOGNITION (ASR) INCREMENTAL RESPONSE
2y 5m to grant Granted Mar 10, 2026
Patent 12573380
MANAGING AMBIGUOUS DATE MENTIONS IN TRANSFORMING NATURAL LANGUAGE TO A LOGICAL FORM
2y 5m to grant Granted Mar 10, 2026
Patent 12567414
SYSTEM AND METHOD FOR DETECTING A WAKEUP COMMAND FOR A VOICE ASSISTANT
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
91%
With Interview (+15.3%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 990 resolved cases by this examiner. Grant probability derived from career allow rate.
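The headline figures appear to be simple derivations from the career data shown above, assuming the interview lift is added in percentage points (the tool's actual methodology is not disclosed, so this is only a consistency check):

```python
# Consistency check on the displayed projections (assumed derivations).
granted, resolved = 749, 990          # from the examiner's career data
career_allow_rate = round(granted / resolved * 100)          # 76 (%)
interview_lift = 15.3                  # percentage points, with-interview cases
with_interview = round(career_allow_rate + interview_lift)   # 91 (%)
print(career_allow_rate, with_interview)  # 76 91
```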
