Prosecution Insights
Last updated: April 19, 2026
Application No. 18/593,316

TECHNIQUES FOR MANUFACTURING TRAINING DATA TO TRANSFORM NATURAL LANGUAGE INTO A VISUALIZATION REPRESENTATION

Final Rejection — §103, §112
Filed: Mar 01, 2024
Examiner: SMITH, SEAN THOMAS
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Oracle International Corporation
OA Round: 2 (Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (5 granted / 6 resolved), +21.3% vs TC avg (above average)
Interview Lift: +33.3% among resolved cases with interview
Avg Prosecution: 2y 8m (typical timeline)
Career History: 43 total applications across all art units; 37 currently pending
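As a rough illustration (not the product's actual code), the headline numbers above can be reproduced from the six resolved cases: 5 of 6 granted gives the 83% career allow rate, and comparing grant rates with and without an interview gives the +33.3% lift. The case records below are reconstructed to match the stats shown, not taken from PAIR data.

```python
def examiner_stats(resolved):
    """Career allow rate and interview lift from a list of resolved cases."""
    allow_rate = sum(c["granted"] for c in resolved) / len(resolved)
    with_iv = [c for c in resolved if c["interview"]]
    without_iv = [c for c in resolved if not c["interview"]]
    rate_with = sum(c["granted"] for c in with_iv) / len(with_iv)
    rate_without = sum(c["granted"] for c in without_iv) / len(without_iv)
    return allow_rate, rate_with - rate_without

# Hypothetical records consistent with the dashboard: 5 of 6 granted,
# and a one-third grant-rate gap between interviewed and non-interviewed cases.
cases = [
    {"granted": True,  "interview": True},
    {"granted": True,  "interview": True},
    {"granted": True,  "interview": True},
    {"granted": True,  "interview": False},
    {"granted": True,  "interview": False},
    {"granted": False, "interview": False},
]
allow, lift = examiner_stats(cases)
print(f"{allow:.0%} career allow rate, {lift:+.1%} interview lift")
# 83% career allow rate, +33.3% interview lift
```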

Statute-Specific Performance

§101: 27.9% (-12.1% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)
Deltas are relative to a Tech Center average estimate • Based on career data from 6 resolved cases
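A quick sanity check on the chart data (an illustrative script, not part of the product): each statute-specific delta backs out the same Tech Center average, which is consistent with a single TC-average estimate line.

```python
# Examiner's statute-specific rates and their deltas, as displayed above.
examiner = {"§101": 27.9, "§102": 12.9, "§103": 50.7, "§112": 8.6}
delta = {"§101": -12.1, "§102": -27.1, "§103": +10.7, "§112": -31.4}

# Back out the Tech Center average each delta was measured against.
tc_estimate = {k: round(examiner[k] - delta[k], 1) for k in examiner}
print(tc_estimate)  # every statute backs out the same 40.0% estimate
```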

Office Action

§103 §112
DETAILED ACTION

This Office action is responsive to amendments and arguments filed on January 6, 2026. Claims 1, 3-8, 10-15 and 17-20 are amended; claims 1-20 are pending and have been examined; hence, this action is made FINAL. Any objections/rejections not mentioned in this Office action have been withdrawn by the Examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) is acknowledged. Accordingly, claims 1-20 have been afforded the benefit of the earlier filing date of Provisional Application 63/520,877, filed August 21, 2023.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on March 1, 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendments and Arguments

Regarding rejections made under 35 U.S.C. 101, Applicant argues, "The amended claims, which now recite multiple, distinct data manufacturing pipelines for processing different categories of training data, are directed to a concrete improvement in computer technology. The amended claims do not recite a 'mental process' that can be performed by a human, but rather a specific, technical solution to the technical problem of efficiently and effectively generating robust, augmented training datasets for training a machine learning model to transform natural language into a visualization representation," (page 17 of Remarks). Applicant's argument is persuasive. Accordingly, the rejections under 35 U.S.C. 101 are withdrawn.

Regarding rejections made under 35 U.S.C. 103, Applicant argues, "The amendments incorporating features from FIG. 6 patentably distinguish the claims from the combination of Luo and Han.
Specifically, neither Luo nor Han, alone or in combination, teaches or suggests generating first, second, and third training data through separate, respective first, second, and third data manufacturing pipelines and then augmenting the original dataset with all three," (page 17 of Remarks).

Examiner respectfully disagrees. The combination of Luo and Han teaches methods for synthesizing training data from given examples (Luo, section 2.1, Solution Overview, "Figure 3 overviews the NL2SQL-to-NL2VIS synthesizer, which consists of two steps: vis synthesis (i.e., generating visualizations based on SQL queries) and NL synthesis (i.e., editing the NL queries of SQL queries based on the synthesized vis)."), as well as augmenting original examples with synthesized training data (Han, page 6, "Referring to FIGS. 3 and 4, the data augmentation engine 152 first accepts an initial training dataset (step 200)… Subsequently, the data augmentation engine 152 may determine a schema transformation operation f1() to be applied to the database D1 for data augmentation (step 210). In addition, the data augmentation engine 152 may generate a new database D11 by applying the schema transformation operation f1() to the database D1 (operation 220)," and page 7, "[Figure] 8 is a diagram showing a data flow in a process in which a plurality of learning dataset augmentation operations are continuously performed in an embodiment of the present invention."). The claims only broadly recite data manufacturing pipelines that process training examples to synthesize additional examples, without further technical detail to differentiate the synthesis, augmentation, or function of the pipelines, which is obvious under the teachings of the prior art. Accordingly, the rejections under 35 U.S.C. 103 are maintained. Further details are provided below.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 7 and 14, depending from claims 4 and 11, respectively, recite the limitation "wherein the determination of whether the example is suitable for augmentation is performed for each example in the original training dataset that is accessed in accordance with (g) and (a)." There is insufficient antecedent basis for this limitation in the claims; step (g) does not appear in the claims, nor in their respective parent claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over "Synthesizing Natural Language to Visualization (NL2VIS) Benchmarks from NL2SQL Benchmarks" by Yuyu Luo et al. (hereinafter, "Luo") in view of WIPO Publication 2023/128021 to Han et al. (hereinafter, "Han").
Regarding claims 1, 8 and 15, Luo teaches a method, computer-readable media and system comprising:

accessing an original training dataset, a visualization query dataset, an incremental visualization dataset, and a manipulation visualization dataset (page 1, Abstract, "In this paper, we propose a NL2VIS synthesizer (NL2SQL-to-NL2VIS) that synthesizes NL2VIS benchmarks by piggybacking NL2SQL benchmarks.");

generating first training data by processing, in a first data manufacturing pipeline, first training examples in the original training dataset and the visualization query dataset, wherein processing the first training examples comprises synthesizing additional training examples from the first training examples (page 3, Example 2, "[Output.] The NL2SQL-to-NL2VIS will synthesize T`v with two VIS queries, t1 and t2… For t1, it synthesizes two NL queries n11 and n12; and for t2, it also synthesizes two NL queries n21 and n22. Hence, it will output four pairs {(n11, t1), (n12, t1), (n21, t2), (n22, t2)}."); and

training, using the augmented training dataset, a machine learning model to convert a natural language utterance into meaning representation language (MRL) logical form that includes one or more visualization actions (page 3, Step 2. NL Synthesis, "The purpose of having variants of NL specifications, which is a way of data augmentation [18], is to train a robust model.").

Luo does not explicitly teach "generating second training data by processing, in a second data manufacturing pipeline, second training examples in the incremental visualization dataset, wherein processing the second training examples comprises synthesizing additional training examples from the second training examples," "generating third training data by processing, in a third data manufacturing pipeline, third training examples in the manipulation visualization dataset, wherein processing the third training examples comprises synthesizing additional training examples from the third training examples," or "augmenting the original training dataset by adding the first training data, second training data, and third training data to the original training dataset to generate an augmented training dataset," and thus, Han is introduced.

Han teaches generating second training data by processing, in a second data manufacturing pipeline, second training examples in the incremental visualization dataset, wherein processing the second training examples comprises synthesizing additional training examples from the second training examples (page 5, "According to an exemplary embodiment of the present invention, the data augmentation engine 152 may generate new learning data through a database schema modification operation, rather than augmenting learning data including new natural language query data. That is, unlike the conventional training data augmentation method of augmenting training data based on new natural language query data, the data augmentation engine 152 according to an exemplary embodiment of the present invention extends the diversity of database schema to a given training dataset. A new type of SQL query that does not exist can be automatically created.");

generating third training data by processing, in a third data manufacturing pipeline, third training examples in the manipulation visualization dataset, wherein processing the third training examples comprises synthesizing additional training examples from the third training examples (page 6, "As shown in FIG. 4, the above data augmentation operation is not terminated once, but is repeatedly performed for other schema transformation operations and for other initial training datasets, so that training data is converted into new training datasets. That is, after generating a new training dataset based on the first schema transformation operation f1(), the data augmentation engine 152 may generate a new training dataset based on the second schema transformation operation f2()."); and

augmenting the original training dataset by adding the first training data, second training data, and third training data to the original training dataset to generate an augmented training dataset (page 5, "The data augmentation engine 152 accepts a training dataset and augments the training dataset to increase its quantity. Accordingly, the augmented training dataset output by the data augmentation engine 152 includes the training dataset added by the data augmentation engine 152 in addition to the original training dataset stored in the training dataset table 156.").

Luo and Han are considered analogous because they are each concerned with machine learning model training. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Luo with the teachings of Han for the purpose of training a robust model as prompted by Luo, and improvements to model performance would have been predictable.
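For readers less familiar with the claim language, the arrangement the Examiner characterizes as "broadly recited" can be sketched in a few lines: three separate pipelines, each synthesizing new examples from its own source dataset, with all outputs appended to the original training set. This is an illustrative reading of the claim structure only, not code from Luo, Han, or the application; all names are invented.

```python
from typing import Callable, Dict, List

Example = Dict[str, str]
Pipeline = Callable[[List[Example]], List[Example]]

def augment_training_data(original: List[Example],
                          sources: List[List[Example]],
                          pipelines: List[Pipeline]) -> List[Example]:
    """Run each source dataset through its own manufacturing pipeline,
    then add the first, second, and third training data to the original."""
    augmented = list(original)
    for source, pipeline in zip(sources, pipelines):
        augmented.extend(pipeline(source))
    return augmented

# Toy pipeline: "synthesize" one variant per source example.
paraphrase = lambda src: [dict(ex, variant="synthesized") for ex in src]

original = [{"nl": "show flights by carrier", "mrl": "Visualize(bar, ...)"}]
augmented = augment_training_data(
    original,
    sources=[original, original, original],  # stand-ins for the three datasets
    pipelines=[paraphrase, paraphrase, paraphrase],
)
print(len(augmented))  # 4: the original example plus three synthesized ones
```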
Regarding claims 2, 9 and 16, Luo teaches (ii) each example in the visualization query dataset comprises a natural language utterance, a system programming language corresponding to the natural language utterance, a visualization type presented in the natural language utterance, and a schema (page 5, section 2.5, "For insertions such as grouping, aggregation, Order, and VIS types, we use NL extracted from Tableau's Ask Data [50] and NL4DV2 as rules to enrich the text."); and (iv) the manipulation visualization dataset comprises one or more manipulation templates (page 12, Training Data Generation, "DBPal [63] augments training data based on a set of pre-defined (NL, SQL) templates.").

Luo does not explicitly teach "each example in the original training dataset comprises a natural language utterance, a MRL logical form corresponding to the natural language utterance, and a schema," or "the incremental visualization dataset comprises one or more data annotation and incremental natural language templates"; however, Han teaches (i) each example in the original training dataset comprises a natural language utterance, a MRL logical form corresponding to the natural language utterance, and a schema (page 5, "The data augmentation engine 152 accepts a training dataset and augments the training dataset to increase its quantity… In an exemplary embodiment, the training dataset may consist of a set of natural language queries, databases, and SQL queries. The natural language query and the SQL query may be stored in the learning dataset table 156 together with database information."), and (iii) the incremental visualization dataset comprises one or more data annotation and incremental natural language templates (page 2, "According to the template-based method, synthetic data is created using a template of a predefined natural language query and SQL pair. The reference to the data in the database is empty as a slot in the template.").
Luo and Han are considered analogous because they are each concerned with machine learning model training. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Luo with the teachings of Han for the purpose of expanding dataset coverage. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claims 3, 10 and 17, Luo teaches:

(c) selecting a visualization type for the example based on constraints of the MRL logical form and popularity scores associated to each visualization type (page 4, section 2.3, "We will add a Visualize subtree. Also, we need to ensure that the VIS type (e.g., bar, line) added can lead to a valid VIS. We follow the rule of thumb (see Table 1) of VIS w.r.t. attribute types from the data visualization community [38, 65], which are encoded as rules in our system," and page 6, VIS and NL Queries, "Bar charts and its variants take a huge percentage matches real-world cases. As shown by Beagle [5] and SEEDB [56], bars (and histograms) are the most popular VIS types.");

(d) adding a visualization clause to the natural language utterance associated with the example using a visualization clause template and the visualization type selected for the example to generate a visualization creation utterance, wherein the visualization clause includes a visualization action for the visualization type (page 2, section 2.1, "It takes a (NL, SQL) pair (nQ, Q) as the input, and returns as output a set of (NL, VIS) pairs: {(n11, t1), …, (n1k, t1), …, (nm1, tm), …, (nmk, tm)}.");

(e) modifying, based on the visualization labeled schema and the visualization type selected for the example, the MRL logical form associated with the example to generate a visualization creation MRL logical form that corresponds to the visualization creation utterance, wherein the visualization creation MRL logical form comprises one or more visualization-related entities and a visualization clause that includes the visualization action for the visualization type (page 4, section 2.3, "Next, we discuss how to generate (candidate) VIS trees from one SQL tree. Intuitively, given one SQL tree, we can "delete" any tree nodes or "insert" any tree nodes, as long as we can produce a valid VIS tree rooted with "Visualize Q" (Figure 5).");

(f) assembling the visualization labeled schema, the visualization creation utterance, and the visualization creation MRL logical form to generate a new visualization example (page 4, section 2.3, "After different combinations of insertions above, we will obtain candidate VIS set TV.").
Luo does not explicitly teach "accessing an example from the original training dataset," "adding, to the schema associated with the example, one or more visualization-related entities and schema-linking relations that link the one or more visualization-related entities to one or more entities within the schema to generate a visualization labeled schema," or "repeating steps (a) and (c) - (f) for a random or predefined number of examples in the original training dataset to generate a visualization training dataset, the visualization training dataset comprising the new visualization examples"; however, Han teaches:

(a) accessing an example from the original training dataset (page 6, "Referring to FIGS. 3 and 4, the data augmentation engine 152 first accepts an initial training dataset (step 200).");

(b) adding, to the schema associated with the example, one or more visualization-related entities and schema-linking relations that link the one or more visualization-related entities to one or more entities within the schema to generate a visualization labeled schema (page 6, "Subsequently, the data augmentation engine 152 may determine a schema transformation operation f1() to be applied to the database D1 for data augmentation (step 210). In addition, the data augmentation engine 152 may generate a new database D11 by applying the schema transformation operation f1() to the database D1 (operation 220)."); and

repeating steps (a) and (c) - (f) for a random or predefined number of examples in the original training dataset to generate a visualization training dataset, the visualization training dataset comprising the new visualization examples (page 6, "As shown in FIG. 4, the above data augmentation operation is not terminated once, but is repeatedly performed for other schema transformation operations and for other initial training datasets, so that training data is converted into new training datasets.").
Luo and Han are considered analogous because they are each concerned with machine learning model training. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Luo with the teachings of Han for the purpose of expanding dataset coverage. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claims 4, 11 and 18, Luo teaches:

(a) accessing an example from the visualization query dataset, wherein the natural language utterance associated with the example comprises a visualization clause that includes a visualization action for the visualization type (page 3, Example 2, "Figure 4 gives a real example from a Flight dataset, where the original (NL, SQL) pair (nQ, Q) from the Spider [68] NL2SQL benchmark is given as input." Figure 4 shows the utterance nQ containing a visualization action.);

(b) converting the system programming language into MRL logical form corresponding to the natural language utterance (page 3, section 2.2, "An ideal grammar to bridge SQL and VIS queries is desired to be: (1) uniform: it can represent both SQL and VIS queries; (2) language-agnostic: it can be converted to either an SQL query or a VIS query with a specific language (e.g., Vega-Lite); and (3) extensible: it can be extended to support other SQL queries or more visualization types. One choice is based on Abstract Syntax Tree (AST). In particular, we extend SemQL [21], which was used for NL2SQL, to further support NL2VIS. The extended grammar is shown in Figure 5.");

(c) adding, to the schema, one or more visualization-related entities and schema-linking relations that link the one or more visualization-related entities to one or more entities within the schema to generate a visualization labeled schema (page 3, Step 1. VIS Synthesis, "Tree edits may modify the SQL tree tQ by adding how to visualize (i.e., the vis type) and some vis related data operations (e.g., grouping and binning), as well as deleting some nodes (e.g., an SQL query may select more attributes than needed for VIS).");

(d) modifying, based on the visualization labeled schema and the visualization type presented in the natural language utterance, the MRL logical form to generate a visualization creation MRL logical form that corresponds to the natural language utterance, wherein the visualization creation MRL logical form comprises one or more visualization-related entities and a visualization clause that includes the visualization action for the visualization type (page 4, section 2.3, "Next, we discuss how to generate (candidate) VIS trees from one SQL tree. Intuitively, given one SQL tree, we can "delete" any tree nodes or "insert" any tree nodes, as long as we can produce a valid VIS tree rooted with "Visualize Q" (Figure 5).");

(e) assembling the visualization labeled schema, the natural language utterance, and the visualization creation MRL logical form to generate a new visualization example (page 4, section 2.3, "After different combinations of insertions above, we will obtain candidate VIS set TV.").
Luo does not explicitly teach "repeating steps (a), (b), (d) and (e) for a random or predefined number of examples in the visualization query dataset to generate a visualization training dataset, the visualization training dataset comprising the new visualization examples"; however, Han teaches (f) repeating steps (a), (b), (d) and (e) for a random or predefined number of examples in the visualization query dataset to generate a visualization training dataset, the visualization training dataset comprising the new visualization examples (page 6, "As shown in FIG. 4, the above data augmentation operation is not terminated once, but is repeatedly performed for other schema transformation operations and for other initial training datasets, so that training data is converted into new training datasets.").

Luo and Han are considered analogous because they are each concerned with machine learning model training. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Luo with the teachings of Han for the purpose of expanding dataset coverage. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claims 5, 12 and 19, Luo teaches:

(b) adding, to the schema, one or more visualization-related entities and schema-linking relations that link the one or more visualization-related entities to one or more entities within the schema to generate a visualization labeled schema (page 3, Step 1. VIS Synthesis, "Tree edits may modify the SQL tree tQ by adding how to visualize (i.e., the vis type) and some vis related data operations (e.g., grouping and binning), as well as deleting some nodes (e.g., an SQL query may select more attributes than needed for VIS).");

(c) composing, based on the incremental natural language template, the base utterance, and the incremental use-case type, a visualization example utterance that comprises a visualization action for the incremental use-case type (page 3, Step 2. NL Synthesis, "Given the input (nQ, tQ), each good VIS query ti, and the tree operations Δi that convert tQ into ti, it will revise nQ to reflect the change of Δi, and get variants of NL specifications. The purpose of having variants of NL specifications, which is a way of data augmentation [18], is to train a robust model.");

(d) constructing, based on the input MRL logical form and a set of MRL logical form construction rules defined for the incremental use-case type, a visualization incremental MRL logical form (page 4, Candidate VIS Generation, "Given an SQL tree tQ, we first perform different deletions on tQ to get a set I of intermediate SQL trees {tI1,…, tIl}. For each intermediate SQL tree tIi ∈ I, we then make insertions to get a set of VIS trees TV = {t1,…, tn}.");

(e) assembling the visualization labeled schema, the visualization example utterance, and the visualization incremental MRL logical form to generate a new visualization example (page 4, section 2.3, "After different combinations of insertions above, we will obtain candidate VIS set TV.").
Luo does not explicitly teach "accessing an incremental natural language template and data annotation from the incremental visualization dataset, wherein the incremental natural language template comprises a library of different text to be used for an incremental use-case type to be added to a visualization incremental utterance, and wherein the data annotation comprises a base utterance, an input MRL logical form, an incremental use-case type to be used in the visualization example utterance, and a schema," or "repeating steps (a) and (c) - (e) for a random or predefined number of examples to generate a visualization training dataset, the visualization training dataset comprising the new visualization examples"; however, Han teaches:

(a) accessing an incremental natural language template and data annotation from the incremental visualization dataset, wherein the incremental natural language template comprises a library of different text to be used for an incremental use-case type to be added to a visualization incremental utterance, and wherein the data annotation comprises a base utterance, an input MRL logical form, an incremental use-case type to be used in the visualization example utterance, and a schema (page 2, "According to the template-based method, synthetic data is created using a template of a predefined natural language query and SQL pair. The reference to the data in the database is empty as a slot in the template. Given a database, data is created by randomly selecting data from the database and filling the corresponding slots."); and

(f) repeating steps (a) and (c) - (e) for a random or predefined number of examples to generate a visualization training dataset, the visualization training dataset comprising the new visualization examples (page 6, "As shown in FIG. 4, the above data augmentation operation is not terminated once, but is repeatedly performed for other schema transformation operations and for other initial training datasets, so that training data is converted into new training datasets.").

Luo and Han are considered analogous because they are each concerned with machine learning model training. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Luo with the teachings of Han for the purpose of expanding dataset coverage. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claims 6, 13 and 20, Luo teaches:

(a) accessing a manipulation template from the manipulation visualization dataset, wherein the manipulation template comprises a natural language utterance definition and a corresponding MRL logical form definition for a visualization manipulation use-case (page 12, Training Data Generation, "DBPal [63] augments training data based on a set of pre-defined (NL, SQL) templates.");

(b) composing, using the manipulation template, a new visualization example comprising a visualization example utterance and a corresponding visualization manipulation MRL logical form (page 5, section 2.5, "Each VIS ti ∈ T'V is associated with a set of tree edits Δi, i.e., deletions Δ-i and insertions Δ+i. Next, we need to modify the NL of the corresponding SQL query to reflect these changes.");

(c) repeating steps (a) and (b) for a random or predefined number of examples to generate a visualization training dataset, the visualization training dataset comprising the new visualization examples (page 4, Candidate VIS Generation, "For each intermediate SQL tree tIi ∈ I, we then make insertions to get a set of VIS trees TV = {t1, …, tn}.").

Regarding claims 7 and 14, Luo teaches synthesizing additional training examples from the first training examples further comprises, after adding, to the schema associated with the example, determining whether the example is suitable for augmentation based on analysis of the MRL logical form using a set of filtering rules (page 3, Step 1. VIS Synthesis, "In order to ensure that each VIS query w.r.t. a VIS tree in TV is "good", e.g., a bar chart with several hundred bars is not readable thus is bad. Hence, we need to filter "bad" charts, while only keeping good charts as T'V = {t1, . . . , tm}."), and only performing (c)-(f) when the example is determined to be suitable for augmentation (page 4, section 2.4, "By doing so, we prune bad visualizations from the candidate VIS set TV and get a set of good VIS set T'V."), and wherein the determination of whether the example is suitable for augmentation is performed for each example in the original training dataset that is accessed in accordance with (g) and (a) (page 3, Step 1. VIS Synthesis, "In order to ensure that each VIS query w.r.t. a VIS tree in TV is "good", e.g., a bar chart with several hundred bars is not readable thus is bad. Hence, we need to filter "bad" charts, while only keeping good charts as T'V = {t1, . . . , tm}.").

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

U.S. Patent 11,687,544 to Kelly et al.
U.S. Patent 12,182,147 to Merchant et al.
U.S. Patent Application Publication 2018/0293508 to Ko et al.
U.S. Patent Application Publication 2020/0226212 to Tan et al.
U.S. Patent Application Publication 2022/0138621 to Patil et al.
U.S. Patent Application Publication 2022/0164540 to Setlur et al.
U.S. Patent Application Publication 2022/0405314 to Du et al.
U.S. Patent Application Publication 2023/0185834 to Arthur et al.
U.S. Patent Application Publication 2024/0061833 to Tangari et al.
U.S. Patent Application Publication 2024/0134846 to Datt et al.
U.S. Patent Application Publication 2024/0386215 to Eisenschlos et al.
U.S. Patent Application Publication 2024/0394249 to Cunningham et al.
"Data2Vis: Automatic Generation of Data Visualizations Using Sequence-to-Sequence Recurrent Neural Networks" by Dibia and Demiralp.
"Empowering Natural Language to Visualization Neural Translation Using Synthesized Benchmarks" by Luo et al.
"NL2VIZ: Natural Language to Visualization via Constrained Syntax-Guided Synthesis" by Wu et al.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN T SMITH whose telephone number is (571) 272-6643. The examiner can normally be reached Monday - Friday, 8:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, PIERRE-LOUIS DESIR, can be reached at (571) 272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SEAN THOMAS SMITH/
Examiner, Art Unit 2659

/PIERRE LOUIS DESIR/
Supervisory Patent Examiner, Art Unit 2659
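As an aside on the claims 7 and 14 discussion: the "filtering rules" limitation the Examiner maps to Luo's chart pruning can be pictured as a simple suitability check. The threshold and field names below are invented for illustration; Luo gives only the qualitative rule that a bar chart with several hundred bars is unreadable.

```python
# Illustrative sketch of a "suitable for augmentation" filtering rule.
MAX_BARS = 50  # assumed readability cutoff; Luo states no exact number

def is_suitable_for_augmentation(vis_query: dict) -> bool:
    # "a bar chart with several hundred bars is not readable thus is bad"
    if vis_query["vis_type"] == "bar" and vis_query["n_categories"] > MAX_BARS:
        return False
    return True

candidates = [
    {"vis_type": "bar", "n_categories": 12},
    {"vis_type": "bar", "n_categories": 400},   # pruned as a "bad" chart
    {"vis_type": "line", "n_categories": 400},
]
good = [c for c in candidates if is_suitable_for_augmentation(c)]
print(len(good))  # 2
```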

Prosecution Timeline

Mar 01, 2024: Application Filed
Oct 09, 2025: Non-Final Rejection (§103, §112)
Dec 09, 2025: Applicant Interview (Telephonic)
Dec 10, 2025: Examiner Interview Summary
Jan 06, 2026: Response Filed
Feb 17, 2026: Final Rejection (§103, §112)
Mar 24, 2026: Examiner Interview Summary
Mar 24, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12,602,540: LEVERAGING A LARGE LANGUAGE MODEL ENCODER TO EVALUATE PREDICTIVE MODELS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12,530,534: SYSTEM AND METHOD FOR GENERATING STRUCTURED SEMANTIC ANNOTATIONS FROM UNSTRUCTURED DOCUMENT
Granted Jan 20, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 99% (+33.3%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
