Prosecution Insights
Last updated: April 19, 2026
Application No. 18/674,802

Domain-Specific Shorthand for Generation of Data Visualizations based on Context Free Grammar

Status: Non-Final OA (§102)
Filed: May 24, 2024
Examiner: LEE, JANGWOEN
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: Salesforce Inc.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (36 granted / 44 resolved), +19.8% vs Tech Center average (above average)
Interview Lift: +24.2% in resolved cases with an interview (a strong lift)
Typical Timeline: 2y 11m average prosecution; 23 applications currently pending
Career History: 67 total applications across all art units

Statute-Specific Performance

§101: 26.5% (-13.5% vs TC avg)
§103: 54.6% (+14.6% vs TC avg)
§102: 11.0% (-29.0% vs TC avg)
§112: 4.1% (-35.9% vs TC avg)
Deltas are against the Tech Center average estimate. Based on career data from 44 resolved cases.

Office Action

§102
DETAILED ACTION

This communication is in response to the Application filed on 05/24/2024. Claims 1-20 are pending and have been examined. Claims 1, 9 and 17 are independent. This Application was published as U.S. Pub. No. 2025/0363141.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-5, 8-13 and 16-20 are rejected under 35 U.S.C. 102(a) as being anticipated by Setlur et al. (US Pub No. 2020/0110779, hereinafter Setlur).
Regarding Claim 1, Setlur discloses a method for generating data visualizations from natural language expressions (Setlur, par [003], "…systems, methods, and user interfaces that enable users to interact with data visualizations and analyze data using natural language expressions..."), comprising:

at a computing device having a display, one or more processors, and memory storing one or more programs configured for execution by the one or more processors (Setlur, Figs. 2, 9, par [159], "…a computing device 200 that has (904) a display 212, one or more processors 202, and memory 206. The memory 206 stores (906) one or more programs configured for execution by the one or more processors 202...");

receiving a user input to specify a natural language command directed to a data source (Fig. 2, par [055], "…the graphical user interface includes a user input module 234 for receiving user input through the natural language box 124...a user inputs a natural language command or expression into the natural language box 124 identifying one or more data sources 242 (which may be stored on the computing device 200 or stored remotely) and/or data fields from the data source(s)...");

generating a prompt for generating a data visualization based on relevant data fields and data values, one or more rules that characterize the data visualization, and a context free grammar, wherein the relevant data fields and data values are identified based on identifying key phrases in the natural language command (Figs. 2, 9, par [164], "…The computing device 200 forms (918) a first intermediate expression (e.g., using the natural language processing module 238) according to a context-free grammar and a semantic model 248 of data fields in the data source by parsing the natural language command...");

prompting a trained large language model using the prompt to generate a structured document following a domain-specific schema based on a shorthand notation (Fig. 2, par [061], "…The semantic model 248 represents the database schema and contains meta data about attributes..."; par [079], "…ArkLang can be generated from a set of semantic models (e.g., the semantic model 248) representing their corresponding database, a context-free grammar (CFG), and a set of semantic constraints….a dialect of ArkLang is a set of all syntactically valid and semantically meaningful analytical expressions..."; par [166], "…the computing device 200 forms (920) the first intermediate expression using one or more pre-defined grammar rules governing the context free grammar...");

using a parser that uses the context free grammar to map the structured document to a visual specification (Fig. 2, par [057], "…The natural language processing module 238 also translates (e.g., compiles) the intermediate expressions into database queries by employing a visualization query language to issue the queries against a database or data source 242 and to retrieve one or more data sets from the database or data source 242..."), wherein the visual specification specifies the data source, a plurality of visual variables, and a plurality of data fields from the data source (Fig. 2, par [058], "…visual specifications 240, which are used to define characteristics of a desired data visualization. In some implementations, the information the user provides (e.g., user input) is stored as a visual specification..."); and

generating and displaying a data visualization based on the visual specification, including displaying a plurality of visual marks representing data, retrieved from the data source, for the plurality of data fields (Fig. 2, paras [055, 056, 060], "…The selected fields are used to define a visual graphic. The data visualization application 230 then displays the generated visual graphic in the user interface 100… generates and displays a corresponding visual graphic (also referred to as a "data visualization" or a "data viz") using the user input (e.g., the natural language input)...zero or more databases or data sources 242, which are used by the data visualization application 230...").

Regarding Claim 2, Setlur discloses the method of claim 1, and further discloses encoding data, for a current visualization, based on the shorthand notation and the context free grammar (Figs. 2, 9, par [164], "…The computing device 200 forms (918) a first intermediate expression (e.g., using the natural language processing module 238) according to a context-free grammar and a semantic model 248 of data fields in the data source by parsing the natural language command..."); and while prompting the trained large language model, inputting the encoded data along with the prompt, to generate the structured document (Fig. 2, par [057], "…The natural language processing module 238 also translates (e.g., compiles) the intermediate expressions into database queries by employing a visualization query language to issue the queries against a database or data source 242 and to retrieve one or more data sets from the database or data source 242...").
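For orientation only, the claim 1 pipeline the rejection walks through (shorthand command, context-free grammar parse, visual specification) can be sketched as a toy recursive-descent check over one grammar production. Everything below (the grammar, the token set, and the spec keys) is hypothetical and is drawn from neither the application nor Setlur.

```python
# Toy grammar (hypothetical; not from the application or Setlur):
#   query  := chart measure "by" dimension [filter]
#   filter := "where" field "=" value
# e.g. "bar sales by region where year=2024"

CHARTS = {"bar", "line", "scatter"}

def parse_shorthand(command: str) -> dict:
    """Map a shorthand command to a minimal visual-specification dict."""
    tokens = command.strip().lower().split()
    if len(tokens) < 4 or tokens[0] not in CHARTS or tokens[2] != "by":
        raise ValueError(f"not in the shorthand grammar: {command!r}")
    spec = {
        "mark": tokens[0],  # chart type from the first production symbol
        "encoding": {
            "x": {"field": tokens[3], "type": "nominal"},
            "y": {"field": tokens[1], "type": "quantitative",
                  "aggregate": "sum"},
        },
        "filters": [],
    }
    # Optional filter production: "where field=value"
    if len(tokens) == 6 and tokens[4] == "where" and "=" in tokens[5]:
        field, value = tokens[5].split("=", 1)
        spec["filters"].append({"field": field, "equals": value})
    return spec

spec = parse_shorthand("bar sales by region where year=2024")
```

A real system would validate tokens against the data source's semantic model rather than a fixed token set; the sketch only illustrates the shape of the mapping from grammar production to specification key.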
Regarding Claim 3, Setlur discloses the method of claim 1, and further discloses parsing the natural language command to identify key phrases (Setlur, par [057], "…the natural language processing module 238 parses the natural language command (e.g., into tokens) and translates the command into an intermediate language (e.g., ArkLang)…."); and identifying the relevant data fields and data values from the data source using semantic search, based on the key phrases (Fig. 2, par [061], "…The semantic model 248 represents the database schema and contains meta data about attributes..."; par [079], "…ArkLang can be generated from a set of semantic models (e.g., the semantic model 248) representing their corresponding database, a context-free grammar (CFG), and a set of semantic constraints...").

Regarding Claim 4, Setlur discloses the method of claim 1, and further discloses wherein the context free grammar includes one or more grammar rules for specifying data fields from an underlying data source to be used in the data visualization, field type, how field values are mapped to visual properties including color, size, shape and position, filters to apply to data used in the data visualization, how data in the data visualization is to be sorted, and the type of chart to be used in the data visualization (Setlur, par [061], "…The semantic model 248 represents the database schema and contains metadata about attributes…The semantic model 248 includes data types, attributes, and a semantic role for data fields of the respective database or data source 242…the semantic model 248 is augmented with a grammar lexicon 250 that contains a set of analytical concepts 258 found in many query languages (e.g., average, filter, sort)...the semantic model 248 helps with inferencing and choosing salient attributes and values...").
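Claim 4 recites grammar rules that map field values onto visual properties such as color and size. As an illustration only (the palette, size range, and function names are invented for this sketch and appear in neither the application nor Setlur), such mappings commonly reduce to a categorical lookup for nominal fields and a linear rescale for quantitative fields:

```python
# Hypothetical value-to-visual-property mappings (invented for illustration).

PALETTE = ["#4c78a8", "#f58518", "#54a24b", "#e45756"]

def map_to_color(values):
    """Assign each distinct nominal value a color, cycling the palette."""
    distinct = sorted(set(values))
    return {v: PALETTE[i % len(PALETTE)] for i, v in enumerate(distinct)}

def map_to_size(values, lo=4.0, hi=20.0):
    """Linearly rescale quantitative values into a mark-size range."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0  # avoid division by zero on constant data
    return [lo + (v - vmin) / span * (hi - lo) for v in values]

colors = map_to_color(["east", "west", "east"])
sizes = map_to_size([10, 20, 30])
```

In a grammar-driven system, rules of this kind would be selected by the production that matched the field's type (nominal versus quantitative), which is the distinction claim 4's "field type" limitation turns on.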
Regarding Claim 5, Setlur discloses the method of claim 1, and further discloses wherein the trained large language model is trained on a dataset of JSON, YAML, XML, and/or Python code, which represents a desired structure of output documents (Setlur, Fig. 2, par [060], "…zero or more databases or data sources 242 which are used by the data visualization application 230...the data sources are stored as spreadsheet files, CSV files, XML files, flat files, or JSON files, or stored in a relational database...a user selects one or more databases or data sources 242, selects data fields from the data source(s), and uses the selected fields to define a visual graphic...").

Regarding Claim 8, Setlur discloses the method of claim 1, and further discloses wherein the domain-specific schema captures visualization components, including fields, filters, and sorting criteria, which are common across various visualization types (Setlur, par [061], "…The semantic model 248 represents the database schema and contains metadata about attributes…The semantic model 248 includes data types, attributes, and a semantic role for data fields of the respective database or data source 242…the semantic model 248 is augmented with a grammar lexicon 250 that contains a set of analytical concepts 258 found in many query languages (e.g., average, filter, sort)...the semantic model 248 helps with inferencing and choosing salient attributes and values..."; Fig. 2D, paras [081-086], "…The analytical expressions along with their canonical forms in the dialect of ArkLang include...aggregation expressions 290...group expressions 292...filter expressions 294...limit expressions 296...sort expressions 298...").

Claim 9 is a computer device claim with limitations similar to the limitations of Claim 1 and is rejected under similar rationale. Claim 10 is a computer device claim with limitations similar to the limitations of Claim 2 and is rejected under similar rationale. Claim 11 is a computer device claim with limitations similar to the limitations of Claim 3 and is rejected under similar rationale. Claim 12 is a computer device claim with limitations similar to the limitations of Claim 4 and is rejected under similar rationale. Claim 13 is a computer device claim with limitations similar to the limitations of Claim 5 and is rejected under similar rationale. Claim 16 is a computer device claim with limitations similar to the limitations of Claim 8 and is rejected under similar rationale.

Claim 17 is a non-transitory computer readable storage medium claim with limitations similar to the limitations of Claim 1 and is rejected under similar rationale. Additionally, Setlur discloses a non-transitory computer readable storage medium storing one or more programs, the one or more programs configured for execution by a computing device (Setlur, Figs. 2, 9, par [159], "…correspond to instructions stored in the memory 206 or other non-transitory computer readable storage medium. The computer-readable storage medium may include..."). Claim 18 is a non-transitory computer readable storage medium claim with limitations similar to the limitations of Claim 2 and is rejected under similar rationale. Claim 19 is a non-transitory computer readable storage medium claim with limitations similar to the limitations of Claim 3 and is rejected under similar rationale. Claim 20 is a non-transitory computer readable storage medium claim with limitations similar to the limitations of Claim 4 and is rejected under similar rationale.

Allowable Subject Matter

Claims 6-7 and 14-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Lee et al. ("SJSON: A succinct representation for JSON documents," Information Systems 97 (2021): 101686) discloses a set of succinct representations for JSON documents, called SJSON, achieving both reduced RAM and disk usage while supporting efficient queries on the documents. The representations are mainly based on the idea that JSON documents can be decomposed into a structural part and a raw data part.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JANGWOEN LEE, whose telephone number is (703) 756-5597. The examiner can normally be reached Monday-Friday, 8:00 am - 5:00 pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, BHAVESH MEHTA, can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JANGWOEN LEE/
Examiner, Art Unit 2656

/BHAVESH M MEHTA/
Supervisory Patent Examiner, Art Unit 2656
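Returning to the domain-specific schema discussed for claims 5 and 8 (fields, filters, and sorting criteria shared across visualization types), one minimal sketch of checking an LLM-emitted structured document against such a schema follows. The key names and the helper are hypothetical, invented for illustration; they are not the applicant's schema or anything disclosed by Setlur.

```python
# Hypothetical domain-specific schema check (invented for illustration).
import json

REQUIRED_KEYS = {"chart", "fields"}
OPTIONAL_KEYS = {"filters", "sort"}

def validate_document(doc_json: str) -> dict:
    """Parse an LLM-emitted JSON string and check it against the schema."""
    doc = json.loads(doc_json)
    missing = REQUIRED_KEYS - set(doc)
    unknown = set(doc) - REQUIRED_KEYS - OPTIONAL_KEYS
    if missing or unknown:
        raise ValueError(
            f"schema violation: missing={missing}, unknown={unknown}")
    return doc

doc = validate_document(
    '{"chart": "bar", "fields": ["region", "sales"], "sort": "desc"}')
```

Keeping the schema small and shared across chart types is what makes a single downstream parser (claim 1's CFG-driven mapping to a visual specification) practical; a document that fails validation can be rejected before any parsing is attempted.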

Prosecution Timeline

May 24, 2024: Application Filed
Feb 06, 2026: Non-Final Rejection, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597432: HUM NOISE DETECTION AND REMOVAL FOR SPEECH AND MUSIC RECORDINGS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12586571: EFFICIENT SPEECH TO SPIKES CONVERSION PIPELINE FOR A SPIKING NEURAL NETWORK
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12573381: SPEECH RECOGNITION METHOD AND APPARATUS, STORAGE MEDIUM, AND ELECTRONIC DEVICE
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12567430: METHOD AND DEVICE FOR IMPROVING DIALOGUE INTELLIGIBILITY DURING PLAYBACK OF AUDIO DATA
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12566930: CONDITIONING OF PRODUCTIVITY APPLICATION FILE CONTENT FOR INGESTION BY AN ARTIFICIAL INTELLIGENCE MODEL
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 99% (+24.2%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 44 resolved cases by this examiner. Grant probability derived from career allow rate.
