DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The action is responsive to Applicant’s amendment filed on 12/17/2025.
Claims 21-23 are newly added.
Claims 1-23 are pending.
Response to Arguments
Applicant’s arguments with respect to the rejections previously made and the amended claims filed on 12/17/2025 have been fully considered, but they are not persuasive. In view of the newly added claims, the rejections are updated accordingly.
Double Patenting
In view of the approved terminal disclaimer, the rejections as set forth in the previous Office action are hereby withdrawn.
35 USC 103 Rejections
Independent claim 1
Applicant argued that Subramanya and Du fail to disclose or render obvious “obtain a predicted visualization definition that is in an intermediate language interpretable by a large language model” and “generate a statistical representation as a visualization based at least in part on the result data and the visualization type according to which the result data is to be visualized indicated by the predicted visualization definition”.
In response to the arguments, it is submitted that the limitation of “predicted visualization definition that is in an intermediate language interpretable by a large language model” is directed to non-functional descriptive material describing that the definition is in a language that can be interpreted by a large language model, which does not impact or change the functionality of the claimed steps or the outcome of claim 1. Neither the definition nor the large language model is being used to impact or change the functionality of the claimed steps or the outcome of claim 1. All the steps in claim 1 would be performed the same to achieve the same outcome regardless of whether the definition is in an intermediate language interpretable by a large language model or not.
In addition, it is submitted that the cited steps are properly addressed by the cited references, at least in view of Subramanya’s disclosure of obtaining the visualization definition as set forth by the metadata information and the mapping of the product and user info, which is in an intermediate language. The intermediate language is merely a programming language or instruction that can be interpreted by a large language model--which is merely a learned model in a neural network system--to generate a statistical representation as a visualization based in part on the result data and the visualization definition, such that a personalized visualization is generated ([0050], [0059-0061], [0065-0068]).
Additionally, it is submitted that the cited generating step is properly addressed by Du in view of Du’s disclosure of generating a statistical representation as a visualization for the result data based at least in part on the predicted visualization definition--such as and not limited to the visualization template that defines the representation type--and also based on the result data, such that the visualization is based at least in part on the result data and the visualization type represented by one of the templates ([0024], [0034], [0046], Fig 3 & 5C-8).
Hence, it is submitted that the cited steps are properly addressed by Subramanya and Du; see the rejections below for details.
Applicant also argued that Subramanya and Du fail to disclose or render obvious presently claimed predicted visualization definition in an intermediate language interpretable by a large language model, where the visualization type is determined based at least in part on the result data via a model that operates on a statistical abstraction of that result data.
In response to the arguments, it is submitted that claimed predicted visualization definition in an intermediate language interpretable by a large language model is properly addressed by at least Subramanya as stated above.
Nowhere does the claim recite “where the visualization type is determined based at least in part on the result data via a model that operates on a statistical abstraction of that result data”, and hence this feature is not required to be taught by any of the cited references.
I. No teaching of a visualization type determined from statistical properties of the result data
In response to the arguments, it is submitted that a visualization type determined from statistical properties of the result data is not recited in claim 1, and hence is not required to be taught by any of the cited references.
II. No teaching of a predicted visualization definition in an intermediate language interpretable by a large language model
In response to the arguments, it is submitted that as stated above, the limitation of the predicted visualization definition in an intermediate language interpretable by a large language model has been properly addressed.
Also, while the claims must be given their broadest reasonable interpretation in light of the specification, it is improper to import claim limitations from the specification in accordance with MPEP 2111.
While it may be intended that “the intermediate language is a structural limitation: it is the format of the predicted visualization definition itself and constrains how the definition is generated and subsequently used (e.g., translated to Python or Go as recited in claim 15). Applicant's disclosure repeatedly describes this language as a high-level, LLM-interpretable representation that enables the second model (e.g., often an LLM as recited in claims 2-5) to predict precise visualization definitions”, such an intermediate visualization definition--or intermediate language--is not presented in claim 1. The claimed limitation of “obtain a predicted visualization definition that is in an intermediate language interpretable by a large language model, wherein the predicted visualization definition comprises an indication of a visualization type that is determined based at least in part on the result data” is different from such an intermediate visualization definition.
Hence, such an intermediate visualization definition is not required to be taught by any of the cited references.
III. Lack of Motivation
In response to the arguments, it is submitted that there is no lack of motivation. Since both Subramanya and Du are from the same field of endeavor, as both are directed to natural language query processing with adaptive data management--which is the same field of endeavor as the claimed invention--it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to combine the teachings of Subramanya and Du with the motivation of providing the user with a meaningful visualization of data of user interest efficiently (Subramanya, [0002]; Du, [0001]).
Furthermore, it is submitted that all limitations in the claims--including the limitations in the newly added claims and those not specifically addressed in Applicant’s remarks--are properly addressed. The reasons are set forth in the rejections; see below for details.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-15, 17-23 are rejected under 35 U.S.C. 103 as being unpatentable over Subramanya et al. (Pub. No. US 2018/0144385, hereinafter Subramanya) in view of Du et al. (Pub. No. US 2022/0405314, hereinafter Du).
Subramanya and Du are cited in the IDS filed on 11/7/2024.
With respect to claim 1, Subramanya discloses a system for visualizing data (abstract, Fig 3), comprising:
one or more processors (Fig 1-3) configured to:
obtain a natural language query ([0019], [0048]: a query is obtained from a user);
determine an intent for the natural language query ([0021], [0049], [0054]: determine an intent for the query);
obtain result data associated with the intent ([0060-0061], [0065-0067], Fig 4: obtain result data that is associated with the intent via query processing);
obtain a predicted visualization definition that is in an intermediate language interpretable by a large language model (the element of “a predicted visualization definition that is in an intermediate language interpretable by a large language model” is directed to describing what the predicted visualization definition is, and neither the intermediate language nor the large language model would change the functionality of the claimed steps or the outcome of the claim; [0060-0061], [0065], [0067-0068]: obtain the visualization definition as set forth by the metadata information and the mapping of the product and user info, which is in an intermediate language--which is merely a programming language/instruction--interpretable by a large language model, which is merely a learned model), wherein the predicted visualization definition comprises an indication of a visualization type that is determined based at least in part on the result data ([0020-0021], [0044], [0053], [0058], [0061], [0063-0066], Fig 4-5: the visualization definition indicates a display type corresponding to the visualization type according to which the result data is to be visualized, such as and not limited to a personalization type visualization with a location type and/or price type according to the result data); and
generate a statistical representation as a visualization based at least in part on the result data according to which the result data is to be visualized indicated by the predicted visualization definition (a statistical representation is merely a representation, and “statistical” corresponds to a name of the representation, which does not impact the functionality of the claimed steps or the outcome of the claimed method; [0062], [0066-0068]: generate a representation corresponding to the claimed statistical representation as a visualization via display with the result data based on the definition as set forth by the metadata, mapping, and personalization for the result data); and
a memory coupled to the one or more processors and configured to provide the one or more processors with instructions (Fig 1-3).
Subramanya does not explicitly disclose generating a statistical representation as a visualization based at least in part on the visualization type as claimed.
Nevertheless, one of ordinary skill in the art before the effective filing date of the claimed invention could reasonably interpret Subramanya as implicitly disclosing “generate a statistical representation as a visualization based at least in part on the result data and the visualization type according to which the result data is to be visualized indicated by the predicted visualization definition”, because the personalization visualization type indicates particular statistical representations, such as and not limited to confidence level, average customer review, and/or product price range, which involve statistical computations ([0058-0064]), and Subramanya generates a statistical representation as a visualization based on the result data as set forth by the predicted visualization definition and the visualization type, such as and not limited to the attributes and metadata of the result data, such that a particular type of visualization with specific product-related information in a specific format is displayed ([0065-0069], Fig 4).
Also, Du discloses generating a statistical representation as a visualization based at least in part on the result data and the visualization type according to which the result data is to be visualized indicated by the predicted visualization definition ([0024], [0034], [0046], Fig 3 & 5C-8: generate a statistical representation as a visualization for the result data based at least in part on the predicted visualization definition, such as and not limited to the visualization template that defines the representation type, and also based on the result data. Hence the visualization is based at least in part on the result data and the visualization type represented by one of the templates).
Since both Subramanya and Du are from the same field of endeavor, as both are directed to natural language query processing with adaptive data management, which is the same field of endeavor as the claimed invention, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to combine the teachings of Subramanya and Du by incorporating the request generation and visualization type techniques with respect to the statistical representation of Du into Subramanya for processing a natural language query with an intent and a visualization definition as claimed. The motivation to combine is to provide the user with a meaningful visualization of data of user interest efficiently (Subramanya, [0002]; Du, [0001]).
With respect to claim 2, the combined teachings of Subramanya and Du further disclose
wherein: the visualization type is obtained based at least in part on abstracting the result data to obtain a data abstraction (Subramanya, [0020-0021], [0044], [0053], [0058], [0061], [0063-0066], Fig 4-5: determine a visualization type based on the data abstraction for the result data represented by the result data attributes, such as and not limited to determining a location type representing the visualization type based on the location attribute for the result data via personalization of the result data), and querying, based at least in part on the data abstraction, a second model for the visualization type; and the second model is a machine learning model (Subramanya, [0022], [0050-0051]; Du, [0023], [0031], Fig 1-2: query a language model based on the query, wherein “the second model is a machine learning model” is directed to non-functional descriptive material as the second model is not being used).
With respect to claim 3, the combined teachings of Subramanya and Du further disclose wherein the second model is trained based on a training dataset comprising a set of data abstractions and corresponding data visualization language representations for the data abstraction (the term “is” indicates non-functional descriptive material describing what the model is and further describing what is included in the dataset, and neither description is functionally involved; Subramanya, [0022], [0033], [0050-0051], [0063]; Du, [0040], [0047], [0059-0060], Fig 2-4 & 7-8: the machine learning model uses a dataset to train, and the dataset includes different types of data such as the data abstraction and the representations as set forth by the metadata and rules).
With respect to claim 4, the combined teachings of Subramanya and Du further disclose
wherein the second model is a large language model (the term “is” indicates non-functional descriptive material describing what the model is, and the description is not functionally involved; Subramanya, [0022], [0033], [0050-0051], [0063]; Du, [0040], [0047], Fig 1-2: use a machine learning model to predict, and that model corresponds to a large language model since a large language model is merely a learned algorithm).
With respect to claim 5, the combined teachings of Subramanya and Du further disclose wherein the large language model is trained based on a training set comprising (i) a set of natural language queries or data abstractions, and (ii) a set of corresponding visualization definitions in a predefined data visualization language (the term “is” indicates non-functional descriptive material describing the trained model and a training set, and neither description is functionally involved; Subramanya, Du, [0035], [0040], [0047], [0059-0060], Fig 2-4 & 7-8: the machine learning model uses a dataset to train, and the dataset includes different types of data such as the data abstraction and the definitions with respect to the representations as set forth by the metadata and rules).
With respect to claim 6, the combined teachings of Subramanya and Du further disclose wherein the one or more processors are further configured to: determine the one or more selected data sources based at least in part on the intent (Subramanya, [0016], [0044]; Du, [0023-0024], [0033], Fig 2-4 & 7-8: determine a data source, e.g., an online retail database, based on the intent).
With respect to claim 7, the combined teachings of Subramanya and Du further disclose
wherein the one or more processors are further configured to: obtain the result data from the one or more selected data sources (Subramanya, [0061], [0067]; Du, [0023], [0025], [0031], [0043], Fig 1-2 & 8: obtain result data from the source(s)).
With respect to claim 8, the combined teachings of Subramanya and Du further disclose
wherein the result data is abstracted in connection with obtaining the predicted visualization definition is based at least in part on determining one or more statistical properties pertaining to the result data (a statistical property is merely a property, which is a type of data; Subramanya, [0061], [0067]; Du, [0024], [0040], [0054], Fig 3 & 5C-8: determine statistical properties, such as and not limited to ranking, distance, relevancy, graph, and/or table, pertaining to the result data statistically).
With respect to claim 9, the combined teachings of Subramanya and Du further disclose
wherein the one or more statistical properties comprise one or more of columns, data in the columns, outlier data, and a distribution of numeric values (the data and values listed are types of data, which are functionally involved, and “one or more” as claimed indicates only one is needed to read on the claim; Subramanya, [0061], [0067]; Du, [0024], [0040], [0054], Fig 3 & 5C-8: determine statistical properties of different types of data values, such as and not limited to the distribution values of prices and/or distances with respect to a graph or table, pertaining to the result data statistically).
With respect to claim 10, the combined teachings of Subramanya and Du further disclose
wherein determining the one or more statistical properties pertaining to the result data comprises: analyzing the result data, including applying one or more predefined rules to obtain the one or more statistical properties (Subramanya, [0061-0067]; Du, [0024], [0040], [0054], Fig 3 & 5C-8: apply different types of rules--e.g., rules on user info--to analyze the result data such that a personalized result is provided to represent the result data via rules of templates, format, and other statistical properties).
With respect to claim 11, the combined teachings of Subramanya and Du further disclose
wherein the predicted data abstraction is determined based at least in part on the one or more statistical properties (the term “is” indicates non-functional descriptive material describing the abstraction, and the description is not functionally involved; Subramanya, [0061-0067]; Du, [0024], [0040], [0054], Fig 3 & 5C-8: determine the abstraction based in part on the properties as set forth by the rules, metadata, templates, format, and other statistical properties).
With respect to claim 12, the combined teachings of Subramanya and Du further disclose
wherein generating the statistical representation as the visualization for the result data based at least in part on the predicted visualization definition comprises determining the visualization type based at least in part on the predicted visualization definition, and creating the visualization based at least in part on the visualization type (Subramanya, [0061-0067]; Du, [0024], [0034], [0046], Fig 3 & 5C-8: determine the type of visualization based on the definition as represented by the resulting visualization presented, and the creation of the visualization is based on the type, as set forth by the rules, metadata, templates, format, and other visualization properties).
With respect to claim 13, the combined teachings of Subramanya and Du further disclose
wherein the predicted visualization definition corresponds to a data visualization language representation for the natural language query in accordance with the predefined data visualization language (the limitation is directed to non-functional descriptive material describing the definition, and the description is not functionally involved; Subramanya, [0019-0020], [0061-0067]; Du, [0024], [0034], [0046], Fig 3 & 5C-8: the visualization definition corresponds to the language representation for natural language processing for output as set forth by the rules, metadata, templates, format, and other visualization properties).
With respect to claim 14, the combined teachings of Subramanya and Du further disclose
wherein the intermediate language comprises an indication of a first dimension of the data to be visualized, a second dimension of the data to be visualized, and the visualization type (Subramanya, [0061-0067]; Du, [0024], [0034], [0046], Fig 3 & 5C-8: the visualization language includes an indication of the dimensions and the type in accordance with the rules and metadata for the output with respect to the template, format, and other visualization properties; e.g., result data in a table includes dimensions).
With respect to claim 15, the combined teachings of Subramanya and Du further disclose wherein generating the statistical representation for the result data comprises: translating the predicted visualization definition to another high-level programming language to obtain a translated representation (Subramanya, [0061-0067]; Du, [0024], [0034], [0046], Fig 1-3 & 5C-8: use a high-level programming language for outputting a customized result as set forth by the configurations/rules); and
generating the visualization based on the translated representation (Subramanya, [0061-0067]; Du, [0024], [0034], [0046], Fig 3 & 5C-8: generate the visualization for the customized result based on the translated representation as set forth by the configurations/rules).
With respect to claim 17, the combined teachings of Subramanya and Du further disclose
wherein the second model is updated based at least in part on user feedback received in response to the visualization being provided to the user (the term “is” indicates non-functional descriptive material describing what the model is, and the description is not functionally involved; Subramanya, [0022], [0033], [0050-0051], [0059], [0063]; Du, [0040], [0047]: update the model using different types of information, including user feedback, such as via user reviews, to train the model).
With respect to claim 18, the combined teachings of Subramanya and Du further disclose
wherein the intent is determined based at least in part on querying a first model (the term “is” indicates non-functional descriptive material describing what the intent is, and the description is not functionally involved; Subramanya, [0021], [0049], [0054]; Du, [0040-0042]: determine an intent based on querying a model corresponding to the first model via query processing with a learned process).
With respect to claim 19, Subramanya discloses a method for visualizing data (abstract), comprising:
obtaining, by one or more processors, a natural language query ([0019], [0048]: a query is obtained from a user);
determining an intent for the natural language query ([0021], [0049], [0054]: determine an intent for the query);
obtaining result data associated with the intent ([0060-0061], [0065-0067], Fig 4: obtain result data that is associated with the intent via query processing);
obtaining a predicted visualization definition that is in an intermediate language interpretable by a large language model (the element of “a predicted visualization definition that is in an intermediate language interpretable by a large language model” is directed to describing what the predicted visualization definition is, and neither the intermediate language nor the large language model would change the functionality of the claimed steps or the outcome of the claim; [0060-0061], [0065], [0067-0068]: obtain the visualization definition as set forth by the metadata information and the mapping of the product and user info, which is in an intermediate language--which is merely a programming language/instruction--interpretable by a large language model, which is merely a learned model), wherein the predicted visualization definition comprises an indication of a visualization type that is determined based at least in part on the result data ([0020-0021], [0044], [0053], [0058], [0061], [0063-0066], Fig 4-5: the visualization definition indicates a display type corresponding to the visualization type according to which the result data is to be visualized, such as and not limited to a personalization type visualization with a location type and/or price type according to the result data); and
generating a statistical representation as a visualization based at least in part on the result data according to which the result data is to be visualized indicated by the predicted visualization definition (a statistical representation is merely a representation, and “statistical” corresponds to a name of the representation, which does not impact the functionality of the claimed steps or the outcome of the claimed method; [0062], [0066-0068]: generate a representation corresponding to the claimed statistical representation as a visualization via display with the result data based on the definition as set forth by the metadata, mapping, and personalization for the result data).
Subramanya does not explicitly disclose generating a statistical representation as a visualization based at least in part on the visualization type as claimed.
Nevertheless, one of ordinary skill in the art before the effective filing date of the claimed invention could reasonably interpret Subramanya as implicitly disclosing “generating a statistical representation as a visualization based at least in part on the result data and the visualization type according to which the result data is to be visualized indicated by the predicted visualization definition”, because the personalization visualization type indicates particular statistical representations, such as and not limited to confidence level, average customer review, and/or product price range, which involve statistical computations ([0058-0064]), and Subramanya generates a statistical representation as a visualization based on the result data as set forth by the predicted visualization definition and the visualization type, such as and not limited to the attributes and metadata of the result data, such that a particular type of visualization with specific product-related information in a specific format is displayed ([0065-0069], Fig 4).
Also, Du discloses generating a statistical representation as a visualization based at least in part on the result data and the visualization type according to which the result data is to be visualized indicated by the predicted visualization definition ([0024], [0034], [0046], Fig 3 & 5C-8: generate a statistical representation as a visualization for the result data based at least in part on the predicted visualization definition, such as and not limited to the visualization template that defines the representation type, and also based on the result data. Hence the visualization is based at least in part on the result data and the visualization type represented by one of the templates).
Since both Subramanya and Du are from the same field of endeavor, as both are directed to natural language query processing with adaptive data management, which is the same field of endeavor as the claimed invention, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to combine the teachings of Subramanya and Du by incorporating the request generation and visualization type techniques with respect to the statistical representation of Du into Subramanya for processing a natural language query with an intent and a visualization definition as claimed. The motivation to combine is to provide the user with a meaningful visualization of data of user interest efficiently (Subramanya, [0002]; Du, [0001]).
With respect to claim 20, Subramanya discloses a non-transitory computer readable medium embodying a computer program product for visualizing data, the computer program product comprising computer instructions that, when executed by one or more processors, cause the one or more processors to perform a method (abstract, Fig 1), the method comprising:
obtaining, by one or more processors, a natural language query ([0019], [0048]: a query is obtained from a user);
determining an intent for the natural language query ([0021], [0049], [0054]: determine an intent for the query);
obtaining result data associated with the intent ([0060-0061], [0065-0067], Fig 4: obtain result data that is associated with the intent via query processing);
obtaining a predicted visualization definition that is in an intermediate language interpretable by a large language model (the element of “a predicted visualization definition that is in an intermediate language interpretable by a large language model” is directed to describing what the predicted visualization definition is, and neither the intermediate language nor the large language model would change the functionality of the claimed steps or the outcome of the claim; [0060-0061], [0065], [0067-0068]: obtain the visualization definition as set forth by the metadata information and the mapping of the product and user info, which is in an intermediate language--which is merely a programming language/instruction--interpretable by a large language model, which is merely a learned model), wherein the predicted visualization definition comprises an indication of a visualization type that is determined based at least in part on the result data ([0020-0021], [0044], [0053], [0058], [0061], [0063-0066], Fig 4-5: the visualization definition indicates a display type corresponding to the visualization type according to which the result data is to be visualized, such as and not limited to a personalization type visualization with a location type and/or price type according to the result data); and
generating a statistical representation as a visualization based at least in part on the result data according to which the result data is to be visualized indicated by the predicted visualization definition, wherein the statistical representation is based at least in part on the result data (a statistical representation is merely a representation, and “statistical” corresponds to a name of the representation, which does not impact the functionality of the claimed steps or the outcome of the claimed method; [0062], [0066-0068]: generate a representation corresponding to the claimed statistical representation as a visualization via display with the result data based on the definition as set forth by the metadata, mapping, and personalization for the result data. Hence the statistical representation is based in part on the result data).
Subramanya does not explicitly disclose generating a statistical representation as a visualization based at least in part on the visualization type as claimed.
One of ordinary skill in the art before the effective filing date of the claimed invention could reasonably interpret that Subramanya implicitly discloses “generating a statistical representation as a visualization based at least in part on the result data and the visualization type according to which the result data is to be visualized indicated by the predicted visualization definition,” because the personalization visualization type indicates particular statistical representations, such as and not limited to a confidence level, an average customer review, and/or a product price range, which involve statistical computations ([0058-0064]), and Subramanya generates a statistical representation as a visualization based on the result data as set forth by the predicted visualization definition and the visualization type, such as and not limited to the attributes and metadata of the result data, such that a particular type of visualization with specific product-related information in a specific format is displayed ([0065-0069], Fig. 4).
Also, Du discloses generating a statistical representation as a visualization based at least in part on the result data and the visualization type according to which the result data is to be visualized indicated by the predicted visualization definition ([0024], [0034], [0046], Fig. 3 & 5C-8: generate a statistical representation as a visualization for the result data based at least in part on the predicted visualization definition, such as and not limited to the visualization template that defines the representation type, and also based on the result data. Hence the visualization is based at least in part on the result data and the visualization type represented by one of the templates).
Since both Subramanya and Du are from the same field of endeavor, as both are directed to natural language query processing with adaptive data management, which is the same field of endeavor as the claimed invention, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Subramanya and Du by incorporating the techniques of request generation and visualization typing with respect to statistical representation of Du into Subramanya for processing a natural language query with intent and a visualization definition as claimed. The motivation to combine is to provide the user with a meaningful visualization of data of user interest efficiently (Subramanya, [0002]; Du, [0001]).
With respect to claim 21, the combined teachings of Subramanya and Du further disclose
wherein obtaining the predicted visualization definition comprises:
abstracting the result data to obtain a data abstraction comprising one or more statistical properties of the result data (Subramanya, [0059-0060], [0064]; Du, [0039-0040], [0050]: obtain a data abstraction represented by the metadata or attributes of the result data, and the metadata or attributes correspond to one or more properties of the result data, whereas statistical properties are merely properties, which are metadata or attributes); and
querying a second model with the data abstraction (Subramanya, [0059-0060], [0063-0065], [0068]; Du, [0039-0040], [0044], [0050]: query a model or component representing a second model, such as a personalization or mapping model of Subramanya and/or a visualization template model or visualization engine of Du, with the metadata/attributes corresponding to the data abstraction to provide a personalized visualization of the result data), wherein the predicted visualization definition is in an intermediate language interpretable by a large language model (the limitation is directed to describing what the predicted visualization definition is, and neither the intermediate language nor the large language model would change the functionality of the claimed steps or the outcome of the claim; Subramanya, [0050-0053], [0063-0065], [0068]; Du, [0039-0040], [0044], [0050]: the predicted visualization definition represented by the visualization metadata, attributes, and mapping information is directed to an intermediate instruction/programming language corresponding to the intermediate language that can be interpreted by a large language model in a neural network system of Subramanya and Du).
With respect to claim 22, the combined teachings of Subramanya and Du further disclose wherein the second model is a large language model trained on a training set comprising (i) a set of data abstractions, and (ii) a set of corresponding visualization definitions in a predefined data visualization language (the limitation is directed to describing what the second model is, and neither the set of data abstractions nor the set of corresponding visualization definitions, which are merely types of data, would change the functionality of the claimed steps or the outcome of the claim; Subramanya, [0050], [0060], [0068]; Du, [0038-0040], [0063], [0073]: the second model, such as but not limited to the personalization, mapping, or visualization template model, corresponds to the large language model, since the large language model is merely a model with respect to a neural network, which is trained using different types of data sets including data abstraction data and definition data represented by metadata or attributes).
With respect to claim 23, the combined teachings of Subramanya and Du further disclose wherein generating the statistical representation as the visualization comprises:
translating the predicted visualization definition from the intermediate language to another high-level programming language to obtain a translated representation (Subramanya, [0061-0067]; Du, [0024], [0034], [0046], Fig. 1-3 & 5C-8: using a high-level programming language for outputting a customized result as set forth by the configurations/rules); and
generating the visualization based on the translated representation (Subramanya, [0061-0067]; Du, [0024], [0034], [0046], Fig. 3 & 5C-8: generate a visualization for the customized result based on the translated representation as set forth by the configurations/rules).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Subramanya in view of Du, and further in view of Yin et al. (US Patent No. 11,062,371, hereinafter Yin).
Yin is cited in the IDS filed on 11/7/2024.
With respect to claim 16, the combined teachings of Subramanya and Du do not explicitly disclose wherein the other high-level programming language comprises Python or Go.
However, Yin further discloses wherein the other high-level programming language comprises Python or Go (Col. 13, lines 28-30: Python may be used).
Since Subramanya, Du, and Yin are from the same field of endeavor, as all are directed to query processing with data learning management, which is the same field of endeavor as the claimed invention, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Subramanya, Du, and Yin by incorporating the high-level programming language Python of Yin into Subramanya and Du for processing a query with intent as claimed. The motivation to combine is to provide the user with items that are more likely to be of interest to the user for more favorable results in e-commerce (Subramanya, [0002]; Du, [0001]; Yin, Col. 1, lines 16-17).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michelle Owyang whose telephone number is (571)270-1254. The examiner can normally be reached Monday-Friday, 8am-6pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Rones can be reached at (571)272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHELLE N OWYANG/Primary Examiner, Art Unit 2168