DETAILED ACTION
In response to the communication filed on 17 September 2025, claims 1-15 are canceled and claims 16, 20, 23, 24 and 27 are amended. Claims 16-27 are pending.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 17 September 2025 has been entered.
Response to Arguments
Applicant’s arguments, see “Claim Rejections – 35 U.S.C. § 112”, filed 17 September 2025, have been carefully considered. Based on the claim amendments, the rejection has been withdrawn.
Applicant’s arguments, see “Claim Rejections – 35 U.S.C. § 103”, filed 17 September 2025, have been carefully considered but are not considered to be persuasive.
APPLICANT’S ARGUMENT: Applicant argues that the claimed invention is an open-ended evaluation platform whose purpose is to evaluate the performance of a separate and distinct "first AI model". In the claimed invention, the synthetic data is not the end product; it is a vetted tool used as "evaluation data" for a target model under test. Its final output is the performance evaluation of the first AI model.
EXAMINER’S RESPONSE: Examiner has carefully considered the argument but respectfully disagrees. The Goodsitt reference includes a plurality of data models and generative adversarial models, as taught in [0002]. Goodsitt, in [0080], also teaches how a synthetic dataset and a reference dataset are applied to perform analysis (i.e., evaluation) and generate histograms. However, Goodsitt does not explicitly teach evaluation data for the first artificial intelligence model. As a result, the Tinaz reference has been incorporated, which specifically teaches a first model 104 and a second model 116, each including generative AI machine learning models ([0048]). Tinaz further teaches in [0076] that evaluation is performed on outputs from the first model 104. Therefore, as per the mapping, the combination of Goodsitt and Tinaz teaches the above argued limitation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of evaluating models, as disclosed and taught by Tinaz, in the system taught by Goodsitt to yield the predictable result of improving AI models over time (see Tinaz, [0133]). As a result, the above argument is not persuasive.
APPLICANT’S ARGUMENT: Applicant further argues that the subject of evaluation is different. In Goodsitt, the "parent model" is the generative model itself, and the "performance criteria" are metrics used to assess how well the generative model is being trained. In contrast, the claimed "first AI model" is a separate target model to be evaluated, which is imported from the model source. The claimed invention uses synthetic data to evaluate a third-party model (e.g., a first AI model imported from the model source), not to evaluate the generative model itself.
EXAMINER’S RESPONSE: Examiner has carefully considered the argument but respectfully disagrees. Goodsitt teaches in [0141]-[0143] and [claim 15] that synthetic results are routed to one of the training models, such as training model 1, 2 or 3, which are different, independent training models. Goodsitt also teaches in [0111] and [0134] that performance criteria are evaluated for synthetic data models and for a plurality of training models. Goodsitt does not explicitly teach the first artificial intelligence model; as a result, the Tinaz reference has been incorporated, which specifically teaches a first model 104 and a second model 116, each including generative AI machine learning models ([0048]). Therefore, as per the mapping, the combination of Goodsitt and Tinaz teaches the above argued limitation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of evaluating models, as disclosed and taught by Tinaz, in the system taught by Goodsitt to yield the predictable result of improving AI models over time (see Tinaz, [0133]). As a result, the above argument is not persuasive.
APPLICANT’S ARGUMENT: Applicant also argues that the purpose of evaluation is different. Goodsitt makes it clear in [0095] that the purpose of determining performance criteria is to "determine whether to terminate training of one or more parent models." This is an internal feedback loop for optimizing the training process of the generative model. The purpose of the claimed invention, however, is not to manage a training process, but to evaluate the performance of a separate and external AI model after the synthetic data passes a quality gate. Therefore, the process described in Goodsitt [0094] is merely a step in the internal training and optimization of the generative model. It does not teach or suggest the core concept of the claimed invention: using qualified synthetic data to evaluate the performance of a distinct and external AI model.
EXAMINER’S RESPONSE: Examiner has carefully considered the argument but respectfully disagrees. The claim language recites obtaining first resulting data indicating a quality of the synthetic data by inputting the synthetic data and a reference dataset into the second artificial intelligence model. Goodsitt teaches in [0078], [0032] and [0110] determining a data quality score based on inputting the synthetic data and a reference dataset into a new synthetic data model. Goodsitt does not explicitly teach the second artificial intelligence model; as a result, the Tinaz reference has been incorporated, which specifically teaches a first model 104 and a second model 116, each including generative AI machine learning models ([0048]). Therefore, as per the mapping, the combination of Goodsitt and Tinaz teaches the above claim limitation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of evaluating models, as disclosed and taught by Tinaz, in the system taught by Goodsitt to yield the predictable result of improving AI models over time (see Tinaz, [0133]). The claim language further recites inputting the synthetic data into the first artificial intelligence model and obtaining second resulting data representing a performance of the first artificial intelligence model. The claim language is silent as to whether the same synthetic data that has passed the quality gate is input into the first artificial intelligence model (as argued above). There are two steps, but there does not appear to be a required chronological order in which these results are generated. The claim language does not appear to clarify that the performance evaluation happens after the synthetic data passes a quality gate, since the claim language merely states inputting the synthetic data.
According to MPEP 2145(VI), “Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims.” Therefore, the Goodsitt reference has been cited, which teaches in [0141]-[0143] and [claim 15] that synthetic results are routed to one of the training models, such as training model 1, 2 or 3. Goodsitt further teaches in [0111] and [0134] that performance criteria are evaluated for synthetic data models and for a plurality of training models. Goodsitt does not explicitly teach the first artificial intelligence model; as a result, the Tinaz reference has been incorporated, which specifically teaches a first model 104 and a second model 116, each including generative AI machine learning models ([0048]). Therefore, as per the mapping, the combination of Goodsitt and Tinaz teaches the above argued limitation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of evaluating models, as disclosed and taught by Tinaz, in the system taught by Goodsitt to yield the predictable result of improving AI models over time (see Tinaz, [0133]). As a result, the above argument is not persuasive.
APPLICANT’S ARGUMENT: Applicant further argues that combining Goodsitt's "data management system" and "parent AI model" to arrive at the claimed "second AI model" constitutes impermissible hindsight. The components in Goodsitt are configured for the purpose of optimizing a data generative model. A person of ordinary skill in the art, working within the context of Goodsitt, would have no motivation to disassemble these components and reconfigure them to create a system for an entirely different purpose (e.g., evaluating a distinct AI model). Such a modification would change the fundamental architecture and data flow of the system in a way not suggested by Goodsitt.
EXAMINER’S RESPONSE: Examiner has carefully considered the argument but respectfully disagrees. Based on the claim amendments and upon further consideration, the Tinaz reference has been included to teach the limitations of the second AI model. As a result, the above argument is not persuasive.
APPLICANT’S ARGUMENT: Applicant also argues that conditional two steps, where a first result from a quality model serves as a strict prerequisite for generating a second result from a target model, provides a technical advantage of ensuring reliable and unbiased model evaluation. This two-step structured process is entirely absent from Goodsitt, which describes a more general and iterative process of tuning a generative model, rather than the two-step process (e.g., quality gate and performance evaluation) involving two distinct types of "resulting data."
EXAMINER’S RESPONSE: Examiner has carefully considered the argument but respectfully disagrees. The claim language recites obtaining first resulting data indicating a quality of the synthetic data by inputting the synthetic data and a reference dataset into the second artificial intelligence model. Goodsitt teaches in [0078], [0032] and [0110] determining a data quality score based on inputting the synthetic data and a reference dataset into a new synthetic data model. Goodsitt does not explicitly teach the second artificial intelligence model; as a result, the Tinaz reference has been incorporated, which specifically teaches a first model 104 and a second model 116, each including generative AI machine learning models ([0048]). Therefore, as per the mapping, the combination of Goodsitt and Tinaz teaches the above claim limitation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of evaluating models, as disclosed and taught by Tinaz, in the system taught by Goodsitt to yield the predictable result of improving AI models over time (see Tinaz, [0133]). The claim language further recites inputting the synthetic data into the first artificial intelligence model and obtaining second resulting data representing a performance of the first artificial intelligence model. The claim language does not appear to state that a first result from a quality model serves as a strict prerequisite for generating a second result (as argued above). There are two steps, but there does not appear to be a required chronological order in which these results are generated. The claim language does not appear to clarify that the performance evaluation happens after the synthetic data passes a quality gate, since the claim language merely states inputting the synthetic data.
According to MPEP 2145(VI), “Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims.” Therefore, the Goodsitt reference has been cited, which teaches in [0141]-[0143] and [claim 15] that synthetic results are routed to one of the training models, such as training model 1, 2 or 3. Goodsitt further teaches in [0111] and [0134] that performance criteria are evaluated for synthetic data models and for a plurality of training models. Goodsitt does not explicitly teach the first artificial intelligence model; as a result, the Tinaz reference has been incorporated, which specifically teaches a first model 104 and a second model 116, each including generative AI machine learning models ([0048]). Therefore, as per the mapping, the combination of Goodsitt and Tinaz teaches the above argued limitation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of evaluating models, as disclosed and taught by Tinaz, in the system taught by Goodsitt to yield the predictable result of improving AI models over time (see Tinaz, [0133]). As a result, the above argument is not persuasive.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55 for KR10-2023-0080079, KR10-2024-0026906, KR10-2024-0026907, KR10-2024-0026908 and KR10-2024-0026909.
Claim Interpretation
Claim 22 recites “using a pre-processing engine”. This claim limitation appears to recite an intended use, i.e., what the pre-processing engine is used for. Examiner suggests amending the claim to recite the functionality performed by the claimed method instead of what the claim elements are used for.
Claim 27 recites “using the second artificial intelligence model” twice. These claim limitations appear to recite an intended use, i.e., what the second artificial intelligence model is used for. Examiner suggests amending the claim to recite the functionality performed by the claimed method instead of what the claim elements are used for.
Claim Objections
Claim 19 is objected to because of the following informalities:
The recitation in claim 19 of “for searching models in the model source” should read --for searching models in the pre-stored model source-- as this appears to be a typographical error that may cause an antecedent basis issue.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 16-27 are rejected under 35 U.S.C. 103 as being unpatentable over Goodsitt et al. (US 2020/0012892 A1, hereinafter “Goodsitt”) in view of Tinaz et al. (US 2024/0385576 A1, hereinafter “Tinaz”) further in view of Mishchenko et al. (US 11,355,102 B1, hereinafter “Mishchenko”).
Regarding claim 16, Goodsitt teaches
A method for processing data performed by at least one processor included in a computing device, the method comprising: (see Goodsitt, [0006] “a system for returning synthetic database query results… one or more processors configured to execute the instructions to perform operations”).
receiving a first input query requesting data generation; (see Goodsitt, [0135] “a first query input may be received at a first user interface”).
… based on the first input query (see Goodsitt, [0135] “a first query input may be received at a first user interface”; [0031] “the disclosed systems can perform generation of synthetic query results based upon a query input from a user at an interface”) and inputting parameters… (see Goodsitt, [0053] “can be configured to receive at least some training parameters from model optimizer”; [0042] “interface 113 can provide a data model generation request to model optimizer 107. The data model generation request can include data and/or instructions describing the type of data model to be generated”) to a generative model; (see Goodsitt, [0053] “can be configured to receive at least some training parameters from model optimizer”; [0042] “interface 113 can provide a data model generation request to model optimizer 107. The data model generation request can include data and/or instructions describing the type of data model to be generated”; [0055] “the data model can be a generative adversarial network trained”).
obtaining synthetic data from at least one layer of the generative model; (see Goodsitt, [0031] “can include a secure environment for training a model of sensitive data, and a non-secure environment for generating synthetic data with similar structure and statistics as the original sensitive data and/or original database data; [0046] “can be configured to provide instructions for improving the quality of the synthetic data model. If a user requires synthetic data reflecting less correlation or similarity with the original data, the use can change the models' parameters to make them perform worse (e.g., by decreasing number of layers in GAN models, or reducing the number of training iterations). If the users want the synthetic data to have better quality, they can change the models' parameters to make them perform better (e.g., by increasing number of layers in GAN models, or increasing the number of training iterations)”).
…. a pre-stored model source (see Goodsitt, [0037] “Model storage 109 can include one or more databases configured to store data models and descriptive information for the data models. Model storage 109 can be configured to provide information regarding available data models to a user or another system”) based on the first input query; (see Goodsitt, [0135] “a first query input may be received at a first user interface”; [0031] “the disclosed systems can perform generation of synthetic query results based upon a query input from a user at an interface”).
importing from the pre-stored model source… (see Goodsitt, [0036]-[0037] “Model optimizer 107 can include one or more computing systems configured to manage training of data models for system 100. Model optimizer 107 can be configured to generate models for export to computing resources 101… Model optimizer 107 can be configured to provide trained models and descriptive information concerning the trained models to model storage 109… Model storage 109 can include one or more databases configured to store data models and descriptive information for the data models. Model storage 109 can be configured to provide information regarding available data models to a user or another system”; [0002] “The disclosed embodiments concern a platform for management of artificial intelligence systems… These data models can be used to generate synthetic data for testing or training artificial intelligence systems… concern returning results based on a query input, and improvements to generative adversarial network models and adversarially learned inference models.” – there are plurality of models) a first artificial intelligence model and a second artificial intelligence model; (see Goodsitt, [0036]-[0037] “Model optimizer 107 can include one or more computing systems configured to manage training of data models for system 100. Model optimizer 107 can be configured to generate models for export to computing resources 101… Model optimizer 107 can be configured to provide trained models and descriptive information concerning the trained models to model storage 109… Model storage 109 can include one or more databases configured to store data models and descriptive information for the data models. 
Model storage 109 can be configured to provide information regarding available data models to a user or another system”; [0002] “The disclosed embodiments concern a platform for management of artificial intelligence systems… These data models can be used to generate synthetic data for testing or training artificial intelligence systems… concern returning results based on a query input, and improvements to generative adversarial network models and adversarially learned inference models.” – there are plurality of models).
obtaining first resulting data indicating a quality of the synthetic data (see Goodsitt, [0078] “system 100 (e.g., model optimizer 107, computational resources 101, or the like) can determine a similarity metric value using the normalized reference dataset and the synthetic training dataset… the similarity metric value can include at least one of… data quality score (e.g., a score dependent on at least one of a number of duplicate elements in each of the synthetic dataset and normalized reference dataset, a prevalence of the most common value in each of the synthetic dataset and normalized reference dataset, a maximum difference of rare values in each of the synthetic dataset and normalized reference dataset, the differences in schema between the synthetic dataset and normalized reference dataset, or the like)”; [0092] “the similarity criterion can concern at least one of… a data quality score for the synthetic dataset”; [0145] “user interface may finally display the synthetic results generated in response to the query input from a user”) by inputting the synthetic data and (see Goodsitt, [0032] “Environment 100 can be configured to support generation and storage of synthetic data,… optimized choice of parameters for machine learning, and imposition of rules on synthetic data and data models… Environment 100 can include… a model optimizer 107, a model storage 109”; [0048] “can include the steps of retrieving a synthetic dataset model from model storage 109… providing synthetic data to computing resources 101… providing a trained data model to model optimizer 107… can allow system 100 to generate a model using synthetic data”) a reference dataset into a new synthetic data model (see Goodsitt, [0110] “Computing resources 1204 can be configured to train the new synthetic data model using reference data stream data… system 1200 (e.g., computing resources 1204 or model optimizer 1203) can be configured to include reference data stream data into the training data as it is received from streaming data source 1201… can be configured to use the stored reference data stream data for training the new synthetic data model”).
designating the synthetic data as evaluation data… responsive to determining, from the first resulting data, that an addition of the synthetic data to the reference dataset improves homogeneity and (see Goodsitt, [0080] “the similarity metric can depend on a univariate value distribution of an element of the synthetic dataset and a univariate value distribution of an element of the normalized reference dataset… for corresponding elements of the synthetic dataset and the normalized reference dataset, system 100 can be configured to generate histograms having the same bins… The synthetic dataset can include corresponding columns of data. The normalized reference dataset and the synthetic dataset can include the same number of rows”; [0145] “user interface may finally display the synthetic results generated in response to the query input from a user”) reduces bias of the reference dataset; and (see Goodsitt, [0093] “the generative adversarial network can be configured to penalize generation of synthetic data that is too similar to the normalized reference dataset, up to a certain threshold… System 100 can then update the loss function such that the likelihood of generating synthetic data like the current synthetic data is reduced. In this manner, system 100 can train the generative adversarial network using a loss function that penalizes generation of data differing from the reference dataset by less than the predetermined amount”).
inputting the synthetic data into a selected training model… (see Goodsitt, [claim 15] “routing the query input and the synthetic result to a selected one of the training models”; [0141]-[0143] “The expected database result may include a synthetic database return result… At steps 1613, 1615, and 1617, the user query and real database result may be routed to each of training model 1, training model 2, or training model N”) and obtaining second resulting data representing a performance of models (see Goodsitt, [0111] “Model optimizer 1203 can be configured to evaluate performance criteria of a newly created synthetic data model”; [0134] “model optimizer 107 may train a plurality of models based on an expected input and expected output… The expected input may include predetermined input data, and the expected output may include predetermined output data… Model optimizer 107 may evaluate performance criteria of the plurality of training models”).
map the reference dataset and the synthetic data (see Goodsitt, [0078] “the similarity metric value can include at least one of a statistical correlation score (e.g., a score dependent on the covariances or univariate distributions of the synthetic data and the normalized reference dataset); [0068] “for training a generative adversarial network using a normalized reference dataset… the generative adversarial network can be used by system 100 (e.g., by dataset generator 103) to generate synthetic data… to learn a mapping from a sample space (e.g., a random number or vector) to a data space (e.g. the values of the sensitive data)”).
obtain the first resulting data by evaluating the quality of the synthetic data based on distributions of the synthetic data and of the reference dataset (see Goodsitt, [0111] “can be configured to evaluate performance criteria of a newly created synthetic data model… the performance criteria can include… data quality score, as described herein)… model optimizer 1203 can be configured to compare the covariances or univariate distributions of a synthetic dataset generated by the new synthetic data model and a reference data stream dataset”; [0145] “user interface may finally display the synthetic results generated in response to the query input from a user”).
Goodsitt does not explicitly teach obtaining prompt data; inputting the prompt data to a generative model; generating a structured query processed in a predetermined manner suitable for a pre-stored model source; based on the structured query; the second artificial intelligence model; evaluation data for the first artificial intelligence model; the first artificial intelligence model; wherein the second artificial intelligence model is configured to: map the reference dataset and the synthetic data onto an embedding space defined by a specific dimension; and distributions of the synthetic data and of the reference dataset within the embedding space.
However, Tinaz discloses AI models and teaches
obtaining prompt data (see Tinaz, [0086] “The prompt management system 228 can include a prompt generator 236. The prompt generator 236 can generate, from data of the data repository 204, one or more training data elements that include a prompt and a completion corresponding to the prompt… the prompt generator 236 automatically determines prompts and completions from data of the data repository 204, such as by using any of various natural language processing algorithms to detect prompts and completions from data”) and providing the prompt data to the machine learning model (see Tinaz, [0096] “Responsive to receiving the one or more prompts, the application session 308 can provide the one or more prompts as input to the machine learning model 268”).
generating a structured query processed in a predetermined manner suitable for generative AI model (see Tinaz, [0135] “the input received in step 908 accurately and sufficiently defines the user's query in a manner suitable for used by the at least one generative AI models”).
… based on the structured query, (see Tinaz, [0135] “the input received in step 908 accurately and sufficiently defines the user's query in a manner suitable for used by the at least one generative AI models”).
… the second artificial intelligence model; (see Tinaz, [0048] “the first model 104 and the second model 116 each include generative AI machine learning models, such as LLMs (e.g., GPT-based LLMs) and/or diffusion models”; [0060] “Responsive to selecting the one or more parameters to maintain, the model updater 108 can apply, as input to the second model 116”).
evaluation data for the first artificial intelligence model, (see Tinaz, [0076] “that can evaluate and/or modify outputs from the first model 104”).
… the first artificial intelligence model generates synthetic data (see Tinaz, [0041] “the first model 104 can predict or generate new data (e.g., artificial data; synthetic data; data not explicitly represented in data used for configuring the first model 104)”) the first artificial intelligence model, (see Tinaz, [0048] “the first model 104 and the second model 116 each include generative AI machine learning models, such as LLMs (e.g., GPT-based LLMs) and/or diffusion models”).
wherein the second artificial intelligence model is configured to:… (see Tinaz, [0048] “The second model 116 can be configured using processes”; [0048] “the first model 104 and the second model 116 each include generative AI machine learning models, such as LLMs (e.g., GPT-based LLMs) and/or diffusion models”).
convert the input sequence into a modified input sequence onto an embedding space (see Tinaz, [0043] “the GPT model can convert the input sequence into a modified input sequence, such as by applying an embedding matrix to the token tokens of the input sequence (e.g., using a neural network embedding function”).
convert the input sequence into a modified input sequence within the embedding space (see Tinaz, [0043] “the GPT model can convert the input sequence into a modified input sequence, such as by applying an embedding matrix to the token tokens of the input sequence (e.g., using a neural network embedding function”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of prompt data, structured query, first artificial intelligence model, second artificial intelligence model, embedding space defined by a specific dimension, evaluating models, pre-processing engine, multi-modal engine, output to client device, satisfying predetermined criteria and not satisfying predetermined criteria, as being disclosed and taught by Tinaz, in the system taught by Goodsitt to yield the predictable results of improving AI models over time (see Tinaz, [0133] “Any feedback or adjustments provided with respect to the central utility plant model can be used for further training and fine-tuning of the at least one AI model used in steps 904 and 906, for example with the goal of driving model training to reduce any need for manual user adjustment of central utility plant models output by the at least one AI models. Various teachings above can be used to implement such model improvements over time”).
The proposed combination of Goodsitt and Tinaz does not explicitly map the reference dataset and the synthetic data onto an embedding space defined by a specific dimension.
However, Mishchenko discloses mapping in an N-dimensional embedding space and teaches
embedding space defined by a specific dimension; (see Mishchenko, [col 5 lines 3-8] “processing first audio data representing a first word using the neural-network model may result in a first set of values represented by a vector—a “feature vector”—for each of the N dimensions of the embedding space. This feature vector may thus correspond to a first point in the embedding space”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of dimension for an embedding space and extracting keyword as being disclosed and taught by Mishchenko, in the system taught by the proposed combination of Goodsitt and Tinaz to yield the predictable results of accurately training a model (see Mishchenko, [col 3 lines 17-21] “Training the neural-network model may then require significant computing resources to process the training data using the neural-network model and configure the neural-network model to accurately indicate when the wakeword is represented in the training data… A system may thus train a model to detect a wakeword and then distribute that model to device(s) 110 so the device 110 may, at runtime, detect the wakeword. Any updates to the model then typically happen remotely with updates pushed to local devices. Such a system may not provide sufficient flexibility if the system is to be configured to detect a new wakeword”).
Regarding claim 27, Goodsitt teaches
A computing device comprising: a memory; and at least one processor electronically connected to the memory, wherein the at least one processor is configured to: (see Goodsitt, [0033] “Computing resources 101 can include one or more computing devices configurable to train data models”; [Abstract] “The system may include a memory unit for storing instructions, and a processor configured to execute the instructions to perform operations comprising”).
receive a first input query requesting data generation; (see Goodsitt, [0135] “a first query input may be received at a first user interface”).
… based on the first input query (see Goodsitt, [0135] “a first query input may be received at a first user interface”; [0031] “the disclosed systems can perform generation of synthetic query results based upon a query input from a user at an interface”) and input parameters… (see Goodsitt, [0053] “can be configured to receive at least some training parameters from model optimizer”; [0042] “interface 113 can provide a data model generation request to model optimizer 107. The data model generation request can include data and/or instructions describing the type of data model to be generated”) to a generative model; (see Goodsitt, [0053] “can be configured to receive at least some training parameters from model optimizer”; [0042] “interface 113 can provide a data model generation request to model optimizer 107. The data model generation request can include data and/or instructions describing the type of data model to be generated”; [0055] “the data model can be a generative adversarial network trained”).
obtain synthetic data from at least one layer of the generative model; (see Goodsitt, [0031] “can include a secure environment for training a model of sensitive data, and a non-secure environment for generating synthetic data with similar structure and statistics as the original sensitive data and/or original database data”; [0046] “can be configured to provide instructions for improving the quality of the synthetic data model. If a user requires synthetic data reflecting less correlation or similarity with the original data, the use can change the models' parameters to make them perform worse (e.g., by decreasing number of layers in GAN models, or reducing the number of training iterations). If the users want the synthetic data to have better quality, they can change the models' parameters to make them perform better (e.g., by increasing number of layers in GAN models, or increasing the number of training iterations)”).
… a pre-stored model source (see Goodsitt, [0037] “Model storage 109 can include one or more databases configured to store data models and descriptive information for the data models. Model storage 109 can be configured to provide information regarding available data models to a user or another system”) based on the first input query; (see Goodsitt, [0135] “a first query input may be received at a first user interface”; [0031] “the disclosed systems can perform generation of synthetic query results based upon a query input from a user at an interface”).
import, from the pre-stored model source… (see Goodsitt, [0036]-[0037] “Model optimizer 107 can include one or more computing systems configured to manage training of data models for system 100. Model optimizer 107 can be configured to generate models for export to computing resources 101… Model optimizer 107 can be configured to provide trained models and descriptive information concerning the trained models to model storage 109… Model storage 109 can include one or more databases configured to store data models and descriptive information for the data models. Model storage 109 can be configured to provide information regarding available data models to a user or another system”; [0002] “The disclosed embodiments concern a platform for management of artificial intelligence systems… These data models can be used to generate synthetic data for testing or training artificial intelligence systems… concern returning results based on a query input, and improvements to generative adversarial network models and adversarially learned inference models.” – there are plurality of models) a first artificial intelligence model and a second artificial intelligence model; (see Goodsitt, [0036]-[0037] “Model optimizer 107 can include one or more computing systems configured to manage training of data models for system 100. Model optimizer 107 can be configured to generate models for export to computing resources 101… Model optimizer 107 can be configured to provide trained models and descriptive information concerning the trained models to model storage 109… Model storage 109 can include one or more databases configured to store data models and descriptive information for the data models. Model storage 109 can be configured to provide information regarding available data models to a user or another system”; [0002] “The disclosed embodiments concern a platform for management of artificial intelligence systems… These data models can be used to generate synthetic data for testing or training artificial intelligence systems… concern returning results based on a query input, and improvements to generative adversarial network models and adversarially learned inference models.” – there are plurality of models).
obtain first resulting data indicating a quality of the synthetic data (see Goodsitt, [0078] “system 100 (e.g., model optimizer 107, computational resources 101, or the like) can determine a similarity metric value using the normalized reference dataset and the synthetic training dataset… the similarity metric value can include at least one of… data quality score (e.g., a score dependent on at least one of a number of duplicate elements in each of the synthetic dataset and normalized reference dataset, a prevalence of the most common value in each of the synthetic dataset and normalized reference dataset, a maximum difference of rare values in each of the synthetic dataset and normalized reference dataset, the differences in schema between the synthetic dataset and normalized reference dataset, or the like)”; [0092] “the similarity criterion can concern at least one of… a data quality score for the synthetic dataset”; [0145] “user interface may finally display the synthetic results generated in response to the query input from a user”) by inputting the synthetic data and (see Goodsitt, [0032] “Environment 100 can be configured to support generation and storage of synthetic data,… optimized choice of parameters for machine learning, and imposition of rules on synthetic data and data models… Environment 100 can include… a model optimizer 107, a model storage 109”; [0048] “can include the steps of retrieving a synthetic dataset model from model storage 109… providing synthetic data to computing resources 101… providing a trained data model to model optimizer 107… can allow system 100 to generate a model using synthetic data”) a reference dataset into new synthetic data model (see Goodsitt, [0110] “Computing resources 1204 can be configured to train the new synthetic data model using reference data stream data… system 1200 (e.g., computing resources 1204 or model optimizer 1203) can be configured to include reference data stream data into the training data as it is received from streaming data source 1201… can be configured to use the stored reference data stream data for training the new synthetic data model”).
map,… the reference dataset and the synthetic data (see Goodsitt, [0078] “the similarity metric value can include at least one of a statistical correlation score (e.g., a score dependent on the covariances or univariate distributions of the synthetic data and the normalized reference dataset)”; [0068] “for training a generative adversarial network using a normalized reference dataset… the generative adversarial network can be used by system 100 (e.g., by dataset generator 103) to generate synthetic data… to learn a mapping from a sample space (e.g., a random number or vector) to a data space (e.g. the values of the sensitive data)”).
obtain,… the first resulting data by evaluating the quality of the synthetic data based on distributions of the synthetic data and of the reference dataset (see Goodsitt, [0111] “can be configured to evaluate performance criteria of a newly created synthetic data model… the performance criteria can include… data quality score, as described herein)… model optimizer 1203 can be configured to compare the covariances or univariate distributions of a synthetic dataset generated by the new synthetic data model and a reference data stream dataset”; [0145] “user interface may finally display the synthetic results generated in response to the query input from a user”).
designate the synthetic data as evaluation data… responsive to determining, from the first resulting data, that an addition of the synthetic data to the reference dataset improves homogeneity and (see Goodsitt, [0080] “the similarity metric can depend on a univariate value distribution of an element of the synthetic dataset and a univariate value distribution of an element of the normalized reference dataset… for corresponding elements of the synthetic dataset and the normalized reference dataset, system 100 can be configured to generate histograms having the same bins… The synthetic dataset can include corresponding columns of data. The normalized reference dataset and the synthetic dataset can include the same number of rows”; [0145] “user interface may finally display the synthetic results generated in response to the query input from a user”) reduces bias of the reference dataset and (see Goodsitt, [0093] “the generative adversarial network can be configured to penalize generation of synthetic data that is too similar to the normalized reference dataset, up to a certain threshold… System 100 can then update the loss function such that the likelihood of generating synthetic data like the current synthetic data is reduced. In this manner, system 100 can train the generative adversarial network using a loss function that penalizes generation of data differing from the reference dataset by less than the predetermined amount”).
input the synthetic data into a selected training model… (see Goodsitt, [claim 15] “routing the query input and the synthetic result to a selected one of the training models”; [0141]-[0143] “The expected database result may include a synthetic database return result… At steps 1613, 1615, and 1617, the user query and real database result may be routed to each of training model 1, training model 2, or training model N”) and obtain second resulting data representing a performance of models (see Goodsitt, [0111] “Model optimizer 1203 can be configured to evaluate performance criteria of a newly created synthetic data model”; [0134] “model optimizer 107 may train a plurality of models based on an expected input and expected output… The expected input may include predetermined input data, and the expected output may include predetermined output data… Model optimizer 107 may evaluate performance criteria of the plurality of training models”).
Goodsitt does not explicitly teach obtain prompt data; and input the prompt data to a generative model; generate a structured query processed in a predetermined manner suitable for a pre-stored model source; based on the structured query, the second artificial intelligence model; map, using the second artificial intelligence model, the reference dataset and the synthetic data onto an embedding space defined by a specific dimension; obtain, using the second artificial intelligence model, evaluating the quality of the synthetic data based on distributions of the synthetic data and of the reference dataset within the embedding space; evaluation data for the first artificial intelligence model, the first artificial intelligence model; the first artificial intelligence model.
However, Tinaz discloses AI models and teaches
obtain prompt data (see Tinaz, [0086] “The prompt management system 228 can include a prompt generator 236. The prompt generator 236 can generate, from data of the data repository 204, one or more training data elements that include a prompt and a completion corresponding to the prompt… the prompt generator 236 automatically determines prompts and completions from data of the data repository 204, such as by using any of various natural language processing algorithms to detect prompts and completions from data”); provide the prompt data to the machine learning model (see Tinaz, [0096] “Responsive to receiving the one or more prompts, the application session 308 can provide the one or more prompts as input to the machine learning model 268”).
generate a structured query processed in a predetermined manner suitable for a generative AI model (see Tinaz, [0135] “the input received in step 908 accurately and sufficiently defines the user's query in a manner suitable for used by the at least one generative AI models”).
… based on the structured query, (see Tinaz, [0135] “the input received in step 908 accurately and sufficiently defines the user's query in a manner suitable for used by the at least one generative AI models”).
the second artificial intelligence model; (see Tinaz, [0048] “the first model 104 and the second model 116 each include generative AI machine learning models, such as LLMs (e.g., GPT-based LLMs) and/or diffusion models”; [0060] “Responsive to selecting the one or more parameters to maintain, the model updater 108 can apply, as input to the second model 116”).
using the second artificial intelligence model, (see Tinaz, [0048] “the first model 104 and the second model 116 each include generative AI machine learning models, such as LLMs (e.g., GPT-based LLMs) and/or diffusion models”) convert the input sequence into a modified input sequence onto an embedding space (see Tinaz, [0043] “the GPT model can convert the input sequence into a modified input sequence, such as by applying an embedding matrix to the token tokens of the input sequence (e.g., using a neural network embedding function”).
using the second artificial intelligence model,… (see Tinaz, [0048] “the first model 104 and the second model 116 each include generative AI machine learning models, such as LLMs (e.g., GPT-based LLMs) and/or diffusion models”) convert the input sequence into a modified input sequence within the embedding space; (see Tinaz, [0043] “the GPT model can convert the input sequence into a modified input sequence, such as by applying an embedding matrix to the token tokens of the input sequence (e.g., using a neural network embedding function”).
evaluation data for the first artificial intelligence model, (see Tinaz, [0076] “that can evaluate and/or modify outputs from the first model 104”).
the first artificial intelligence model generates synthetic data (see Tinaz, [0041] “the first model 104 can predict or generate new data (e.g., artificial data; synthetic data; data not explicitly represented in data used for configuring the first model 104)”) the first artificial intelligence model, (see Tinaz, [0048] “the first model 104 and the second model 116 each include generative AI machine learning models, such as LLMs (e.g., GPT-based LLMs) and/or diffusion models”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of prompt data, structured query, first artificial intelligence model, second artificial intelligence model and embedding space defined by a specific dimension as being disclosed and taught by Tinaz, in the system taught by Goodsitt to yield the predictable results of improving AI models over time (see Tinaz, [0133] “Any feedback or adjustments provided with respect to the central utility plant model can be used for further training and fine-tuning of the at least one AI model used in steps 904 and 906, for example with the goal of driving model training to reduce any need for manual user adjustment of central utility plant models output by the at least one AI models. Various teachings above can be used to implement such model improvements over time”).
The proposed combination of Goodsitt and Tinaz does not explicitly map the reference dataset and the synthetic data onto an embedding space defined by a specific dimension.
However, Mishchenko discloses mapping in an N-dimensional embedding space and teaches
embedding space defined by a specific dimension; (see Mishchenko, [col 5 lines 3-8] “processing first audio data representing a first word using the neural-network model may result in a first set of values represented by a vector—a “feature vector”—for each of the N dimensions of the embedding space. This feature vector may thus correspond to a first point in the embedding space”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of dimension for an embedding space and extracting keyword as being disclosed and taught by Mishchenko, in the system taught by the proposed combination of Goodsitt and Tinaz to yield the predictable results of accurately training a model (see Mishchenko, [col 3 lines 17-21] “Training the neural-network model may then require significant computing resources to process the training data using the neural-network model and configure the neural-network model to accurately indicate when the wakeword is represented in the training data… A system may thus train a model to detect a wakeword and then distribute that model to device(s) 110 so the device 110 may, at runtime, detect the wakeword. Any updates to the model then typically happen remotely with updates pushed to local devices. Such a system may not provide sufficient flexibility if the system is to be configured to detect a new wakeword”).
Regarding claim 17, the proposed combination of Goodsitt, Tinaz and Mishchenko teaches
wherein the first input query (see Goodsitt, [0135] “a first query input may be received at a first user interface”) comprises a natural language query (see Goodsitt, [0132] “A processor may determine, based on natural language processing, a type of the query input”) requesting generation of data (see Goodsitt, [0130] “in response to the data generation request”; [0133] “may return, based on an output data format, a first result based on the data in the database in response to the query input. Synthetic data may be generated as raw text as a field to generate the first result”) for evaluating the first artificial intelligence model (see Tinaz, [0076] “that can evaluate and/or modify outputs from the first model 104”). The motivation for the proposed combination is maintained.
Regarding claim 18, the proposed combination of Goodsitt, Tinaz and Mishchenko teaches
wherein the prompt data is generated by (see Tinaz, [0086] “The prompt management system 228 can include a prompt generator 236. The prompt generator 236 can generate, from data of the data repository 204, one or more training data elements that include a prompt and a completion corresponding to the prompt… the prompt generator 236 automatically determines prompts and completions from data of the data repository 204, such as by using any of various natural language processing algorithms to detect prompts and completions from data”) extracting at least one keyword that is relevant to (see Mishchenko, [col 1 lines 57-60] “natural-language understanding (NLU) to recognize words spoken by a user, based on the various qualities of received audio, and understand the intent of the user given the recognized words”) generating data based on the first input query (see Goodsitt, [0130] “in response to the data generation request”; [0133] “may return, based on an output data format, a first result based on the data in the database in response to the query input. Synthetic data may be generated as raw text as a field to generate the first result”). The motivation for the proposed combination is maintained.
Regarding claim 19, the proposed combination of Goodsitt, Tinaz and Mishchenko teaches
wherein the structured query has a structure for searching models (see Tinaz, [0135] “the input received in step 908 accurately and sufficiently defines the user's query in a manner suitable for used by the at least one generative AI models”) in the model source (see Goodsitt, [0037] “Model storage 109 can include one or more databases configured to store data models and descriptive information for the data models. Model storage 109 can be configured to provide information regarding available data models to a user or another system”). The motivation for the proposed combination is maintained.
Regarding claim 20, the proposed combination of Goodsitt, Tinaz and Mishchenko teaches
wherein the at least one processor is configured to (see Goodsitt, [0006] “one or more processors configured to execute the instructions to perform operations”) cause the generative model to generate, based on the reference dataset, the synthetic data reflecting conditions (see Goodsitt, [0055] “the data model can be a generative adversarial network trained to generate synthetic data satisfying a similarity criterion”; [0134] “may generate, with dataset generator 103, a synthetic dataset for training a generation model using a generative network of a generative adversarial network, the generative network being trained to generate output data differing at least a predetermined amount from a reference dataset according to a similarity metric”) indicated by the prompt data (see Tinaz, [0027] “inputs provided to the LLMs (e.g., vary inputted prompts) until the output of the LLMs meets various objective or subjective criteria”). The motivation for the proposed combination is maintained.
Regarding claim 21, the proposed combination of Goodsitt, Tinaz and Mishchenko teaches
wherein the quality of the synthetic data comprises intrinsic characteristics of the synthetic data, and wherein the intrinsic characteristics comprise at least one of (see Goodsitt, [0078] “system 100 (e.g., model optimizer 107, computational resources 101, or the like) can determine a similarity metric value using the normalized reference dataset and the synthetic training dataset… the similarity metric value can include at least one of… data quality score (e.g., a score dependent on at least one of a number of duplicate elements in each of the synthetic dataset and normalized reference dataset, a prevalence of the most common value in each of the synthetic dataset and normalized reference dataset, a maximum difference of rare values in each of the synthetic dataset and normalized reference dataset, the differences in schema between the synthetic dataset and normalized reference dataset, or the like)”; [0092] “the similarity criterion can concern at least one of… a data quality score for the synthetic dataset”) a distribution of data (see Goodsitt, [0060] “selecting subclasses of the synthetic data class according to a distribution model based on the actual data”). The motivation for the proposed combination is maintained.
Regarding claim 22, the proposed combination of Goodsitt, Tinaz and Mishchenko teaches
further comprising: by the at least one processor, (see Goodsitt, [0006] “one or more processors configured to execute the instructions to perform operations”) analyzing, using a pre-processing engine, (see Tinaz, [0085] “The pre-processor 232 can perform various operations”) a relationship between the reference dataset and the synthetic data, (see Goodsitt, [0092] “configured to determine that the synthetic dataset satisfies a similarity criterion… the similarity criterion can concern at least one of a statistical correlation score between the synthetic dataset and the normalized reference dataset, a data similarity score between the synthetic dataset and the reference dataset”) wherein the pre-processing engine comprises a multi-modal engine (see Tinaz, [0085] “The pre-processor 232 can perform various operations”; [0029] “receiving data from technician service reports in various formats, including various modalities and/or multi-modal formats (e.g., text, speech, audio, image, and/or video)”). The motivation for the proposed combination is maintained.
Regarding claim 23, the proposed combination of Goodsitt, Tinaz and Mishchenko teaches
further comprising: by the at least one processor, (see Goodsitt, [0006] “one or more processors configured to execute the instructions to perform operations”) providing output data including the synthetic data and (see Goodsitt, [0049] “query input and/or output may include synthetic training data”) the first resulting data (see Goodsitt, [0078] “system 100 (e.g., model optimizer 107, computational resources 101, or the like) can determine a similarity metric value using the normalized reference dataset and the synthetic training dataset… the similarity metric value can include at least one of… data quality score (e.g., a score dependent on at least one of a number of duplicate elements in each of the synthetic dataset and normalized reference dataset, a prevalence of the most common value in each of the synthetic dataset and normalized reference dataset, a maximum difference of rare values in each of the synthetic dataset and normalized reference dataset, the differences in schema between the synthetic dataset and normalized reference dataset, or the like)”; [0145] “user interface may finally display the synthetic results generated in response to the query input from a user”) to a client device (see Tinaz, [00958] “The client device 304 can include various user input and output devices to facilitate receiving and presenting inputs and outputs”). The motivation for the proposed combination is maintained.
Regarding claim 24, the proposed combination of Goodsitt, Tinaz and Mishchenko teaches
further comprising: by the at least one processor, (see Goodsitt, [0006] “one or more processors configured to execute the instructions to perform operations”) identifying whether the first resulting data (see Goodsitt, [0078] “system 100 (e.g., model optimizer 107, computational resources 101, or the like) can determine a similarity metric value using the normalized reference dataset and the synthetic training dataset… the similarity metric value can include at least one of… data quality score (e.g., a score dependent on at least one of a number of duplicate elements in each of the synthetic dataset and normalized reference dataset, a prevalence of the most common value in each of the synthetic dataset and normalized reference dataset, a maximum difference of rare values in each of the synthetic dataset and normalized reference dataset, the differences in schema between the synthetic dataset and normalized reference dataset, or the like)”; [0092] “the similarity criterion can concern at least one of… a data quality score for the synthetic dataset”; [0145] “user interface may finally display the synthetic results generated in response to the query input from a user”) satisfies predetermined criteria; and (see Tinaz, [0113] “responsive to the data satisfying the criteria”).
if the first resulting data (see Goodsitt, [0078] “system 100 (e.g., model optimizer 107, computational resources 101, or the like) can determine a similarity metric value using the normalized reference dataset and the synthetic training dataset… the similarity metric value can include at least one of… data quality score (e.g., a score dependent on at least one of a number of duplicate elements in each of the synthetic dataset and normalized reference dataset, a prevalence of the most common value in each of the synthetic dataset and normalized reference dataset, a maximum difference of rare values in each of the synthetic dataset and normalized reference dataset, the differences in schema between the synthetic dataset and normalized reference dataset, or the like)”; [0092] “the similarity criterion can concern at least one of… a data quality score for the synthetic dataset”; [0145] “user interface may finally display the synthetic results generated in response to the query input from a user”) satisfies the predetermined criteria, (see Tinaz, [0113] “responsive to the data satisfying the criteria”) storing the synthetic data in a database (see Goodsitt, [0052] “to provide the synthetic dataset to database 105 for storage”). The motivation for the proposed combination is maintained.
Regarding claim 25, the proposed combination of Goodsitt, Tinaz and Mishchenko teaches
further comprising: by the at least one processor, (see Goodsitt, [0006] “one or more processors configured to execute the instructions to perform operations”) training a target model based on data stored in a database (see Goodsitt, [0048] “retrieving data from database 105, providing synthetic data to computing resources 101, providing an initialized data model to computing resources 101, and providing a trained data model to model optimizer 107.”; [0066] “system 100 can receive training data sequences from, for example, a dataset. The dataset providing the training data sequences can be a component of system 100 (e.g., database 105) or a component of another system”). The motivation for the proposed combination is maintained.
Regarding claim 26, the proposed combination of Goodsitt, Tinaz and Mishchenko teaches
further comprising: by the at least one processor, (see Goodsitt, [0006] “one or more processors configured to execute the instructions to perform operations”) identifying whether the first resulting data (see Goodsitt, [0078] “system 100 (e.g., model optimizer 107, computational resources 101, or the like) can determine a similarity metric value using the normalized reference dataset and the synthetic training dataset… the similarity metric value can include at least one of… data quality score (e.g., a score dependent on at least one of a number of duplicate elements in each of the synthetic dataset and normalized reference dataset, a prevalence of the most common value in each of the synthetic dataset and normalized reference dataset, a maximum difference of rare values in each of the synthetic dataset and normalized reference dataset, the differences in schema between the synthetic dataset and normalized reference dataset, or the like)”; [0092] “the similarity criterion can concern at least one of… a data quality score for the synthetic dataset”; [0145] “user interface may finally display the synthetic results generated in response to the query input from a user”) satisfies a predetermined criterion; and (see Tinaz, [0113] “responsive to the data satisfying the criteria”).
adjusting the synthetic data (see Goodsitt, [0120] “can update the current synthetic data model with the new synthetic data model and then dataset generator 1207 can retrieve the updated current synthetic data model”) if the first resulting data (see Goodsitt, [0078] “system 100 (e.g., model optimizer 107, computational resources 101, or the like) can determine a similarity metric value using the normalized reference dataset and the synthetic training dataset… the similarity metric value can include at least one of… data quality score (e.g., a score dependent on at least one of a number of duplicate elements in each of the synthetic dataset and normalized reference dataset, a prevalence of the most common value in each of the synthetic dataset and normalized reference dataset, a maximum difference of rare values in each of the synthetic dataset and normalized reference dataset, the differences in schema between the synthetic dataset and normalized reference dataset, or the like)”; [0092] “the similarity criterion can concern at least one of… a data quality score for the synthetic dataset”; [0145] “user interface may finally display the synthetic results generated in response to the query input from a user”) does not satisfy the predetermined criterion (see Tinaz, [0113] “the data not satisfying the criteria”). The motivation for the proposed combination is maintained.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAISHALI SHAH whose telephone number is (571)272-8532. The examiner can normally be reached Monday - Friday (7:30 AM to 4:00 PM).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, AJAY BHATIA can be reached at (571)272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VAISHALI SHAH/Primary Examiner, Art Unit 2156