Prosecution Insights
Last updated: April 19, 2026
Application No. 18/104,325

COMPUTER IMPLEMENTED METHOD FOR TRAINING A MODEL COMPRISING PLURALITY OF DATA SYNTHESIZERS FOR DATA SYNTHESIS WITH ENHANCED SCALABILITY AND A CORRESPONDING COMPUTER IMPLEMENTED METHOD FOR GENERATING A DATASET

Status: Non-Final Office Action (§101, §103, §112)
Filed: Feb 01, 2023
Examiner: GARNER, CASEY R
Art Unit: 2123
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Ydata Labs Inc.
OA Round: 1 (Non-Final)
Grant probability: 70% (favorable); 87% with an examiner interview
Expected OA rounds: 1-2
Expected time to grant: 3y 7m

Examiner Intelligence

Career allowance rate: 70% (above average; 184 granted / 261 resolved; +15.5% vs Tech Center average)
Interview lift: +16.8% allowance rate on resolved cases with an interview
Typical timeline: 3y 7m average prosecution; 19 applications currently pending
Career history: 280 total applications across all art units

Statute-Specific Performance

§101: 30.6% (-9.4% vs TC avg)
§103: 45.7% (+5.7% vs TC avg)
§102: 7.1% (-32.9% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)

Tech Center averages are estimates; based on career data from 261 resolved cases.

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the Application filed on 02/01/2023. Claims 1-15 are pending in the case. Claims 1, 9, and 14 are independent claims.

Claim Rejections - 35 U.S.C. § 112

The following is a quotation of the first paragraph of 35 U.S.C. § 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

Claim Rejections - 35 U.S.C. § 101

Claims 1-15 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-13 are directed towards the statutory category of a process. Claims 14-15 are directed towards the statutory category of a machine.

With respect to claim 1:

2A Prong 1: This claim is directed to a judicial exception. A computer implemented method for training a model comprising plurality of data synthesizers for data synthesis with enhanced scalability, the method comprising computationally performing the following steps (mental process – high level training); training each data synthesizer with a corresponding block, wherein a transformation is performed to provide statistical independence between rows and/or statistical independency between columns of the dataset X (mental process – high level training).

2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements: providing a dataset X of M columns and N rows (adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g)); segmenting the N rows into segments of N/n rows, and slicing the M columns into slices of M/m columns (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); combining each segment with a corresponding slice, thereby providing a plurality of blocks, each block comprising a combination of a segment with a slice (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); and providing a model with K data synthesizers (adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g)).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Additional elements: providing a dataset X of M columns and N rows (MPEP 2106.05(d) indicates that merely "storing and retrieving information in memory" and/or "receiving or transmitting data over a network" are well-understood, routine, conventional functions when they are claimed in a merely generic manner (as it is in the present claim).
Thereby, a conclusion that the claimed step is well-understood, routine, conventional activity is supported under Berkheimer); segmenting the N rows into segments of N/n rows, and slicing the M columns into slices of M/m columns (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); combining each segment with a corresponding slice, thereby providing a plurality of blocks, each block comprising a combination of a segment with a slice (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); and providing a model with K data synthesizers (MPEP 2106.05(d) indicates that merely "storing and retrieving information in memory" and/or "receiving or transmitting data over a network" are well-understood, routine, conventional functions when they are claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed step is well-understood, routine, conventional activity is supported under Berkheimer).

With respect to claim 2:

2A Prong 1: This claim is directed to a judicial exception.

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: the data synthesizers are trained in parallel (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)).
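As an editorial aside on the blocking scheme recited in claim 1 (segmenting N rows into N/n-row segments, slicing M columns into M/m-column slices, and combining segments with slices into blocks): the claim language leaves open how segments are paired with slices, so the sketch below takes one plausible cross-product reading in which K = n × m blocks are produced. All names are illustrative, not taken from the application.

```python
import numpy as np

def make_blocks(X, n, m):
    """Segment the N rows of X into n segments of N/n rows, slice the
    M columns into m slices of M/m columns, and combine each segment
    with each slice to form n*m blocks (one per data synthesizer).
    Assumes N % n == 0 and M % m == 0."""
    N, M = X.shape
    row_segments = np.split(X, n, axis=0)   # n pieces of N/n rows each
    blocks = [seg[:, j * (M // m):(j + 1) * (M // m)]
              for seg in row_segments
              for j in range(m)]
    return blocks

X = np.arange(24).reshape(6, 4)             # toy dataset: N=6 rows, M=4 columns
blocks = make_blocks(X, n=3, m=2)           # 6 blocks, each of shape (2, 2)
```

Each block would then be handed to its own data synthesizer for training, which is what makes the scheme embarrassingly parallel.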
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: the data synthesizers are trained in parallel (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)).

With respect to claim 3:

2A Prong 1: This claim is directed to a judicial exception. Each data synthesizer is trained with a single corresponding block (mental process – high level training).

2A Prong 2: This judicial exception is not integrated into a practical application.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 4:

2A Prong 1: This claim is directed to a judicial exception. The transformation to provide statistical independence comprises transforming the dataset X into a multivariate Gaussian distribution, thereby resulting in a normalized dataset X_n (mental process).

2A Prong 2: This judicial exception is not integrated into a practical application.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 5:

2A Prong 1: This claim is directed to a judicial exception. Transforming the normalized dataset X_n such that it has identity covariance and zero mean (mental process).

2A Prong 2: This judicial exception is not integrated into a practical application.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 6:

2A Prong 1: This claim is directed to a judicial exception.
Transforming the normalized dataset X_n such that it has identity covariance and zero mean comprises applying a Principal Component Analysis (PCA) methodology to the normalized dataset X_n (mental process).

2A Prong 2: This judicial exception is not integrated into a practical application.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 7:

2A Prong 1: This claim is directed to a judicial exception. The transformation of the dataset X into the normalized dataset X_n comprises applying a Normalizing Flow methodology to the dataset X (mental process).

2A Prong 2: This judicial exception is not integrated into a practical application.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 8:

2A Prong 1: This claim is directed to a judicial exception. Applying a Normalizing Flow methodology comprises applying a Rotation-based Iterative Gaussianization methodology (mental process).

2A Prong 2: This judicial exception is not integrated into a practical application.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 9:

2A Prong 1: This claim is directed to a judicial exception.
A computer implemented method for generating a dataset X' with enhanced scalability comprising (mental process); training K data synthesizers, from a provided original dataset X, with a computer implemented method for training a model comprising plurality of data synthesizers for data synthesis with enhanced scalability, the method comprising computationally performing the following steps (mental process – high level training); and training each data synthesizer with a corresponding block, wherein a transformation is performed to provide statistical independence between rows and/or statistical independency between columns of the dataset X and correspondingly generating a new dataset X' (mental process – high level training).

2A Prong 2: This judicial exception is not integrated into a practical application.

Additional elements: providing a dataset X of M columns and N rows (adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g)); segmenting the N rows into segments of N/n rows, and slicing the M columns into slices of M/m columns (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); combining each segment with a corresponding slice, thereby providing a plurality of blocks, each block comprising a combination of a segment with a slice (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); and providing a model with K data synthesizers (adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g)).
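As an editorial aside on claims 5-6 above (bringing the normalized dataset X_n to zero mean and identity covariance via PCA): this is standard PCA whitening. A minimal sketch under that assumption, not taken from the application itself:

```python
import numpy as np

def pca_whiten(Xn):
    """Center Xn, then rotate and rescale it along its principal
    components so the result Z has zero mean and (up to numerical
    error) identity covariance, as recited in claims 5-6."""
    mu = Xn.mean(axis=0)
    Xc = Xn - mu                              # zero mean
    cov = np.cov(Xc, rowvar=False)            # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)    # cov = V diag(l) V^T
    W = eigvecs / np.sqrt(eigvals)            # whitening matrix V diag(1/sqrt(l))
    Z = Xc @ W                                # cov(Z) = I
    return Z, mu, W

rng = np.random.default_rng(0)
Xn = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 3))  # toy correlated data
Z, mu, W = pca_whiten(Xn)
# np.cov(Z, rowvar=False) is now close to the 3x3 identity
```

Keeping mu and W around is what makes the inverted transformation of claim 11 possible later.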
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Additional elements: providing a dataset X of M columns and N rows (MPEP 2106.05(d) indicates that merely "storing and retrieving information in memory" and/or "receiving or transmitting data over a network" are well-understood, routine, conventional functions when they are claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed step is well-understood, routine, conventional activity is supported under Berkheimer); segmenting the N rows into segments of N/n rows, and slicing the M columns into slices of M/m columns (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); combining each segment with a corresponding slice, thereby providing a plurality of blocks, each block comprising a combination of a segment with a slice (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); and providing a model with K data synthesizers (MPEP 2106.05(d) indicates that merely "storing and retrieving information in memory" and/or "receiving or transmitting data over a network" are well-understood, routine, conventional functions when they are claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed step is well-understood, routine, conventional activity is supported under Berkheimer).

With respect to claim 10:

2A Prong 1: This claim is directed to a judicial exception.
Upon training of each data synthesizer: sampling M records for each data synthesizer, concatenating the K samples of M records to form a new dataset X' (mental process).

2A Prong 2: This judicial exception is not integrated into a practical application.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 11:

2A Prong 1: This claim is directed to a judicial exception. Performing an inverted transformation to the dataset X' regarding a transformation in which a dataset is transformed such that it has identity covariance and zero mean, thereby obtaining a transformed dataset X'_n, optionally applying an inverted Principal Component Analysis (PCA) methodology (mental process).

2A Prong 2: This judicial exception is not integrated into a practical application.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 12:

2A Prong 1: This claim is directed to a judicial exception. Performing an inverted transformation to the transformed dataset X'_n as regards a transformation to provide statistical independence between rows and/or statistical independence between columns, optionally applying an inverted Normalizing Flow methodology, optionally applying an inverted Rotation-based Iterative Gaussianization methodology (mental process).

2A Prong 2: This judicial exception is not integrated into a practical application.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 13:

2A Prong 1: This claim is directed to a judicial exception. The original dataset X may comprise any structured data and the new dataset X' correspondingly comprises any structured data (mental process).

2A Prong 2: This judicial exception is not integrated into a practical application.
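As an editorial aside on the generation path of claims 10-12 (sample from each trained synthesizer, concatenate, then undo the transforms): the sketch below is a hedged illustration. The "synthesizers" are stand-in callables returning Gaussian samples, row-wise concatenation is one reading of the claim, and only the whitening inverse is shown; names are illustrative.

```python
import numpy as np

def generate_dataset(synthesizers, mu, W):
    """Sample from each trained synthesizer, concatenate the samples
    row-wise into X'_n, then invert the whitening Z = (X - mu) @ W
    via X = Z @ inv(W) + mu (claims 10-11). An inverted Normalizing
    Flow / Gaussianization step (claim 12) would also be applied in
    the full method; it is omitted here for brevity."""
    Xp_n = np.concatenate([s() for s in synthesizers], axis=0)
    return Xp_n @ np.linalg.inv(W) + mu

rng = np.random.default_rng(1)
mu = np.array([1.0, -2.0])                  # toy centering vector
W = np.array([[2.0, 0.0], [0.0, 0.5]])      # toy whitening matrix
synths = [lambda: rng.normal(size=(4, 2)) for _ in range(3)]  # K=3 stand-ins
Xp = generate_dataset(synths, mu, W)        # 12 rows back in data space
```

Because each synthesizer is sampled independently, this step parallelizes the same way training does.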
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 14:

2A Prong 1: This claim is directed to a judicial exception. A computational apparatus or system configured to implement a computer implemented method for training a model comprising plurality of data synthesizers for data synthesis with enhanced scalability, the method comprising computationally performing the following steps (mental process – high level training); training each data synthesizer with a corresponding block, wherein a transformation is performed to provide statistical independence between rows and/or statistical independency between columns of the dataset X and, optionally, additionally a method for generating a dataset X' with enhanced scalability comprising (mental process – high level training); training K data synthesizers, from a provided original dataset X, with a computer implemented method for training a model comprising plurality of data synthesizers for data synthesis with enhanced scalability, the method comprising computationally performing the following steps: training each data synthesizer with a corresponding block, wherein a transformation is performed to provide statistical independence between rows and/or statistical independency between columns of the dataset X and correspondingly generating a new dataset X' (mental process – high level training); to implement a computer implemented method for generating a dataset X' comprising (mental process – high level training); training K data synthesizers, from a provided original dataset X, with a computer implemented method for training a model comprising plurality of data synthesizers for data synthesis with enhanced scalability, the method comprising computationally performing the following steps (mental process – high level training); and training each data synthesizer with a corresponding block, wherein a transformation is performed to provide statistical independence between rows and/or statistical independency between columns of the dataset X and correspondingly generating a new dataset X' (mental process – high level training).

2A Prong 2: This judicial exception is not integrated into a practical application.

Additional elements (several recited more than once because claim 14 incorporates both the training and generating methods): providing a dataset X of M columns and N rows (adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g)); segmenting the N rows into segments of N/n rows, and slicing the M columns into slices of M/m columns (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); combining each segment with a corresponding slice, thereby providing a plurality of blocks, each block comprising a combination of a segment with a slice (merely reciting the words "apply it" (or an equivalent) with the judicial exception, as discussed in MPEP § 2106.05(f)); and providing a model with K data synthesizers (adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g)).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements (several recited more than once because claim 14 incorporates both the training and generating methods): providing a dataset X of M columns and N rows, and providing a model with K data synthesizers (MPEP 2106.05(d) indicates that merely "storing and retrieving information in memory" and/or "receiving or transmitting data over a network" are well-understood, routine, conventional functions when they are claimed in a merely generic manner, as in the present claim; thereby, a conclusion that these claimed steps are well-understood, routine, conventional activity is supported under Berkheimer); segmenting the N rows into segments of N/n rows, and slicing the M columns into slices of M/m columns (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); and combining each segment with a corresponding slice, thereby providing a plurality of blocks, each block comprising a combination of a segment with a slice (merely reciting the words "apply it" (or an equivalent) with the judicial exception, as discussed in MPEP § 2106.05(f)).

With respect to claim 15:

2A Prong 1: This claim is directed to a judicial exception.

2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements: is configured to train the data synthesizers in parallel (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)); and comprises a plurality of computational processors, optionally K computational processors, each computational processor being configured to train at least one data synthesizer in parallel with another computational processor which is configured to train at least one other data synthesizer (merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f)).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: the same two elements recited above, for the same reasons under MPEP § 2106.05(f).

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 C.F.R. § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.

Claims 1, 9, and 14 are rejected under 35 U.S.C. § 103 as being unpatentable over Vivona et al. (U.S. Pat. App. Pub. No. 2022/0129706, hereinafter Vivona) in view of Subramanian et al. (U.S. Pat. App. Pub. No. 2018/0075357, hereinafter Subramanian).
As to independent claim 1, Vivona teaches:

A computer implemented method for training a model comprising plurality of data synthesizers for data synthesis with enhanced scalability, the method comprising computationally performing the following steps (Title and abstract):

providing a dataset X of M columns and N rows (Paragraph 81, "The dataset Da includes N rows (or samples) and Ci columns (or features)");

segmenting the N rows into segments of N/n rows, and slicing the M columns into slices of M/m columns, combining each segment with a corresponding slice, thereby providing a plurality of blocks, each block comprising a combination of a segment with a slice (Paragraph 90, "the training dataset table is sampled row wise, making each row a vector of continuous and discrete variables", where 'm' equals 1 and 'n' equals the number of rows);

providing a model with K data synthesizers (Paragraph 106, "the generator can generate synthetic data"; Paragraph 106, "separate generator");

training each data synthesizer with a corresponding block, wherein a transformation is performed… (Paragraph 107, "training of a generator and application of a trained generator to generate synthetic data rows for a tabular dataset").

Vivona does not appear to expressly teach to provide statistical independence between rows and/or statistical independency between columns of the dataset X. Subramanian teaches to provide statistical independence between rows and/or statistical independency between columns of the dataset X (Paragraph 69, "one or more rows in the dataset may be extracted based on at least one of the one or more of important and statistically independent categorical columns").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic data generation of Vivona to include the machine learning techniques of Subramanian, in order to extract an appropriate sample of a larger dataset while maintaining the distribution of important features/variables as in the larger (original) dataset (see Subramanian at paragraph 32).

As to independent claim 9, Vivona teaches: A computer implemented method for generating a dataset X' with enhanced scalability comprising (Title and abstract): training K data synthesizers, from a provided original dataset X, with a computer implemented method for training a model comprising plurality of data synthesizers for data synthesis with enhanced scalability, the method comprising computationally performing the following steps: providing a dataset X of M columns and N rows (Paragraph 81, "The dataset Da includes N rows (or samples) and Ci columns (or features)"), segmenting the N rows into segments of N/n rows, and slicing the M columns into slices of M/m columns, combining each segment with a corresponding slice, thereby providing a plurality of blocks, each block comprising a combination of a segment with a slice (Paragraph 90, "the training dataset table is sampled row wise, making each row a vector of continuous and discrete variables", where 'm' equals 1 and 'n' equals the number of rows), providing a model with K data synthesizers (Paragraph 106, "the generator can generate synthetic data"; Paragraph 106, "separate generator"), training each data synthesizer with a corresponding block, wherein a transformation is performed… (Paragraph 107, "training of a generator and application of a trained generator to generate synthetic data rows for a tabular dataset"), and correspondingly generating a new dataset X' (Paragraph 106, "to generate synthetic tabular data rows").
Vivona does not appear to expressly teach to provide statistical independence between rows and/or statistical independency between columns of the dataset X. Subramanian teaches to provide statistical independence between rows and/or statistical independency between columns of the dataset X (Paragraph 69, "one or more rows in the dataset may be extracted based on at least one of the one or more of important and statistically independent categorical columns").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic data generation of Vivona to include the machine learning techniques of Subramanian, in order to extract an appropriate sample of a larger dataset while maintaining the distribution of important features/variables as in the larger (original) dataset (see Subramanian at paragraph 32).

As to independent claim 14, Vivona teaches: A computational apparatus or system configured to implement a computer implemented method for training a model comprising plurality of data synthesizers for data synthesis with enhanced scalability, the method comprising computationally performing the following steps (Title and abstract): providing a dataset X of M columns and N rows (Paragraph 81, "The dataset Da includes N rows (or samples) and Ci columns (or features)"), segmenting the N rows into segments of N/n rows, and slicing the M columns into slices of M/m columns, combining each segment with a corresponding slice, thereby providing a plurality of blocks, each block comprising a combination of a segment with a slice (Paragraph 90, "the training dataset table is sampled row wise, making each row a vector of continuous and discrete variables", where 'm' equals 1 and 'n' equals the number of rows), providing a model with K data synthesizers (Paragraph 106, "the generator can generate synthetic data";
Paragraph 106, "separate generator"), training each data synthesizer with a corresponding block, wherein a transformation is performed… and, optionally, additionally a method for generating a dataset X' with enhanced scalability comprising (Paragraph 107, "training of a generator and application of a trained generator to generate synthetic data rows for a tabular dataset"): training K data synthesizers, from a provided original dataset X, with a computer implemented method for training a model comprising plurality of data synthesizers for data synthesis with enhanced scalability, the method comprising computationally performing the following steps: providing a dataset X of M columns and N rows (Paragraph 81, "The dataset Da includes N rows (or samples) and Ci columns (or features)"), segmenting the N rows into segments of N/n rows, and slicing the M columns into slices of M/m columns, combining each segment with a corresponding slice, thereby providing a plurality of blocks, each block comprising a combination of a segment with a slice (Paragraph 90, "the training dataset table is sampled row wise, making each row a vector of continuous and discrete variables", where 'm' equals 1 and 'n' equals the number of rows), providing a model with K data synthesizers (Paragraph 106, "the generator can generate synthetic data";
Paragraph 106, "separate generator"), training each data synthesizer with a corresponding block, wherein a transformation is performed… (Paragraph 107, "training of a generator and application of a trained generator to generate synthetic data rows for a tabular dataset"), and correspondingly generating a new dataset X' (Paragraph 106, "to generate synthetic tabular data rows"), or to implement a computer implemented method for generating a dataset X' comprising: training K data synthesizers, from a provided original dataset X, with a computer implemented method for training a model comprising plurality of data synthesizers for data synthesis with enhanced scalability, the method comprising computationally performing the following steps: providing a dataset X of M columns and N rows (Paragraph 81, "The dataset Da includes N rows (or samples) and Ci columns (or features)"), segmenting the N rows into segments of N/n rows, and slicing the M columns into slices of M/m columns, combining each segment with a corresponding slice, thereby providing a plurality of blocks, each block comprising a combination of a segment with a slice (Paragraph 90, "the training dataset table is sampled row wise, making each row a vector of continuous and discrete variables", where 'm' equals 1 and 'n' equals the number of rows), providing a model with K data synthesizers (Paragraph 106, "the generator can generate synthetic data"; Paragraph 106, "separate generator"), training each data synthesizer with a corresponding block, wherein a transformation is performed… (Paragraph 107, "training of a generator and application of a trained generator to generate synthetic data rows for a tabular dataset"), and correspondingly generating a new dataset X' (Paragraph 106, "to generate synthetic tabular data rows").

Vivona does not appear to expressly teach to provide statistical independence between rows and/or statistical independency between columns of the dataset X.
Subramanian teaches to provide statistical independence between rows and/or statistical independency between columns of the dataset X (Paragraph 69, "one or more rows in the dataset may be extracted based on at least one of the one or more of important and statistically independent categorical columns"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic data generation of Vivona to include the machine learning techniques of Subramanian, in order to extract an appropriate sample of a larger dataset while maintaining the distribution of important features/variables as in the larger (original) dataset (see Subramanian at paragraph 32).

Claims 2 and 15 are rejected under 35 U.S.C. § 103 as being unpatentable over Vivona in view of Subramanian and Pauly et al. (U.S. Pat. App. Pub. No. 2024/0169196, hereinafter Pauly).

As to dependent claim 2, the rejection of claim 1 is incorporated. Vivona does not appear to expressly teach that the data synthesizers are trained in parallel. Pauly teaches that the data synthesizers are trained in parallel (Paragraph 40, "data parallelism, model parallelism, or both to increase the speed of the training process"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic data generation of Vivona to include the synthetic data generation techniques of Pauly, in order to scale up the number of synthetic data generator replicas depending on how fast synthetic data is consumed by the machine learning model that is being trained (see Pauly at paragraph 7).

Claim 3 is rejected under 35 U.S.C. § 103 as being unpatentable over Vivona in view of Subramanian and Roysdon et al. (Int'l. Pat. App. Pub. No. WO-2023220583-A1, hereinafter Roysdon). As to dependent claim 3, the rejection of claim 1 is incorporated.
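For reference, the parallel-training arrangement at issue in claim 2 can be sketched as follows. The per-block training function is a hypothetical stand-in (here it merely records per-column means); a real system would use data or model parallelism across processes or accelerators, as Pauly's paragraph 40 suggests:

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def train_synthesizer(block):
    # Hypothetical stand-in for fitting one generative model on one
    # block; the "trained state" here is just the per-column means.
    return block.mean(axis=0)

def train_all_in_parallel(blocks, max_workers=None):
    # One training task per block, executed concurrently (claim 2's
    # "trained in parallel"); threads keep this sketch portable.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(train_synthesizer, blocks))

states = train_all_in_parallel([np.ones((4, 2)), np.zeros((4, 2))])
# states[i] holds the trained state for the i-th synthesizer
```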
Vivona does not appear to expressly teach that each data synthesizer is trained with a single corresponding block. Roysdon teaches that each data synthesizer is trained with a single corresponding block (Paragraph 6, "a transformer-based modeling architecture for training a generator to generate synthetic data includes: at least one embedding model for embedding input data, wherein the input data includes multiple fields Fij containing real data values and the at least one embedding model constructs I embedding matrices, one for each of the multiple fields Fij; a masking model for producing a set of unmasked fields Fu and a set of masked fields Fm for each row"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic data generation of Vivona to include the synthetic data generation techniques of Roysdon, in order to effectively generate robust synthetic data in a manner that is scalable and preserves privacy in the underlying training data (see Roysdon at paragraph 5).

Claim 4 is rejected under 35 U.S.C. § 103 as being unpatentable over Vivona in view of Subramanian and Scott Chen et al. (U.S. Pat. No. 6,591,235, hereinafter Scott Chen). As to dependent claim 4, the rejection of claim 1 is incorporated. Vivona does not appear to expressly teach that the transformation to provide statistical independence comprises transforming the dataset X into a multivariate Gaussian distribution, thereby resulting in a normalized dataset X_n. Scott Chen teaches the transformation to provide statistical independence comprises transforming the dataset X into a multivariate Gaussian distribution, thereby resulting in a normalized dataset X_n (Column 26, lines 29-31, "The compound Gaussian distribution is specifically designed for multivariate variables which are independent dimension-wise while non-Gaussian in each dimension").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic data generation of Vivona to include the Gaussianization techniques of Scott Chen, in order to transform high dimensional data into a standard Gaussian distribution in a computationally efficient way (see Scott Chen at column 2, lines 32-34).

Claim 5 is rejected under 35 U.S.C. § 103 as being unpatentable over Vivona in view of Subramanian, Scott Chen, and Seuss et al. (U.S. Pat. App. Pub. No. 2017/0300741, hereinafter Seuss). As to dependent claim 5, the rejection of claim 4 is incorporated. Vivona does not appear to expressly teach transforming the normalized dataset X_n such that it has identity covariance and zero mean. Seuss teaches transforming the normalized dataset X_n such that it has identity covariance and zero mean (Paragraph 285, "The covariance matrix Q′ for wu is the identity matrix"; Paragraph 129, "Gaussian-distributed with zero-mean"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic data generation of Vivona to include the machine learning techniques of Seuss, in order to obtain more accurate information (see Seuss at paragraph 6).

Claim 6 is rejected under 35 U.S.C. § 103 as being unpatentable over Vivona in view of Subramanian, Scott Chen, Seuss, and Kewei Chen et al. (U.S. Pat. App. Pub. No. 2006/0074290, hereinafter Kewei Chen). As to dependent claim 6, the rejection of claim 5 is incorporated. Vivona does not appear to expressly teach that transforming the normalized dataset X_n such that it has identity covariance and zero mean comprises applying a Principal Component Analysis (PCA) methodology to the normalized dataset X_n.
Kewei Chen teaches transforming the normalized dataset X_n such that it has identity covariance and zero mean comprises applying a Principal Component Analysis (PCA) methodology to the normalized dataset X_n (Paragraph 9, "perform a principal component analysis (PCA) of the X matrix and then use the principal components"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic data generation of Vivona to include the pattern techniques of Kewei Chen, in order to make computation of high dimensional datasets using multivariate methods feasible (see Kewei Chen at paragraph 37).

Claim 7 is rejected under 35 U.S.C. § 103 as being unpatentable over Vivona in view of Subramanian, Scott Chen, and Rodenhiser et al. (U.S. Pat. App. Pub. No. 2024/0001306, hereinafter Rodenhiser). As to dependent claim 7, the rejection of claim 4 is incorporated. Vivona does not appear to expressly teach that the transformation of the dataset X into the normalized dataset X_n comprises applying a Normalizing Flow methodology to the dataset X. Rodenhiser teaches the transformation of the dataset X into the normalized dataset X_n comprises applying a Normalizing Flow methodology to the dataset X (Paragraph 57, "normalization of mass flow"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic data generation of Vivona to include the statistical techniques of Rodenhiser, in order to minimize the impact of the inherent background noise variable (see Rodenhiser at paragraph 10).

Claim 8 is rejected under 35 U.S.C. § 103 as being unpatentable over Vivona in view of Subramanian, Scott Chen, Rodenhiser, and Tabak et al. (U.S. Pat. App. Pub. No. 2010/0312745, hereinafter Tabak). As to dependent claim 8, the rejection of claim 7 is incorporated.
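For context on the transformation at issue in claims 5 and 6 (a dataset with zero mean and identity covariance, obtained via PCA), a short PCA-whitening sketch follows; the helper name and the numerical tolerance are assumptions, not from the record:

```python
import numpy as np

def pca_whiten(X_n, eps=1e-12):
    """PCA whitening: center the data, rotate onto principal axes,
    and rescale so the result has zero mean and identity covariance."""
    Xc = X_n - X_n.mean(axis=0)             # zero mean
    cov = np.cov(Xc, rowvar=False)          # M x M sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # symmetric matrix -> eigh
    W = eigvecs / np.sqrt(eigvals + eps)    # scale each principal axis
    return Xc @ W                           # identity covariance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 3))  # correlated data
Z = pca_whiten(X)
# Z now has (numerically) zero column means and identity covariance
```

The inverse map (multiplying by W's inverse and adding the mean back) is what the inverted-PCA step of claim 11 would correspond to under this reading.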
Vivona does not appear to expressly teach that applying a Normalizing Flow methodology comprises applying a Rotation-based Iterative Gaussianization methodology. Tabak teaches applying a Normalizing Flow methodology comprises applying a Rotation-based Iterative Gaussianization methodology (Paragraph 22, "distribution of all variables is normal, and can remain so converged under further rotations. This implies that the variables form a jointly-Gaussian set"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic data generation of Vivona to include the data mining techniques of Tabak, in order to estimate a likelihood based on the various data (see Tabak at paragraph 4).

Claim 10 is rejected under 35 U.S.C. § 103 as being unpatentable over Vivona in view of Subramanian and Beigi et al. (U.S. Pat. App. Pub. No. 2023/0060848, hereinafter Beigi). As to dependent claim 10, the rejection of claim 9 is incorporated. Vivona does not appear to expressly teach, upon training of each data synthesizer: sampling M records for each data synthesizer, and concatenating the K samples of M records to form a new dataset X'. Beigi teaches, upon training of each data synthesizer: sampling M records for each data synthesizer, and concatenating the K samples of M records to form a new dataset X' (Paragraph 47, "This loop is performed a total of N times, and each synthetic record is concatenated with the previous synthetic records to form synthetic dataset R′ 195 having N records"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic data generation of Vivona to include the synthetic data generation techniques of Beigi, in order to generate synthetic datasets that still retain high fidelity (see Beigi at paragraph 15).

Claim 11 is rejected under 35 U.S.C.
§ 103 as being unpatentable over Vivona in view of Subramanian, Beigi, and Kewei Chen. As to dependent claim 11, the rejection of claim 10 is incorporated. Vivona does not appear to expressly teach performing an inverted transformation to the dataset X' regarding a transformation in which a dataset is transformed such that it has identity covariance and zero mean, thereby obtaining a transformed dataset X'_n, optionally applying an inverted Principal Component Analysis (PCA) methodology. Kewei Chen teaches performing an inverted transformation to the dataset X' regarding a transformation in which a dataset is transformed such that it has identity covariance and zero mean, thereby obtaining a transformed dataset X'_n, optionally applying an inverted Principal Component Analysis (PCA) methodology (Paragraph 9, "perform a principal component analysis (PCA) of the X matrix and then use the principal components"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic data generation of Vivona to include the pattern techniques of Kewei Chen, in order to make computation of high dimensional datasets using multivariate methods feasible (see Kewei Chen at paragraph 37).

Claim 12 is rejected under 35 U.S.C. § 103 as being unpatentable over Vivona in view of Subramanian, Beigi, Kewei Chen, and Tabak. As to dependent claim 12, the rejection of claim 11 is incorporated. Vivona does not appear to expressly teach performing an inverted transformation to the transformed dataset X'_n as regards a transformation to provide statistical independence between rows and/or statistical independence between columns, optionally applying an inverted Normalizing Flow methodology, optionally applying an inverted Rotation-based Iterative Gaussianization methodology.
Tabak teaches performing an inverted transformation to the transformed dataset X'_n as regards a transformation to provide statistical independence between rows and/or statistical independence between columns, optionally applying an inverted Normalizing Flow methodology, optionally applying an inverted Rotation-based Iterative Gaussianization methodology (Paragraph 22, "distribution of all variables is normal, and can remain so converged under further rotations. This implies that the variables form a jointly-Gaussian set"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic data generation of Vivona to include the data mining techniques of Tabak, in order to estimate a likelihood based on the various data (see Tabak at paragraph 4).

Claim 13 is rejected under 35 U.S.C. § 103 as being unpatentable over Vivona in view of Subramanian and Sankaranarayanan et al. (U.S. Pat. App. Pub. No. 2023/0013479, hereinafter Sankaranarayanan). As to dependent claim 13, the rejection of claim 9 is incorporated. Vivona does not appear to expressly teach that the original dataset X may comprise any structured data and the new dataset X' correspondingly comprises any structured data. Sankaranarayanan teaches the original dataset X may comprise any structured data and the new dataset X' correspondingly comprises any structured data (Paragraph 101). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic data generation of Vivona to include the synthetic data generation techniques of Sankaranarayanan, in order to generate synthetic data that is used to build a synthetic dataset (see Sankaranarayanan at paragraph 42).

Claim 15 is rejected under 35 U.S.C. § 103 as being unpatentable over Vivona in view of Subramanian and Pauly. As to dependent claim 15, the rejection of claim 14 is incorporated.
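The generation step addressed for claim 10 (sampling M records from each of the K trained synthesizers and concatenating the K samples into X') can be sketched as follows. The sample() interface and the constant-output stand-in are assumptions for illustration only:

```python
import numpy as np

def generate_dataset(synthesizers, num_records):
    # Draw num_records rows from each trained synthesizer, then
    # concatenate the K samples row-wise into the new dataset X'.
    samples = [s.sample(num_records) for s in synthesizers]
    return np.concatenate(samples, axis=0)

class ConstantSynthesizer:
    # Hypothetical stand-in: always emits copies of one fixed row.
    def __init__(self, row):
        self.row = np.asarray(row, dtype=float)

    def sample(self, k):
        return np.tile(self.row, (k, 1))

synths = [ConstantSynthesizer([0.0, 1.0]), ConstantSynthesizer([2.0, 3.0])]
X_prime = generate_dataset(synths, num_records=3)  # shape (6, 2)
```

Under this reading, the inverted transformations of claims 11 and 12 would then be applied to X_prime to map it back into the original data space.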
Vivona does not appear to expressly teach to train the data synthesizers in parallel, and comprising a plurality of computational processors, optionally K computational processors, each computational processor being configured to train at least one data synthesizer in parallel with another computational processor which is configured to train at least one other data synthesizer. Pauly teaches to train the data synthesizers in parallel, and comprising a plurality of computational processors, optionally K computational processors, each computational processor being configured to train at least one data synthesizer in parallel with another computational processor which is configured to train at least one other data synthesizer (Paragraph 40, "data parallelism, model parallelism, or both to increase the speed of the training process"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the synthetic data generation of Vivona to include the synthetic data generation techniques of Pauly, in order to scale up the number of synthetic data generator replicas depending on how fast synthetic data is consumed by the machine learning model that is being trained (see Pauly at paragraph 7).

Conclusion

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Casey R. Garner, whose telephone number is 571-272-2467.
The examiner can normally be reached Monday to Friday, 8am to 5pm, Eastern Time. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alexey Shmatov, can be reached at 571-270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR to authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/Casey R. Garner/
Primary Examiner, Art Unit 2123

Prosecution Timeline

Feb 01, 2023
Application Filed
Dec 15, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596937
METHOD AND APPARATUS FOR ADAPTING MACHINE LEARNING TO CHANGES IN USER INTEREST
2y 5m to grant Granted Apr 07, 2026
Patent 12585994
ACCURATE AND EFFICIENT INFERENCE IN MULTI-DEVICE ENVIRONMENTS
2y 5m to grant Granted Mar 24, 2026
Patent 12579451
MINIMAL UNSATISFIABLE SET DETECTION APPARATUS, MINIMAL UNSATISFIABLE SET DETECTION METHOD, AND COMPUTER-READABLE RECORDING MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12572822
FLEXIBLE, PERSONALIZED STUDENT SUCCESS MODELING FOR INSTITUTIONS WITH COMPLEX TERM STRUCTURES AND COMPETENCY-BASED EDUCATION
2y 5m to grant Granted Mar 10, 2026
Patent 12573187
Self-Learning in Distributed Architecture for Enhancing Artificial Neural Network
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
70%
Grant Probability
87%
With Interview (+16.8%)
3y 7m
Median Time to Grant
Low
PTA Risk
Based on 261 resolved cases by this examiner. Grant probability derived from career allow rate.
