Prosecution Insights
Last updated: April 19, 2026
Application No. 17/551,533

FABRICATING DATA USING CONSTRAINTS TRANSLATED FROM TRAINED MACHINE LEARNING MODELS

Non-Final OA (§103)
Filed: Dec 15, 2021
Examiner: JONES, CHARLES JEFFREY
Art Unit: 2122
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 3 (Non-Final)
Grant Probability: 27% (At Risk)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 4y 2m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 27% (4 granted / 15 resolved; -28.3% vs Tech Center avg)
Interview Lift: +65.9% (with vs. without interview, among resolved cases with interview)
Typical Timeline: 4y 2m average prosecution
Career History: 42 total applications across all art units, 27 currently pending

Statute-Specific Performance

§101: 34.5% (-5.5% vs TC avg)
§103: 29.1% (-10.9% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 15 resolved cases.
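As a quick consistency check on the cards above (assuming, as my own reading rather than anything the dashboard states, that the +65.9% lift is expressed in percentage points added to the career allow rate, not as a relative increase):

```python
# Sanity check on the dashboard figures.
# Assumption (mine, not the dashboard's): "interview lift" is measured in
# percentage points on top of the examiner's career allow rate.

granted, resolved = 4, 15
career_allow_rate = round(granted / resolved * 100)  # 4/15 rounds to 27 (%)

interview_lift = 65.9  # percentage points
with_interview = career_allow_rate + interview_lift

print(f"Career allow rate: {career_allow_rate}%")
print(f"With interview:    {with_interview:.0f}%")
```

Under that reading, 27% + 65.9 points rounds to the 93% shown on the "With Interview" card; a relative lift (27% × 1.659 ≈ 45%) would not match.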

Office Action

§103
DETAILED ACTION

This action is in response to Application 17/551,533, with claims filed 11/18/2025. Claims 1-2, 7-9, 15-16, 21 and 24 have been amended, claims 27-29 have been added, and claims 12, 14, 19 and 25 have been cancelled. Claims 1-11, 13, 15-18, 20-24 and 26-29 have been examined and are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/10/2021 has been entered.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-4, 6-8, 10-11, 15, 17-18, 21-24, 26-27 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Korneev et al. ("Constrained Image Generation Using Binarized Neural Networks with Decision Procedures", hereinafter Korneev) in view of Dong et al. ("Training Generative Adversarial Networks with Binary Neurons by End-to-end Backpropagation", hereinafter Dong).

Regarding claim 1:

Korneev discloses a system comprising a processor (Korneev, Page 7, Paragraph 1, "We ran our experiments on Intel(R) Xeon(R) 3.30GHz") to:

receive a data set for training a machine learning model (Korneev, Page 7, Paragraph 2, "We use two datasets, D2 with 10K images and D3 with 5K images");

train the machine learning model based on the data set to identify distribution and properties of the data set, wherein the machine learning model comprises a deep neural network (Korneev, Page 7, Paragraph 2, "We use mean absolute error (MAE) to train BNN. BNN consists of three blocks with 100 neurons per layers and one output.");

translate the trained machine learning model into a constraint satisfaction problem (CSP), wherein the CSP comprises a set of variables, a respective value domain for each variable of the set of variables, and a set of constraints (Korneev, Page 2, Paragraph 4, "We show that both geometric and process constraints can be encoded as a logical formula. Geometric constraints are encoded as a set of linear constraints. To encode process constraints, we first approximate the diffusion PDE solver with a Neural Network(NN)", where the process constraints, which are user-specified bounds learned through the neural network, correspond to a set of constraints), wherein to translate the trained machine learning model, the processor is to translate the trained DNN into the set of constraints ("We use a special class of NN, called BNN, as these networks can be encoded as logical formulas. Process constraints are encoded as restrictions on outputs of the network. This provides us with an encoding of the image generation problem as a single logical formula", where encoding the neural network as a single logical formula that represents the image generation problem is considered translating the trained machine learning model into a set of constraints).

Korneev discloses wherein each constraint represents a composition of activation functions from input to output, and wherein the input for each activation function is an activation of a previous layer multiplied by weights over edges of the DNN (Korneev, Page 4, Paragraph 3, "We use the ILP encoding…with a minor modification of the last layer as we have numeric outputs instead of categorical outputs. We denote ENCBNN(I; d) a logical formula that encodes BNN using reified linear constraints over Boolean variables", where the set of constraints collectively represents a layer-by-layer composition of activation functions from input to output, and the ILP encoding is based on weighted sums of previous-layer activations).

Korneev discloses generate fabricated data to emulate the distribution and properties of the data set based on the CSP (Korneev, Page 1, Abstract, "Our experiments show that our model is able to produce random constrained images that satisfy both topological and process constraints", where producing images that satisfy constraints based on the dataset is considered generating fabricated data that emulates the distribution and properties of a dataset), wherein to generate the fabricated data, the processor is to: transmit the CSP to a CSP solver; and assign, by the CSP solver, a value to each variable of the set of variables that satisfies the set of constraints (Korneev, Page 3, Paragraph 1, "We denote as Cg(I) the geometric constraints on the structure of the image I and as Cp(d) the process constraints on the vector of parameters d. Given a set of geometric and process constraints and a mapping function M, we need to generate a random image I that satisfies Cg and Cp", where satisfying constraints Cg and Cp corresponds to assigning a value to each variable that satisfies a set of constraints).

Korneev does not explicitly teach, but Dong does disclose, populate the fabricated data into a data stream; and, responsive to the data set being unavailable for testing a data-driven application due to one or more constraints on usage of real data, utilize the fabricated data from the data stream to test the data-driven application. Dong discloses populate the fabricated data into a data stream (Dong, Page 3, Fig. 2, where the generator sending data to a separate part of the model, the discriminator, corresponds to populating fabricated data into a data stream) and responsive to the data set being unavailable for testing a data-driven application due to one or more constraints on usage of real data (Dong, Page 3, Paragraph 6, "The generator takes as input a random vector z drawn from a prior distribution pz and generate a fake sample G(z). The discriminator takes as input either a real sample drawn from the data distribution or a fake sample generated by the generator", and Dong, Page 2, Paragraph 4, "the discriminator aims to tell the fake data from the real ones, while the generator aims to fool the discriminator", where the discriminator needing both real and fake data corresponds to using fabricated data due to one or more constraints on usage of real data), utilize the fabricated data from the data stream to test the data-driven application (Dong, Page 3, Paragraph 6, "The discriminator takes as input either a real sample drawn from the data distribution or a fake sample generated by the generator", where the discriminator using the fabricated data from the generator corresponds to testing a data-driven application, as the discriminator is an application driven by data that distinguishes fake from real data).

Korneev and Dong are analogous art because they are from the same field of endeavor: using constraint programming with machine learning. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Korneev and Dong before him or her, to modify the machine learning model of Korneev to include the deep neural network GAN of Dong, because a GAN can handle more complicated probability distributions and produce diverse, realistic data, and to combine Korneev's constraint-based approach with Dong's GAN. The suggestion/motivation for doing so would have been "we show that it is possible to train a GAN that has binary neurons" (Dong, Page 1, Abstract).

Regarding claim 3:

Korneev-Dong teaches the system of claim 1 (and thus the rejection of claim 1 is incorporated). Korneev further discloses wherein the processor is to receive user-defined rules (Korneev, Page 3, Paragraph 2, "For example, they can ensure that a given number of grains is present on an image and these grains do not overlap. Another type of constraints focuses on a single grain. They can restrict the shape of a grain, e.g., a convex grain, its size or position on the image."), convert the user-defined rules into additional constraints, add the additional constraints to the CSP to generate an updated CSP, and generate the fabricated data based on the updated CSP (Korneev, Page 3, Paragraph 1, "Given a set of geometric and process constraints and a mapping function M, we need to generate a random image I that satisfies Cg and Cp. Next we overview geometric and process constraints and discuss the mapping function", where incorporating geometric constraints into the list of constraints that must be satisfied corresponds to converting user-defined rules into additional constraints that generate an updated CSP, and to generating fabricated data with the updated CSP).

Regarding claim 4:

Korneev-Dong teaches the system of claim 1 (and thus the rejection of claim 1 is incorporated). Korneev further discloses wherein the data set comprises structured data (Korneev, Page 4, Paragraph 3, "A neural network is trained on a set of binary images Ii and their labels di, i = 1,…,n", where the training data set, constructed as pairs (Ii, di) for i = 1,…,n, corresponds to a data set comprising structured data).
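To make the mapped limitation concrete, the "translate the trained DNN into the set of constraints" step can be illustrated in miniature. This is an illustration only, not Korneev's ENCBNN encoding (which uses reified linear constraints inside an ILP/SMT solver such as CPLEX or Z3): the ±1 weights below are invented, and the exhaustive search stands in for a real CSP solver assigning a value to each variable.

```python
from itertools import product

# Toy sketch of "translate a trained binarized neuron into a constraint".
# The weights are hypothetical; a real encoding would hand a reified linear
# constraint (output = 1 iff the weighted sum is non-negative) to a solver.

weights = [1, -1, 1]  # pretend these came from training a binarized neuron

def neuron(x):
    # Binarized activation: +1 iff the weighted sum of inputs is >= 0.
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s >= 0 else -1

# "Process constraint": require the neuron's output to be +1. The CSP
# variables are the three inputs, each with domain {-1, +1}; exhaustive
# enumeration plays the role of the CSP solver.
solutions = [x for x in product([-1, 1], repeat=3) if neuron(x) == 1]

print(len(solutions), "satisfying assignments, e.g.", solutions[0])
```

Each satisfying assignment is one "fabricated" input consistent with the learned constraint, which is the shape of the argument the rejection attributes to Korneev: solve the encoded network to generate data that respects what the model learned.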
Regarding claim 8:

Korneev discloses a computer-implemented method (Korneev, Page 7, Paragraph 1, "We ran our experiments on Intel(R) Xeon(R) 3.30GHz").

Korneev discloses receive a data set for training a machine learning model (Korneev, Page 7, Paragraph 2, "We use two datasets, D2 with 10K images and D3 with 5K images").

Korneev discloses train the machine learning model based on the data set to identify distribution and properties of the data set, wherein the machine learning model comprises a deep neural network (Korneev, Page 7, Paragraph 2, "We use mean absolute error (MAE) to train BNN. BNN consists of three blocks with 100 neurons per layers and one output.").

Korneev discloses translate the trained machine learning model into a constraint satisfaction problem (CSP), wherein the CSP comprises a set of variables, a respective value domain for each variable of the set of variables, and a set of constraints (Korneev, Page 2, Paragraph 4, "We show that both geometric and process constraints can be encoded as a logical formula. Geometric constraints are encoded as a set of linear constraints. To encode process constraints, we first approximate the diffusion PDE solver with a Neural Network(NN)", where the process constraints, which are user-specified bounds learned through the neural network, correspond to a set of constraints), wherein to translate the trained machine learning model, the processor is to translate the trained DNN into the set of constraints ("We use a special class of NN, called BNN, as these networks can be encoded as logical formulas. Process constraints are encoded as restrictions on outputs of the network. This provides us with an encoding of the image generation problem as a single logical formula", where encoding the neural network as a single logical formula that represents the image generation problem is considered translating the trained machine learning model into a set of constraints).

Korneev discloses wherein each constraint represents a composition of activation functions from input to output, and wherein the input for each activation function is an activation of a previous layer multiplied by weights over edges of the DNN (Korneev, Page 4, Paragraph 3, "We use the ILP encoding…with a minor modification of the last layer as we have numeric outputs instead of categorical outputs. We denote ENCBNN(I; d) a logical formula that encodes BNN using reified linear constraints over Boolean variables", where the set of constraints collectively represents a layer-by-layer composition of activation functions from input to output, and the ILP encoding is based on weighted sums of previous-layer activations).

Korneev discloses generate fabricated data to emulate the distribution and properties of the data set based on the CSP (Korneev, Page 1, Abstract, "Our experiments show that our model is able to produce random constrained images that satisfy both topological and process constraints", where producing images that satisfy constraints based on the dataset is considered generating fabricated data that emulates the distribution and properties of a dataset), wherein to generate the fabricated data, the processor is to: transmit the CSP to a CSP solver; and assign, by the CSP solver, a value to each variable of the set of variables that satisfies the set of constraints (Korneev, Page 3, Paragraph 1, "We denote as Cg(I) the geometric constraints on the structure of the image I and as Cp(d) the process constraints on the vector of parameters d. Given a set of geometric and process constraints and a mapping function M, we need to generate a random image I that satisfies Cg and Cp", where satisfying constraints Cg and Cp corresponds to assigning a value to each variable that satisfies a set of constraints).

Korneev does not explicitly teach, but Dong does disclose, populate the fabricated data into a data stream; and, responsive to the data set being unavailable for testing a data-driven application due to one or more constraints on usage of real data, utilize the fabricated data from the data stream to test the data-driven application. Dong discloses populate the fabricated data into a data stream (Dong, Page 3, Fig. 2, where the generator sending data to a separate part of the model, the discriminator, corresponds to populating fabricated data into a data stream) and responsive to the data set being unavailable for testing a data-driven application due to one or more constraints on usage of real data (Dong, Page 3, Paragraph 6, "The generator takes as input a random vector z drawn from a prior distribution pz and generate a fake sample G(z). The discriminator takes as input either a real sample drawn from the data distribution or a fake sample generated by the generator", and Dong, Page 2, Paragraph 4, "the discriminator aims to tell the fake data from the real ones, while the generator aims to fool the discriminator", where the discriminator needing both real and fake data corresponds to using fabricated data due to one or more constraints on usage of real data), utilize the fabricated data from the data stream to test the data-driven application (Dong, Page 3, Paragraph 6, "The discriminator takes as input either a real sample drawn from the data distribution or a fake sample generated by the generator", where the discriminator using the fabricated data from the generator corresponds to testing a data-driven application, as the discriminator is an application driven by data that distinguishes fake from real data).

Korneev and Dong are analogous art because they are from the same field of endeavor: using constraint programming with machine learning. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Korneev and Dong before him or her, to modify the machine learning model of Korneev to include the deep neural network GAN of Dong, because a GAN can handle more complicated probability distributions and produce diverse, realistic data, and to combine Korneev's constraint-based approach with Dong's GAN. The suggestion/motivation for doing so would have been "we show that it is possible to train a GAN that has binary neurons" (Dong, Page 1, Abstract).

Regarding claim 10:

The rejection of claim 8 is incorporated into claim 10; claim 10 is further rejected under the same rationale as set forth in the rejection of claim 3.

Regarding claim 11:

Korneev-Dong discloses the method of claim 8 (and thus the rejection of claim 8 is incorporated).
Korneev further discloses wherein generating the fabricated data comprises iteratively solving the CSP to generate the fabricated data (Korneev, Page 3, Paragraph 1, "Given a set of geometric and process constraints and a mapping function M, we need to generate a random image I that satisfies Cg and Cp", where solving one constraint satisfaction problem per generation corresponds to iteratively solving the CSP to generate the fabricated data, as each solved CSP generates one image).

Regarding claim 15:

Korneev discloses a computer program product for data fabrication, the computer program product comprising a computer-readable storage medium having program code embodied therewith, wherein the computer-readable storage medium is not a transitory signal per se, the program code executable by a processor to cause the processor (Korneev, Page 7, Paragraph 1, "We ran our experiments on Intel(R) Xeon(R) 3.30GHz") to:

receive a data set for training a machine learning model (Korneev, Page 7, Paragraph 2, "We use two datasets, D2 with 10K images and D3 with 5K images");

train the machine learning model based on the data set to identify distribution and properties of the data set, wherein the machine learning model comprises a deep neural network (Korneev, Page 7, Paragraph 2, "We use mean absolute error (MAE) to train BNN. BNN consists of three blocks with 100 neurons per layers and one output.");

translate the trained machine learning model into a constraint satisfaction problem (CSP), wherein the CSP comprises a set of variables, a respective value domain for each variable of the set of variables, and a set of constraints (Korneev, Page 2, Paragraph 4, "We show that both geometric and process constraints can be encoded as a logical formula. Geometric constraints are encoded as a set of linear constraints. To encode process constraints, we first approximate the diffusion PDE solver with a Neural Network(NN)", where the process constraints, which are user-specified bounds learned through the neural network, correspond to a set of constraints), wherein to translate the trained machine learning model, the processor is to translate the trained DNN into the set of constraints ("We use a special class of NN, called BNN, as these networks can be encoded as logical formulas. Process constraints are encoded as restrictions on outputs of the network. This provides us with an encoding of the image generation problem as a single logical formula", where encoding the neural network as a single logical formula that represents the image generation problem is considered translating the trained machine learning model into a set of constraints).

Korneev discloses wherein each constraint represents a composition of activation functions from input to output, and wherein the input for each activation function is an activation of a previous layer multiplied by weights over edges of the DNN (Korneev, Page 4, Paragraph 3, "We use the ILP encoding…with a minor modification of the last layer as we have numeric outputs instead of categorical outputs. We denote ENCBNN(I; d) a logical formula that encodes BNN using reified linear constraints over Boolean variables", where the set of constraints collectively represents a layer-by-layer composition of activation functions from input to output, and the ILP encoding is based on weighted sums of previous-layer activations).

Korneev discloses generate fabricated data to emulate the distribution and properties of the data set based on the CSP (Korneev, Page 1, Abstract, "Our experiments show that our model is able to produce random constrained images that satisfy both topological and process constraints", where producing images that satisfy constraints based on the dataset is considered generating fabricated data that emulates the distribution and properties of a dataset), wherein to generate the fabricated data, the program code is executable by the processor to: transmit the CSP to a CSP solver; and assign, by the CSP solver, a value to each variable of the set of variables that satisfies the set of constraints (Korneev, Page 3, Paragraph 1, "We denote as Cg(I) the geometric constraints on the structure of the image I and as Cp(d) the process constraints on the vector of parameters d. Given a set of geometric and process constraints and a mapping function M, we need to generate a random image I that satisfies Cg and Cp", where satisfying constraints Cg and Cp corresponds to assigning a value to each variable that satisfies a set of constraints).

Korneev does not explicitly teach, but Dong does disclose, populate the fabricated data into a data stream; and, responsive to the data set being unavailable for testing a data-driven application due to one or more constraints on usage of real data, utilize the fabricated data from the data stream to test the data-driven application. Dong discloses populate the fabricated data into a data stream (Dong, Page 3, Fig. 2, where the generator sending data to a separate part of the model, the discriminator, corresponds to populating fabricated data into a data stream) and responsive to the data set being unavailable for testing a data-driven application due to one or more constraints on usage of real data (Dong, Page 3, Paragraph 6, "The generator takes as input a random vector z drawn from a prior distribution pz and generate a fake sample G(z). The discriminator takes as input either a real sample drawn from the data distribution or a fake sample generated by the generator", and Dong, Page 2, Paragraph 4, "the discriminator aims to tell the fake data from the real ones, while the generator aims to fool the discriminator", where the discriminator needing both real and fake data corresponds to using fabricated data due to one or more constraints on usage of real data), utilize the fabricated data from the data stream to test the data-driven application (Dong, Page 3, Paragraph 6, "The discriminator takes as input either a real sample drawn from the data distribution or a fake sample generated by the generator", where the discriminator using the fabricated data from the generator corresponds to testing a data-driven application, as the discriminator is an application driven by data that distinguishes fake from real data).

Korneev and Dong are analogous art because they are from the same field of endeavor: using constraint programming with machine learning. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Korneev and Dong before him or her, to modify the machine learning model of Korneev to include the deep neural network GAN of Dong, because a GAN can handle more complicated probability distributions and produce diverse, realistic data, and to combine Korneev's constraint-based approach with Dong's GAN. The suggestion/motivation for doing so would have been "we show that it is possible to train a GAN that has binary neurons" (Dong, Page 1, Abstract).

Regarding claim 17:

The rejection of claim 15 is incorporated into claim 17; claim 17 is further rejected under the same rationale as set forth in the rejection of claim 3.

Regarding claim 18:

The rejection of claim 15 is incorporated into claim 18; claim 18 is further rejected under the same rationale as set forth in the rejection of claim 11.

Regarding claim 21:

Korneev discloses a computer-implemented method (Korneev, Page 7, Paragraph 1, "We ran our experiments on Intel(R) Xeon(R) 3.30GHz").

Korneev discloses receiving, via a processor (Korneev, Page 7, Paragraph 1, "We ran our experiments on Intel(R) Xeon(R) 3.30GHz"), a data set for training a machine learning model (Korneev, Page 7, Paragraph 2, "We use two datasets, D2 with 10K images and D3 with 5K images").

Korneev discloses training, via the processor, the machine learning model based on the data set to identify distribution and properties of the data set, wherein the machine learning model comprises a deep neural network (Korneev, Page 7, Paragraph 2, "We use mean absolute error (MAE) to train BNN. BNN consists of three blocks with 100 neurons per layers and one output.").

Korneev discloses translating, via the processor, the trained machine learning model into a constraint satisfaction problem (CSP), wherein the CSP comprises a set of variables, a respective value domain for each variable of the set of variables, and a set of constraints (Korneev, Page 2, Paragraph 4, "We show that both geometric and process constraints can be encoded as a logical formula. Geometric constraints are encoded as a set of linear constraints.
To encode process constraints, we first approximate the diffusion PDE solver with a Neural Network(NN)", where the process constraints, which are user-specified bounds learned through the neural network, correspond to a set of constraints), wherein to translate the trained machine learning model, the processor is to translate the trained DNN into the set of constraints ("We use a special class of NN, called BNN, as these networks can be encoded as logical formulas. Process constraints are encoded as restrictions on outputs of the network. This provides us with an encoding of the image generation problem as a single logical formula", where encoding the neural network as a single logical formula that represents the image generation problem is considered translating the trained machine learning model into a set of constraints).

Korneev discloses wherein each constraint represents a composition of activation functions from input to output, and wherein the input for each activation function is an activation of a previous layer multiplied by weights over edges of the DNN (Korneev, Page 4, Paragraph 3, "We use the ILP encoding…with a minor modification of the last layer as we have numeric outputs instead of categorical outputs. We denote ENCBNN(I; d) a logical formula that encodes BNN using reified linear constraints over Boolean variables", where the set of constraints collectively represents a layer-by-layer composition of activation functions from input to output, and the ILP encoding is based on weighted sums of previous-layer activations).

Korneev discloses receiving, via the processor, user-defined fabrication rules (Korneev, Page 3, Paragraph 2, "For example, they can ensure that a given number of grains is present on an image and these grains do not overlap. Another type of constraints focuses on a single grain. They can restrict the shape of a grain, e.g., a convex grain, its size or position on the image.").

Korneev discloses converting, via the processor, the user-defined fabrication rules into additional constraints and adding, via the processor, the additional constraints to the CSP to generate an updated CSP (Korneev, Page 3, Paragraph 1, "Given a set of geometric and process constraints and a mapping function M, we need to generate a random image I that satisfies Cg and Cp. Next we overview geometric and process constraints and discuss the mapping function", where incorporating geometric constraints into the list of constraints that must be satisfied corresponds to converting user-defined rules into additional constraints and adding the additional constraints to generate an updated CSP).

Korneev discloses generating, via the processor, fabricated data to emulate the distribution and properties of the data set based on the CSP (Korneev, Page 1, Abstract, "Our experiments show that our model is able to produce random constrained images that satisfy both topological and process constraints", where producing images that satisfy constraints based on the dataset is considered generating fabricated data that emulates the distribution and properties of a dataset), wherein to generate the fabricated data, the processor is to: transmit the CSP to a CSP solver; and assign, by the CSP solver, a value to each variable of the set of variables that satisfies the set of constraints (Korneev, Page 3, Paragraph 1, "We denote as Cg(I) the geometric constraints on the structure of the image I and as Cp(d) the process constraints on the vector of parameters d. Given a set of geometric and process constraints and a mapping function M, we need to generate a random image I that satisfies Cg and Cp", where satisfying constraints Cg and Cp corresponds to assigning a value to each variable that satisfies a set of constraints).

Korneev does not explicitly teach, but Dong does disclose, populate the fabricated data into a data stream; and, responsive to the data set being unavailable for testing a data-driven application due to one or more constraints on usage of real data, utilize the fabricated data from the data stream to test the data-driven application. Dong discloses populate the fabricated data into a data stream (Dong, Page 3, Fig. 2, where the generator sending data to a separate part of the model, the discriminator, corresponds to populating fabricated data into a data stream) and responsive to the data set being unavailable for testing a data-driven application due to one or more constraints on usage of real data (Dong, Page 3, Paragraph 6, "The generator takes as input a random vector z drawn from a prior distribution pz and generate a fake sample G(z). The discriminator takes as input either a real sample drawn from the data distribution or a fake sample generated by the generator", and Dong, Page 2, Paragraph 4, "the discriminator aims to tell the fake data from the real ones, while the generator aims to fool the discriminator", where the discriminator needing both real and fake data corresponds to using fabricated data due to one or more constraints on usage of real data), utilize the fabricated data from the data stream to test the data-driven application (Dong, Page 3, Paragraph 6, "The discriminator takes as input either a real sample drawn from the data distribution or a fake sample generated by the generator", where the discriminator using the fabricated data from the generator corresponds to testing a data-driven application, as the discriminator is an application driven by data that distinguishes fake from real data).
References Korneev and Dong are analogous art because they are from the same field of endeavor, using constraint programming with machine learning. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Korneev and Dong before him or her, to modify the machine learning model of Korneev to include the deep neural network GAN of Dong, as a GAN can handle more complicated probability distributions with diversity and realistic data, and to incorporate Korneev's approach with Dong's GAN. The suggestion/motivation for doing so would have been that “we show that it is possible to train a GAN that has binary neurons” (Dong, Page 1, Abstract).
Regarding claim 22: Korneev-Dong discloses the system of claim 21 (and thus the rejection of claim 21 is incorporated). Korneev further discloses wherein a bias in the fabricated data is offset via the user-defined fabrication rules (Korneev, Page 3, Paragraph 1, “Given a set of geometric and process constraints and a mapping function M, we need to generate a random image I that satisfies Cg and Cp. Next we overview geometric and process constraints and discuss the mapping function,” where incorporating the geometric constraints into the list of constraints that must be satisfied corresponds to a bias being offset via user-defined rules, as the geometric constraints must be satisfied along with the process constraints).
Regarding claim 23: Korneev-Dong discloses the system of claim 21 (and thus the rejection of claim 21 is incorporated). Korneev further discloses wherein the user-defined fabrication rules are received in the form of a CSP language (Korneev, Page 7, Paragraph 3, “We use CPLEX and the SMT solver Z3 to solve instances produced by constraints (1)–(7) together with ENCBNN(I; d). In principle, other solvers could be evaluated on these instances.”).
Regarding claim 24: Korneev discloses a computer-implemented method (Korneev, Page 7, Paragraph 1, “We ran our experiments on Intel(R) Xeon(R) 3.30GHz”). Korneev discloses receiving, via a processor, a data set for training a machine learning model (Korneev, Page 7, Paragraph 2, “We use two datasets, D2 with 10K images and D3 with 5K images”). Korneev discloses training the machine learning model based on the data set to identify distribution and properties of the data set, wherein the machine learning model comprises a deep neural network (Korneev, Page 7, Paragraph 2, “We use mean absolute error (MAE) to train BNN. BNN consists of three blocks with 100 neurons per layers and one output.”). Korneev discloses translating the trained machine learning model into a constraint satisfaction problem (CSP), wherein the CSP comprises a set of variables, a respective value domain for each variable of the set of variables, and a set of constraints (Korneev, Page 2, Paragraph 4, “We show that both geometric and process constraints can be encoded as a logical formula. Geometric constraints are encoded as a set of linear constraints. To encode process constraints, we first approximate the diffusion PDE solver with a Neural Network (NN),” where the process constraints, user-specified bounds learned through the neural network, correspond to a set of constraints), wherein to translate the trained machine learning model, the processor is to translate the trained DNN into the set of constraints (Korneev, “We use a special class of NN, called BNN, as these networks can be encoded as logical formulas. Process constraints are encoded as restrictions on outputs of the network. This provides us with an encoding of the image generation problem as a single logical formula,” where encoding the neural network as a single logical formula that represents the image generation problem is considered translating the trained machine learning model into a set of constraints). Korneev discloses wherein each constraint represents a composition of activation functions from input to output, and wherein the input for each activation function is an activation of a previous layer multiplied by weights over edges of the DNN (Korneev, Page 4, Paragraph 3, “We use the ILP encoding…with a minor modification of the last layer as we have numeric outputs instead of categorical outputs. We denote ENCBNN(I; d) a logical formula that encodes BNN using reified linear constraints over Boolean variables,” where the set of constraints collectively represents the layer-by-layer composition of activation functions from input to output, and the ILP encoding is based on weighted sums of previous-layer activations). Korneev discloses transmitting the CSP to a CSP solver; and assigning, by the CSP solver, a value to each variable of the set of variables that satisfies the set of constraints (Korneev, Page 3, Paragraph 1, “We denote as Cg(I) the geometric constraints on the structure of the image I and as Cp(d) the process constraints on the vector of parameters d. Given a set of geometric and process constraints and a mapping function M, we need to generate a random image I that satisfies Cg and Cp,” where satisfying the constraints Cg and Cp corresponds to assigning a value to each variable that satisfies the set of constraints), to generate the fabricated data that emulates the data set (Korneev, Page 1, Abstract, “Our experiments show that our model is able to produce random constrained images that satisfy both topological and process constraints,” where producing images that satisfy constraints derived from the data set is considered generating fabricated data that emulates the data set). Korneev does not explicitly teach, but Dong does disclose, populating, via the processor, the fabricated data into a data stream; and developing and testing, via the processor, a data-driven application using the fabricated data from the data stream, wherein the developing and testing is performed in response to the data set being unavailable for testing the data-driven application due to one or more constraints on usage of real data. Dong discloses populating, via the processor, the fabricated data into a data stream (Dong, Page 3, Fig. 2, where the generator sending data to a separate component of the model, the discriminator, corresponds to populating fabricated data into a data stream), and developing and testing, via the processor, a data-driven application using the fabricated data from the data stream (Dong, Page 3, Paragraph 6, “The discriminator takes as input either a real sample drawn from the data distribution or a fake sample generated by the generator,” where the discriminator using the fabricated data from the generator corresponds to testing a data-driven application, as the discriminator is an application driven by data to determine fake from real data), wherein the developing and testing is performed in response to the data set being unavailable for testing the data-driven application due to one or more constraints on usage of real data (Dong, Page 3, Paragraph 6, “The generator takes as input a random vector z drawn from a prior distribution pz and generate a fake sample G(z). The discriminator takes as input either a real sample drawn from the data distribution or a fake sample generated by the generator,” and Dong, Page 2, Paragraph 4, “the discriminator aims to tell the fake data from the real ones, while the generator aims to fool the discriminator,” where the discriminator requiring both real and fake data corresponds to using fabricated data due to one or more constraints on the usage of real data, as the need for fabricated data is itself a constraint on using real data). References Korneev and Dong are analogous art because they are from the same field of endeavor, using constraint programming with machine learning. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Korneev and Dong before him or her, to modify the machine learning model of Korneev to include the deep neural network GAN of Dong, as a GAN can handle more complicated probability distributions with diversity and realistic data, and to incorporate Korneev's approach with Dong's GAN. The suggestion/motivation for doing so would have been that “we show that it is possible to train a GAN that has binary neurons” (Dong, Page 1, Abstract).
Regarding claim 26: Korneev-Dong discloses the system of claim 1 (and thus the rejection of claim 1 is incorporated).
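As technical context for the claim 24 mapping above, in which each constraint represents a composition of activation functions whose input is the previous layer's activations multiplied by the edge weights, that composition can be sketched as a toy forward pass. The weights and input below are invented for illustration; Korneev's actual encoding expresses this same composition as reified linear constraints over Boolean variables rather than as executable code.

```python
# Toy forward pass illustrating the composition: each activation's input is
# the previous layer's activations multiplied by the weights on the edges.
# Weights and input are hypothetical, not drawn from Korneev.
def sign(vec):
    # binarized activation function, as used in a BNN
    return [1 if v >= 0 else -1 for v in vec]

def layer(activations, weights):
    # weighted sum of previous-layer activations per neuron, then activate
    pre = [sum(w * a for w, a in zip(row, activations)) for row in weights]
    return sign(pre)

W1 = [[1, -1], [-1, 1]]   # edge weights, input -> hidden
W2 = [[1, 1]]             # edge weights, hidden -> output
x = [1, -1]               # network input
h = layer(x, W1)          # hidden activations
y = layer(h, W2)          # output: composition of activations, input to output
print(h, y)  # → [1, -1] [1]
```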
Korneev further discloses wherein the set of constraints includes at least one of polynomial, logical, or arithmetical constraints automatically generated from the trained machine learning model (Korneev, Page 4, Paragraph 3, “We denote ENCBNN(I, d) a logical formula that encodes BNN using reified linear constraints over Boolean variables,” where linear constraints over Boolean variables correspond to logical constraints).
Regarding claim 27: Korneev-Dong discloses the system of claim 1 (and thus the rejection of claim 1 is incorporated). Korneev further discloses wherein the one or more constraints correspond to one or more data protection regulations (Korneev, Page 4, Paragraph 2, “The main idea behind our approach is to encode the image generation problem as a logical formula. To do so, we need to encode all problem constraints and the mapping between an image and its label as a set of constraints,” where the constraints preserving the mapping between an input and its label throughout encoding are considered constraints corresponding to data protection regulations, as the process of encoding an input and its label as constraints is a regulation of data).
Regarding claim 6: Korneev-Dong teaches the system of claim 1 (and thus the rejection of claim 1 is incorporated). Korneev does not teach this limitation; however, Dong discloses wherein the trained machine learning model comprises a generator model of a generative adversarial network (Dong, Page 3, Figure 2, where Figure 2 shows the generator of a generative adversarial network). References Korneev and Dong are analogous art because they are from the same field of endeavor, using constraint programming with machine learning.
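As technical context for the generator/discriminator arrangement cited from Dong above, the two roles can be sketched as toy functions: the generator maps a random vector z drawn from a prior to a fake sample G(z), and the discriminator outputs a scalar representing the genuineness of a sample. These stand-in functions and the scoring rule are invented for illustration; Dong's models are multilayer perceptrons.

```python
import random

# Toy stand-ins for the generator and discriminator roles; the functions
# and the genuineness score are hypothetical, not Dong's networks.
def generator(z):
    # maps a random vector z drawn from a prior to a fake sample G(z)
    return [2 * v - 1 for v in z]

def discriminator(sample):
    # outputs a scalar representing the "genuineness" of the sample;
    # this toy score is highest for samples summing to zero
    return 1.0 / (1.0 + abs(sum(sample)))

z = [random.random() for _ in range(4)]   # random vector from a prior
fake = generator(z)                        # fake sample G(z)
real = [0.0, 0.0, 0.0, 0.0]                # stand-in "real" sample
print(discriminator(real))                 # → 1.0
print(0.0 < discriminator(fake) <= 1.0)    # → True
```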
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Korneev and Dong before him or her, to modify the machine learning model of Korneev to include the deep neural network GAN of Dong, as a GAN can handle more complicated probability distributions with diversity and realistic data, and to incorporate Korneev's approach with Dong's GAN using a deep neural network. The suggestion/motivation for doing so would have been that “we show that it is possible to train a GAN that has binary neurons” (Dong, Page 1, Abstract).
Regarding claim 7: Korneev-Dong teaches the system of claim 6 (and thus the rejection of claim 6 is incorporated). Korneev does not teach this limitation; however, Dong discloses wherein the generator model is a deep neural network (Dong, Page 3, Paragraph 7, “Figure 2 shows the system diagram for the proposed model implemented by multilayer perceptrons (MLPs)”).
Regarding claim 29: Korneev-Dong teaches the system of claim 21 (and thus the rejection of claim 21 is incorporated). Korneev does not teach this limitation; however, Dong discloses wherein the fabricated data is compared to the data set using one or more metrics to evaluate similarity between the fabricated data and the data set (Dong, Page 2, Paragraph 3, “The discriminator take as input either a real sample drawn from the data distribution pd or a fake sample generated by and outputs a scalar representing the genuineness of that sample,” where the discriminator of the GAN outputting a scalar representing the genuineness of a sample corresponds to evaluating the similarity between the fabricated data and the data set).
Claims 2, 9, 16 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Korneev et al. (“Constrained Image Generation Using Binarized Neural Networks with Decision Procedures,” henceforth Korneev) in view of Dong et al. (“Training Generative Adversarial Networks with Binary Neurons by End-to-end Backpropagation,” henceforth Dong), and further in view of Truong (US 2022/0004878 A1).
Regarding claim 2: Korneev-Dong teaches the system of claim 1 (and thus the rejection of claim 1 is incorporated). Korneev-Dong does not teach this limitation; however, Truong discloses wherein the processor is to populate a data store with the generated fabricated data (Truong, [0071], “Database 406 may include one or more databases configured to store data for use by system 400. The databases can include cloud-based databases (e.g., AMAZON WEB SERVICES S3 buckets) or on-premises databases. Database 406 may store synthetic data, synthetic documents, metadata associated with actual and/or synthetic data, etc.,” where storing synthetic data is considered storing generated fabricated data). References Korneev-Dong and Truong are analogous art because they are from the same field of endeavor, using machine learning to produce fabricated data for data sets. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Korneev-Dong and Truong before him or her, to modify the machine learning model of Korneev-Dong to include the data storage of Truong, in order to retain synthetic/fabricated data sets as substitutes for real/sensitive data. The suggestion/motivation for doing so would have been, as Truong states ([0030]), “The disclosed embodiments can be used to automatically extract data from a large document set and to generate synthetic data and/or synthetic documents. Using these models, the disclosed embodiments can produce fully synthetic datasets with similar structure and statistics as the original sensitive datasets,” and (Truong, [0034]) “Dataset generator 103 can include one or more computing devices configured to generate data…Dataset generator 103 can be configured to receive data from database 105 or another component of system 100.”
Regarding claim 9: The rejection of claim 8 is incorporated into claim 9; further, claim 9 is rejected under the same rationale as set forth in the rejection of claim 2.
Regarding claim 16: The rejection of claim 15 is incorporated into claim 16; further, claim 16 is rejected under the same rationale as set forth in the rejection of claim 2.
Regarding claim 28: Korneev-Dong teaches the system of claim 2 (and thus the rejection of claim 2 is incorporated). Korneev-Dong does not teach this limitation; however, Truong discloses wherein the data store comprises a database or a file system (Truong, [0071], “Database 406 may include one or more databases configured to store data for use by system 400. The databases can include cloud-based databases (e.g., AMAZON WEB SERVICES S3 buckets) or on-premises databases. Database 406 may store synthetic data, synthetic documents, metadata associated with actual and/or synthetic data, etc.”).
Claims 5, 13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Korneev et al. (“Constrained Image Generation Using Binarized Neural Networks with Decision Procedures,” henceforth Korneev) in view of Dong et al. (“Training Generative Adversarial Networks with Binary Neurons by End-to-end Backpropagation,” henceforth Dong), and further in view of Narodytska et al. (“Learning Optimal Decision Trees with SAT,” henceforth Narodytska).
Regarding claim 5: Korneev-Dong teaches the system of claim 1 (and thus the rejection of claim 1 is incorporated).
Korneev-Dong does not teach this limitation; however, Narodytska discloses wherein the trained machine learning model comprises a decision tree (Narodytska, Page 1365, Col. 2, Paragraph 6, “We propose additional constraints that aim at pruning the search space, by filtering as soon as possible tree arrangements that are invalid. During the search, a partial structure of the tree is constructed”). References Korneev-Dong and Narodytska are analogous art because they are from the same field of endeavor, using decision trees with CSPs. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Korneev-Dong and Narodytska before him or her, to modify the machine learning model of Korneev-Dong to include the decision tree of Narodytska, to increase the ML model's interpretability through the inherent rule-based decision mapping. The suggestion/motivation for doing so would have been, as Narodytska states, “Decision trees represent an often used approach for developing explainable ML models, motivated by the natural mapping between decision tree paths and rules” (Narodytska, Page 1362, Col. 1, Abstract).
Regarding claim 13: Korneev-Dong teaches the system of claim 1 (and thus the rejection of claim 1 is incorporated). Korneev-Dong does not teach this limitation; however, Narodytska discloses wherein translating the machine learning model comprises translating conditions in a decision tree model into conditional constraints (Narodytska, Page 1362, Col. 1, Abstract, “Decision trees represent an often used approach for developing explainable ML models, motivated by the natural mapping between decision tree paths and rules,” where rules are conditional constraints). References Korneev-Dong and Narodytska are analogous art because they are from the same field of endeavor, using decision trees with CSPs.
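As technical context for the claim 13 mapping above, translating conditions in a decision tree into conditional constraints amounts to reading each root-to-leaf path as a conjunction of conditions. The tree, features, and thresholds below are hypothetical and are not drawn from Narodytska.

```python
# Hypothetical decision tree: (feature, threshold, left_subtree, right_subtree),
# with string labels at the leaves. Features and thresholds are invented.
tree = ("age", 30,
        ("income", 50, "deny", "approve"),    # branch taken when age < 30
        ("income", 80, "approve", "review"))  # branch taken when age >= 30

def tree_to_constraints(node, path=()):
    """Translate each root-to-leaf path into a conjunction of conditional constraints."""
    if isinstance(node, str):               # leaf: emit the accumulated conditions
        return [(list(path), node)]
    feature, threshold, lo, hi = node
    return (tree_to_constraints(lo, path + ((feature, "<", threshold),)) +
            tree_to_constraints(hi, path + ((feature, ">=", threshold),)))

for conditions, label in tree_to_constraints(tree):
    print(conditions, "->", label)
```

Each printed line is one conditional constraint of the form "if all conditions hold, then the labeled outcome follows," which is the rule form a CSP can consume directly.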
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Korneev-Dong and Narodytska before him or her, to modify the machine learning model of Korneev-Dong to include the decision tree of Narodytska, to increase the ML model's interpretability through the inherent rule-based decision mapping. The suggestion/motivation for doing so would have been, as Narodytska states, “Decision trees represent an often used approach for developing explainable ML models, motivated by the natural mapping between decision tree paths and rules” (Narodytska, Page 1362, Col. 1, Abstract).
Regarding claim 20: The rejection of claim 15 is incorporated into claim 20; further, claim 20 is rejected under the same rationale as set forth in the rejection of claim 13.
Response to Arguments: Applicant's arguments filed 11/18/2025 with respect to the previous 101 rejection(s) of the claims have been fully considered and are persuasive. Therefore, the previous 101 rejection has been withdrawn. Applicant's arguments filed 11/18/2025 with respect to the previous 102/103 rejection(s) concerning Goyal have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The rejections of claims 1-11, 13, 15-18, 20-24 and 26-29 have been updated in view of the Korneev-Dong combination.
Conclusion: Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES JEFFREY JONES JR, whose telephone number is (703) 756-1414. The examiner can normally be reached Monday - Friday, 8:00 - 5:00 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki can be reached at 571-272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /C.J.J./Examiner, Art Unit 2122 /KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122

Prosecution Timeline

Dec 15, 2021: Application Filed
Mar 03, 2025: Non-Final Rejection (§103)
Jun 06, 2025: Response Filed
Sep 12, 2025: Final Rejection (§103)
Oct 29, 2025: Interview Requested
Nov 06, 2025: Applicant Interview (Telephonic)
Nov 15, 2025: Examiner Interview Summary
Nov 18, 2025: Response after Non-Final Action
Dec 18, 2025: Request for Continued Examination
Jan 06, 2026: Response after Non-Final Action
Feb 04, 2026: Non-Final Rejection (§103)
Mar 24, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12582959: DATA GENERATION DEVICE AND METHOD, AND LEARNING DEVICE AND METHOD (granted Mar 24, 2026; 2y 5m to grant)
Patent 12380333: METHOD OF CONSTRUCTING NETWORK MODEL FOR DEEP LEARNING, DEVICE, AND STORAGE MEDIUM (granted Aug 05, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 27%
With Interview: 93% (+65.9%)
Median Time to Grant: 4y 2m
PTA Risk: High
Based on 15 resolved cases by this examiner. Grant probability derived from career allow rate.
