DETAILED ACTION
Claims 1-5, 7-14, and 16-22 are presented for examination based on the amended claims filed on November 25, 2025. Claims 6 and 15 have been cancelled by the applicant.
Claims 1, 3-5, 7, 9-10, 12-14, 16, and 18-22 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Guo, Tinghao, et al., “Circuit synthesis using generative adversarial networks (GANs),” AIAA Scitech 2019 Forum, p. 2350, 2019 [hereinafter “Guo”].
Claims 2, 8, 11, and 17 are rejected under 35 U.S.C. § 103 as being unpatentable over Guo as applied to claims 1 and 10 above, and further in view of US Patent 11,804,050 to Milletari, Fausto, et al. [hereinafter “Milletari”].
This action is made Non-Final.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on November 25, 2025 has been entered.
Response to Amendment
The amendment filed November 25, 2025 has been entered. Claims 1-5, 7-14, and 16-22 remain pending in the application. Applicant’s amendments to the Specification and Claims have overcome each and every objection set forth in the previous Office Action mailed September 25, 2025.
Claim Rejections - 35 U.S.C. § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. § 102 and 103 (or as subject to pre-AIA 35 U.S.C. § 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 3-5, 7, 9-10, 12-14, 16, and 18-22 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Guo, Tinghao, et al., “Circuit synthesis using generative adversarial networks (GANs),” AIAA Scitech 2019 Forum, p. 2350, 2019 [hereinafter “Guo”].
As per claim 1, Guo teaches “A method for determining system parameters, the method comprising: obtaining, at a design interface, a user input indicating a set of components of a physical system, wherein at least one component of the physical system comprises circuitry”. (Pg. 1, “Here we describe a circuit synthesis problem as one where a set of available electronic components is provided, and from this set components are selected and their connectivities are specified to define a circuit topology” [obtaining an input indicating a set of components of a physical system, wherein at least one component of the physical system comprises circuitry]. “Once a circuit topology is determined, component sizing and other continuous parameters can be optimized to determine the best possible performance of a given circuit topology” [A method for determining system parameters]. Pg. 1, “Circuit synthesis is a challenging design task that can be solved in certain cases by a human designer based on technical understanding, intuition, and knowledge of previous circuit design topologies” [e.g., a user input]. Pg. 10, “One strategy for performance prediction that combines active learning (using a predictive model) with the GAN-based topology generation is depicted in Fig. 7. After circuit generation, NSCs are used to filter out any remaining infeasible topologies. The predictive model is used to approximate the performance of feasible circuits. This strategy may be particularly appropriate in cases where circuit evaluation is computationally expensive, otherwise the performance can be evaluated using the original design optimization problem for each candidate circuit topology” [through the use of a computer, e.g., at a design interface]. Further see Sect. I and III.
The examiner has interpreted providing a set of available electronic components to define a circuit topology, and optimizing the component parameters to determine the best possible performance of a given circuit topology, which can be solved by a human designer using a strategy for performance prediction in cases where circuit evaluation is computationally expensive, as a method for determining system parameters, the method comprising: obtaining, at a design interface, a user input indicating a set of components of a physical system, wherein at least one component of the physical system comprises circuitry.)
Guo also teaches “providing, to an enhanced generative adversarial network (GAN), a predetermined distribution for generating a distribution of feasible parameters needed for designing the physical system comprising the set of components”. (Pg. 5, “A generative adversarial network (GAN) is a class of unsupervised learning models introduced by Goodfellow et al.” [a generative adversarial network (GAN)]. “The GAN is composed of two artificial neural networks, a generator and discriminator. Figure 3 illustrates a basic GAN framework. Let Xr = {x}Ni for i = 1, 2,…, N and xi ∈ Rd denote the real data samples drawn from a probability density distribution Pr. A latent vector z ∈ Rm is pre-defined with a prior density distribution pz(z); here pz(z) is often chosen as a multivariate normal or uniform distribution. The role of the generator G(z; θG) is to produce samples xg with a probability density Pg that approximates Pr.” [a predetermined distribution for generating a distribution of feasible parameters]. Pg. 5, “Given a random sample z, the generator maps the latent space Z to the original data space X” [e.g., providing, to a GAN]. Pg. 11, “Since the improved WGAN demonstrated efficient generation of feasible topologies, we conducted a parametric study regarding two important parameters: the number of latent variables and λ. The purpose of the parametric study was to provide insights about the improved WGAN's capabilities for circuit synthesis” [an enhanced generative adversarial network (GAN)]. Pg. 1, “Here we describe a circuit synthesis problem as one where a set of available electronic components is provided, and from this set components are selected and their connectivities are specified to define a circuit topology.” “Once a circuit topology is determined, component sizing and other continuous parameters can be optimized to determine the best possible performance of a given circuit topology” [the physical system comprising the set of components]. Further see Sect. III and VI. The examiner has interpreted using an improved generative adversarial network that uses real data samples to produce samples that approximate the probability density distribution of the real data, to generate feasible circuit topologies and parameters for components to determine the best possible performance of a circuit topology, as providing, to an enhanced generative adversarial network (GAN), a predetermined distribution for generating a distribution of feasible parameters needed for designing the physical system comprising the set of components.)
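For context on the mechanism described in the quoted passages (the generator mapping latent samples z, drawn from a predetermined prior pz(z), into the data space X), the mapping can be sketched as follows. This is an illustrative sketch only, not code from the Guo reference; the dimensions, weights, and function names are hypothetical.

```python
import random

# Illustrative only (not from the Guo reference): the generator G maps a
# latent sample z, drawn from a predetermined prior p_z (here a standard
# normal), into the parameter space. A single affine layer stands in for
# the generator network; the weights W and b are hypothetical.
random.seed(0)

m, d = 3, 5  # latent dimension, parameter dimension (hypothetical)
W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(m)]
b = [random.gauss(0, 1) for _ in range(d)]

def generator(z):
    """G(z): map one latent vector to a candidate parameter vector."""
    return [sum(z[i] * W[i][j] for i in range(m)) + b[j] for j in range(d)]

z = [random.gauss(0, 1) for _ in range(m)]  # z ~ p_z(z)
x_g = generator(z)                          # a candidate parameter sample
assert len(x_g) == d
```

In a trained GAN, W and b would be learned so that the distribution of G(z) approximates the real data distribution Pr.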
Guo teaches “wherein the enhanced GAN comprises a hybrid generator and a discriminator, and wherein the hybrid generator comprises a generator of the enhanced GAN and a physical model representing operations of the physical system”. (Pg. 5, “A generative adversarial network (GAN) is a class of unsupervised learning models introduced by Goodfellow et al. The GAN is composed of two artificial neural networks, a generator and discriminator” [wherein the enhanced GAN comprises a generator and a discriminator]. Pg. 10, “Predictive modeling can be incorporated into the GAN framework to enable performance prediction for the generated circuits” [a physical model representing operations of the physical system]. “One strategy for performance prediction that combines active learning (using a predictive model) with the GAN-based topology generation is depicted in Fig. 7” [combining a predictive model with GAN generation, e.g., the hybrid generator comprises a generator of the enhanced GAN and a physical model]. Pg. 5, “where G(z) is the sample produced by the generator”. Furthermore, Figure 7 shows the predictive model is used in conjunction with the GAN to evaluate the performance of the generator, G(z), e.g., wherein the hybrid generator comprises a generator of the enhanced GAN and a physical model. Further see Sect. III and V. The examiner has interpreted using a GAN composed of a generator and discriminator, with predictive modeling to enable performance prediction for the generated circuits combined with the GAN generation, as wherein the enhanced GAN comprises a hybrid generator and a discriminator, and wherein the hybrid generator comprises a generator of the enhanced GAN and a physical model representing operations of the physical system.)
Guo teaches “mapping, using the generator of the enhanced GAN, input samples from the predetermined distribution to a set of sample parameters”. (Pg. 5, “Given a random sample z, the generator maps the latent space Z to the original data space X”. Further see Sect. III. The examiner has interpreted that the generator that maps samples to original data space as mapping, using the generator of the enhanced GAN, input samples from the predetermined distribution to a set of sample parameters.)
Guo teaches “generating, using the physical model representing the physical system within the hybrid generator, a set of outputs of the physical system induced by the set of sample parameters”. (Pg. 10, “Predictive modeling can be incorporated into the GAN framework to enable performance prediction for the generated circuits, a typically costly operation” [generating, using the physical model representing the physical system, a set of outputs]. “One strategy for performance prediction that combines active learning (using a predictive model) with the GAN-based topology generation is depicted in Fig. 7” [combining a predictive model with GAN generation, e.g., the physical model within the hybrid generator]. Figure 7 shows using inputs to determine the performance, e.g., a set of outputs of the physical system induced by the set of sample parameters. Further see Sect. V. The examiner has interpreted using inputs of the system to predict the performance of generated circuits, using a predictive model that is combined with the generation of the GAN, as generating, using the physical model representing the physical system within the hybrid generator, a set of outputs of the physical system induced by the set of sample parameters.)
Guo teaches “learning, by the discriminator of the enhanced GAN, to distinguish whether the set of sample parameters follows a response of the physical system within a tolerance range based on the set of outputs generated by the physical model.” (Pg. 5, “Discriminator D(x; θD) takes a sample x ∈ X as the input and outputs the probability of x being real. Both D(x; θD) and G(z; θG) can update iteratively in such a way that the generator produces “fake” samples capable of “fooling” the discriminator, while the discriminator aims to distinguish the “fake” samples given by the generator from the real” [learning, by the discriminator of the enhanced GAN, to distinguish whether the set of sample parameters follows a response of the system]. Pg. 10, “Predictive modeling can be incorporated into the GAN framework to enable performance prediction for the generated circuits, a typically costly operation” [the set of outputs generated by the physical model]. Pg. 10, “After circuit generation, NSCs [network structure constraints] are used to filter out any remaining infeasible topologies. The predictive model is used to approximate the performance of feasible circuits” [e.g., a response of the system within a tolerance range]. Furthermore, Fig. 7 shows the NSCs are input into evaluating G(z), e.g., learning, by the discriminator. Further see Sect. III and V. The examiner has interpreted having the discriminator aim to distinguish fake samples from the real sample inputs, predicting the performance of generated circuits to determine a feasible circuit topology using network structure constraints and iterative learning to filter out infeasible topologies, as learning, by the discriminator of the enhanced GAN, to distinguish whether the set of sample parameters follows a response of the physical system within a tolerance range based on the set of outputs generated by the physical model.)
Guo teaches “iteratively updating the hybrid generator and the discriminator of the enhanced GAN until outputs generated by the updated generator correspond to an expected output of the physical system, thereby ensuring feasibility for the set of sample parameters.” (Pg. 5, “The goal is to find parameter values θG for the generator G(z; θG) such that Pg is as close to Pr as possible. Discriminator D(x; θD) takes a sample x ∈ X as the input and outputs the probability of x being real. Both D(x; θD) and G(z; θG) can update iteratively” [iteratively updating the hybrid generator and the discriminator of the enhanced GAN] “in such a way that the generator produces “fake” samples capable of “fooling” the discriminator, while the discriminator aims to distinguish the “fake” samples given by the generator from the real” [until outputs generated by the updated generator]. Pg. 10, “Predictive modeling can be incorporated into the GAN framework to enable performance prediction for the generated circuits, a typically costly operation” [outputs generated by the updated generator]. Pg. 10, “After circuit generation, NSCs [network structure constraints] are used to filter out any remaining infeasible topologies. The predictive model is used to approximate the performance of feasible circuits” [e.g., correspond to expected output of the physical system, thereby ensuring feasibility for the set of sample parameters]. Further see Sect. III and V. 
The examiner has interpreted updating the generator and discriminator iteratively, while the discriminator aims to distinguish the fake samples given by the generator from the real samples and the generator seeks to produce fake samples that fool the discriminator, such that the probability density of the fake samples is as close to that of the real samples as possible, and using predictive modeling to enable performance prediction for the generated circuits, which are filtered to determine feasible circuit topologies, as iteratively updating the hybrid generator and the discriminator of the enhanced GAN until outputs generated by the updated generator correspond to an expected output of the physical system, thereby ensuring feasibility for the set of sample parameters.)
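The iterative, alternating update of generator and discriminator quoted above can be sketched with a one-dimensional toy example. This is illustrative only and is not Guo's implementation; the objective, step sizes, and distributions are hypothetical stand-ins.

```python
import math
import random

# Toy illustration (not Guo's implementation) of the alternating updates
# described in the quoted passages. Real samples follow N(mu_real, 1); the
# generator G(z) = z + theta shifts latent noise; the discriminator
# D(x) = sigmoid(w*x + c) scores samples as real. All names, step sizes,
# and distributions here are hypothetical stand-ins.
random.seed(0)

def sigmoid(u):
    u = max(-60.0, min(60.0, u))  # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-u))

mu_real, theta, w, c = 3.0, 0.0, 0.0, 0.0
lr, batch = 0.05, 64

for _ in range(2000):
    xr = [random.gauss(mu_real, 1) for _ in range(batch)]    # real samples
    xg = [random.gauss(0, 1) + theta for _ in range(batch)]  # G(z)

    # Discriminator step: gradient ascent on log D(xr) + log(1 - D(xg)).
    gw = (sum((1 - sigmoid(w * x + c)) * x for x in xr)
          - sum(sigmoid(w * x + c) * x for x in xg)) / batch
    gc = (sum(1 - sigmoid(w * x + c) for x in xr)
          - sum(sigmoid(w * x + c) for x in xg)) / batch
    w, c = w + lr * gw, c + lr * gc

    # Generator step: ascent on the non-saturating objective log D(G(z)).
    gt = sum((1 - sigmoid(w * x + c)) * w for x in xg) / batch
    theta += lr * gt

# The generator's output distribution has shifted toward the real one.
assert theta > 0.5
```

At equilibrium the shift theta approaches mu_real and the discriminator can no longer distinguish generated from real samples, which is the stopping condition the claim mapping reads onto "until outputs generated by the updated generator correspond to an expected output."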
As per claim 3, Guo teaches “classifying, using the discriminator of the enhanced GAN, whether the set of sample parameters is generated from the predetermined distribution or a data distribution of the physical system”. (Pg. 5, “The GAN is composed of two artificial neural networks, a generator and discriminator” [discriminator]. Pg. 5, “The goal is to find parameter values θG for the generator G(z; θG) such that Pg is as close to Pr as possible. Discriminator D(x; θD) takes a sample x ∈ X as the input and outputs the probability of x being real. Both D(x; θD) and G(z; θG) can update iteratively in such a way that the generator produces “fake” samples capable of “fooling” the discriminator, while the discriminator aims to distinguish the “fake” samples given by the generator from the real” [classifying, using the discriminator of the enhanced GAN, whether the set of sample parameters is generated from the predetermined distribution or a data distribution of the physical system]. Further see Sect. III. The examiner has interpreted having the discriminator of a GAN aim to distinguish the fake samples given by the generator from the real samples as classifying, using the discriminator of the enhanced GAN, whether the set of sample parameters is generated from the predetermined distribution or a data distribution of the physical system.)
Guo also teaches “wherein iteratively updating the discriminator of the enhanced GAN comprises iteratively updating the discriminator until the discriminator correctly classifies the set of sample parameters”. (Pg. 5, “The goal is to find parameter values θG for the generator G(z; θG) such that Pg is as close to Pr as possible. Discriminator D(x; θD) takes a sample x ∈ X as the input and outputs the probability of x being real. Both D(x; θD) and G(z; θG) can update iteratively” [wherein iteratively updating the discriminator of the enhanced GAN comprises iteratively updating the discriminator] “in such a way that the generator produces “fake” samples capable of “fooling” the discriminator, while the discriminator aims to distinguish the “fake” samples given by the generator from the real” [until the discriminator correctly classifies the set of sample parameters]. Further see Sect. III. The examiner has interpreted updating the discriminator iteratively, while the discriminator aims to distinguish the fake samples given by the generator from the real samples, as wherein iteratively updating the discriminator of the enhanced GAN comprises iteratively updating the discriminator until the discriminator correctly classifies the set of sample parameters.)
As per claim 4, Guo teaches “determining, using the discriminator, a distribution of parameters, wherein samples from the distribution of parameters produce an output from the physical model within a predetermined margin of the expected output of the physical system”. (Pg. 5, “Discriminator D(x; θD) takes a sample x ∈ X as the input and outputs the probability of x being real. Both D(x; θD) and G(z; θG) can update iteratively in such a way that the generator produces “fake” samples capable of “fooling” the discriminator, while the discriminator aims to distinguish the “fake” samples given by the generator from the real” [determining, using the discriminator, a distribution of parameters]. Pg. 10, “Predictive modeling can be incorporated into the GAN framework to enable performance prediction for the generated circuits, a typically costly operation” [wherein samples from the distribution of parameters produce an output from the physical model]. “One strategy for performance prediction that combines active learning (using a predictive model) with the GAN-based topology generation is depicted in Fig. 7”. Figure 7 demonstrates determining samples from the distribution of parameters to produce an output. Pg. 10, “After circuit generation, NSCs (network structure constraints) are used to filter out any remaining infeasible topologies” [within a predetermined margin of the expected output of the physical system]. “The predictive model is used to approximate the performance of feasible circuits” [produce an output from the physical model]. Further see Sect. III and V.
The examiner has interpreted incorporating predictive modeling into the GAN framework to enable performance prediction for the generated circuits, through the generation of parameters used to create topologies and filter out any remaining infeasible topologies while the discriminator distinguishes the fake samples given by the generator from the real samples, as determining, using the discriminator, a distribution of parameters, wherein samples from the distribution of parameters produce an output from the physical model within a predetermined margin of the expected output of the physical system.)
As per claim 5, Guo teaches “wherein the data distribution of the physical system includes a combination of a distribution of the expected output of the physical system and a noise distribution representing the predetermined margin”. (Pg. 5, “Both D(x; θD) and G(z; θG) can update iteratively in such a way that the generator produces “fake” samples capable of “fooling” the discriminator, while the discriminator aims to distinguish the “fake” samples given by the generator from the real” [the data distribution of the physical system]. Pg. 10, “The predictive model is used to approximate the performance of feasible circuits.” [distribution of the expected output of the system representing the predetermined margin]. Pg. 7, “Here we consider two canonical circuit synthesis problems: 1) a frequency response matching problem, and 2) a low-pass filter realizability problem” [noise-bearing outputs (e.g., a noise distribution)]. Further see Sect. III-IV. The examiner has interpreted distinguishing the fake samples given by the generator from the real samples, to approximate the performance of feasible circuits in the frequency response matching and low-pass filter examples, as wherein the data distribution of the physical system includes a combination of a distribution of the expected output of the physical system and a noise distribution representing the predetermined margin.)
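The claim-5 mapping above (a data distribution formed from the expected output plus a noise distribution representing the margin) can be sketched as follows; the response vector and margin are hypothetical, and this is not code from either reference.

```python
import random

# Illustrative only (not from either reference): 'real' training samples
# are formed by perturbing a hypothetical expected system response within
# a predetermined tolerance margin.
random.seed(0)

expected = [1.0, 0.8, 0.5, 0.2, 0.0]  # hypothetical expected response
margin = 0.05                         # predetermined tolerance margin

def sample_real():
    """One 'real' sample: the expected output plus bounded noise."""
    return [y + random.uniform(-margin, margin) for y in expected]

x_real = sample_real()
assert all(abs(a - b) <= margin for a, b in zip(x_real, expected))
```

A discriminator trained on such samples accepts any candidate whose induced output stays within the margin of the expected response.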
As per claim 7, Guo teaches “wherein iteratively updating the hybrid generator further comprises applying a gradient update scheme to the mapping”. (Pg. 6, “To overcome these issues, a gradient penalty has been introduced. A soft version of a penalty on the gradient is included in the loss function to enforce the K-Lipschitz constraint: (Equation (5))” [iteratively updating the hybrid generator further comprises applying a gradient update scheme to the mapping]. Further see Sect. III. The examiner has interpreted including a gradient penalty in the loss function to enforce the K-Lipschitz constraint as wherein iteratively updating the hybrid generator further comprises applying a gradient update scheme to the mapping.)
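The gradient penalty cited from Guo's pg. 6 (a soft penalty enforcing the K-Lipschitz constraint) has the general WGAN-GP form λ·(‖∇x D(x̂)‖₂ − 1)², evaluated at interpolates x̂ between real and generated samples. A minimal sketch, using a hypothetical linear critic so the gradient is available in closed form:

```python
import math
import random

# Sketch of a WGAN-style gradient penalty term,
# lambda * (||grad_x D(x_hat)||_2 - 1)^2, evaluated at an interpolate
# x_hat between a real and a generated sample. A hypothetical linear
# critic D(x) = w . x is used so grad_x D(x_hat) = w in closed form;
# this is illustrative only, not code from the Guo reference.
random.seed(0)

w = [0.6, 0.8, 0.0]  # hypothetical critic weights; ||w||_2 = 1 here
lam = 10.0           # penalty coefficient lambda

def grad_penalty(x_real, x_gen):
    eps = random.random()  # epsilon ~ U(0, 1)
    x_hat = [eps * r + (1 - eps) * g for r, g in zip(x_real, x_gen)]
    grad = w               # a linear critic has gradient w at any x_hat
    norm = math.sqrt(sum(gi * gi for gi in grad))
    return lam * (norm - 1.0) ** 2

gp = grad_penalty([1.0, 2.0, 3.0], [0.5, 0.5, 0.5])
assert gp < 1e-9  # ||w||_2 = 1, so the penalty vanishes
```

For a nonlinear critic the gradient varies with x̂, and the penalty pushes its norm toward 1 everywhere along the real-to-generated interpolation.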
As per claim 9, Guo teaches “determining the set of sample parameters based on a design architecture of the system”. (Pg. 1, “Once a circuit topology is determined, component sizing and other continuous parameters can be optimized to determine the best possible performance of a given circuit topology.” Further see Sect. I. The examiner has interpreted that optimizing the parameters for the best performance of a circuit topology as determining the set of sample parameters based on a design architecture of the system.)
Re Claim 10, it is an article of manufacture claim, having limitations similar to those of claim 1. Thus, claim 10 is also rejected under a rationale similar to that cited in the rejection of claim 1.
Furthermore, regarding claim 10, Guo teaches “A non-transitory computer-readable storage medium storing instructions that when executed by a computer cause the computer to perform a method for determining system parameters”. (Pg. 10, “One strategy for performance prediction that combines active learning (using a predictive model) with the GAN-based topology generation is depicted in Fig. 7. After circuit generation, NSCs are used to filter out any remaining infeasible topologies. The predictive model is used to approximate the performance of feasible circuits. This strategy may be particularly appropriate in cases where circuit evaluation is computationally expensive, otherwise the performance can be evaluated using the original design optimization problem for each candidate circuit topology” [through the use of a computer, e.g., a non-transitory computer-readable storage medium storing instructions that when executed by a computer cause the computer to perform a method for determining system parameters]. Further see Sect. III and V. The examiner has interpreted using a strategy for performance prediction in cases where circuit evaluation is computationally expensive as a non-transitory computer-readable storage medium storing instructions that when executed by a computer cause the computer to perform a method for determining system parameters.)
Re Claim 12, it is an article of manufacture claim, having limitations similar to those of claim 3. Thus, claim 12 is also rejected under a rationale similar to that cited in the rejection of claim 3.
Re Claim 13, it is an article of manufacture claim, having limitations similar to those of claim 4. Thus, claim 13 is also rejected under a rationale similar to that cited in the rejection of claim 4.
Re Claim 14, it is an article of manufacture claim, having limitations similar to those of claim 5. Thus, claim 14 is also rejected under a rationale similar to that cited in the rejection of claim 5.
Re Claim 16, it is an article of manufacture claim, having limitations similar to those of claim 7. Thus, claim 16 is also rejected under a rationale similar to that cited in the rejection of claim 7.
Re Claim 18, it is an article of manufacture claim, having limitations similar to those of claim 9. Thus, claim 18 is also rejected under a rationale similar to that cited in the rejection of claim 9.
Re Claim 19, it is a system claim, having limitations similar to those of claim 1. Thus, claim 19 is also rejected under a rationale similar to that cited in the rejection of claim 1.
Furthermore, regarding claim 19, Guo teaches “A computer system, comprising: a storage device; a processor; a non-transitory computer-readable storage medium storing instructions, which when executed by the processor causes the processor to perform a method for determining system parameters”. (Pg. 10, “One strategy for performance prediction that combines active learning (using a predictive model) with the GAN-based topology generation is depicted in Fig. 7. After circuit generation, NSCs are used to filter out any remaining infeasible topologies. The predictive model is used to approximate the performance of feasible circuits. This strategy may be particularly appropriate in cases where circuit evaluation is computationally expensive, otherwise the performance can be evaluated using the original design optimization problem for each candidate circuit topology” [through the use of a computer, e.g., a computer system, comprising: a storage device; a processor; a non-transitory computer-readable storage medium storing instructions, which when executed by the processor causes the processor to perform a method for determining system parameters]. Further see Sect. III and V. The examiner has interpreted using a strategy for performance prediction in cases where circuit evaluation is computationally expensive as a computer system, comprising: a storage device; a processor; a non-transitory computer-readable storage medium storing instructions, which when executed by the processor causes the processor to perform a method for determining system parameters.)
Re Claim 20, it is a system claim, having limitations similar to those of claim 3. Thus, claim 20 is also rejected under a rationale similar to that cited in the rejection of claim 3.
Re Claim 21, it is a system claim, having limitations similar to those of claim 4. Thus, claim 21 is also rejected under a rationale similar to that cited in the rejection of claim 4.
Re Claim 22, it is a system claim, having limitations similar to those of claim 5. Thus, claim 22 is also rejected under a rationale similar to that cited in the rejection of claim 5.
Claims 2, 8, 11, and 17 are rejected under 35 U.S.C. § 103 as being unpatentable over Guo as applied to claims 1 and 10 above, and further in view of US Patent 11,804,050 to Milletari, Fausto, et al. [hereinafter “Milletari”].
As per claim 2, Guo does not specifically teach “determining a set of approximation points for the hybrid generator and generating the set of sample parameters based on the set of approximation points”.
However, in the same field of endeavor, namely using machine learning models to generate parameters for optimized designs, Milletari teaches “determining a set of approximation points for the hybrid generator and generating the set of sample parameters based on the set of approximation points”. (Col. 4, Ln. 43-45, “Weight determiner 126 may determine one or more weights for one or more values for one or more parameters” [determining a set of approximation points for the hybrid generator]. Col. 4, Ln. 49-54, “Parameter determiner 128 may determine one or more values of one or more parameters of machine learning model(s) 108 by aggregating one or more values of one or more corresponding parameters from one or more of training nodes 102 using one or more corresponding weights determined by weight determiner 126” [generating the set of sample parameters based on the set of approximation points]. Further see Col. 4. The examiner has interpreted determining one or more weights for one or more values for one or more parameters, and determining one or more values of one or more parameters using one or more corresponding weights, as determining a set of approximation points for the hybrid generator and generating the set of sample parameters based on the set of approximation points.)
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to add “determining a set of approximation points for the hybrid generator and generating the set of sample parameters based on the set of approximation points”, as conceptually seen from the teaching of Milletari, into that of Guo, because this modification, in which samples are generated from determined approximation points, serves the advantageous purpose of increasing the efficiency and accuracy of the model (Milletari, Col. 1, Ln. 11-22). Further motivation to combine is that Guo and Milletari are analogous art to the current claims, both being directed to using machine learning models to generate parameters for optimized designs.
As per claim 8, Guo does not specifically teach “determining a set of infeasible parameters from the predetermined distribution” and “excluding the set of infeasible parameters from the mapping”.
However, Milletari teaches “determining a set of infeasible parameters from the predetermined distribution”. (Col. 12, Ln. 58-63, “for each training node 102, discrepancy determiner 124 may compute a discrepancy value representative of an amount of discrepancy between updates, contributions and/or values for one or more parameters from a training node 102 and corresponding consensus from training nodes 102 (e.g., computed by consensus determiner 122)”. Col. 13, Ln. 18-25, “an amount of discrepancy between a contribution, update, and/or value of at least one parameter and consensus may be based at least in part on an amount of dispersion in contributions, updates, and/or values amongst training nodes 102. In at least one embodiment, discrepancy determiner 124 computes an amount of dispersion using an estimator (e.g., a statistical estimator) of scale of discrepancies between updates, values, and/or parameters from training nodes 102” [determining dispersion between parameter values from the training nodes, e.g., determining a set of infeasible parameters from the predetermined distribution]. Further, Col. 14, Ln. 10-18, “training nodes 102 having a larger amount of discrepancy (e.g., a greater distance from a consensus value) are more likely to be outliers than those having a smaller amount of discrepancy. In at least one embodiment, an outlier may indicate mistakes in data preprocessing, bugs, wrong hyper-parameter choices, deliberate adversarial actions, or other characteristics associated with a training node 102, which may negatively influence collaborative training of machine learning model(s) 108” [e.g., infeasible parameters]. Further see Col. 12-14. The examiner has interpreted computing a discrepancy based on the amount of dispersion from the parameter values in the training nodes as determining a set of infeasible parameters from the predetermined distribution.)
Milletari also teaches “excluding the set of infeasible parameters from the mapping”. (Col. 14, “weight determiner 126 maps a value(s) corresponding to one or more parameters from a training node 102 to a weight value using a model, such as a function and/or a machine learning model(s). In at least one embodiment, a value(s) corresponding to one or more parameters from a training node 102 is representative of an amount of discrepancy between a training node 102 (e.g., computed by discrepancy determiner 124) and a consensus of training nodes” [weight corresponds to discrepancy, e.g., infeasibility]. Col. 15 Ln. 39-45, “parameter determiner 128 may determine one or more values of one or more parameters of machine learning model(s) 108 by aggregating one or more values of one or more corresponding parameters from one or more of training nodes 102 using one or more corresponding weights determined by weight determiner 126” [parameters are determined based on discrepancy]. Col. 26 Ln. 27-31, “review analyzer 1114 may determine to exclude or include a portion of MLM parameter information 1120 (e.g., all parameters from a training node and/or a particular subset thereof) based on such votes (e.g., a vote count)” [excluding parameters based on discrepancy, e.g., excluding the subset of infeasible parameters]. Further, Col. 14 Ln. 10-18, “training nodes 102 having a larger amount of discrepancy (e.g., a greater distance from a consensus value) are more likely to be outliers than those having a smaller amount of discrepancy. In at least one embodiment, an outlier may indicate mistakes in data preprocessing, bugs, wrong hyper-parameter choices, deliberate adversarial actions, or other characteristics associated with a training node 102, which may negatively influence collaborative training of machine learning model(s) 108” [e.g., infeasible parameters]. Further, Col. 16, “values corresponding to an amount of discrepancy greater than a threshold value may be excluded” [excluding the set of infeasible parameters]. Further see Col. 14 and 26. The examiner has interpreted excluding a portion of the parameters, such as the particular subset whose discrepancy in the parameter values produces outliers, where a weight is determined based on that discrepancy, as excluding the set of infeasible parameters from the mapping.)
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to add “determining a set of infeasible parameters from the predetermined distribution” and “excluding the set of infeasible parameters from the mapping”, as conceptually seen from the teaching of Milletari, into that of Guo, because this modification of generating and segregating parameter subsets based on their feasibility serves the advantageous purpose of removing parameters that would create outliers in the training data (Milletari, Col. 14 Ln. 9-17). A further motivation to combine is that Guo and Milletari are analogous art to the current claims, as both are directed to using machine learning models to generate parameters for optimized designs.
Re Claim 11, it is an article of manufacture claim having limitations similar to those of claim 2. Thus, Claim 11 is also rejected under a rationale similar to that cited in the rejection of claim 2.
Re Claim 17, it is a machine claim having limitations similar to those of claim 8. Thus, Claim 17 is also rejected under a rationale similar to that cited in the rejection of claim 8.
Response to Arguments
Applicant’s arguments, see Pg. 11, filed November 25, 2025, with respect to the rejection(s) of claims 8 and 17 under 35 U.S.C. 112(a) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn.
Applicant’s arguments, see Pg. 11-15, filed November 25, 2025, with respect to the rejection(s) of the claims under 35 U.S.C. § 101 have been fully considered and are persuasive, since the claims as a whole improve the generative adversarial network, thereby improving the functioning of a computer and integrating the claimed invention into a practical application. Therefore, the rejection has been withdrawn.
Applicant's arguments filed on November 25, 2025, with regard to the rejection of the independent claims under 35 U.S.C. 102(a)(1), have been fully considered, but they are not persuasive.
Applicant argues that the reference does not teach each and every limitation in the amended claims because the cited reference fails to teach “wherein the enhanced GAN comprises a hybrid generator and a discriminator, and wherein the hybrid generator comprises a generator of the enhanced GAN and a physical model representing operations of the physical system” (See Applicant’s response, Pg. 15).
Applicant rightly cites MPEP § 2143.03, which states that “All words in a claim must be considered in judging the patentability of that claim against the prior art” and “Examiners must consider all claim limitations when determining patentability of an invention over the prior art.”
As mapped in the previous Office Action in claim 1, Guo discloses “wherein the enhanced GAN comprises a hybrid generator and a discriminator, and wherein the hybrid generator comprises a generator of the enhanced GAN and a physical model representing operations of the physical system” as using a GAN composed of a generator and a discriminator together with predictive modeling to enable performance prediction for the generated circuits, combined with the GAN generation. By incorporating the predictive model into the GAN framework and specifically combining the predictive model with the GAN generation, e.g., wherein the hybrid generator comprises a generator of the enhanced GAN and a physical model representing operations of the physical system, the claimed limitation is taught. Additional emphasis and continued citation from the primary reference have been added to this mapping in the rejection above with respect to the amended limitation.
Therefore, all of the limitations of amended claim 1 are disclosed in Guo. Accordingly, applicant’s arguments are not persuasive and the rejection of claim 1 as anticipated by Guo is maintained.
Applicant argues that the reference does not teach each and every limitation in the amended claims because the cited reference fails to teach “learning, by the discriminator of the enhanced GAN, to distinguish whether the set of sample parameters follows a response of the physical system within a tolerance range based on the set of outputs generated by the physical model” (See Applicant’s response, Pg. 15-16).
Applicant rightly cites MPEP § 2143.03, which states that “All words in a claim must be considered in judging the patentability of that claim against the prior art” and “Examiners must consider all claim limitations when determining patentability of an invention over the prior art.”
As mapped in the previous Office Action in claim 1, Guo discloses “learning, by the discriminator of the enhanced GAN, to distinguish whether the set of sample parameters follows a response of the physical system within a tolerance range based on the set of outputs generated by the physical model” as having the discriminator aim to distinguish fake samples from the real sample inputs, provide a performance prediction for generated circuits, and determine a feasible circuit topology using network structure constraints and iterative learning to filter out infeasible topologies. As the discriminator iteratively learns to distinguish fake samples from the real samples using network structure constraints to filter infeasible topologies, the claimed limitation is taught. Furthermore, as included in the rejection above, the network structure constraints serve as input when G(z), the output of the generator, is being evaluated by the discriminator. Additional emphasis and reference to the figures of the primary reference have been added to this mapping in the rejection above with respect to the amended limitation.
Therefore, all of the limitations of amended claim 1 are disclosed in Guo. Accordingly, applicant’s arguments are not persuasive and the rejection of claim 1 as anticipated by Guo is maintained.
Applicant argues that the reference does not teach each and every limitation in the amended claims because the Office Action does not address the limitation of “iteratively updating the hybrid generator and the discriminator of the enhanced GAN until outputs generated by the updated generator correspond to an expected output of the physical system, thereby ensuring feasibility for the set of sample parameters” (See Applicant’s response, Pg. 16).
Applicant rightly cites MPEP § 2143.03, which states that “All words in a claim must be considered in judging the patentability of that claim against the prior art” and “Examiners must consider all claim limitations when determining patentability of an invention over the prior art.”
The examiner would like to refer the applicant to Pg. 18-19 of the Final Rejection mailed September 25, 2025. As mapped in the previous Office Action in claim 1 and further above, Guo discloses “iteratively updating the hybrid generator and the discriminator of the enhanced GAN until outputs generated by the updated generator correspond to an expected output of the physical system, thereby ensuring feasibility for the set of sample parameters” as updating the generator and discriminator iteratively, wherein the discriminator aims to distinguish the fake samples given by the generator from the real samples and the generator seeks to produce fake samples that fool the discriminator such that the probability density of the fake samples is as close to that of the real samples as possible, and as using predictive modeling to enable performance prediction for the generated circuits, which are filtered to determine feasible circuit topologies. Since the discriminator is no longer able to distinguish the fake samples from the real samples, in addition to being provided samples generated by the generator to yield a performance prediction for only feasible circuit topologies, the claimed limitation is taught. Additional emphasis has been added to this mapping in the rejection above with respect to the amended limitation.
Therefore, all of the limitations of amended claim 1 are disclosed in Guo. Accordingly, applicant’s arguments are not persuasive and the rejection of claim 1 as anticipated by Guo is maintained.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Dutta, Rahul, Salahuddin Raju, Ashish James, Chemmanda John Leo, Yong-Joon Jeon, Balagopal Unnikrishnan, Chuan Sheng Foo, Zeng Zeng, Kevin Tshun Chuan Chai, and Vijay R. Chandrasekhar. “Learning of multi-dimensional analog circuits through generative adversarial network (GAN).” In 2019 32nd IEEE International System-on-Chip Conference (SOCC), pp. 394-399. IEEE, 2019 teaches a method using a GAN to model the performance of analog circuits using simulated samples.
Wang, Hai Peng, Yun Bo Li, He Li, Shu Yue Dong, Che Liu, Shi Jin, and Tie Jun Cui. “Deep learning designs of anisotropic metasurfaces in ultrawideband based on generative adversarial networks.” Advanced Intelligent Systems 2, no. 9 (2020): 2000068 teaches training a generator and discriminator in a GAN to find the electromagnetic response of a system. The method also uses a predictor to generate the values of the output from the GAN.
Examiner’s Note: The examiner has cited particular columns and line numbers in the reference applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. The applicant is respectfully requested to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or discussed by the examiner. In the case of amending the claimed invention, the applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for the proper interpretation, and also to verify and ascertain the metes and bounds of the claimed invention.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Simeon P Drapeau whose telephone number is (571) 272-1173. The examiner can normally be reached Monday - Friday, 8 a.m. - 5 p.m. ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ryan Pitaro can be reached on (571) 272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SIMEON P DRAPEAU/ Examiner, Art Unit 2188
/RYAN F PITARO/ Supervisory Patent Examiner, Art Unit 2188