Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Applicant's amendment, filed 12/18/2025, has been entered and carefully considered. Claims 1-11 and 20-30 are pending, and claim 20 has been amended.
The objection to the specification is withdrawn.
Response to Arguments
Applicant's arguments filed 12/18/2025 have been fully considered but they are not persuasive.
Regarding claim 1, Applicant argued that Zhou fails to anticipate or suggest "generating a channel model for the wireless channel using a generative adversarial network (GAN)" and "generating a first set of simulated output data by transforming the first set of input data using the channel model." Applicant further argued that, notably, the generated data samples are not stated as being associated in any way with a channel model of a wireless channel, and that the training data samples are not equivalent to a "channel model."
Examiner respectfully disagrees.
Zhou discloses in paragraph [0029] that an Input Vector 110 is provided to a Generator Model 115. The Generator Model 115 is a generator neural network of a GAN and is generally trained to produce Generated Data 120 based on the Input Vector 110. The Generator Model 115 is a machine learning model (e.g., a neural network) trained to generate simulated data.
Paragraph [0031] discloses that the Generated Data 120 is then provided to a Discriminator Model 130, and that the Discriminator Model 130 is a discriminator neural network of a GAN. Generally, the Discriminator Model 130 is trained to distinguish true input data (e.g., Provided Data 125) from simulated data created by the Generator Model 115 (e.g., Generated Data 120). That is, the Generator Model 115 may learn to generate Generated Data 120 that approximates, matches, or simulates the Provided Data 125, while the Discriminator Model 130 learns to distinguish Generated Data 120 from Provided Data 125.
Paragraph [0064] discloses FIG. 7 is a flow diagram illustrating a method 700 of training a generator model based on a pre-trained classifier model, according to some embodiments disclosed herein. The method 700 begins at block 705, where the system receives a classifier model that was trained using one or more data samples in a target class. At block 710, the system trains a generative adversarial network (GAN) to generate simulated data samples for the target class. The method 700 then continues to block 715, where the system generates a first simulated data sample using a generator model.
Paragraph [0068] discloses that the Generator Model 115 and Discriminator Model 130 form a GAN. The Generator Model 115 is a machine learning model (e.g., a neural network) trained to generate simulated data. In one embodiment, the Generator Model 115 is trained to generate simulated data for a particular class of data that the Pre-Trained Classifier 140 has been trained for. Further, the Discriminator Model 130 is trained alongside the Generator Model 115 and provides pressure to make the Generator Model 115 improve the accuracy of its generated data samples.
Applicant's specification (US 2023/0155704 A1), paragraphs [0029]-[0031] and [0038], discloses that the channel model is produced using a neural network (e.g., the generator network of a GAN), and that the channel model can be used to generate simulations using parallelization, enabling vastly improved efficiencies and reduced latency when generating predicted or simulated data. Paragraph [0040] discloses that the channel model 145 is generated by processing some latent input (e.g., a probability distribution) with a trained generator, as discussed in more detail below. In some aspects, the generator is conditioned using antenna indices (e.g., using each pair of receiving and transmitting antennas), enabling fine-grained representation of MIMO channels.
Similarly, Zhou discloses training a generative adversarial network (GAN) to generate simulated data samples for the target class, comprising: generating a first simulated data sample using a generator model.
Thus, Zhou discloses the mechanism of "generating a channel model for the wireless channel using a generative adversarial network (GAN)" and "generating a first set of simulated output data by transforming the first set of input data using the channel model."
Similar arguments apply to claims 20 and 30.
The rejection of claims 1-11 and 20-30 is maintained.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 9-10 and 20-22, 28-30 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Zhou et al. (US 2022/0180203 A1).
Regarding claims 1, 20 and 30, Zhou discloses a processor-implemented method of generating simulated output data, comprising/ a processing system comprising means for/ a processing system, comprising: a memory comprising computer-executable instructions and one or more processors configured to execute the computer-executable instructions and cause the processing system to perform an operation comprising (Fig. 6 discloses a method of generating and evaluating training data. Fig. 8 discloses CPU 805 and a memory 810): receiving a first set of input data for data transmitted, from a transmitter, as a signal in a wireless channel (Fig. 6 discloses at block 605, where the system generates or receives one or more input vectors. As discussed above, these input vectors may be randomly generated);
generating a channel model for the wireless channel using a generative adversarial network (GAN) (Fig. 6 discloses at block 610, the system generates one or more simulated data samples based on the input vectors. For example, the system may provide the input vectors to the generator model, which outputs a corresponding simulated data sample for each input vector); and generating a first set of simulated output data by transforming the first set of input data using the channel model (Fig. 6 discloses at block 615, where the simulated data samples are evaluated. This may include, as discussed above, identifying features that are shared across the simulated data samples, identifying features that are absent from the simulated data samples).
Regarding claims 2 and 21, Zhou discloses wherein the GAN was trained by: training a generator network to generate the channel model (Fig. 7 discloses the method 700 then continues to block 715, where the system generates a first simulated data sample using a generator model); and training a discriminator network to classify output data as real or simulated (Fig. 4 discloses this Discriminator Loss 135 is then used to refine the weights or parameters of the Discriminator Model 130 such that it more accurately distinguishes the real and simulated data. The Discriminator Model 130 is a machine learning model (e.g., a neural network) trained to distinguish between real data samples (e.g., those used to train the Pre-Trained Classifier 140) and simulated data samples (generated by the Generator Model 115). In embodiments, the Discriminator Model 130 is trained alongside the Generator Model 115 and provides pressure to make the Generator Model 115 improve the accuracy of its generated data samples).
Regarding claims 3 and 22, Zhou discloses evaluating model consistency by processing the first set of simulated output data using the discriminator network (Fig. 7, steps 715-730, discloses block 715, where the system generates a first simulated data sample using a generator model. At block 720, the system computes a first discriminator loss by processing the first simulated data sample using a discriminator model. Further, at block 725, the system computes a classifier loss by processing the first simulated data sample using the classifier model. The method 700 then continues to block 730, where the system refines the generator model based on the first discriminator loss and the classifier loss); and upon determining that the model consistency does not meet defined criteria, refraining from using the channel model (Fig. 7, steps 715-730; see also paragraphs [0061]-[0063], which disclose: For example, the features which are present (or absent) from the simulated data samples can be used to determine or infer what features the underlying pre-trained classifier is actually relying on. Similarly, in some embodiments, if the simulated data samples do not appear to depict the target class, one may determine or infer that the underlying classifier is not particularly accurate for the class, and/or was trained on insufficient data for the class.
[0062] Based on the results of this analysis, a variety of steps can be taken. This may include, for example, collecting and/or using additional training data for the target class, refraining from using the classifier model in production, and the like.
[0063] The method 600 then continues to block 620, where the system determines whether one or more termination criteria are satisfied. This may include, for example, a number of simulated data samples that should be generated, determining whether a user has initiated another round (or has terminated the process), and the like. If the criteria are not satisfied, the method 600 returns to block 605. Otherwise, the method 600 terminates at block 625).
Regarding claims 9 and 28, Zhou discloses wherein generating the first set of simulated output data comprises convolving the first set of input data with the channel model (Paragraphs 0030-0032 disclose the Input Vector 110 is a randomized vector used as input to the Generator Model 115. Generally, changing the Input Vector 110 results in a different output of Generated Data 120. During training, the Generator Model 115 learns to produce Generated Data 120 that approximates or matches the Target Label 105).
Regarding claims 10 and 29, Zhou discloses modifying one or more transmission parameters of the transmitter based on the first set of simulated output data (Paragraphs 0030-0032 disclose changing the Input Vector 110 results in a different output of Generated Data 120. During training, the Generator Model 115 learns to produce Generated Data 120 that approximates or matches the Target Label 105).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 4-8 and 23-27 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. in view of Piazza et al. (US 20150341103 A1) and further in view of Trost et al. (US 2021/0271591 A1).
Regarding claims 4 and 23, Zhou does not disclose wherein generating the channel model comprises processing latent input, a transmitting antenna index, and a receiving antenna index using the GAN.
In an analogous art, Piazza discloses wherein generating the channel model comprises processing a transmitting antenna index and a receiving antenna index using the GAN (Paragraph 0125 discloses a method of using the channel model to build the look-up tables, which will need to be properly selected based on the particular application. Parts of such a system would include a transmitter antenna array and a receiver antenna array (which can use both linear and non-linear receivers) with multiple reconfigurable elements).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the technique of Piazza to the system of Zhou to provide systems and methods for efficiently using multi-element reconfigurable antennas in MIMO, SIMO and MISO systems (Abstract, Piazza).
The combination of Zhou and Piazza does not disclose wherein generating the channel model comprises processing latent input.
In an analogous art, Trost discloses wherein generating the channel model comprises processing latent input (Paragraph 0035 discloses the model is then trained as follows: first, the input is encoded as a distribution over the latent space; second, a point from the latent space is sampled from that distribution).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the technique of Trost to the modified system of Zhou and Piazza to provide a method, system, and computer program for generating mock data using generative adversarial networks (Abstract, Trost).
Regarding claims 5 and 24, Zhou and Piazza do not disclose wherein the latent input is sampled from a Gaussian distribution.
In an analogous art, Trost discloses wherein the latent input is sampled from a Gaussian distribution (paragraphs 0035-0037, 0022).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the technique of Trost to the modified system of Zhou and Piazza to provide a method, system, and computer program for generating mock data using generative adversarial networks (Abstract, Trost).
Regarding claims 6 and 25, Zhou and Piazza do not disclose wherein the latent input represents channel state information.
In an analogous art, Trost discloses wherein the latent input represents channel state information (Paragraphs [0035] and [0023] disclose the model is then trained as follows: first, the input is encoded as a distribution over the latent space; second, a point from the latent space is sampled from that distribution. Paragraph [0023] further discloses that "Generative Adversarial Networks" (GANs) are a deep-learning-based generative model. More generally, GANs are a model architecture for training a generative model, and it is most common to use deep learning models in this architecture. GANs train a generative model by framing the problem as a supervised learning problem with two sub-models: a generator model that is trained to generate new examples, and a discriminator model that classifies data as either real (from the domain) or fake (generated). The two models are trained together in an adversarial, zero-sum game until the discriminator model is fooled about half the time, meaning the generator model is generating plausible examples).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the technique of Trost to the modified system of Zhou and Piazza to provide a method, system, and computer program for generating mock data using generative adversarial networks (Abstract, Trost).
Regarding claims 7 and 26, Zhou and Piazza do not disclose wherein the latent input stores the channel state information in a compact manner and can be used to aid channel reconstruction.
In an analogous art, Trost discloses wherein the latent input stores the channel state information in a compact manner and can be used to aid channel reconstruction (Paragraph 0035 discloses “Variational autoencoder” is an architecture composed of an encoder and a decoder and trained to minimize the reconstruction error between the encoded-decoded data and the initial data. However, instead of encoding an input as a single point, it is encoded as a distribution over the latent space. The model is then trained as follows: first, the input is encoded as distribution over the latent space; second, a point from the latent space is sampled from that distribution; third, the sampled point is decoded and the reconstruction error can be computed; and finally, the reconstruction error is backpropagated through the network).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the technique of Trost to the modified system of Zhou and Piazza to provide a method, system, and computer program for generating mock data using generative adversarial networks (Abstract, Trost).
Regarding claims 8 and 27, Zhou does not disclose wherein the transmitting antenna index and the receiving antenna index are used to condition the channel model, and wherein, prior to generating the channel model, the transmitting antenna index and receiving antenna index are embedded by transforming them to a vector space.
In an analogous art, Piazza discloses wherein the transmitting antenna index and the receiving antenna index are used to condition the channel model, and wherein, prior to generating the channel model, the transmitting antenna index and receiving antenna index are embedded by transforming them to a vector space (Paragraph 0125 discloses a method of using the channel model to build the look up tables will need to be properly selected based on the particular application. Parts of such a system would include a transmitter antenna array and a receiver antenna array (which can use both linear and nonlinear receivers) with multiple reconfigurable elements. Paragraph 0032 discloses the mechanism of signal vector at the transmit antenna array and receiver antenna array).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the technique of Piazza to the system of Zhou to provide systems and methods for efficiently using multi-element reconfigurable antennas in MIMO, SIMO and MISO systems (Abstract, Piazza).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. in view of Brauer (US 2021/0272273 A1).
Regarding claim 11, Zhou discloses receiving a first set of output data that was received in the wireless channel by a receiver; and determining a difference between the first set of simulated output data and the first set of output data (Paragraphs 0030-0031 disclose the discriminator model is trained to differentiate between simulated data samples generated by the generator model and data samples used to train the classifier model).
Zhou does not disclose the mechanism of upon determining that the difference exceeds a defined threshold, using a default configuration for the transmitter.
In an analogous art, Brauer discloses upon determining that the difference exceeds a defined threshold, using a default configuration for the transmitter (Paragraphs [0012] and [0044] disclose the method includes generating a simulated image for a specimen by inputting a portion of design data for the specimen into a GAN. The inputting is performed by one or more computer subsystems. One or more components are executed by the one or more computer subsystems. The one or more components include the GAN. The GAN is trained with a training set that includes portions of design data for one or more specimens designated as training inputs and corresponding images of the one or more specimens designated as training outputs. The computer subsystem may compare the output of the detectors to a threshold. Any output having values above the threshold may be identified as an event (e.g., a potential defect) while any output having values below the threshold may not be identified as an event).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the technique of Brauer to the system of Zhou to provide methods and systems for generating a simulated image of a specimen using a generative adversarial network (GAN) (Abstract, Brauer).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Srivastava et al. (US 20210133539 A1) discloses a generator network of a variational autoencoder can be trained to approximate a simulator and generate a first result. The simulator is associated with input data, based on which the simulator outputs output data. A training data set for the generator network can include the simulator's input data and output data. Based on the simulator's output data and the first result of the generator network, an inference network of the variational autoencoder can be trained to generate a second result.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROMANI OHRI whose telephone number is (571)272-5420. The examiner can normally be reached 8:00am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, UN C CHO, can be reached at 571-272-7919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROMANI OHRI/Primary Examiner, Art Unit 2413