Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The previous objections to, and 35 U.S.C. 112 rejections of, the relevant claims are withdrawn in view of the amendments to, and arguments concerning, those claims.
Response to Arguments
Applicant’s arguments have been fully considered, but they are not persuasive. Regarding the gist of Applicant’s argument (see Remarks, p. 8) that the prior art does not teach using the results/outputs of one machine learning model to train another, the Examiner points out, first, that this technique is well known, as shown in the rejection below, and, second, that a single model can also be retrained on its own outputs, or on further training sets, for fine tuning or for further predictions, resulting in a different, more robust trained model. With this in mind, Haisch teaches a controller that “generates a training set… for the ANN by generating a plurality of… vectors” (C1, L47-48).
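For illustration only, the above-noted concept of a single model being retrained on its own outputs (self-training) can be sketched in Python; the "model" here is a toy mean estimator, and all names are hypothetical rather than drawn from any cited reference:

```python
# Toy "self-training" sketch: a model's own predictions are folded back
# into its training data, and the model is refit on the enlarged set.
def fit_mean(values):
    """Stand-in for training: the model is simply the mean of its data."""
    return sum(values) / len(values)

labeled = [2.0, 2.0, 2.0]
model = fit_mean(labeled)             # initial trained model
pseudo = [model for _ in range(3)]    # model's own outputs on new inputs
model = fit_mean(labeled + pseudo)    # retrained on its own outputs
print(model)   # -> 2.0
```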
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4, 6-12, 14, 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Haisch, US 11,841,391 B1, in view of Nataraj, US 2022/0253579 A1.
Regarding Claim 1, Haisch teaches:
A system to develop and test machine learning models, comprising (C4, L12-14: “an ANN having five layers and ten hidden units with a sigmoid cost function was used to model the functional relationship between X and Y”. And; C2, L29-32: “In one aspect of the invention, the method also includes testing the ANN with a different training set than that used to train the ANN”):
a waveform emulator machine learning system having a first machine learning model (Abstract: “A test apparatus and a method for operating a data processing system to generate a test signal for testing a DUT are disclosed. The apparatus includes a signal generator, artificial neural network, and controller”. The apparatus with the components as described constituting the waveform emulator machine learning system);
a user interface to allow a user to input one or more design parameters for the waveform emulator machine learning system (C5, L31-32: “the user specifies a desired set of signal parameters to controller 52 over user interface”);
and one or more processors configured to execute code to cause the one or more processors to (C2, L35-38):
send the one or more design parameters to the waveform emulator machine learning system (C5, L31-35: “the user specifies a desired set of signal parameters to controller 52 over user interface 56. Controller 52 sends target Y value to the ANN 54 which predicts the control X vector and signal generator 53 generates the desired signal”);
receive one or more data sets from the waveform emulator machine learning system, the one or more data sets based on the one or more design parameters (C1, L65-67: “the controller generates training sets for each of a plurality of different fixed parameter vectors”. And; C3, L46-48: “The ANN is trained using a data set that is obtained by randomly inputting values for the various input parameters to the signal generator”);
validate the trained machine learning model using a previously unused one of the one or more data sets (C1, L60-64: “the controller tests the weights and biases by selecting Y values that were not used to train the ANN, inputting each selected Y to the ANN, determining the calculated parameters from the generated test signal, and comparing the calculated parameters with the selected Y”. And; C5, L23-29: “the test set used to train ANN 54 is divided into two parts. The second part typically consists of a small test set having 10 percent of the test vectors. The larger part is used to train ANN 54 as discussed above. The smaller part is then used to test ANN 54 to verify that ANN 54 generates the X vectors”. The testing being the validating of the machine learning model. See also Nataraj, paragraph 27);
adjust the trained machine learning model based upon validation results; and repeat the training, validating, and adjusting until an accurate machine learning model that has passed validation is trained (C5, L6-21 “controller 52 tests ANN 54 to verify that the trained ANN actually operates as desired. In one aspect of the invention, controller 52 generates random sets of desired signal properties within the ranges permitted for the signal properties and inputs the corresponding vector Y to ANN, which translates that vector to a corresponding control vector X. Signal generator 53 then generates a signal on line 32 which is analyzed by analyzer 51 to extract the actual signal properties. These observed properties are then compared with the Y value given to ANN 54. If the two vectors match to within some predetermined error limit, the test is defined to have passed. The process is repeated for a number of different randomly selected Y values. If the test fails for some subset of the allowable Y values, the user has the option to increase the training set by selecting additional training vectors and repeating”. See also Nataraj, paragraph 27).
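For illustration only, the train/validate/adjust cycle described in the passages cited above (including the division of the data set into a larger training part and a smaller held-out part) can be sketched in Python; the linear fit is a toy stand-in for ANN training, and every name is hypothetical rather than any reference's actual implementation:

```python
import random

def fit_linear(pairs):
    """Least-squares slope for y ~ a*x (toy stand-in for training an ANN)."""
    num = sum(x * y for x, y in pairs)
    den = sum(x * x for x, y in pairs)
    return num / den

def validate(slope, held_out, tol=1e-6):
    """Validate using vectors that were NOT used for training."""
    return all(abs(slope * x - y) <= tol for x, y in held_out)

def develop_model(data, tol=1e-6, max_rounds=5):
    random.shuffle(data)
    cut = max(1, int(0.9 * len(data)))       # ~90% train / ~10% validate
    train_part, held_out = data[:cut], data[cut:]
    for _ in range(max_rounds):
        slope = fit_linear(train_part)
        if validate(slope, held_out, tol):
            return slope                      # model has passed validation
        # "Adjust": enlarge the training set and repeat the training.
        train_part = train_part + held_out[:1]
    raise RuntimeError("model failed validation")

# Toy data obeying y = 2*x exactly, so validation passes immediately.
data = [(x, 2.0 * x) for x in range(1, 21)]
print(round(develop_model(data), 6))   # -> 2.0
```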
While Haisch may have taught the following, Nataraj shows it more directly:
train a second, developed machine learning model using at least one of the one or more data sets, resulting in a trained machine learning model (paragraph 7: “a method of hardware development includes training a machine-learning model to replicate behavior of a hardware system under development, using output of a first model of the hardware system. The machine-learning model is distinct from the first model”. That is output from one model is used as training data for training another model. See also Haisch, C4, L10: “The training set is then used to train an ANN”; and Sun, US 20230376751 A1, Abstract: “fine-tuning a second neural network, based on (1) pretrained parameters from the first neural network”);
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use the teachings of Nataraj with that of Haisch for training a second, developed machine learning model using at least one of the one or more data sets, resulting in a trained machine learning model.
The ordinary artisan would have been motivated to modify Haisch in the manner set forth above for the purposes of training a machine-learning model for replicating behavior of a hardware system under development, using output of a first model [Nataraj: Abstract].
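For illustration only, the cited concept of using the outputs of a first model as training data for a second, developed model can be sketched in Python; both "models" are toy linear stand-ins with hypothetical names, not any reference's implementation:

```python
# Model A (the "first model"): a fixed function standing in for a trained model.
def model_a(x):
    return 3.0 * x + 1.0

# Generate a data set from model A's outputs.
inputs = [float(i) for i in range(10)]
data_set = [(x, model_a(x)) for x in inputs]

# Train model B (the "second, developed model") on model A's outputs:
# here, an ordinary least-squares fit of y ~ a*x + b.
n = len(data_set)
sx = sum(x for x, _ in data_set)
sy = sum(y for _, y in data_set)
sxx = sum(x * x for x, _ in data_set)
sxy = sum(x * y for x, y in data_set)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n
print(round(a, 6), round(b, 6))   # -> 3.0 1.0 (model B replicates model A)
```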
Regarding Claim 2, Haisch further teaches:
The system as claimed in claim 1, wherein the one or more design parameters comprises one or more design parameters determined from waveform parameters for one or more desired measurement outcomes (C2, L6-8: “the input parameters defining the characteristics of the interference signal and a modulation pattern”. And; C1, L38-43: “The controller receives desired values for the calculated parameters and couples those desired values to the neural network inputs, thereby causing the test signal generator to generate a test signal having the desired values for the calculated parameters”).
Regarding Claim 4, Haisch further teaches:
The system as claimed in claim 1, wherein the one or more design parameters comprises one or more design parameters extracted from a data set derived from a set of devices under test (C1, L29-43: “The present invention includes a test apparatus and a method for operating a data processing system to generate a test signal for testing a DUT. The apparatus includes a signal generator, artificial neural network (ANN), and controller. The signal generator generates a test signal determined by a plurality of signal generator input parameters, X, that are coupled thereto. The test signal is characterized by a plurality of calculated parameters, Y. The ANN has the calculated parameters as inputs and a plurality of outputs connected to the plurality of signal generator inputs. The controller receives desired values for the calculated parameters and couples those desired values to the neural network inputs, thereby causing the test signal generator to generate a test signal having the desired values for the calculated parameters”. DUT is the device under test).
Regarding Claim 6, Haisch further teaches:
The system as claimed in claim 1, wherein the one or more processors are further configured to: operate the accurate machine learning model in a run-time environment (C5, L30-38: “In the fifth phase, the test system enters the production mode in which the user specifies a desired set of signal parameters to controller 52 over user interface 56. Controller 52 sends target Y value to the ANN 54 which predicts the control X vector and signal generator 53 generates the desired signal. Controller 52 can verify that the output signal has the desired properties by instructing analyzer 51 to compute the actual properties of the output signal and compare those to the Y value sent to ANN”. The production mode being the run-time environment);
and periodically check an error metric of the accurate machine learning model (C5, L6-18: “controller 52 tests ANN 54 to verify that the trained ANN actually operates as desired. In one aspect of the invention, controller 52 generates random sets of desired signal properties within the ranges permitted for the signal properties and inputs the corresponding vector Y to ANN, which translates that vector to a corresponding control vector X. Signal generator 53 then generates a signal on line 32 which is analyzed by analyzer 51 to extract the actual signal properties. These observed properties are then compared with the Y value given to ANN 54. If the two vectors match to within some predetermined error limit, the test is defined to have passed. The process is repeated for a number of different randomly selected Y values”. The predetermined error limit corresponding to the error metric).
Regarding Claim 7, Haisch further teaches:
The system as claimed in claim 6, wherein the one or more processors are further configured to request retraining of the accurate machine learning model when the error metric fails (C5, L18-22: “If the test fails for some subset of the allowable Y values, the user has the option to increase the training set by selecting additional training vectors and repeating the third phase with the larger training set”).
Regarding Claim 8, Haisch further teaches:
The system as claimed in claim 7, wherein the one or more processors are further configured to use device under test data from one or more data sets upon which the error metric failed for retraining or adjustment of the accurate machine learning model to develop a new machine learning model design (C5, L18-22: “If the test fails for some subset of the allowable Y values, the user has the option to increase the training set by selecting additional training vectors and repeating the third phase with the larger training set”. That is the training dataset is increased with additional training vectors added to the previous training set that had training vectors that resulted in the failed test; and the ANN model is then retrained with this expanded training dataset).
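For illustration only, the run-time error-metric check of Claims 6-8 (periodically checking the metric, requesting retraining when it fails, and folding the failed vectors into an enlarged training set) can be sketched in Python; all names and data are hypothetical:

```python
def check_error_metric(predict, probes, limit=0.1):
    """Periodic run-time check: flag inputs where the model's output
    deviates from the observed value by more than the error limit."""
    return [(x, y) for x, y in probes if abs(predict(x) - y) > limit]

def monitor_step(predict, probes, training_set, limit=0.1):
    failed = check_error_metric(predict, probes, limit)
    if failed:
        # Metric failed: request retraining, adding the failed vectors
        # to an enlarged training set for the retraining pass.
        return "retrain", training_set + failed
    return "continue", training_set   # keep operating in run-time mode

predict = lambda x: 2.0 * x
probes = [(1.0, 2.0), (2.0, 4.0), (3.0, 9.0)]   # the last probe deviates
action, new_set = monitor_step(predict, probes, [])
print(action, new_set)   # -> retrain [(3.0, 9.0)]
```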
Regarding Claim 9, Haisch further teaches:
The system as claimed in claim 6, wherein the one or more processors are further configured to continue to operate the accurate machine learning model in the run-time environment when the error metric passes (C5, L15-17: “If the two vectors match to within some predetermined error limit, the test is defined to have passed”. And; C5, L30-31: “In the fifth phase, the test system enters the production mode”. The production mode being the run-time environment).
Regarding Claim 12, Haisch further teaches:
The method as claimed in claim 11, wherein the one or more desired measurement outcomes comprises a range of Transmitter Dispersion and Eye Closure Quaternary (TDECQ) values (C3, L14-18: “Analyzer 27 measures a set of parameters that characterize the corruption of the original modulated signal. For 4-level pulse amplitude modulation, the parameters include a transmitter and eye closure (TDECQ) and the extinction ratio or quantities calculated from these parameters”).
Regarding Claim 20, Haisch further teaches:
The method as claimed in claim 10, wherein the developed machine learning model comprises a neural network (C4, L11-20: “The training set is then used to train an ANN. In one exemplary embodiment, an ANN having five layers and ten hidden units with a sigmoid cost function was used to model the functional relationship between X and Y. The trained ANN is then used as an input to signal generator 30. Refer now to FIG. 3, which illustrates a test system 40 that utilizes such an ANN to run the signal generator. In test system 40, the user enters the calculated parameters Y to the trained ANN 35 whose output X is input to signal generator 30, which provides the test signal having the desired properties on line 32 to the DUT”).
Claims 3, 5, 13, 15 are rejected under 35 U.S.C. 103 as being unpatentable over Haisch, US 11,841,391 B1, in view of Nataraj, US 2022/0253579 A1, and further in view of Dutta, US 2022/0027536 A1.
Regarding Claim 3, with Haisch and Nataraj teaching those limitations of the claim as previously pointed out, Haisch and Nataraj may not have explicitly taught all of the following; however, Dutta shows:
The system as claimed in claim 1, wherein the one or more design parameters comprises a sweep of values for each of the one or more design parameters (paragraphs 3, 55, 84: “Generation of training data is typically done by perturbing circuit parameters (input design parameters or input vectors)”. And; paragraph 89: “the initial batch of training data generation uses 39 input variables to tweak or perturb with N sweeps, that is, design input parameters”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use the teachings of Dutta with that of Haisch and Nataraj for having one or more design parameters comprising a sweep of values for each of the one or more design parameters.
The ordinary artisan would have been motivated to modify Haisch and Nataraj in the manner set forth above for the purposes of capturing and generalizing design boundaries (a priori) and observing the corresponding response through electronic design automation (EDA) simulators (EDA tools) on various performance targets [Dutta: paragraph 3].
Regarding Claim 5, with Haisch and Nataraj teaching those limitations of the claim as previously pointed out, Dutta further teaches:
The system as claimed in claim 4, wherein the data set derived from the set of devices under test includes ranges of values and step sizes of parameter sweeps from the data set derived from the devices under test (paragraphs 54, 81: “In various embodiments, the parameter value of the above-mentioned each input design parameter ranges from −1 to 1, and the first predetermined threshold condition is an absolute parameter value of about 0.5 or greater”. See also Haisch, for example, Claim 3).
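For illustration only, the cited notion of design-parameter sweeps defined by ranges of values and step sizes can be sketched in Python; the (start, stop, step) specifications and names are hypothetical, chosen only to echo Dutta's −1 to 1 parameter range:

```python
from itertools import product

def sweep(ranges):
    """Generate all combinations of swept design-parameter values from
    (start, stop, step) specifications."""
    axes = []
    for start, stop, step in ranges:
        vals, v = [], start
        while v <= stop + 1e-9:        # small tolerance for float accumulation
            vals.append(round(v, 6))
            v += step
        axes.append(vals)
    return list(product(*axes))

# Two parameters, each swept over [-1, 1] in steps of 0.5.
grid = sweep([(-1.0, 1.0, 0.5), (-1.0, 1.0, 0.5)])
print(len(grid))   # -> 25 combinations (5 x 5)
```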
Claims 10-11 and 13-19 are similar to Claims 1-9, respectively, and are rejected under the same rationale as stated above for those claims.
Examiner's Note:
The Examiner cites particular pages, sections, columns, line numbers, and/or paragraphs of the references as applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the cited passages and the additional related prior art made of record, which is considered pertinent to Applicant's disclosure and further shows the general state of the art. The Examiner's interpretations in parentheses accompany the cited references to assist the Applicant in understanding how the Examiner interprets the prior art to read on the claims. Such comments are entirely consistent with the intent and spirit of compact prosecution.
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure. See PTO-892 for the relevant prior art, where, for example, Barbu, US 2021/0391832 A1, teaches a machine learning model configured to determine digital pre-distortion parameters for a power amplifier based on an emulated feedback signal.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVE MISIR, whose telephone number is (571) 272-5243. The examiner can normally be reached Monday-Thursday, 8:00 am-5:00 pm, and some hours on Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Al Kawsar, can be reached at (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVE MISIR/Primary Examiner, Art Unit 2127