Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
Applicant’s arguments, see Page 6 of the remarks, filed 01/05/2026, with respect to Claims 7-8 have been fully considered and are persuasive. The 35 USC 101 rejection of claims 7-8 has been withdrawn.
Claim Rejections - 35 USC § 112
Applicant’s arguments, see Pages 6-7 of the remarks, filed 01/05/2026, with respect to Claims 1-10 have been fully considered and are persuasive. The 35 USC 112 rejection of claims 1-10 has been withdrawn.
Claim Rejections - 35 USC § 102
Applicant’s arguments and amendments, see Pages 8-9 of the remarks, filed 01/05/2026, with respect to the rejection(s) of claim(s) 1, 2, 4-8 and 10 under 35 USC 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration of the arguments and amendments, a new ground(s) of rejection is made in view of Guanhong Tao, NPL, “Trace Divergence Analysis and Embedding Regulation for Debugging Recurrent Neural Networks”, necessitated by the amended claims.
Claim Rejections - 35 USC § 103
Applicant’s arguments, see Pages 9-10 of the remarks, filed 01/05/2026, with respect to the rejection(s) of claim(s) 3 and 9 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in further view of Guanhong Tao, NPL, “Trace Divergence Analysis and Embedding Regulation for Debugging Recurrent Neural Networks”.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 4-8 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Arijit Nandi, NPL, “Improving the Performance of Neural Networks with an Ensemble of Activation Functions”, IEEE, Published: July 2020 (hereafter Nandi), in view of Guanhong Tao, NPL, “Trace Divergence Analysis and Embedding Regulation for Debugging Recurrent Neural Networks” (hereafter Tao).
Regarding claim 1. Nandi teaches a system for training an ensemble neural network device (Page 4, Sec III, FFNN structure is trained with five different activation functions), comprising one or more computer processors and one or more computer-readable media operatively coupled to the one or more computer processors,
wherein the one or more computer-readable media store instructions that, when executed by the one or more computer processors (Page 5, Sec III, B, processor core-i7-7700HQ with RAM, thus having a memory storing instructions), cause the one or more computer processors to execute steps of:
- providing a set of exemplar data, comprising at least one set of inputs and at least one set of outputs associated to the set of inputs, to a neural network device comprising an ensemble of neural network devices, each neural network of the ensemble being configured to provide independent predictions based upon the same exemplar data (Page 4 sec III, figure 3, Training data (input), classification model (Neural network), predicted classes (independent predictions), final class prediction (output)),
- operating the neural network device based upon the set of exemplar data (Page 4, Sec III, activation function is selected, for training the model using the training data) and
- obtaining the trained neural network device configured to provide an output (Page 4, Sec III, P1, P2, are the predicted outputs), wherein:
- the neural network device further comprises at least two independent activation functions (Page 4, Sec III, ELU, Sigmoid, Relu, hyperbolic tangent, LeakyRelu), whereof at least two of the independent activation functions are representative of the statistical distribution of the plurality of independent predictions (Page 4, sec II, D, Statistical for supporting ensemble learning), the neural network device being configured to provide at least one output for at least two said independent activation functions (Page 4, Sec III, Final class prediction) and
- the step of operating further comprising a step of operating each neural network device of the ensemble to provide an ensemble of outputs (Page 4, Sec III, voting based ensembler), the neural network device being trained to minimize a value representative of at least two said independent activation functions (Page 3, sec II, C, minimizes the loss function or cost function).
Nandi does not teach minimize a value representative of the dispersion of the output of at least two said independent activation functions.
Tao teaches minimize a value representative of the dispersion of the output of at least two said independent activation functions (Tao, Page 992, sec 3.2, minimize output variations in the presence of perturbations).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Nandi to incorporate the teachings of Tao to minimize the value representative of the dispersion of the output because it improves the accuracy of real world models and datasets (Tao, Page 986, abstract).
Regarding claim 2. Nandi and Tao teach the system according to claim 1, in which the neural network device obtained during the step of obtaining being configured to provide, additionally,
a value representative of the dispersion of the output of at least two said independent activation functions (Nandi, Page 5, Table II, comparison table, f-measure)(Nandi, Page 6, Table V, dispersion of the output for different methods and datasets).
Regarding claim 4. Nandi and Tao teach the system according to claim 1, in which the neural network device further comprises
a layer configured to add simulacrums of outputs generated by using the distribution of the outputs of the at least two independent activation functions of the trained neural network device (Page 4, fig 3, Predicted classes, P1, P2, P3, generated from the models using the activation functions).
Regarding claim 5. Nandi teaches a computer-implemented method to train a neural network device (Page 5, Sec III, B, processor core-i7-7700HQ with RAM, thus having a memory storing instructions), comprising the steps of:
- providing a set of exemplar data, comprising at least one set of inputs and at least one set of outputs associated to the set of inputs, to a neural network device comprising an ensemble of neural network devices, each neural network device of the ensemble being configured to provide independent predictions based upon the same exemplar data (Page 4 sec III, figure 3, Training data (input), classification model (Neural network), predicted classes (independent predictions), final class prediction (output)),
- operating the neural network device based upon the set of exemplar data (Page 4, Sec III, activation function is selected, for training the model using the training data) and
- obtaining the trained neural network device configured to provide an output (Page 4, Sec III, P1, P2, are the predicted outputs), wherein:
- the step of operating the neural network device, which further comprises at least two independent activation functions (Page 4, Sec III, ELU, Sigmoid, Relu, hyperbolic tangent, LeakyRelu),
whereof at least two of the independent activation functions are representative of a statistical distribution of the plurality of independent predictions (Page 4, sec II, D, Statistical for supporting ensemble learning), is configured to provide at least one output for at least two said independent activation functions (Page 4, Sec III, Final class prediction) and
- the step of operating further comprising
a step of operating each neural network device of the ensemble to provide an ensemble of outputs (Page 4, Sec III, voting based ensembler),
the neural network device being trained to minimize a value representative of at least two said independent activation functions (Page 3, sec II, C, minimizes the loss function or cost function).
Nandi does not teach minimize a value representative of the dispersion of the output of at least two said independent activation functions.
Tao teaches minimize a value representative of the dispersion of the output of at least two said independent activation functions (Tao, Page 992, sec 3.2, minimize output variations in the presence of perturbations).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Nandi to incorporate the teachings of Tao to minimize the value representative of the dispersion of the output because it improves the accuracy of real world models and datasets (Tao, Page 986, abstract).
Regarding claim 6. Nandi teaches a computer-implemented neural network device (Page 4, Sec III, FFNN structure is trained with five different activation functions), wherein the neural network device is obtained by the computer-implemented method according to claim 5 (Rejected on the same grounds as Claim 5).
Regarding claim 7. Nandi teaches a computer program product (Page 5, Sec III, B, processor core-i7-7700HQ with RAM, thus having a memory storing instructions), which comprises one or more non-transitory computer-readable storage mediums having program instructions embodied therewith, the program instructions executable by one or more processors to cause the processors to perform the method of claim 5 (Rejected on the same grounds as Claim 5).
Regarding claim 8. Nandi teaches a non-transitory computer-readable medium (Page 5, Sec III, B, processor core-i7-7700HQ with RAM, thus having a memory storing instructions), which stores instructions to execute the steps of a method according to claim 5, the instructions executable by one or more processors (Rejected on the same grounds as Claim 5).
Regarding claim 10. Nandi teaches a computer-implemented method to predict a category of representation in an image (Page 1, abstract, MNIST, Fashion MNIST (image), Semeion, and ARDIS IV datasets), which comprises:
- a step of training (Page 3, Sec II, C, training algorithm), by a computing device,
a neural network device according to the method object of claim 5 (Rejected on the same grounds as Claim 5), in which the exemplar set of data is representative of:
as input (Page 4 sec III, figure 3, Training data (input)), images (Page 1, abstract, MNIST, Fashion MNIST (image)) and
- as output, at least one category of representation in input images (Page 1, abstract, MNIST, Fashion MNIST, Semeion, and ARDIS IV datasets) (Page 4 sec III, figure 3, Training data (input), classification model (Neural network), predicted classes (independent predictions), final class prediction (output)),
- a step of inputting, upon a computer interface, at least one image (Page 4 sec III, figure 3, Training data (input)),
- a step of operating, by a computing device, the trained neural network device (Page 3, Sec II, C, training algorithm) and
- a step of providing, upon a computer interface, for the composition, at least one category of representation output by the trained neural network device (Page 4, Sec III, P1, P2, are the predicted outputs).
Claims 3 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Arijit Nandi, NPL, “Improving the Performance of Neural Networks with an Ensemble of Activation Functions”, IEEE, Published: July 2020 (hereafter Nandi), in view of Guanhong Tao, NPL, “Trace Divergence Analysis and Embedding Regulation for Debugging Recurrent Neural Networks” (hereafter Tao), in further view of Zheng Li, NPL, “The Optoelectronic Nose: Colorimetric and Fluorometric Sensor Arrays” (hereafter Li).
Regarding claim 3. Nandi and Tao teach the system according to claim 1, in which at least two of the activation functions are representative of:
- a mean of the statistical distribution of the plurality of independent predictions (Page 6, par 1, results of the mean f-measure and accuracy) and
- the statistical distribution of the plurality of independent predictions (Page 4, Sec III, P1, P2, are the predicted outputs).
Nandi and Tao do not teach the variance of the statistical distribution.
Li teaches the variance of the statistical distribution (Li, Page 249, sec 4.1.1, minimum variance, minimized at each step).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Nandi and Tao to incorporate the teachings of Li to have the variance of the statistical distribution because it enables detection of toxic industrial chemicals and quality control of foods and beverages by fingerprinting the desired substances (Li, Page 231, abstract).
Regarding claim 9. Nandi teaches a computer-implemented method to predict a physical, chemical, medicinal, sensorial, or pharmaceutical property of a flavor, fragrance or drug ingredient (Page 1, abstract, MNIST, Fashion MNIST, Semeion, and ARDIS IV datasets), which comprises:
- a step of training (Page 3, Sec II, C, training algorithm), by a computing device,
a neural network device according to the method object of claim 5 (Rejected on the same grounds as Claim 5), in which the exemplar set of data is representative of:
- as input (Page 4 sec III, figure 3, Training data (input)), and
- as output, at least one physical, chemical, medicinal, sensorial, or pharmaceutical property, one of said physical, chemical, medicinal, sensorial, or pharmaceutical properties being the molecular weight of the composition (Page 1, abstract, MNIST, Fashion MNIST, Semeion, and ARDIS IV datasets) (Page 4 sec III, figure 3, Training data (input), classification model (Neural network), predicted classes (independent predictions), final class prediction (output)),
- a step of inputting, upon a computer interface, a digital identifier, the resulting input corresponding to a composition (Page 4 sec III, figure 3, Training data (input)),
- a step of operating, by a computing device, the trained neural network device (Page 3, Sec II, C, training algorithm) and
- a step of providing, upon a computer interface, for the composition, at least one physical, chemical, medicinal, sensorial, or pharmaceutical property output by the trained neural network device (Page 1, abstract, MNIST, Fashion MNIST, Semeion, and ARDIS IV datasets).
Nandi and Tao do not teach inputting compositions of flavor, fragrance, or drug ingredients.
Li teaches inputting compositions of flavor, fragrance, or drug ingredients (Li, Page 232, Col 2, Par 5, array-based detectors, physical properties, molecular weight).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Nandi and Tao to incorporate the teachings of Li to input compositions of flavor, fragrance, or drug ingredients because it enables detection of toxic industrial chemicals and quality control of foods and beverages by fingerprinting the desired substances (dataset designer’s choice) (Li, Page 231, abstract).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANGEL JAVIER CALLE whose telephone number is (571)272-0463. The examiner can normally be reached Monday - Friday 7:30 a.m. - 5 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rehana Perveen, can be reached at (571)272-3676. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.C./Examiner, Art Unit 2189
/REHANA PERVEEN/Supervisory Patent Examiner, Art Unit 2189