DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the Reply filed on 02/10/2026. Claims 1-18 are pending in the case. Claims 1 and 10 are independent claims.
Response to Arguments
Applicant's prior art arguments have been fully considered but are moot in view of the new grounds of rejection presented below.
Claim Rejections - 35 U.S.C. § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 10 are rejected under 35 U.S.C. § 103 as being unpatentable over Hjelm et al. (R. Devon Hjelm, Athul Paul Jacob, Tong Che, Adam Trischler, Kyunghyun Cho, and Yoshua Bengio. "Boundary-seeking generative adversarial networks." arXiv e-prints (2017): arXiv-1702., hereinafter Hjelm) in view of Porikli et al. (U.S. Pat. App. Pub. No. 2012/0207384, hereinafter Porikli) and Kobayashi et al. (U.S. Pat. App. Pub. No. 2018/0018587, hereinafter Kobayashi).
As to independent claims 1 and 10, Hjelm teaches a computer-implemented method for improving performance of a machine learning classifier, the method comprising:
generating, by a data generator implemented by a programmed computer system, a plurality of synthetic input examples in a region of an input space associated with a transition in classification output of the machine learning classifier (Page 2, "the generator to place generated samples at the decision boundary");
computing, by the computer system, for each synthetic input example, a gradient vector of a classification score function with respect to the input example (Page 2, "estimates the gradient of the discriminator’s output w.r.t. the generator as the weighted sum of the gradients of the log-probabilities of the samples generated from the generator")….
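For illustration only (not drawn from the cited references), the claimed computation of a gradient vector of a classification score function with respect to an input example can be sketched for a single-layer logistic classifier; all names and values below are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def score(x, w, b):
    # Classification score of a single-layer logistic classifier.
    return sigmoid(w @ x + b)

def score_gradient(x, w, b):
    # Partial derivative of the score with respect to the input x:
    # d sigma(w.x + b) / dx = sigma * (1 - sigma) * w
    s = score(x, w, b)
    return s * (1.0 - s) * w

w = np.array([1.5, -2.0])
b = 0.25
x = np.array([0.4, 0.1])

g = score_gradient(x, w, b)

# Sanity check against a central finite difference.
eps = 1e-6
fd = np.array([
    (score(x + eps * e, w, b) - score(x - eps * e, w, b)) / (2 * eps)
    for e in np.eye(2)
])
assert np.allclose(g, fd, atol=1e-6)
```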
Hjelm does not appear to expressly teach identifying, by the computer system, a change in direction of the gradient vectors across the input space indicative of a decision boundary.
Porikli teaches identifying, by the computer system, a change in direction of the gradient vectors across the input space indicative of a decision boundary (Paragraph 70, "the direction of the gradient of the decision function changes").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the generative neural network of Hjelm to include the decision-boundary analysis techniques of Porikli to improve accuracy (see Porikli at paragraph 22).
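For illustration only, the limitation of identifying a change in direction of the gradient vectors across the input space, as mapped to Porikli, can be sketched as follows; the linear decision function and the sampled path are hypothetical:

```python
import numpy as np

# Illustrative linear classifier with decision function g(x) = w.x + b.
# The gradient of |g| reverses direction when x crosses the decision
# boundary g(x) = 0, which is the cue the claim recites.
w, b = np.array([1.0, -1.0]), 0.0

def abs_decision_grad(x):
    # d|g|/dx = sign(g(x)) * w  (away from the boundary itself)
    return np.sign(w @ x + b) * w

# Walk along a path that crosses the boundary and flag where two
# consecutive gradient vectors point in opposite directions.
path = [np.array([t, 0.0]) for t in np.linspace(-1.0, 1.0, 8)]
grads = [abs_decision_grad(x) for x in path]
crossings = [i for i in range(1, len(grads))
             if float(np.dot(grads[i - 1], grads[i])) < 0]
```

Here exactly one sign reversal of the gradient direction is detected, at the index where the path crosses g(x) = 0.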
Hjelm does not appear to expressly teach performing, by the computer system, stability testing of the decision boundary; and modifying, by the computer system, the machine learning classifier to improve performance based on the stability testing.
Kobayashi teaches performing, by the computer system, stability testing of the decision boundary (Paragraph 69, "The machine learning device 100 repeats model learning and evaluation of the prediction performance K times, each time using a different block as the testing dataset"); and modifying, by the computer system, the machine learning classifier to improve performance based on the stability testing (Paragraph 69, "As a result of one learning step, a model with the highest prediction performance amongst K models created and average prediction performance over the K rounds are obtained").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the generative neural network of Hjelm to include the machine learning techniques of Kobayashi to generate a model with high prediction performance efficiently (see Kobayashi at paragraph 15).
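For illustration only, the K-fold evaluation procedure described in the cited passage of Kobayashi (repeat learning K times with a different held-out block, keep the best of the K models and the average performance) can be sketched as follows; the helper names `train_fn` and `eval_fn` and the nearest-centroid model are hypothetical:

```python
import numpy as np

def k_fold_evaluate(X, y, train_fn, eval_fn, k=5, seed=0):
    # Split the data into k blocks; each block serves once as the
    # held-out testing dataset while the remaining blocks train.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    scores, models = [], []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = train_fn(X[train_idx], y[train_idx])
        scores.append(eval_fn(model, X[test_idx], y[test_idx]))
        models.append(model)
    # Best of the K models, plus average performance over the K rounds.
    return models[int(np.argmax(scores))], float(np.mean(scores))

# Toy two-class dataset and a nearest-centroid classifier.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(2.0, 0.3, (30, 2))])
y = np.array([0] * 30 + [1] * 30)

def train_fn(Xt, yt):
    return {c: Xt[yt == c].mean(axis=0) for c in (0, 1)}

def eval_fn(model, Xe, ye):
    pred = np.array([min(model, key=lambda c: np.linalg.norm(x - model[c]))
                     for x in Xe])
    return float((pred == ye).mean())

best, avg = k_fold_evaluate(X, y, train_fn, eval_fn, k=5)
```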
Claims 2, 9, 11, and 18 are rejected under 35 U.S.C. § 103 as being unpatentable over Hjelm in view of Porikli, Kobayashi, and Fan et al. (U.S. Pat. App. Pub. No. 2005/0278322, hereinafter Fan).
As to dependent claims 2 and 11, the respective rejections of claims 1 and 10 are incorporated.
Hjelm does not appear to expressly teach that modifying the machine learning classifier comprises smoothing the decision boundary.
Fan teaches that modifying the machine learning classifier comprises smoothing the decision boundary (Paragraph 97, "the hyperplane can be changed in a smooth manner by changing the magnitude of the weights").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the generative neural network of Hjelm to include the boundary-smoothing techniques of Fan to improve accuracy (see Fan at paragraph 6 et seq.).
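For illustration only, the weight-magnitude mechanism described in the cited passage of Fan can be sketched as follows for a logistic classifier; the scaling factor and values are hypothetical. Scaling the weights down leaves the hyperplane w.x + b = 0 in place while making the score transition across it more gradual:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def smooth_classifier(w, b, lam):
    # Scaling the weight magnitudes by 0 < lam < 1 leaves the boundary
    # w.x + b = 0 unchanged but softens the score transition across it.
    return lam * w, lam * b

w, b = np.array([4.0, -3.0]), 1.0
w_s, b_s = smooth_classifier(w, b, 0.25)

x = np.array([0.5, 0.2])
z, z_s = w @ x + b, w_s @ x + b_s
# Same sign of the logit => same predicted class on either side of
# the boundary, but a score pulled closer to 0.5 (smoother surface).
assert np.sign(z) == np.sign(z_s)
assert abs(sigmoid(z_s) - 0.5) < abs(sigmoid(z) - 0.5)
```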
As to dependent claims 9 and 18, the respective rejections of claims 1 and 10 are incorporated.
Hjelm does not appear to expressly teach outputting, by the computer system, a representation of the decision boundary.
Fan teaches outputting, by the computer system, a representation of the decision boundary (Figure 2, optimum boundary).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the generative neural network of Hjelm to include the boundary-representation techniques of Fan to improve accuracy (see Fan at paragraph 6 et seq.).
Claims 5 and 14 are rejected under 35 U.S.C. § 103 as being unpatentable over Hjelm in view of Porikli, Kobayashi, and Gupta et al. (U.S. Pat. App. Pub. No. 2017/0372201, hereinafter Gupta).
As to dependent claims 5 and 14, the respective rejections of claims 1 and 10 are incorporated.
The combination of Hjelm, Porikli, and Kobayashi does not appear to expressly teach that computing the gradient vector comprises back-propagating through the machine learning classifier to determine a partial derivative of the classification score function with respect to the input.
Gupta teaches computing the gradient vector comprises back-propagating through the machine learning classifier to determine a partial derivative of the classification score function with respect to the input (Paragraph 152, "backpropagation; (5) to calculate a gradient or partial derivative for a loss function with respect to each weight, respectively, in a neural network").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the generative neural network of Hjelm to include the neural network techniques of Gupta to increase computational efficiency (see Gupta at paragraph 21).
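For illustration only, the back-propagated calculation of a partial derivative of a loss function with respect to each weight, as mapped to Gupta, can be sketched for a single neuron; the values are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-neuron network: forward pass, squared-error loss, and the
# back-propagated partial derivatives of the loss w.r.t. each weight.
x = np.array([0.3, -0.7])
w = np.array([0.5, 1.2])
b, target = 0.1, 1.0

def loss(w, b):
    return 0.5 * (sigmoid(w @ x + b) - target) ** 2

# Backward pass (chain rule): dL/dw = (a - t) * a * (1 - a) * x
a = sigmoid(w @ x + b)
grad_w = (a - target) * a * (1 - a) * x

# Check against central finite differences.
eps = 1e-6
fd = np.array([
    (loss(w + eps * e, b) - loss(w - eps * e, b)) / (2 * eps)
    for e in np.eye(2)
])
assert np.allclose(grad_w, fd, atol=1e-6)
```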
Allowable Subject Matter
Claims 3, 4, 6-8, 12, 13, and 15-17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Casey R. Garner whose telephone number is 571-272-2467. The examiner can normally be reached Monday to Friday, 8am to 5pm, Eastern Time.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov, can be reached at 571-270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR to authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/Casey R. Garner/Primary Examiner, Art Unit 2123