DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Remarks
In response to the communication filed on September 17, 2025, claims 1-2 and 11-14 have been amended. Accordingly, claims 1-14 are presently pending in the application.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-14 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Yang et al. (US Pub. No. 2021/0056378, effective filing date of August 23, 2019) (hereinafter Yang).
As to claims 1, 11 and 13, Yang teaches a neural network structure search device that searches for a neural network architecture, comprising:
a memory storing software instructions (see p. 103-104, memory), and
one or more processors configured to execute the software instructions to (see p. 33, processor)
compile multiple operations included in an operation space that is a candidate for search into a single operation (see claim 2, p. 6, p. 21, and p. 65, “defining a discrete computational cell architecture by replacing each linear combination of candidate operations with a single operation”; see also abstract, combination of candidate operations, and p. 4, “replacing each operation that transforms a respective neural network latent representation with a respective linear combination of candidate operations from a predefined set of candidate operations, wherein each candidate operation in a respective linear combination has a respective mixing weight that is parameterized by one or more computational cell hyper parameters”), and
determine a high performance architecture from candidate architectures that include the compiled operation (the term “high performance” is indefinite and subjective; Yang's optimization is interpreted as achieving high performance. See abstract, “replacing each operation that transforms a respective neural network latent representation with a respective linear combination of candidate operations, where each candidate operation in a respective linear combination has a respective mixing weight that is parameterized…to optimize”; see also p. 4 and p. 25, “The presently described techniques for performing neural network architecture search can also achieve improved neural architecture search speed.”).
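For illustration only, the mechanism cited above can be sketched as follows. This is a minimal sketch assuming a DARTS-style relaxation consistent with Yang's abstract and p. 4-6; the operation names, transforms, and weights below are hypothetical stand-ins, not code from Yang:

    import numpy as np

    # Hypothetical stand-ins for Yang's "predefined set of candidate operations".
    candidate_ops = {
        "max_pool_3x3": lambda x: x,           # placeholder transforms
        "conv_3x3":     lambda x: 2.0 * x,
        "skip_connect": lambda x: x + 1.0,
    }

    # One mixing weight (architecture hyperparameter) per candidate operation.
    alphas = np.array([0.1, 1.5, -0.3])

    def mixed_op(x):
        """Linear combination of candidate operations, with softmax mixing
        weights parameterized by the hyperparameters (cf. abstract, p. 4)."""
        w = np.exp(alphas) / np.exp(alphas).sum()
        return sum(wi * op(x) for wi, op in zip(w, candidate_ops.values()))

    def compile_to_single_op():
        """Replace the linear combination with the single highest-weighted
        candidate operation (cf. p. 6, p. 65)."""
        best = max(zip(alphas, candidate_ops), key=lambda t: t[0])[1]
        return candidate_ops[best]

In this sketch, compile_to_single_op() returns conv_3x3, the candidate with the largest mixing weight, which corresponds to the "single operation" of the discrete cell.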
As to claims 2, 12, and 14, Yang teaches wherein the one or more processors configured to execute the software instructions to compile two or more operations of the same type with different parameters or with the same role into one operation (see abstract, “respective linear combination of candidate operations”; p. 20, “In some implementations the predefined set of candidate operations comprises pooling operations, convolutional operations or connection operations.”; p. 63) (see p. 59, “multiple instances of the defined computational cell with a same learned architecture and independently learned weights can be stacked to generate a deeper neural network.”, i.e., the same architecture but with different weights, meaning different parameters).
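As a purely illustrative sketch of the passages cited above (same-type operations differing only in a parameter, and cells stacked with a shared learned architecture but independently learned weights, cf. p. 20 and p. 59); the operation names and arithmetic are hypothetical:

    import numpy as np

    # Same-type operations that differ only in a parameter (cf. p. 20:
    # pooling, convolutional, or connection operations).
    def conv(kernel):
        return lambda x, w: w * (kernel * x)

    OPS = {"conv_3x3": conv(3.0), "conv_5x5": conv(5.0), "skip": lambda x, w: x}

    def make_cell(architecture, rng):
        """One cell instance: the same learned architecture, but
        independently initialized/learned weights (cf. p. 59)."""
        ws = rng.normal(size=len(architecture))  # fresh weights per instance
        def cell(x):
            for name, w in zip(architecture, ws):
                x = OPS[name](x, w)
            return x
        return cell

    rng = np.random.default_rng(0)
    arch = ["conv_3x3", "skip", "conv_5x5"]           # shared architecture
    stack = [make_cell(arch, rng) for _ in range(4)]  # stacked deeper network
    out = 1.0
    for cell in stack:
        out = cell(out)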
As to claims 3 and 7, Yang teaches wherein the one or more processors configured to execute the software instructions to compile multiple operations into a single operation using a parameter to be compressed (see abstract, “where each candidate operation in a respective linear combination has a respective mixing weight that is parameterized by one or more computational cell hyper parameters;”).
As to claims 4 and 8, Yang teaches wherein the one or more processors configured to execute the software instructions to predict performance of an architecture based on the candidate architectures (see p. 4 and p. 92, projecting, and fig. 3).
As to claims 5 and 9, Yang teaches wherein the one or more processors configured to execute the software instructions to search for a parameter of candidate architectures whose parameter is unknown (see p. 17, where the parameter is obtained/computed from previous iterations).
As to claims 6 and 10, Yang teaches wherein the one or more processors configured to execute the software instructions to search for a parameter in a parameter space to be searched (see p. 54, “Adjusting values of the computational cell hyper parameters and computational cell weights to optimize a validation loss function subject to the resource constraints 106 includes implementing a continuous relaxation strategy to map the architecture search space from a discrete search space defined by a predefined discrete set of candidate operations O.sub.i,j to a continuous search space, so that the architecture can be determined using gradient descent”).
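For reference, the continuous relaxation quoted above is conventionally written as a softmax mixture over the candidate operation set; the notation below is standard DARTS-style notation and is illustrative, not taken verbatim from Yang:

    \bar{o}^{(i,j)}(x) = \sum_{o \in O_{i,j}} \frac{\exp(\alpha_o^{(i,j)})}{\sum_{o' \in O_{i,j}} \exp(\alpha_{o'}^{(i,j)})} \, o(x)

Because the mixing weights are a differentiable function of the hyperparameters \alpha, the search over the discrete set O_{i,j} becomes a continuous optimization that can be solved by gradient descent, as the cited passage states.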
Response to Arguments
Applicant's arguments filed on September 17, 2025 have been fully considered but they are not persuasive.
In response to applicant's argument that the prior art does not teach compiling multiple operations, the argument has been fully considered but is not deemed persuasive, because Yang teaches a combination of candidate operations and a predefined set of candidate operations (see abstract and paragraphs 4 and 6).
In response to applicant's argument that the prior art does not teach determining a high performance architecture, the argument has been fully considered but is not deemed persuasive, because the term is indefinite: the claims do not define what constitutes high performance or how it is measured. Yang teaches that the method is used to optimize the loss function and, in p. 25, that the architecture search can improve the speed of neural architecture search.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BELIX M ORTIZ DITREN whose telephone number is (571)272-4081. The examiner can normally be reached M-F, 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amy Ng, can be reached at 571-270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
BELIX M. ORTIZ DITREN
Primary Examiner
Art Unit 2164
/Belix M Ortiz Ditren/ Primary Examiner, Art Unit 2164