DETAILED ACTION
Claims 1-16 are presented for examination.
This office action is in response to the amendment filed 11-NOVEMBER-2025.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed 11-NOVEMBER-2025 in response to the non-final office action mailed 19-AUGUST-2025 has been entered. Claims 1-16 remain pending in the application.
With regard to the non-final office action’s rejection under 35 U.S.C. 103, the amendments to the claims have overcome the original rejection. However, upon a new search for the amended limitations, a new rejection under 35 U.S.C. 103 over Sikka in view of Carback, further in view of Wenzel, has been set forth below.
In light of the new grounds of rejection, applicant’s arguments are moot.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-16 are rejected under 35 U.S.C. 103 as being unpatentable over Sikka et al. (Pub. No. US 20210081841 A1, filed September 10th 2020, hereinafter Sikka) in view of Carback et al. (Pub. No. US 20150363294 A1, filed June 10th 2015, hereinafter Carback) further in view of Wenzel et al. (Pub. No. WO 2021004740 A1, filed June 17th 2020, hereinafter Wenzel).
Regarding claim 1:
Claim 1 recites:
An apparatus for generating a valid neural network architecture, the apparatus comprising: one or more processors; a memory storing one or more programs configured to be executed by the one or more processors; the one or more programs comprising instructions for a neural network architecture parser and a neural network architecture generator; the neural network architecture parser configured to generate one or more abstract syntax trees corresponding to one or more neural network architectures, respectively, by parsing the one or more neural network architectures; and the neural network architecture generator configured to generate one or more new neural network architectures by substituting at least a portion of one or more blocks of the one or more abstract syntax trees with blocks compatible with the portion, wherein the neural network architecture parser is configured to calculate an input/output rule for each of the one or more blocks included in the one or more abstract syntax trees, an input/output rule for each layer being a shape conversion function of a corresponding layer between an input tensor and an output tensor, and wherein a shape conversion function of a parallel block is obtained by inheriting a rule of a corresponding merge layer, and a shape conversion function of a serial block is obtained by continuously accumulating rules of at least one lower layer and/or at least one lower block included in a corresponding block
Sikka discloses an apparatus for generating a valid neural network architecture, the apparatus comprising: one or more processors; a memory storing one or more programs configured to be executed by the one or more processors; the one or more programs comprising instructions for a neural network architecture parser and a neural network architecture generator:
Sikka teaches generating a neural network based on compiled code using a synthesis engine (Paragraph 54). This would be a neural network architecture generator as the network would include an architecture, wherein the architecture would be valid as it corresponds to an actual network.
Furthermore, Sikka teaches representing a neural network architecture as source code in building blocks (Paragraph 45) which would describe a neural network architecture parser as the representation is a form of parsing.
Furthermore, Sikka teaches a processor and memory (Paragraph 28) wherein the processor executes a software program (Paragraph 29).
Sikka discloses the neural network architecture parser configured to generate one or more abstract syntax trees corresponding to one or more neural network architectures, respectively, by parsing the one or more neural network architectures:
Sikka teaches parsing the neural network architecture by parsing the neural networks as source code (Paragraph 45) and parsing the source code to form an abstract syntax tree (Paragraph 51). Therefore the abstract syntax tree which corresponds to the neural network architecture is formed through parsing the neural network architecture.
Sikka discloses wherein the neural network architecture parser is configured to calculate an input/output rule for each of the one or more blocks included in the one or more abstract syntax trees:
Sikka teaches that when parsing the neural network architecture, the mathematical expressions used to represent the layers define mathematical operations that map the input to the output (Paragraphs 46-47), which would be an input/output rule for each of the blocks included in the abstract syntax trees.
However, Sikka does not teach the neural network architecture generator configured to generate one or more new neural network architectures by substituting at least a portion of one or more blocks of the one or more abstract syntax trees with blocks compatible with the portion:
Carback in the same field of endeavor of automatic alteration of software (including neural networks) teaches that in response to a flaw detected in source code, the flaw can be automatically remedied by using a repair source code block (Paragraph 108). This would be an example of substituting a portion of a block with a compatible block.
In combination with Sikka, which uses source code blocks (or abstract syntax tree blocks, as the ASTs are formed from the source code) to generate a neural network architecture, it would have been obvious to create a neural network architecture generator configured to generate new architectures by substituting at least a portion of one or more blocks of the abstract syntax trees with compatible blocks.
Carback and the present application are analogous art because they are in the same field of endeavor.
Wenzel in the same field of endeavor of neural networks discloses an input/output rule for each layer being a shape conversion function of a corresponding layer between an input tensor and an output tensor:
Wenzel recites: “In this example, the input data 21 is image data that is in the form of a three-dimensional tensor […] The output 23d as a whole is therefore a tensor formed from the two layers 23d1 and 23d2”
Wenzel teaches an input tensor with the image data in the form of a three-dimensional tensor. Furthermore, it forms an output tensor from the tensor formed by the two layers.
Wenzel further recites: “The processing stage includes the application of at least one filter. For example, if the KNN processes an image as input data, the convolution layer assign the respective calculation result of the filter to the places where the filter is applied to the image and in this way convert the image into a so-called ‘feature map’.”
Wenzel therefore teaches a shape conversion function applied during the processing between the input and output tensors, as it describes the filtering of an image, which would be a form of shape conversion through the mapping of the image.
Wenzel further recites: “an artificial neural network which processes input data into output data in a sequence of processing stages typically also called layers or layers. Each processing stage converts the input supplied to it into an output by applying processing rules.”
The processing rule would be the input/output rule for each layer as each layer contains at least one processing rule.
Wenzel and the present application are analogous art because they are in the same field of endeavor.
Wenzel discloses wherein a shape conversion function of a parallel block is obtained by inheriting a rule of a corresponding merge layer, and a shape conversion function of a serial block is obtained by continuously accumulating rules of at least one lower layer and/or at least one lower block included in a corresponding block:
Wenzel recites: “an artificial neural network which processes input data into output data in a sequence of processing stages typically also called layers or layers. Each processing stage converts the input supplied to it into an output by applying processing rules.”
Wenzel teaches that through a series of processing stages, i.e., layers, an input is converted by applying processing rules, which are representative of shape conversion functions. This indicates that throughout the processing of the input, the rules of at least one lower layer are continuously accumulated, as the input has previously passed through a lower layer. Furthermore, the accumulation would be the inherited rule of a corresponding merge layer, as merge layers are defined by their addition, subtraction, or concatenation (present application, Paragraph 33), where the processing rule of the merge layer would be concatenated onto the rules of the previous layers. Furthermore, parallel blocks are defined by the presence of merge layers (present application, Paragraph 40) and are hence present, and the serial blocks are present through the ordering of the layers.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement an apparatus that combines the teachings of Sikka, Carback, and Wenzel. This would provide the advantage of automating the development of computer programs, including neural networks (Carback, Paragraph 5), as well as a neural network which “saves computing time and energy when operating the KNN by saving unnecessary computing operations” (Wenzel).
Regarding claim 2, which depends upon claim 1:
Claim 2 recites:
The apparatus of claim 1, wherein the neural network architecture parser is configured to generate the one or more abstract syntax trees by parsing an architecture expression syntax expressing in a predefined process calculus grammar a plurality of layers included in each of the one or more neural network architectures, a serial connection between the plurality of layers and a parallel merging between the plurality of layers.
Sikka in view of Carback further in view of Wenzel teaches the apparatus of claim 1 upon which claim 2 depends. Furthermore, Sikka discloses the limitations of claim 2:
Sikka teaches first parsing a neural network architecture as source code in the form of a series of mathematical expressions, wherein the features expressed by the mathematical expressions may include a layer notation for specifying a layer of a neural network as well as a link between layers (Paragraph 45). The source code as mathematical expressions would be a predefined process calculus grammar, as it uses mathematics to express the architecture in a rules-based manner, wherein the plurality of layers in the neural network architecture and the serial connections between them are expressed.
Furthermore, in the applicant’s specification, parallel merging is discussed as merge operations such as addition, concatenation, and multiplication (present application, Paragraph 33). Sikka teaches that the mathematical expression may include addition and multiplication (Paragraph 45), which would fit this definition of parallel merging.
Regarding claim 3, which depends upon claim 2:
Claim 3 recites:
The apparatus of claim 2, wherein the neural network architecture parser is configured to: store the calculated input/output rule and the one or more abstract syntax trees in a first reference database
Sikka in view of Carback further in view of Wenzel teaches the apparatus of claim 2 upon which claim 3 depends. Carback discloses store the calculated input/output rule and the one or more abstract syntax trees in a first reference database:
Carback teaches directed graphs of the inputs and outputs in basic blocks of code (Paragraph 48), wherein these graphs are defined as artifacts in Carback (Paragraph 91). Carback teaches that its artifacts are stored within a database for referencing of these artifacts (Paragraph 101). Therefore, the input/output rule is stored within the reference database.
Furthermore, Carback teaches that abstract syntax trees may also serve as these (stored) artifacts (Paragraph 75).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement an apparatus that combines the teachings of Sikka, Carback, and Wenzel. This would provide the advantage of automating the development of computer programs, including neural networks (Carback, Paragraph 5), as well as a neural network which “saves computing time and energy when operating the KNN by saving unnecessary computing operations” (Wenzel).
Regarding claim 4, which depends upon claim 3:
Claim 4 recites:
The apparatus of claim 3, wherein the neural network architecture generator is configured to: identify one or more serial blocks included in each of the one or more abstract syntax trees; and store the identified one or more serial blocks in a second reference database
Sikka in view of Carback further in view of Wenzel teaches the apparatus of claim 3 upon which claim 4 depends. Furthermore, regarding the limitation wherein the neural network architecture generator is configured to: identify one or more serial blocks included in each of the one or more abstract syntax trees:
Sikka teaches that its abstract syntax trees capture the syntactical structure of the source code (Paragraph 51), wherein that source code contains building blocks that make up the code (Paragraph 46). Therefore, the serial blocks included in each of the abstract syntax trees are identified as part of the recreation of a neural network from the abstract syntax tree’s compiled code (Paragraph 54).
However, Sikka does not teach store the identified one or more serial blocks in a second reference database:
Carback teaches that this block is an artifact, wherein the artifacts of Carback are stored in a database (Paragraph 109), which may serve as both the first and second reference databases.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement an apparatus that combines the teachings of Sikka, Carback, and Wenzel. This would provide the advantage of automating the development of computer programs, including neural networks (Carback, Paragraph 5), as well as a neural network which “saves computing time and energy when operating the KNN by saving unnecessary computing operations” (Wenzel).
Regarding claim 5, which depends upon claim 4:
Claim 5 recites:
The apparatus of claim 4, wherein the neural network architecture generator is configured to: amplify the identified one or more serial blocks by applying at least one of block splitting, parameter mutation, or block concatenation; and store the amplified serial block in the second reference database.
Sikka in view of Carback further in view of Wenzel teaches the apparatus of claim 4 upon which claim 5 depends. Sikka discloses wherein the neural network architecture generator is configured to: amplify the identified one or more serial blocks by applying at least one of block splitting, parameter mutation, or block concatenation:
Sikka teaches that the machine learning model may be recompiled or retrained with new architecture or features (Paragraph 96). As defined in the specification, parameter mutation is when the parameter of a layer causes a block to generate a new block (present application, Paragraph 48). Retraining with a new architecture would encompass this, as the retraining would alter many of the source blocks that form the network along with, for example, their inputs and outputs, which would necessitate a new block.
Carback discloses store the amplified serial block in the second reference database:
Carback teaches that this block is an artifact, wherein the artifacts of Carback are stored in a database (Paragraph 109), which may serve as both the first and second reference databases.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement an apparatus that combines the teachings of Sikka, Carback, and Wenzel. This would provide the advantage of automating the development of computer programs, including neural networks (Carback, Paragraph 5), as well as a neural network which “saves computing time and energy when operating the KNN by saving unnecessary computing operations” (Wenzel).
Regarding claim 6, which depends upon claim 4:
Claim 6 recites:
The apparatus of claim 4, wherein the neural network architecture generator is configured to: select one of the one or more abstract syntax trees stored in the first reference database; and substitute a first serial block among serial blocks included in the selected abstract syntax tree with a second serial block among the one or more serial blocks stored in the second reference database.
Sikka in view of Carback further in view of Wenzel teaches the apparatus of claim 4 upon which claim 6 depends. Furthermore, Carback teaches the limitation of claim 6:
Carback in the same field of endeavor of automatic alteration of software (including neural networks) teaches that in response to a recognizable flaw detected in the source code, the flaw can be automatically remedied by using a repair source code block where the source code block is specific to the flaw being repaired (Paragraph 108). The flaw being repaired and the available solutions would be analogous to the selected tree as it describes the specific expected architecture. This would be an example of substituting a first serial block with a second serial block, wherein the serial blocks may be viewed in combination with Sikka, which has previously taught abstract syntax trees including serial blocks.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement an apparatus that combines the teachings of Sikka, Carback, and Wenzel. This would provide the advantage of automating the development of computer programs, including neural networks (Carback, Paragraph 5), as well as a neural network which “saves computing time and energy when operating the KNN by saving unnecessary computing operations” (Wenzel).
Regarding claim 7, which depends upon claim 6:
Claim 7 recites:
The apparatus of claim 6, wherein the first serial block and the second serial block have the same input/output rules.
Sikka in view of Carback further in view of Wenzel teaches the apparatus of claim 6 upon which claim 7 depends. Carback discloses the limitation of claim 7:
Carback teaches that replacement blocks must be compatible with the base block by having the same number of parameters (i.e., inputs) as well as outputs (Paragraph 114).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement an apparatus that combines the teachings of Sikka, Carback, and Wenzel. This would provide the advantage of automating the development of computer programs, including neural networks (Carback, Paragraph 5), as well as a neural network which “saves computing time and energy when operating the KNN by saving unnecessary computing operations” (Wenzel).
Regarding claim 8, which depends upon claim 6:
Claim 8 recites:
The apparatus of claim 6, wherein the neural network architecture generator is configured to add an abstract syntax tree in which the first serial block is substituted to the first reference database.
Sikka in view of Carback further in view of Wenzel teaches the apparatus of claim 6 upon which claim 8 depends. Furthermore, Carback discloses the limitation of claim 8:
Carback teaches that abstract syntax trees for parsing the source code may serve as artifacts (Paragraph 75), wherein artifacts are stored to the database (Paragraph 83). Furthermore, Carback teaches that the repaired version of the file may be stored to the database (Paragraph 85), which would be included in the source code that is parsed into the abstract syntax trees. Therefore, the abstract syntax tree stored would be one wherein the first serial block is substituted.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement an apparatus that combines the teachings of Sikka, Carback, and Wenzel. This would provide the advantage of automating the development of computer programs, including neural networks (Carback, Paragraph 5), as well as a neural network which “saves computing time and energy when operating the KNN by saving unnecessary computing operations” (Wenzel).
Claims 9-16 recite a method that parallels the apparatus of claims 1-8 respectively. Therefore, the analysis discussed above with respect to claims 1-8 also applies to claims 9-16 respectively. Accordingly, claims 9-16 are rejected based on substantially the same rationale as set forth above with respect to claims 1-8 respectively.
Response to Arguments
Applicant’s arguments filed 11-NOVEMBER-2025 have been fully considered. To the extent they are persuasive against the original grounds of rejection, they are moot in view of the new grounds of rejection set forth above.
Regarding the applicant’s remarks on the non-final office action’s rejection of the claims under 35 U.S.C. 103, the applicant argues that Sikka in view of Carback does not teach the amended limitations of these claims. As such, the applicant argues that the claims dependent on the amended claims would likewise not be obvious under 35 U.S.C. 103. The examiner agrees that the prior art of record in the original office action does not teach the amended limitations. However, upon a new search of the prior art for the amended limitations, the examiner has set forth a new rejection under 35 U.S.C. 103 to address these limitations and respectfully requests applicant’s consideration of the following:
Sikka discloses wherein the neural network architecture parser is configured to calculate an input/output rule for each of the one or more blocks included in the one or more abstract syntax trees:
Sikka teaches that when parsing the neural network architecture, the mathematical expressions used to represent the layers define mathematical operations that map the input to the output (Paragraphs 46-47), which would be an input/output rule for each of the blocks included in the abstract syntax trees.
Wenzel in the same field of endeavor of neural networks discloses an input/output rule for each layer being a shape conversion function of a corresponding layer between an input tensor and an output tensor:
Wenzel recites: “In this example, the input data 21 is image data that is in the form of a three-dimensional tensor […] The output 23d as a whole is therefore a tensor formed from the two layers 23d1 and 23d2”
Wenzel teaches an input tensor with the image data in the form of a three-dimensional tensor. Furthermore, it forms an output tensor from the tensor formed by the two layers.
Wenzel further recites: “The processing stage includes the application of at least one filter. For example, if the KNN processes an image as input data, the convolution layer assign the respective calculation result of the filter to the places where the filter is applied to the image and in this way convert the image into a so-called ‘feature map’.”
Wenzel therefore teaches a shape conversion function applied during the processing between the input and output tensors, as it describes the filtering of an image, which would be a form of shape conversion through the mapping of the image.
Wenzel further recites: “an artificial neural network which processes input data into output data in a sequence of processing stages typically also called layers or layers. Each processing stage converts the input supplied to it into an output by applying processing rules.”
The processing rule would be the input/output rule for each layer as each layer contains at least one processing rule.
Wenzel and the present application are analogous art because they are in the same field of endeavor.
Wenzel discloses wherein a shape conversion function of a parallel block is obtained by inheriting a rule of a corresponding merge layer, and a shape conversion function of a serial block is obtained by continuously accumulating rules of at least one lower layer and/or at least one lower block included in a corresponding block:
Wenzel recites: “an artificial neural network which processes input data into output data in a sequence of processing stages typically also called layers or layers. Each processing stage converts the input supplied to it into an output by applying processing rules.”
Wenzel teaches that through a series of processing stages, i.e., layers, an input is converted by applying processing rules, which are representative of shape conversion functions. This indicates that throughout the processing of the input, the rules of at least one lower layer are continuously accumulated, as the input has previously passed through a lower layer. Furthermore, the accumulation would be the inherited rule of a corresponding merge layer, as merge layers are defined by their addition, subtraction, or concatenation (present application, Paragraph 33), where the processing rule of the merge layer would be concatenated onto the rules of the previous layers. Furthermore, parallel blocks are defined by the presence of merge layers (present application, Paragraph 40) and are hence present, and the serial blocks are present through the ordering of the layers.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement an apparatus that combines the teachings of Sikka, Carback, and Wenzel. This would provide the advantage of automating the development of computer programs, including neural networks (Carback, Paragraph 5), as well as a neural network which “saves computing time and energy when operating the KNN by saving unnecessary computing operations” (Wenzel).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDRIA JOSEPHINE MILLER whose telephone number is (703)756-5684. The examiner can normally be reached Monday-Thursday: 7:30 am - 5:00 pm, and every other Friday 7:30 am - 4:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela Reyes can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.J.M./Examiner, Art Unit 2142 /HAIMEI JIANG/Primary Examiner, Art Unit 2142