Prosecution Insights
Last updated: April 19, 2026
Application No. 18/249,316

NEURAL NETWORK GENERATION DEVICE, NEURAL NETWORK CONTROL METHOD, AND SOFTWARE GENERATION PROGRAM

Non-Final OA (§102, §103)
Filed
Apr 17, 2023
Examiner
BEAN, GRIFFIN TANNER
Art Unit
2121
Tech Center
2100 — Computer Architecture & Software
Assignee
Maxell, Ltd.
OA Round
1 (Non-Final)
Grant Probability: 21% (At Risk)
OA Rounds: 1-2
To Grant: 4y 4m
With Interview: 50%

Examiner Intelligence

Grants only 21% of cases: career allow rate of 21% (4 granted / 19 resolved), -33.9% vs TC avg.
Strong interview lift: +28.4% grant-rate lift among resolved cases with an interview.
Typical timeline: 4y 4m average prosecution; 45 applications currently pending.
Career history: 64 total applications across all art units.

Statute-Specific Performance

§101: 37.7% (-2.3% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 9.7% (-30.3% vs TC avg)
Comparison baseline is the Tech Center average estimate. Based on career data from 19 resolved cases.

Office Action

§102, §103
DETAILED ACTION

This Action is responsive to Claims filed 04/17/2023.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 04/17/2023 and 12/04/2025 were filed before the mailing date of the first Action. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Drawings

Receipt of Drawings filed 04/17/2023 is acknowledged. These Drawings are acceptable.

Status of the Claims

Claims 8 and 12-16 were amended preliminarily. Claims 17-20 are new as of the filing of the instant Application. Claims 1-20 are currently pending.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

“an execution model generation unit” of Claim 1
“a software generation unit” of Claims 1-8 and 17
“a storage unit” of Claim 18
“a hardware generation unit” of Claim 19

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-2, 8-9, 12-13, and 16-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Park et al. (HetPipe: Enabling Large DNN Training on (Whimpy) Heterogeneous GPU Clusters through Integration of Pipelined Model Parallelism and Data Parallelism, published July, 2020), hereinafter Park.
In regards to claim 1:

The present invention claims: “A neural network generation device that generates a neural network execution model for performing neural network operations, the neural network generation device comprising:”

Park teaches “We present a DNN training system, HetPipe (Heterogeneous Pipeline), that integrates pipelined model parallelism (PMP) with data parallelism (DP).” (Abstract) and “The system that we propose focuses on training a large DNN model in a heterogeneous GPU cluster composed of various types of GPUs that have different computation capability and memory capacity. In such settings, for some types of GPUs in the cluster, the DNN model of interest may be too large to be loaded into the memory of a single GPU. The system that we propose in this paper leverages both pipelined model parallelism (PMP) and data parallelism (DP) to enable training of such large DNN models and, in the process, enhance performance as well as the utilization of the heterogeneous GPU resources of the cluster.” (Section 3, Page 310, mapping the training/distribution/partitioning of a DNN to the generic recitation of a neural network generation device generating a neural network execution model).

“an execution model generation unit that generates the neural network execution model based on hardware information regarding hardware in which the neural network execution model is running and network information regarding the neural network;”

Park teaches “To train DNN models based on pipelined model parallelism in virtual workers, the resource allocator first assigns k GPUs to each virtual worker based on a resource allocation policy (which will be discussed in Section 8.1). Note that for allocating the heterogeneous GPUs to the virtual workers, the resource allocation policy must consider several factors such as the performance of individual GPUs as well as the communication overhead caused by sending activations and gradients within a virtual worker, and synchronizing the weights among the virtual workers and the parameter server.” (Page 310, mapping the allocation of GPU resources based on GPU performance and communication overhead to the generic recitation of “information regarding hardware” and “network information”).

“and a software generation unit that generates software for running neural network hardware obtained by installing the neural network model in the hardware.”

Park teaches “In our experiments, we use four nodes with two Intel Xeon Octa-core E5-2620 v4 processors (2.10 GHz) connected via InfiniBand (56 Gbps). Each node has 64 GB memory and 4 homogeneous GPUs. Each node is configured with a different type of GPU as shown in Table 1. Thus, the total number of GPUs in our cluster is 16. Each GPU is equipped with PCIe-3_16 (15.75 GB/s). Ubuntu 16.04 LTS with Linux kernel version 4.4 is used. We implement HetPipe based on the WSP model by modifying TensorFlow 1.12 version3 with CUDA 10.0 and cuDNN 7.4.” (Page 315, mapping the implementation of HetPipe on hardware to the generic recitation of running and installing on neural network hardware).
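For orientation, here is a minimal Python sketch of the allocation-and-partitioning idea the examiner maps to claim 1: deriving an execution plan from hardware information (per-device throughput) and network information (per-layer cost). All names and numbers are hypothetical; this is not code from Park or from the application.

```python
# Illustrative sketch only -- not code from Park or the application.
# It mimics, at a toy level, deriving an "execution model" from hardware
# information (per-device throughput) and network information (layer costs).

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    throughput: float  # hypothetical relative compute capability

def partition_layers(layer_costs: list[float], devices: list[Device]) -> dict[str, list[int]]:
    """Greedily assign contiguous layer ranges so each device's share of
    total cost is roughly proportional to its throughput."""
    total_cost = sum(layer_costs)
    total_tp = sum(d.throughput for d in devices)
    plan: dict[str, list[int]] = {d.name: [] for d in devices}
    budget = {d.name: total_cost * d.throughput / total_tp for d in devices}
    dev_iter = iter(devices)
    current = next(dev_iter)
    spent = 0.0
    for i, cost in enumerate(layer_costs):
        if spent + cost > budget[current.name] and plan[current.name]:
            try:
                current = next(dev_iter)
                spent = 0.0
            except StopIteration:
                pass  # the last device absorbs any remainder
        plan[current.name].append(i)
        spent += cost
    return plan

# Example: six layers spread over a fast and a slow GPU (hypothetical numbers).
plan = partition_layers([1, 1, 2, 2, 4, 4], [Device("gpu_fast", 3.0), Device("gpu_slow", 1.0)])
print(plan)  # {'gpu_fast': [0, 1, 2, 3, 4], 'gpu_slow': [5]}
```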
In regards to claim 2:

The present invention claims: “wherein: the software generation unit generates the software for making the neural network hardware perform the neural network operations in a partitioned manner.”

Park teaches “Note that for allocating the heterogeneous GPUs to the virtual workers, the resource allocation policy must consider several factors such as the performance of individual GPUs as well as the communication overhead caused by sending activations and gradients within a virtual worker, and synchronizing the weights among the virtual workers and the parameter server. Then, for the given DNN model and allocated k GPUs, the model partitioner divides the model into k partitions for the virtual worker such that the performance of the pipeline executed in the virtual worker can be maximized.” (Page 310).

In regards to claim 8:

The present invention claims: “wherein: the software generation unit allocates the partitioned neural network operations to the neural network hardware.”

See above how Pages 315 and 310 of Park read on allocating and partitioning the neural network resources to the hardware resources.

In regards to claims 9 and 12:

Claims 9 and 12 recite similar limitations pertaining to the generation, implementation, and partitioning limitations of Claims 1-2 and 8, with the exception of a recitation of “A neural network control method…” in Claim 9; therefore, both sets of Claims are similarly rejected.

In regards to claims 13 and 16:

Claims 13 and 16 recite similar limitations pertaining to the generation, implementation, and partitioning limitations of Claims 1-2 and 8, with the exception of a recitation of “A non-transitory computer-readable recording medium storing the program…” in Claim 13; therefore, both sets of Claims are similarly rejected.

In regards to claim 17:

The present invention claims: “wherein: the software generation unit generates the software including learned parameters relating to the neural network execution model.”

Park teaches “As any typical DP, multiple virtual workers must periodically synchronize the global parameters via parameter servers or AllReduce communication; in HetPipe, parameter servers are used to maintain the global weights. Each virtual worker has a local copy of the global weights and periodically synchronizes the weights with the parameter server.” (Page 310).

In regards to claim 18:

The present invention claims: “further having: a storage unit that stores the learned parameters.”

See Park Figure 2 for a Parameter Server.

In regards to claim 19:

The present invention claims: “further having: a hardware generation unit that generates a hardware model by which the neural network execution model can be installed in the hardware.”

See above how Park teaches HetPipe allocating and assigning GPUs in its implementation, which the Examiner submits reads on the generic recitation of “generat[ing] a hardware model by which…the execution model can be installed.”

In regards to claim 20:

The present invention claims: “wherein: the software is generated so as to include learned parameters relating to the neural network.”

See above where Park teaches generating and storing learned parameters in a parameter server in the implementation of HetPipe.
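To make the claims 17-18 mapping concrete, here is a toy parameter-server loop in Python. It is illustrative only (class names, the learning rate, and the sync interval are ours, not HetPipe's): each worker keeps a local copy of the global weights and periodically synchronizes with a server that stores the learned parameters.

```python
# Illustrative sketch only -- a toy parameter-server loop, not HetPipe's
# implementation. Each worker holds a local copy of the global weights and
# periodically synchronizes with a server storing the learned parameters.

import numpy as np

class ParameterServer:
    """Stores the global (learned) parameters -- the 'storage unit' analogue."""
    def __init__(self, dim: int):
        self.weights = np.zeros(dim)

    def push(self, gradient: np.ndarray, lr: float = 0.1) -> None:
        self.weights -= lr * gradient

    def pull(self) -> np.ndarray:
        return self.weights.copy()

class Worker:
    def __init__(self, server: ParameterServer):
        self.server = server
        self.local = server.pull()  # local copy of the global weights

    def step(self, gradient: np.ndarray, sync_every: int, step_no: int) -> None:
        self.local -= 0.1 * gradient          # local update
        if step_no % sync_every == 0:         # periodic synchronization
            self.server.push(gradient)
            self.local = self.server.pull()

server = ParameterServer(dim=4)
workers = [Worker(server) for _ in range(3)]
for t in range(1, 11):
    for w in workers:
        w.step(np.random.randn(4), sync_every=5, step_no=t)
print(server.pull())
```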
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 3-7, 10-11, and 14-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Park as applied to Claims 1, 9, and 13 above, and further in view of Song et al. (AccPar: Tensor Partitioning for Heterogeneous Deep Learning Accelerators, published February, 2020), hereinafter Song.

In regards to claim 3:

While Park teaches processing minibatches for each virtual worker/GPU cluster (Section 4, particularly), Park fails to explicitly teach the limitations of “wherein: the software generation unit generates the software for making the neural network hardware perform the neural network operations with input data to the neural network partitioned into partial tensors.”

Song, however, in a similar field of endeavor, teaches “We present ACCPAR, a principled and systematic method of determining the tensor partition among heterogeneous accelerator arrays.” (Abstract). Section 3 of Song goes into detail regarding the partitioning of input data tensors for efficient DNN learning. Song teaches “To achieve high throughput and balanced execution, we need to efficiently distribute data and model tensors between accelerators with awareness of heterogeneous computing capabilities and network bandwidth.” (Section 2.3, Page 3).

It would have been obvious to one of ordinary skill in the art at the time of the Applicant’s filing to combine known tensor partitioning techniques or methods as taught in Song in a system such as Park’s to achieve better throughput and data balance among the respective virtual workers or assigned GPUs.
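A minimal sketch of the tensor-partitioning idea Song is cited for, assuming a simple proportional split along the batch dimension. The capability numbers are hypothetical, and this is not AccPar's actual cost-model-driven algorithm, only the general behavior.

```python
# Illustrative sketch only -- not AccPar itself. It shows the general idea:
# splitting an input tensor into partial tensors sized in proportion to
# each accelerator's (hypothetical) computing capability.

import numpy as np

def split_batch(x: np.ndarray, capabilities: list[float]) -> list[np.ndarray]:
    """Partition x along the batch dimension, giving faster devices more rows."""
    total = sum(capabilities)
    sizes = [int(round(x.shape[0] * c / total)) for c in capabilities]
    sizes[-1] = x.shape[0] - sum(sizes[:-1])  # absorb rounding in the last shard
    shards, start = [], 0
    for s in sizes:
        shards.append(x[start:start + s])
        start += s
    return shards

batch = np.random.randn(16, 3, 32, 32)    # (batch, channel, height, width)
shards = split_batch(batch, [3.0, 1.0])   # the fast device gets ~3/4 of the batch
print([s.shape[0] for s in shards])       # [12, 4]
```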
In regards to claim 4:

The present invention claims: “wherein: the software generation unit partitions the neural network based on a consecutive number of convolution operations to be consecutively implemented by the neural network hardware.”

Song teaches “We can easily expand the communication cost and computation cost from fully-connected layers to convolutional layers. In convolutions, Fl, Fl+1, El and El+1 are 4-dimensional tensors, i.e., (batch, channel, height, width). We can view the four dimensional tensors as three dimensional tensors, but the third and fourth dimension is a meta dimension, i.e., (batch, channel, [height, width]).” (Page 8, Section 4.3, mapping the extension of an implementation of a method such as AccPar to include convolutions to the generic recitation of claim 4).

In regards to claim 5:

The present invention claims: “wherein: the neural network hardware has a memory for storing the partial tensor; and the software generation unit generates software for performing memory transfer of data necessary for the consecutive convolution operations to the memory from an external memory before implementing the consecutive convolution operations.”

Park teaches “Each GPU executes both the forward and backward passes for the layers of the assigned partition. Note that it is important to execute the forward and backward passes of a partition on the same GPU as the activation result computed for the minibatch during the forward pass needs to be kept in the GPU memory until the backward pass of the same minibatch for efficient convergence, as similarly discussed by Narayanan and others [38]. Otherwise, considerable extra overhead will incur for managing the activation through either recomputation or memory management.” (Page 309, mapping to the memory on each/multiple GPUs being used in the operations of the neural network (such as convolution) allocated to them).

In regards to claim 6:

The present invention claims: “wherein: the software generation unit determines the consecutive number of the convolution operations to be consecutively implemented based on data amounts in unused areas of the memory.”

Park teaches “To find the best partitions of a DNN model, we make use of CPLEX, which is an optimizer for solving linear programming problems [20]. The memory requirement for each partition on the pipeline to support Nm concurrent minibatches is provided as a constraint to the optimizer. The algorithm will return partitions for a model with a certain batch size only if it finds partitions that meet the memory requirement for the given GPUs. Also, the optimizer checks all the different orders of the given heterogeneous GPUs for a single virtual worker to partition and place layers of the DNN model on them.” (Page 315, mapping the use of a memory requirement of each GPU to the generic recitation of the use of “data amounts in unused areas of memory”).

In regards to claim 7:

The present invention claims: “wherein: the neural network hardware has a memory for storing the partial tensors; and the software generation unit generates software for performing memory transfer of the partial tensors necessary for the operations to the memory from an external memory before implementing the operations if the partial tensors necessary for the operations are not stored in the memory.”

See above where a combination of Park and Song reads on the storage of partial data/tensors in memory. It would follow that if a minibatch or a different partition of a tensor were to be processed, that data would be retrieved before operations were performed on it.
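The pattern claims 5-7 recite (transfer a partial tensor from external memory into a local buffer before a run of consecutive convolutions, fetching on a miss) can be sketched as below. The buffer capacity, tile names, and eviction policy are all hypothetical, and the sketch comes from neither reference.

```python
# Illustrative sketch only -- neither reference's code. Before running a
# run of consecutive convolutions, the needed partial tensor is transferred
# from "external memory" into a small local memory, and fetched on a miss.

import numpy as np

EXTERNAL_MEMORY = {f"tile_{i}": np.random.randn(8, 8) for i in range(4)}
local_memory: dict[str, np.ndarray] = {}     # the on-chip buffer analogue
LOCAL_CAPACITY = 2                           # hypothetical size limit, in tiles

def fetch(tile_id: str) -> np.ndarray:
    """Return the tile from local memory, transferring it first if absent."""
    if tile_id not in local_memory:          # claim-7-style conditional transfer
        if len(local_memory) >= LOCAL_CAPACITY:
            local_memory.pop(next(iter(local_memory)))  # evict the oldest entry
        local_memory[tile_id] = EXTERNAL_MEMORY[tile_id].copy()
    return local_memory[tile_id]

def consecutive_convolutions(tile_id: str, n_ops: int) -> np.ndarray:
    tile = fetch(tile_id)                    # transfer happens before the run
    kernel = np.ones((3, 3)) / 9.0           # simple 3x3 averaging kernel
    for _ in range(n_ops):                   # consecutive convolutions, no refetch
        padded = np.pad(tile, 1)
        tile = sum(
            kernel[i, j] * padded[i:i + 8, j:j + 8]
            for i in range(3) for j in range(3)
        )
    return tile

out = consecutive_convolutions("tile_1", n_ops=3)
print(out.shape)  # (8, 8)
```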
In regards to claims 10 and 11:

Claims 10 and 11 recite similar limitations to Claims 3 and 4, with the exception of a recitation of “A neural network control method…” in Claim 9; therefore, both sets of Claims are similarly rejected.

In regards to claims 14 and 15:

Claims 14 and 15 recite similar limitations to Claims 3 and 4, with the exception of a recitation of “A non-transitory computer-readable recording medium storing the program…” in Claim 13; therefore, both sets of Claims are similarly rejected.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRIFFIN T BEAN whose telephone number is (703) 756-1473. The examiner can normally be reached M - F 7:30 - 4:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GRIFFIN TANNER BEAN/
Examiner, Art Unit 2121

/Li B. Zhen/
Supervisory Patent Examiner, Art Unit 2121

Prosecution Timeline

Apr 17, 2023
Application Filed
Mar 02, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12424302
ACCELERATED MOLECULAR DYNAMICS SIMULATION METHOD ON A QUANTUM-CLASSICAL HYBRID COMPUTING SYSTEM
2y 5m to grant Granted Sep 23, 2025
Patent 12314861
SYSTEMS AND METHODS FOR SEMI-SUPERVISED LEARNING WITH CONTRASTIVE GRAPH REGULARIZATION
2y 5m to grant Granted May 27, 2025
Patent 12261947
LEARNING SYSTEM, LEARNING METHOD, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Mar 25, 2025
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 21%
With Interview: 50% (+28.4% lift)
Median Time to Grant: 4y 4m
PTA Risk: Low
Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
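The headline numbers appear to follow directly from the career statistics shown above; here is a quick check in Python (our reading of the derivation, not the vendor's formula):

```python
# A quick check of how the headline projections appear to be derived from
# the career statistics above (our reading, not the vendor's formula).

granted, resolved = 4, 19
allow_rate = granted / resolved               # 0.2105 -> displayed as 21%
interview_lift = 0.284                        # the +28.4% interview lift
with_interview = allow_rate + interview_lift  # 0.4945 -> displayed as 50%

print(f"Grant probability: {allow_rate:.1%}")      # 21.1%
print(f"With interview:    {with_interview:.1%}")  # 49.5%, shown rounded as 50%
```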

Free tier: 3 strategy analyses per month