Prosecution Insights
Last updated: April 19, 2026
Application No. 18/125,554

MULTICAST AND SIMULCAST IN NEURAL ENGINES

Non-Final OA: §101, §103, §112
Filed: Mar 23, 2023
Examiner: ROHD, BENJAMIN MATTHEW
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: Apple Inc.
OA Round: 1 (Non-Final)
Grant Probability: 0% (At Risk)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 1 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift, based on resolved cases with interview)
Typical timeline: 3y 3m avg prosecution; 30 currently pending
Career history: 31 total applications across all art units

Statute-Specific Performance

§101: 23.5% (-16.5% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 1 resolved case.

Office Action

§101 §103 §112
DETAILED ACTION

This office action is in response to submission of the application on 03/23/2023. Claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

Claims 1-9, 12-13, 15-16, and 19-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Where an applicant acts as his or her own lexicographer to specifically define a term of a claim contrary to its ordinary meaning, the written description must clearly redefine the claim term and set forth the uncommon definition so as to put one reasonably skilled in the art on notice that the applicant intended to so redefine that claim term. Process Control Corp. v. HydReclaim Corp., 190 F.3d 1350, 1357, 52 USPQ2d 1029, 1033 (Fed. Cir. 1999).

The term “multicast” in claims 2-3, 11-12, and 19-20 is used by the claims to mean that multiple distinct data items are broadcast to multiple separate destinations (in this case, neural engine clusters), while the accepted meaning is that a single stream of data is broadcast to multiple separate destinations. While the specification alludes to the intended meaning (see ¶¶ 0087-0088), the term is indefinite because the specification does not clearly redefine it. For examination purposes, the intended meaning will be assumed.

The term “simulcast” in claims 2, 4, 8, 11, 13, and 19 is used by the claims to mean that multiple data items are broadcast simultaneously to multiple separate destinations (in this case, neural engine clusters), while the accepted meaning is that identical streams of data are broadcast to multiple separate destinations.
While the specification alludes to the intended meaning (see ¶¶ 0087 and 0091-0092), the term is indefinite because the specification does not clearly redefine it. For examination purposes, the intended meaning will be assumed.

Claim 1 recites the limitation “the neural processor unit” in line 9. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, this limitation will be interpreted as referring to the neural processor circuit.

Claims 3-4, 7-8, 12-13, and 16 each recite the limitation "the computational operation". There is insufficient antecedent basis for this limitation in the claim. For examination purposes, each first recitation of the limitation will be interpreted as referring to a computational operation, as is the case in claim 20.

Claims 2-9, 12-13, 15, and 20 are additionally rejected due to their dependence on rejected claims for the reasons outlined above.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1: Step 1: The claim is directed to a method, which falls within the statutory category of a process.

Step 2A Prong 1: The claim is directed to an abstract idea. Specifically, the claim recites: determine, based in part on a neural network description, a data broadcasting mode and an input data dimension configuration mode; (Abstract idea – mental process.
Determining data broadcasting and dimension configuration modes based on a neural network description can practically be performed in the human mind or with the aid of pen and paper, for example, by viewing the neural network structure and data on a display and mentally determining how to partition and broadcast the data. The courts have recognized that claims can recite a mental process even if they are claimed as being performed on a computer. See MPEP 2106.04(a)(2)(III).) generate one or more task descriptors, a task descriptor indicating the data broadcasting mode and input data dimension configuration mode; (Abstract idea – mental process. Generating task descriptors indicating configuration modes can practically be performed in the human mind or with the aid of pen and paper, for example, by writing down the mentally selected configuration modes on a sheet of paper. The courts have recognized that claims can recite a mental process even if they are claimed as being performed on a computer. See MPEP 2106.04(a)(2)(III).) Step 2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination. Specifically, the claim recites the additional elements: A system-on-a-chip circuit, comprising: a neural processor circuit and a central processor unit coupled to the neural processor unit (A system-on-a-chip comprising an NPU and a CPU is a generic computing environment, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).) 
a plurality of neural engines, at least one of the plurality of neural engines configured to perform computational operations on input data with variable dimensions; (Neural engines configured to operate on data with variable dimensions are generic computing components which are standard in the field of machine learning, and thus amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)

a data processor circuit coupled to the plurality of neural engines, the data processor circuit configured to provide multiple modes of data broadcasting, wherein the data is broadcasted from the data processor circuit to the plurality of neural engines via one or more broadcast buses; (A data processor circuit coupled to neural engines via broadcast buses to provide modes of data broadcasting is a standard component of an NPU, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)

instruct, using the task descriptors, the data processor circuit to broadcast input data to the plurality of neural engines according to the determined data broadcasting mode; (Instructing the data processor circuit to broadcast data according to the determined broadcast mode amounts to adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g).)

instruct, using the task descriptors, a neural engine to perform computational operations according to the determined input data dimension configuration mode.
(Performing computational operations based on input data dimension configuration is standard neural engine behavior, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Specifically, the claim recites the additional elements:

A system-on-a-chip circuit, comprising: a neural processor circuit and a central processor unit coupled to the neural processor unit (A system-on-a-chip comprising an NPU and a CPU is a generic computing environment, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)

a plurality of neural engines, at least one of the plurality of neural engines configured to perform computational operations on input data with variable dimensions; (Neural engines configured to operate on data with variable dimensions are generic computing components which are standard in the field of machine learning, and thus amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
a data processor circuit coupled to the plurality of neural engines, the data processor circuit configured to provide multiple modes of data broadcasting, wherein the data is broadcasted from the data processor circuit to the plurality of neural engines via one or more broadcast buses; (A data processor circuit coupled to neural engines via broadcast buses to provide modes of data broadcasting is a standard component of an NPU, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)

instruct, using the task descriptors, the data processor circuit to broadcast input data to the plurality of neural engines according to the determined data broadcasting mode; (Instructing the data processor circuit to broadcast data according to the determined broadcast mode amounts to adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g). Further, the limitation is directed to receiving or transmitting data over a network, which the courts have found to be well-understood, routine, and conventional in the computer arts – see MPEP 2106.05(d).)

instruct, using the task descriptors, a neural engine to perform computational operations according to the determined input data dimension configuration mode. (Performing computational operations based on input data dimension configuration is standard neural engine functionality, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
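The claim-1 pipeline analyzed above (determine a broadcasting mode and a dimension configuration mode from a neural network description, generate task descriptors, then instruct the data processor circuit and neural engines) can be sketched as a short software model. This is illustrative only: the descriptor fields, mode names, and selection heuristics below are hypothetical and come from neither the application nor the cited art.

```python
from dataclasses import dataclass

@dataclass
class TaskDescriptor:
    # Hypothetical fields standing in for the claimed "task descriptor"
    broadcast_mode: str       # e.g. "multicast" or "simulcast"
    dim_config_mode: str      # e.g. "wide" or "tall" input blocks

def compile_tasks(network_description: dict) -> list:
    """Determine a data broadcasting mode and an input data dimension
    configuration mode from a (hypothetical) neural network description,
    then emit one task descriptor per task, as in the claim-1 recitation."""
    broadcast_mode = ("simulcast" if network_description.get("same_op_per_cluster")
                      else "multicast")
    dim_config_mode = ("wide"
                       if network_description.get("width", 0) >= network_description.get("height", 0)
                       else "tall")
    return [TaskDescriptor(broadcast_mode, dim_config_mode)
            for _ in range(network_description.get("num_tasks", 1))]
```

In hardware the same information would live in register fields that drive the data processor circuit, rather than a Python object; the sketch only shows the descriptor-generation step the claim recites.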
Claims 2-20:

Claim 2 recites The system-on-a-chip circuit of claim 1, wherein the multiple modes of data broadcasting include a multicast mode and a simulcast mode, and wherein simulcast mode is a function of multicast mode. A data processor circuit configured to broadcast multiple data items to multiple destinations, including simultaneously, is a standard component of an NPU, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f). Therefore, the claim does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.

Claim 3 recites The system-on-a-chip circuit of claim 2, wherein the multicast mode further comprises: assigning each of the plurality of neural engines to a cluster; broadcasting a different portion of input data associated with the computational operation to each cluster to generate an output value; and performing, at each cluster, the computational operation. Assigning each neural engine to a cluster can practically be performed in the human mind or with the aid of pen and paper (i.e. mental process), for example, by viewing a diagram of the neural engines and mentally determining a cluster assignment for each neural engine. Broadcasting data to each cluster amounts to adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g). Further, the limitation is directed to receiving or transmitting data over a network, which the courts have found to be well-understood, routine, and conventional in the computer arts – see MPEP 2106.05(d).
Performing a computational operation is standard neural engine functionality, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f). Therefore, the claim merges with the abstract idea recited in claim 1, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.

Claim 4 recites The system-on-a-chip circuit of claim 3, wherein the simulcast mode further comprises: fetching consecutive blocks of input data from a buffer; broadcasting each block of input data to different clusters; and performing, at each cluster, the computational operation. Fetching consecutive blocks of input data from a buffer and broadcasting the data to different clusters amounts to adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g). Further, the limitation is directed to receiving or transmitting data over a network, which the courts have found to be well-understood, routine, and conventional in the computer arts – see MPEP 2106.05(d). Performing a computational operation is standard neural engine functionality, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f). Therefore, the claim does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
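The cluster behaviors recited in claims 3 and 4 can be modeled in a few lines. This is a hedged software sketch of the claimed hardware steps, not an implementation from the specification; the function and variable names are invented, and it deliberately uses the claim-specific meanings of “multicast” and “simulcast” flagged in the §112 rejection (distinct items to distinct clusters), not the accepted networking meanings.

```python
def multicast(input_data, clusters, operation):
    """Claim-3 sketch: broadcast a *different* portion of the input data to
    each cluster, then perform the computational operation at each cluster."""
    portions = [input_data[i::len(clusters)] for i in range(len(clusters))]
    return {c: operation(p) for c, p in zip(clusters, portions)}

def simulcast(buffer, clusters, operation):
    """Claim-4 sketch: fetch consecutive blocks of input data from a buffer,
    send each block to a different cluster, and run the operation per cluster."""
    blocks = [buffer.pop(0) for _ in clusters]   # consecutive blocks, in order
    return {c: operation(b) for c, b in zip(clusters, blocks)}
```

Under the accepted meanings, by contrast, every cluster would receive the same data; the difference between the two readings is exactly what the §112 rejection turns on.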
Claim 5 recites The system-on-a-chip circuit of claim 1, wherein the input data dimension configuration modes comprise: a first input data dimension configuration mode, which increases a first dimension of a block of input data, and a second input data dimension configuration mode, which increases a second dimension of a block of input data. Increasing a first or second dimension of input data can practically be performed in the human mind or with the aid of pen and paper (i.e. mental process), for example, by viewing the data in two dimensions on a sheet of paper, partitioning it by hand, and then increasing either the width or the height of each partition. Therefore, the claim merges with the abstract idea recited in claim 1, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.

Claim 6 recites The system-on-a-chip circuit of claim 3, wherein a total number of clusters is determined based in part on the determined input data configuration mode. Determining a number of clusters based on the input data dimension configuration can practically be performed in the human mind or with the aid of pen and paper (i.e. mental process), for example, by viewing the input data dimension configuration on a display and mentally selecting a suitable number of clusters. Therefore, the claim merges with the abstract idea recited in claim 3, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.

Claim 7 recites The system-on-a-chip circuit of claim 1, wherein the neural network description includes the computational operation and a number of output channels. This claim merely qualifies the neural network description of claim 1 as including an indication of computational operation and number of output channels.
Therefore, the claim merges with the abstract idea recited in claim 1, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.

Claim 8 recites The system-on-a-chip circuit of claim 1, wherein a buffer is configured to operate in simulcast mode based in part on the input data dimension configuration mode and computational operation. Determining to operate in simulcast mode based on the input data dimension configuration and computational operation can practically be performed in the human mind or with the aid of pen and paper (i.e. mental process), for example, by viewing the input data dimension configuration and computational operation on a display and mentally determining that simulcast mode is suitable. See MPEP 2106.04(a)(2)(III). A buffer is a standard component of an NPU, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f). Therefore, the claim merges with the abstract idea recited in claim 1, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.

Claim 9 recites The system-on-a-chip circuit of claim 1, wherein the input data is sized by a rasterizer, the rasterizer configured to divide the input data based on the input data dimension configuration mode determined by a compiler. Sizing and dividing input data based on the input data dimension configuration can practically be performed in the human mind or with the aid of pen and paper (i.e. mental process), for example, by viewing the data in two dimensions on a sheet of paper and partitioning it by hand according to determined partition dimensions. See MPEP 2106.04(a)(2)(III).
Use of a generic rasterizer and compiler to perform these steps amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f). Therefore, the claim merges with the abstract idea recited in claim 1, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.

Claims 10-17 are method claims containing substantially the same elements as system claims 1-7 and 9, respectively, and are rejected on the same grounds under 35 U.S.C. 101 as claims 1-7 and 9, respectively, mutatis mutandis. Claims 18-20 are device claims containing substantially the same elements as system claims 1-3, respectively, and are rejected on the same grounds under 35 U.S.C. 101 as claims 1-3, respectively, mutatis mutandis. The additional components of An electronic device, comprising: a system memory configured to store a neural network; and a system-on-a-chip circuit coupled to the memory, the system-on-a-chip circuit configured to: are interpreted as a general-purpose computing environment and mere instructions to apply the judicial exception on the computer. Therefore, the claims do not recite additional elements that are sufficient to amount to significantly more than the abstract idea.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Mills, U.S. Patent Application Publication US-20190340501-A1 (published 11/07/2019), in view of Mital et al. (hereinafter Mital), U.S. Patent Application Publication US-20230120227-A1 (filed 10/18/2022). Regarding Claim 1, Mills teaches A system-on-a-chip circuit, comprising: (0021: “FIG. 2 is a block diagram illustrating components in device 100… the device 100 may include, among other components, image sensor 202, system-on-a chip (SOC) component 204…”) a neural processor circuit, comprising: (0027: “SOC component 204 may include, among other subcomponents… neural processor circuit 218…”) a plurality of neural engines, at least one of the plurality of neural engines configured to perform computational operations on input data with variable dimensions; and (0039: “Neural processor circuit 218 is a configurable circuit that performs neural network operations on the input data based at least on kernel data 340. 
For this purpose, neural processor circuit 218 may include, among other components, neural task manager 310, a plurality of neural engines 314A through 314N…” 0058: “A work unit is a portion of the input data having a size that produces output values that fit into accumulator 414 of neural engine 314… work units can be shaped to one of 16×16, 32×8, 64×4, 128×2 or 256×1 dimension.” The neural processor circuit includes a plurality of neural engines which operate on work units (i.e. input data) of variable dimensions.) a data processor circuit coupled to the plurality of neural engines, the data processor circuit configured to provide multiple modes of data broadcasting, wherein the data is broadcasted from the data processor circuit to the plurality of neural engines [via one or more broadcast buses]; and (0041: “…the neural task manager 310 sends rasterizer information to the components of the neural processor circuit 218 to enable each of the components to track, retrieve or process appropriate portions of the input data…” 0065: “…rasterizer 718 in data buffer 318 broadcasts in sequence work units for processing by the neural engines 314…” 0043: “Data buffer 318 may be operated in a broadcast mode where data input data of all input channels are fed to all neural engines 314 or in a unicast mode where data input data of a subset of input channels are fed to each neural engine 314.” The neural task manager, data buffer, and rasterizer (i.e. data processor circuit) broadcast work units (i.e. data) to the neural engines, and can be operated in broadcast or unicast mode (i.e. provide multiple modes of data broadcasting).) 
a central processor unit coupled to the neural processor unit, the central processor unit configured to: (0027: “SOC component 204 may include, among other subcomponents…a central processor unit (CPU) 208…”)

Mills does not appear to explicitly disclose a data processor circuit for broadcasting data via one or more broadcast buses; determine, based in part on a neural network description, a data broadcasting mode and an input data dimension configuration mode; generate one or more task descriptors, a task descriptor indicating the data broadcasting mode and input data dimension configuration mode; instruct, using the task descriptors, the data processor circuit to broadcast input data to the plurality of neural engines according to the determined data broadcasting mode; and instruct, using the task descriptors, a neural engine to perform computational operations according to the determined input data dimension configuration mode.

However, Mital teaches a data processor circuit for broadcasting data via one or more broadcast buses (0030: “Within the AI processor 110, the scheduler is responsible for sending data to each of the multiple ALUs [arithmetic logic units] connected to it via the broadcast bus for parallel processing.” The scheduler (i.e. data processor circuit) broadcasts data via broadcast bus.)
determine, based in part on a neural network description, a data broadcasting mode and an input data dimension configuration mode; (0029: “A compiler for the AI processor 110 uses a descriptor/instruction set with specific instructions crafted to efficiently handle various operations for neural networks… The descriptor/instruction set includes categories of descriptors/instructions including, for example… Data descriptors/instructions…” 0007: “The memory manager configured to when a data size of a data set from an AI-based processing model layer using the AI processor is larger than a weight size, the memory manager slices the data set into data set chunks evenly spread across a cluster of components, broadcasts channel instructions from the AI-based processing model layer to every cluster of components, and processes the data set chunk in the cluster of components according to the channel instructions of the AI-based processing model layer. In addition, memory manager configured to when the data size of the data set is smaller than a weight size of the AI-based processing model layer, the memory manager slices the AI-based processing model layer into channel chunks, assigns a channel chunk to a channel cluster, broadcasts the data set to every cluster, and processes the data set chunk according to channel instructions of the channel chunk.” Based on the size of the data (i.e. neural network description), the system determines whether to partition the input data by spatial dimension or by channel (i.e. determines an input data dimension configuration mode) and determines whether to broadcast the entire dataset or data chunks to each component cluster (i.e. determines a data broadcasting mode).) generate one or more task descriptors, a task descriptor indicating the data broadcasting mode and input data dimension configuration mode; (See the previously cited portion of 0007. The AI processor determines instructions (i.e. 
task descriptors) for the component clusters which indicate whether the data is partitioned by spatial dimension or by channel (i.e. the input data dimension configuration mode) and whether the entire dataset or data chunks are broadcast (i.e. the data broadcasting mode).) instruct, using the task descriptors, the data processor circuit to broadcast input data to the plurality of neural engines according to the determined data broadcasting mode; and (See the previously cited portion of 0007 and 0030. The instructions (i.e. task descriptors) are broadcast to the component clusters where the scheduler (i.e. data processor circuit) sends input data (either the entire dataset or a data chunk, in accordance with the data broadcasting mode) to the plurality of components (i.e. neural engines).) instruct, using the task descriptors, a neural engine to perform computational operations according to the determined input data dimension configuration mode. (0027: “The two or more clusters of components connect to a broadcast bus for the memory manager 132 to broadcast a same instruction to the two or more clusters of components at a same time to evenly divide a computation across the two of more clusters of components so that each cluster of components performs a same computation but on a different portion of data…” Components (i.e. neural engines) receive instructions (i.e. task descriptors) to perform computations (i.e. computational operations) on partitions of input data (i.e. according to the input data dimension configuration mode).) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Mills and Mital. Mills teaches a neural processing circuit which partitions data into work units for processing by neural engines. Mital teaches dynamically determining how to partition and broadcast data to clusters for neural computation based on neural network descriptors. 
One of ordinary skill would have motivation to combine Mills and Mital in order to “efficiently do computations for AI systems, such as neural networks, as well as have a scalable architecture to adapt to most Artificial Intelligence (AI) networks, as well as optimize memory accesses and allocation” (Mital, 0025). Regarding Claim 2, Mills and Mital teach The system-on-a-chip circuit of claim 1, as shown above. Mital also teaches wherein the multiple modes of data broadcasting include a multicast mode and a simulcast mode, and wherein simulcast mode is a function of multicast mode. (0007: “The memory manager configured to when a data size of a data set from an AI-based processing model layer using the AI processor is larger than a weight size, the memory manager slices the data set into data set chunks evenly spread across a cluster of components, broadcasts channel instructions from the AI-based processing model layer to every cluster of components, and processes the data set chunk in the cluster of components according to the channel instructions of the AI-based processing model layer.” 0033: “Next, so given that each cluster is doing the same computation, then each cluster is running the exact same instructions. Thus, the system can broadcast those instructions to all of the clusters in the scheduler at a same time.” Slicing the data set into chunks and broadcasting each data set chunk to a different component cluster is a multicast mode. Broadcasting instructions to each cluster simultaneously is a simulcast mode, which is a function of the multicast mode in the case that each cluster is performing the same computation.) Regarding Claim 3, Mills and Mital teach The system-on-a-chip circuit of claim 2, as shown above. 
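Mital's quoted ¶0007 logic (slice the data set across clusters when it is larger than the weights, otherwise slice the layer into channel chunks and broadcast the whole data set to every cluster) amounts to a simple planning rule, sketched below. The function name and return fields are illustrative assumptions, not Mital's API or the application's design.

```python
def plan_partition(data_size, weight_size, num_clusters):
    """Hedged model of the Mital ¶0007 rule: decide what to slice across
    clusters and what to broadcast to every cluster."""
    if data_size > weight_size:
        # Data dominates: spread data chunks evenly across clusters and
        # broadcast the (same) channel instructions to every cluster.
        return {"slice": "data", "broadcast": "instructions",
                "chunk_size": data_size // num_clusters}
    # Weights dominate: slice the layer into channel chunks and
    # broadcast the full data set to every cluster.
    return {"slice": "channels", "broadcast": "data",
            "chunk_size": weight_size // num_clusters}
```

This is the behavior the rejection maps onto the claimed "data broadcasting mode" determination: the same comparison selects both the partitioning axis and what gets broadcast.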
Mital also teaches wherein the multicast mode further comprises: assigning each of the plurality of neural engines to a cluster; (0007: “The artificial intelligence processor can have multiple clusters of components including multiple arithmetic logic units each configured to have one or more computing engines to perform the computations for the AI system…”) broadcasting a different portion of input data associated with the computational operation to each cluster to generate an output value; (0007: “the memory manager slices the data set into data set chunks evenly spread across a cluster of components, broadcasts channel instructions from the AI-based processing model layer to every cluster of components, and processes the data set chunk in the cluster of components according to the channel instructions of the AI-based processing model layer.” 0026: “Note, at least one or more of the clusters of ALUs has an output that connects to its neighboring cluster.”) and performing, at each cluster, the computational operation. (0027: “…each cluster of components performs a same computation but on a different portion of data…”) Regarding Claim 4, Mills and Mital teach The system-on-a-chip circuit of claim 3, as shown above. Mital also teaches wherein the simulcast mode further comprises: broadcasting each block of input data to different clusters; and (0007: “the memory manager slices the data set into data set chunks evenly spread across a cluster of components, broadcasts channel instructions from the AI-based processing model layer to every cluster of components, and processes the data set chunk in the cluster of components according to the channel instructions of the AI-based processing model layer.”) performing, at each cluster, the computational operation. 
(0027: “…each cluster of components performs a same computation but on a different portion of data…”) Mills teaches fetching consecutive blocks of input data from a buffer; (0065: “…rasterizer 718 in data buffer 318 broadcasts in sequence work units for processing by the neural engines 314.” The rasterizer broadcasts (i.e. fetches) in sequence work units (i.e. consecutive blocks of input data) from the data buffer.) Regarding Claim 5, Mills and Mital teach The system-on-a-chip circuit of claim 1, as shown above. Mital also teaches wherein the input data dimension configuration modes comprise: a first input data dimension configuration mode, which increases a first dimension of a block of input data, and (0038: “When a data size of the data set is larger than a processing model layer for processing the data set, the memory manager 132 is configured to slice the data set into data set chunks.” Partitioning the dataset into chunks by spatial dimension is a first input data dimension configuration mode which results in a larger channel dimension in each chunk (i.e. increases a first dimension of a block of input data).) a second input data dimension configuration mode, which increases a second dimension of a block of input data. (0040: “When the data size of the data set is smaller than the processing model layer, the memory manager 132 is configured to slice the processing model layer into channel chunks.” Partitioning the dataset into chunks by channel is a second input data dimension configuration mode which results in larger spatial dimensions in each chunk (i.e. increases a second dimension of a block of input data).) Regarding Claim 6, Mills and Mital teach The system-on-a-chip circuit of claim 3, as shown above. Mital also teaches wherein a total number of clusters is determined based in part on the determined input data configuration mode. 
(0052: “The memory manager 132 can scale an amount of instances of the clusters to perform the computations for the AI system via a user configurable register transfer language parameter fed into the compiler at compile time (Block 606).” The amount of instances of clusters (i.e. total number of clusters) is scaled (i.e. determined) to perform the computations on the partitioned input data (i.e. based on the input data dimension configuration mode).) Regarding Claim 7, Mills and Mital teach The system-on-a-chip circuit of claim 1, as shown above. Mital also teaches wherein the neural network description includes the computational operation and a number of output channels. (0029: “For example, the compiler for the AI processor 110 uses a descriptor/instruction set with specific instructions crafted to efficiently handle various operations, addressing modes, data types, ability to address memory locations, etc., for neural networks. These neural networks can have sparse weights, manipulate one or more dimensional data, e.g., height, width, and channels and other dimensions such as images/frames per second… The descriptor/instruction set includes categories of descriptors/instructions including, for example… Data descriptors/instructions (used for both input and output)…” 0041: “In another example… The data is pretty low but the model because of the number of channels is pretty large, so what the compiler cooperating with the scheduler does is divide these 576 channels into the four clusters, so that each of the clusters is going to generate 144 channels.” The compiler uses the descriptor/instruction set (i.e. neural network description) which includes operations for neural networks (i.e. computational operations) and data descriptors such as number of output channels.) Regarding Claim 8, Mills and Mital teach The system-on-a-chip circuit of claim 1, as shown above. 
Mital also teaches wherein [a buffer] is configured to operate in simulcast mode based in part on the input data dimension configuration mode and computational operation. (0033: “All of these clusters can now run simultaneously and do the same computation but on a different portion of the data (their portion/chunk). Next, so given that each cluster is doing the same computation, then each cluster is running the exact same instructions. Thus, the system can broadcast those instructions to all of the clusters in the scheduler at a same time.” The system is configured to broadcast instructions to each cluster simultaneously (i.e. operate in simulcast mode) when each cluster is performing the same computation (i.e. based on the computational operation) on a different partition of data (i.e. based on the input data dimension configuration mode).) Mills teaches broadcasting via a buffer (0065: “…rasterizer 718 in data buffer 318 broadcasts in sequence work units for processing by the neural engines 314.”) Regarding Claim 9, Mills and Mital teach The system-on-a-chip circuit of claim 1, as shown above. Mills also teaches wherein the input data is sized by a rasterizer, the rasterizer configured to divide the input data based on the input data dimension configuration mode determined [by a compiler]. (0066: “To perform their functions, each of rasterizers 714, 718, 720, 722 receives task information 710 indicating how the input data and/or kernel data are to be segmented and to be handled by each component of the neural processor circuit 218. The task information includes information about particulars of the current layer (e.g., dimensions of input and output data, dimension of an associated kernel, types of padding at the boundaries of input data).” Rasterizers perform segmentation (i.e. sizing and division) of input data based on information including dimensions of input data (i.e. the input data dimension configuration mode).) 
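The frame-versus-channel slicing that the cited Mital passages describe (and that the rasterizer analysis above relies on) can be sketched conceptually. This is an editorial Python illustration with hypothetical names (`chunk_shape`, `pick_mode`); it is not code from either reference:

```python
def chunk_shape(shape, num_clusters, mode):
    """Per-cluster chunk shape for an input tensor of shape
    (height, width, channels), split either along a spatial dimension
    ("frame sub-layering") or along the channel dimension
    ("channel sub-layering")."""
    height, width, channels = shape
    if mode == "spatial":
        # Each cluster gets a horizontal band at full channel depth.
        return (height // num_clusters, width, channels)
    if mode == "channel":
        # Each cluster gets a subset of channels at full spatial extent.
        return (height, width, channels // num_clusters)
    raise ValueError(f"unknown mode: {mode!r}")


def pick_mode(data_size, weight_size):
    """Threshold switch per Mital 0039: when the data outweighs the
    weights, slice the data spatially; otherwise slice by channel."""
    return "spatial" if data_size > weight_size else "channel"


# Echoing Mital's 0041 example: 576 channels divided across four
# clusters, so that each cluster generates 144 channels.
print(chunk_shape((64, 64, 576), 4, "channel"))  # (64, 64, 144)
```

The sketch assumes even divisibility for simplicity; the references describe spreading chunks "evenly" but do not specify remainder handling.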
Mital teaches the input data dimension configuration mode is determined by a compiler. (0039: “At a user selectable threshold, a size/amount of the data is compared to a size/amount of the weights will transition thru that threshold and change from moving data a single time and broadcasting weights over to moving weights one time and broadcasting data. At this point, the memory manager sub-module 132 of the compiler will switch the AI processor 110 from frame sub-layering across clusters over to channel sub-layering across clusters.” Switching from frame sub-layering to channel sub-layering is determining the input data dimension configuration mode, and this switch is performed by the compiler.)

Claims 10-17 are method claims containing substantially the same elements as system claims 1-7 and 9, respectively. Mills and Mital teach the elements of claims 1-7 and 9, as shown above.

Claims 18-20 are device claims containing substantially the same elements as system claims 1-3, respectively. Mills and Mital teach the elements of claims 1-3, as shown above. Mills also teaches An electronic device, comprising: a system memory configured to store a neural network; and a system-on-a-chip circuit coupled to the memory, the system-on-a-chip circuit configured to: (0021: “FIG. 2 is a block diagram illustrating components in device 100, according to one embodiment. Device 100 may perform various operations including image processing. For this and other purposes, the device 100 may include, among other components, image sensor 202, system-on-a chip (SOC) component 204, system memory 230…” 0016: “Embodiments of the present disclosure relate to… performing neural network operations.” 0025: “System memory 230 is a component for storing instructions for execution by SOC component 204 and for storing data processed by SOC component 204.” The device performs neural network operations, and includes an SOC coupled to a system memory which stores instructions and data for SOC execution (i.e. the memory is configured to store a neural network).)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN M ROHD whose telephone number is (571)272-6445. The examiner can normally be reached Mon-Thurs 8:00-6:00 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo, can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/B.M.R./
Examiner, Art Unit 2147

/ERIC NILSSON/
Primary Examiner, Art Unit 2151
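The claim-construction distinction at the center of the §112 and §103 discussions above (distinct data items delivered to multiple clusters, versus one identical stream delivered to all clusters simultaneously) can be summarized in a short sketch. This is an editorial Python illustration of the examiner's construction, with hypothetical names; it is not code from the application or the cited references:

```python
def multicast(chunks, clusters):
    """Multicast as construed in this action: a *different* data chunk
    is delivered to each neural-engine cluster."""
    for chunk, cluster in zip(chunks, clusters):
        cluster.append(chunk)


def simulcast(instruction_stream, clusters):
    """Simulcast as construed in this action: the *same* instruction
    stream is delivered to every cluster at the same time."""
    for cluster in clusters:
        cluster.append(instruction_stream)


clusters = [[] for _ in range(4)]
multicast(["chunk0", "chunk1", "chunk2", "chunk3"], clusters)
simulcast("shared-instructions", clusters)
# Each cluster now holds its own data chunk plus the shared instructions.
print(clusters[0])  # ['chunk0', 'shared-instructions']
```

Under this construction, simulcast is "a function of" multicast in the sense the examiner describes: once the data has been multicast in distinct chunks, the identical per-cluster computation lets one instruction stream be simulcast to all clusters.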

Prosecution Timeline

Mar 23, 2023: Application Filed
Feb 06, 2025: Response after Non-Final Action
Jan 06, 2026: Non-Final Rejection (§101, §103, §112)
Mar 11, 2026: Applicant Interview (Telephonic)
Mar 11, 2026: Examiner Interview Summary


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 0%
With Interview: 0% (+0.0%)
Median Time to Grant: 3y 3m
PTA Risk: Low

Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
