Prosecution Insights
Last updated: April 19, 2026
Application No. 17/968,530

Method and apparatus having a scalable architecture for neural networks

Status: Non-Final OA (§101, §103)
Filed: Oct 18, 2022
Examiner: ABOU EL SEOUD, MOHAMED
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Roviero Inc.
OA Round: 1 (Non-Final)
Grant Probability: 38% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 4y 2m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 38% (80 granted / 208 resolved), -16.5% vs TC avg — grants only 38% of cases
Interview Lift: +38.7% among resolved cases with an interview (a strong lift)
Typical Timeline: 4y 2m average prosecution; 46 applications currently pending
Career History: 254 total applications across all art units
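
As a quick sanity check, the headline figures above can be reproduced from the raw counts. A minimal Python sketch, assuming (as the cards imply) that the interview lift is applied as additive percentage points on top of the base allow rate; the variable names are illustrative, not from the tool:

```python
# Reproduce the examiner cards from the raw counts shown above.
# Assumption: the "+38.7%" interview lift is additive percentage
# points on top of the career allow rate (38% + 38.7 -> ~77%).
granted, resolved = 80, 208

career_allow_rate = 100 * granted / resolved         # 38.46... -> shown as 38%
interview_lift = 38.7                                # percentage points, per the card
with_interview = career_allow_rate + interview_lift  # 77.2 -> shown as 77%

print(f"Career allow rate: {career_allow_rate:.0f}%")
print(f"With interview:    {with_interview:.0f}%")
```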

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 48.2% (+8.2% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 208 resolved cases.
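
The deltas are internally consistent: adding each one back to its reported rate implies the same Tech Center baseline of 40.0% for every statute, which suggests a single TC-level estimate rather than per-statute averages. A small illustrative check:

```python
# Back out the implied Tech Center average from each statute's
# reported rate and its delta vs the TC average.
stats = {"101": (16.1, -23.9), "103": (48.2, 8.2),
         "102": (15.1, -24.9), "112": (14.7, -25.3)}

for statute, (rate, delta) in stats.items():
    print(f"§{statute}: implied TC avg = {rate - delta:.1f}%")  # 40.0% each
```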

Office Action

Rejections: §101, §103
DETAILED ACTION

This office action is responsive to the above-identified application filed 10/18/2022. The application contains claims 1-20, all examined and rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The Information Disclosure Statement with references submitted 2/2/2023 has been considered and entered into the file.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-6 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 1 is rejected under 35 U.S.C. 101 because the claimed inventions are directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. While independent claims 1, 7, 13, and 20 are each directed to a statutory category, each recites a series of steps pertaining to analyzing received data to identify features used to predict machine failure, which appears to be directed to an abstract idea (mental process, mathematical concept). Claims 1-7 and 20 are rejected under 35 U.S.C. § 101 because the instant application is directed to non-patentable subject matter. Specifically, the claims are directed toward at least one judicial exception without reciting additional elements that amount to significantly more than the judicial exception. The rationale for this determination is in accordance with the guidelines of the USPTO, applies to all statutory categories, and is explained in detail below.

When considering subject matter eligibility under 35 U.S.C. 101, (1) it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. If the claim does fall within one of the statutory categories, (2a) it must then be determined whether the claim is directed to a judicial exception (i.e., law of nature, natural phenomenon, or abstract idea), and if so, (2b) it must additionally be determined whether the claim is a patent-eligible application of the exception. If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself. Examples of abstract ideas include certain methods of organizing human activity, mental processes, and mathematical concepts (2019 PEG).

STEP 1. Per Step 1, the claims are determined to include a process, a manufacture, and a machine. Therefore, the claims are directed to a statutory eligibility category.

At Step 2A, Prong 1, the invention is directed to distributing workload over different computing devices, which is akin to a mental process (see Alice). As such, the claims include an abstract idea.
When considering the limitations individually and as a whole, the limitations directed to the abstract idea are: “evenly divide a computation for a calculation session across the two or more clusters of components” (mental process: observation, evaluation, and judgment). The claim recites additional elements: “An apparatus, comprising: an Artificial Intelligence (AI) processor composed of two or more clusters of components, where each cluster includes two or more arithmetic logic units (ALUs) that each have one or more compute engines, a scheduler, and a local memory, where at least a first cluster of the two or more clusters of components has an output that connects to its neighboring cluster; and a memory manager to direct and communicate with the cluster of components” (“Using a computer as a tool to perform a mental process”, MPEP 2106.04(a)(2)(III)(C)).

This judicial exception is not integrated into a practical application. The elements are recited at a high level of generality, i.e., a generic computing system performing generic functions, including generic processing of data. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea (2019 Revised Patent Subject Matter Eligibility Guidance ("2019 PEG")). Thus, under Step 2A of the Mayo framework, the Examiner holds that the claims are directed to concepts identified as abstract.

STEP 2B. Because the claims include one or more abstract ideas, the examiner now proceeds to Step 2B of the analysis, in which the examiner considers whether the claims include, individually or as an ordered combination, limitations that are "significantly more" than the abstract idea itself. This includes analysis as to whether there is an improvement to either the "computer itself," "another technology," the "technical field," or significantly more than what is "well-understood, routine, or conventional" (WURC) in the related arts. The instant application includes in claim 1 additional steps to those deemed to be abstract idea(s). Taken individually, these steps are: “An apparatus, comprising: an Artificial Intelligence (AI) processor composed of two or more clusters of components, where each cluster includes two or more arithmetic logic units (ALUs) that each have one or more compute engines, a scheduler, and a local memory, where at least a first cluster of the two or more clusters of components has an output that connects to its neighboring cluster; and a memory manager to direct and communicate with the cluster of components” (“Using a computer as a tool to perform a mental process”, MPEP 2106.04(a)(2)(III)(C)).

In the instant case, claim 1 is directed to the above-mentioned abstract idea. Technical functions such as receiving and extracting are common and basic functions in computer technology. The individual limitations are recited at a high level and do not provide any specific technology or techniques to perform the functions claimed. In addition, when the claims are taken as a whole, as an ordered combination, the combination of steps does not add "significantly more" by virtue of considering the steps as a whole, as an ordered combination. The instant application, therefore, still appears only to implement the abstract idea in the particular technological environments using what is well-understood, routine, and conventional in the related arts.
The steps are still a combination made to the abstract idea. The additional steps only add to those abstract ideas using well-understood and conventional functions, and the claims do not show improved ways of, for example, unconventional non-routine functions for analyzing model operations or updating the model that could then be pointed to as being "significantly more" than the abstract ideas themselves. Moreover, the Examiner was not able to identify any "unconventional" steps which, when considered in the ordered combination with the other steps, could have transformed the nature of the abstract idea previously identified. The instant application, therefore, still appears to only implement the abstract ideas in the particular technological environments using what is well-understood, routine, and conventional (WURC) in the related arts. Further, note that the limitations in the instant claims are performed by the generically recited computing devices. The limitations are merely instructions to implement the abstract idea on a computing device that is recited at an abstract level and require no more than generic computing devices to perform generic functions. Independent claim 20 follows the same analogy and is rejected using a similar analysis as claim 1.

CONCLUSION. It is therefore determined that the instant application not only represents an abstract idea identified as such based on criteria defined by the Courts and on USPTO examination guidelines, but also lacks the capability to bring about "Improvements to another technology or technical field" (Alice), bring about "Improvements to the functioning of the computer itself" (Alice), "Apply the judicial exception with, or by use of, a particular machine" (Bilski), "Effect a transformation or reduction of a particular article to a different state or thing" (Diehr), "Add a specific limitation other than what is well-understood, routine and conventional in the field" (Mayo), "Add unconventional steps that confine the claim to a particular useful application" (Mayo), or contain "Other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment" (Alice); nor has it transformed a traditionally subjective process performed by humans into a mathematically automated process executed on computers (McRO), or recited limitations directed to improvements in computer-related technology, including claims directed to software (Enfish).

The dependent claims, when considered individually and as a whole, likewise do not provide "significantly more" than the abstract idea, for similar reasons as the independent claim. Claim 2 discloses “where an amount of instances of the cluster of components is scalable via a user supplied Register Transfer Language (RTL) parameter supplied by a creator of the Artificial Intelligence (AI) processor” (merely indicates a field of use or technological environment in which the judicial exception is performed and fails to add an inventive concept to the claims; see MPEP 2106.05(h)). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
Claim 3 discloses “two or more clusters of components connect to a broadcast bus for the memory manager to broadcast a same instruction to the two or more clusters of components at a same time to evenly divide a computation across the two or more clusters of components so that each cluster of components performs a same computation but on a different portion of data from an AI system using the AI processor” (merely indicates a field of use or technological environment in which the judicial exception is performed and fails to add an inventive concept to the claims; see MPEP 2106.05(h)). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.

Claim 4 discloses “memory manager is configured to have a user selectable threshold for a size of data from an AI system using the AI processor that is compared to a size of weights (comparing data size is a mental process) from the AI system using the AI processor, where the user selectable threshold is configured to change the memory manager from moving the data from the AI system a single time into the local memory in the cluster and broadcasting weights over a broadcast bus to the two or more clusters of components over to moving the weights from the AI system a single time into the local memory in the cluster and broadcasting the data from the AI system over the broadcast bus to the two or more clusters of components” (a mental process, as a user could make a decision related to data distribution based on data size; the processor, memory, broadcast bus, etc. merely indicate a field of use or technological environment in which the judicial exception is performed and fail to add an inventive concept to the claims; see MPEP 2106.05(h)). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.

Claim 5 discloses “where the memory manager is configured to fetch data from an external memory from the AI processor across the local memories of each corresponding cluster of components a single time per calculation session when a size of weights from the AI system using the AI processor is small compared to a size of data from the AI system using the AI processor” (merely indicates a field of use or technological environment in which the judicial exception is performed and fails to add an inventive concept to the claims; see MPEP 2106.05(h); the examiner notes that the claim does not recite the actual fetching step, so it was not considered as an extra-solution activity). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.

Claim 6 discloses “The apparatus of claim 5, where the memory manager is further configured to fetch the weights of the AI system from the external memory from the AI processor across the local memories of each corresponding cluster of components a single time per calculation session when the size of weights from the AI system using the AI processor is larger than the size of the data from the AI system using the AI processor” (merely indicates a field of use or technological environment in which the judicial exception is performed and fails to add an inventive concept to the claims; see MPEP 2106.05(h); the examiner notes that the claim does not recite the actual fetching step, so it was not considered as an extra-solution activity).
It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea. The dependent claims, which impose additional limitations, also fail to claim patent-eligible subject matter because the limitations cannot be considered statutory. The dependent claim(s) have been examined individually and in combination with the preceding claims; however, they do not cure the deficiencies of claim 1. Where all claims are directed to the same abstract idea, "addressing each claim of the asserted patents [is] unnecessary." Content Extraction & Transmission LLC v. Wells Fargo Bank, Nat'l Ass'n, 776 F.3d 1343, 1348 (Fed. Cir. 2014). If applicant believes the dependent claims are directed towards patent-eligible subject matter, they are invited to point out the specific limitations in the claims that are directed towards patent-eligible subject matter. Claims for the other statutory classes are similarly analyzed. For at least these reasons, the claimed inventions of each of dependent claims 2-6 are directed, directly or indirectly, to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more and are rejected under 35 USC 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 5-20 are rejected under 35 U.S.C. 103 as being unpatentable over Mital [US 2020/0293868 A1] in view of Venkataramani et al. [US 2021/0110247 A1, hereinafter Ven].

With regard to Claim 1, Mital teaches an apparatus, comprising: an Artificial Intelligence (AI) processor composed of two or more clusters of components, where each cluster includes two or more arithmetic logic units (ALUs) that each have one or more compute engines, a scheduler, and a local memory, where at least a first cluster of the two or more clusters of components has an output that connects to its neighboring cluster (Fig. 1, “cluster with 32 ALU, SCH, with L2 memory … ALU with L1 memory and 64 multi threaded compute engines … Node Ring – High Speed Data and Cmd Rings”, Fig. 1, “cluster with … SCH, with L2 memory”, ¶17, “integrated circuit 100 contains a scheduler (SCH), one or more arithmetic logic units (ALUs), a communication bus, a mode controller, and one or more random access memories configured to cooperate with each other”); and a memory manager to direct and communicate with the cluster of components (Fig. 1, “Node Ring”, “High Speed Data and Cmd Rings”).
Mital does not teach evenly dividing a computation for a calculation session across the two or more clusters of components. Ven teaches a memory manager to direct and communicate with the cluster of components (Fig. 6, Y-Ring, X-Ring) to evenly divide a computation for a calculation session across the two or more clusters of components (Ven, ¶2, “data parallelism where workload is split by the features (or inputs) of the layers. In this technique, each processor can perform all the tasks for a particular batch (or minibatch) of training data. Using an image processor NN as an example, each processor may be assigned to process a respective image”). Mital and Ven are analogous art to the claimed invention because they are from a similar field of endeavor: efficiently processing and executing Artificial Intelligence operations, including computations for a neural network. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Mital with the resolutions disclosed by Ven with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Mital as described above to increase efficiency by using more efficient techniques that save the time required for training, since data parallelism is inherently more efficient for weight-heavy layers (Ven, ¶¶2-3, ¶¶15-16).

With regard to Claim 2, Mital and Ven teach the apparatus of claim 1, where an amount of instances of the cluster of components is scalable via a user supplied Register Transfer Language (RTL) parameter supplied by a creator of the Artificial Intelligence (AI) processor (Mital, ¶21, “Each arithmetic logic unit can be instantiated with multiple compute engines (CEs) via a user configurable RTL setting for the FPGA”, ¶30, “FPGA is scalable on amount of ALUs instantiated via user configurable parameter set in the RTL. Each ALU can instantiate multiple CEs via the user configurable RTL setting for the FPGA. The depth of the Reuse RAM and Renew RAM in each ALU can also be set via the user configurable RTL setting”, ¶31, “arithmetic logic unit is configurable to be instantiated with multiple compute engines via a user configurable register transfer language (RTL) setting”). The same motivation to combine for claim 1 equally applies to the current claim.

With regard to Claim 3, Mital and Ven teach the apparatus of claim 1, where the two or more clusters of components connect to a broadcast bus for the memory manager to broadcast a same instruction to the two or more clusters of components at a same time to evenly divide a computation across the two or more clusters of components so that each cluster of components performs a same computation but on a different portion of data from an AI system using the AI processor (Mital, Fig. 1; Ven, ¶2, “data parallelism where workload is split by the features (or inputs) of the layers. In this technique, each processor can perform all the tasks for a particular batch (or minibatch) of training data. Using an image processor NN as an example, each processor may be assigned to process a respective image”). The same motivation to combine for claim 1 equally applies to the current claim.
With regard to Claim 5, Mital and Ven teach the apparatus of claim 1, where the memory manager is configured to fetch data from an external memory from the AI processor across the local memories of each corresponding cluster of components a single time per calculation session when a size of weights from the AI system using the AI processor is small compared to a size of data from the AI system using the AI processor (Mital, ¶26, “Node DMA engine talks and interfaces with a compiler, an external host CPU and an external memory”, Fig. 1, ¶19, “mode0, the input data (which is anticipated as being the largest amount of static data being used in the calculations) is loaded into the reuse RAM of the neural network processor”, ¶27, “Reuse RAM gets loaded a single time per calculation session”; Ven, Fig. 5, 520, 535). The same motivation to combine for claim 1 equally applies to the current claim.

With regard to Claim 6, Mital and Ven teach the apparatus of claim 5, where the memory manager is further configured to fetch the weights of the AI system from the external memory from the AI processor across the local memories of each corresponding cluster of components a single time per calculation session when the size of weights from the AI system using the AI processor is larger than the size of the data from the AI system using the AI processor (Mital, ¶26, “Node DMA engine talks and interfaces with a compiler, an external host CPU and an external memory”, Fig. 1, ¶19, “mode1 … weights (which are now anticipated as being the largest amount of static data being used in the calculations) are loaded into the reuse RAM”, ¶27, “Reuse RAM gets loaded a single time per calculation session … reused multiple times”; Ven, Fig. 5, 520, 525). The same motivation to combine for claim 1 equally applies to the current claim.

With regard to Claim 7, Mital teaches an artificial intelligence (AI) processor, comprising: multiple clusters of components including multiple arithmetic logic units each configured to have one or more computing engines to perform the computations for the AI system (Fig. 1, “cluster with 32 ALU, SCH, with L2 memory … ALU with L1 memory and 64 multi threaded compute engines … Node Ring – High Speed Data and Cmd Rings”), and a scheduler with a local scheduler memory (Fig. 1, “cluster with … SCH, with L2 memory”, ¶17, “integrated circuit 100 contains a scheduler (SCH), one or more arithmetic logic units (ALUs), a communication bus, a mode controller, and one or more random access memories configured to cooperate with each other”); and a memory manager configured to control a node ring connected between the multiple clusters of components (Fig. 1, “Node Ring”, “High Speed Data and Cmd Rings”) and to fetch data from an external memory to the local scheduler memory a single time per calculation session (¶27, “FIG. 4 also shows a reuse RAM cooperating with the scheduler to be loaded merely one time per calculation session”).
Mital does not explicitly teach: when a data size of a data set from an AI-based processing model layer using the AI processor is larger than a weight size, the memory manager is configured to slice the data set into data set chunks evenly spread across a cluster of components, to broadcast channel instructions from the AI-based processing model layer to every cluster of components, and to process the data set chunks in the cluster of components according to the channel instructions of the AI-based processing model layer; and when the data size of the data set is smaller than a weight size of the AI-based processing model layer, the memory manager is configured to slice the AI-based processing model layer into channel chunks, assign a channel chunk to a channel cluster, broadcast the data set to every cluster, and process the data set according to channel instructions of the channel chunk.

Ven teaches an artificial intelligence (AI) processor, comprising: multiple clusters of components including multiple arithmetic logic units each configured to have one or more computing engines to perform the computations for the AI system (Fig. 2, 2D array, processors, MAC units, PE Array, ¶¶22-24); and a memory manager configured to control a node ring connected between the multiple clusters of components (Fig. 6, Y-Ring, X-Ring), wherein: when a data size of a data set from an AI-based processing model layer using the AI processor is larger than a weight size, the memory manager is configured to slice the data set into data set chunks evenly spread across a cluster of components, to broadcast channel instructions from the AI-based processing model layer to every cluster of components, and to process the data set chunks in the cluster of components according to the channel instructions of the AI-based processing model layer (Fig. 5, 520, 535, 540, ¶¶42-43, “NN assignor selects the data parallelism technique to use in the direction with the least amount of bandwidth in the 2D array”); and when the data size of the data set is smaller than a weight size of the AI-based processing model layer, the memory manager is configured to slice the AI-based processing model layer into channel chunks, assign a channel chunk to a channel cluster, broadcast the data set to every cluster, and process the data set according to channel instructions of the channel chunk (Fig. 5, 520, 525, 530, ¶¶42-43, “if the layer is a feature heavy layer, the method 500 proceeds to block 535 where the NN assignor selects the model parallelism technique”).

Mital and Ven are analogous art to the claimed invention because they are from a similar field of endeavor: efficiently processing and executing Artificial Intelligence operations, including computations for a neural network. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Mital with the resolutions disclosed by Ven with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Mital as described above to increase efficiency by selecting the more efficient technique (i.e., model parallelism or data parallelism) based on layer features compared to weights, as model parallelism is more efficient than data parallelism for weight heavy layers but less efficient for feature heavy layers, where the feature (or input) data is larger than the weight data (Ven, ¶¶2-3).
With regard to Claim 8, Mital and Ven teach the AI processor of claim 7, wherein an arithmetic logic unit is configured to store a static data set of channel instructions in a reuse random access memory of the arithmetic logic unit (Mital, ¶27, “FIG. 4 also shows a reuse RAM cooperating with the scheduler to be loaded merely one time per calculation session with a larger amount of data between i) a set of weights and ii) input data from input channels, for the neural network in which the larger amount of data is to be reused multiple times during a given calculation session”, ¶37, “FIG. 6 illustrates … scheduler and an ALU configured to handle stride and max pool efficiently by dividing input data from input channels and weights from a neural network into even/odd rows and columns and then to process the weights and input data as even and odd segments”; processing in even and odd segments shows channel processing instructions; ¶25, “compiler … create a bit mask that accompanies input data from input channels … bit mask sent by the scheduler to the ALU can be decoded to identify which weights have values that should be calculated and identify sparse weights where the calculation for that weight can be skipped”; the bit mask is an instruction to control channel operations regarding what to skip and what to use). The same motivation to combine for claim 7 equally applies to the current claim.

With regard to Claim 9, Mital and Ven teach the AI processor of claim 8, wherein the arithmetic logic unit is configured to move the static data set of channel instructions to the reuse random access memory in a single data move to reduce internal data movement (Mital, ¶27, “Reuse RAM gets loaded a single time per calculation session with the larger amount of data between i) weights and ii) input data from all of the input channels, which is reused multiple times (usually static data)”). The same motivation to combine for claim 8 equally applies to the current claim.

With regard to Claim 10, Mital and Ven teach the AI processor of claim 7, wherein an arithmetic logic unit is configured to store a static data set of input data from the data set in a reuse random access memory (Mital, ¶19, ¶27, “Reuse RAM gets loaded a single time per calculation session with the larger amount of data between i) weights and ii) input data from all of the input channels, which is reused multiple times (usually static data)”, Fig. 5, ¶11, “ALU that has a RAM width of memory cells set in a reuse RAM to have an additional two or more columns of greater than an amount of columns needed to store input data from input channels in order to allow the ALU to independently perform the calculations for the 3D data object”). The same motivation to combine for claim 7 equally applies to the current claim.
With regard to Claim 11, Mital and Ven teach the AI processor of claim 7, wherein an arithmetic logic unit is configured to store a variable data set of output data based on the data set in a renew random access memory, and wherein the renew random access memory is configured to use a read pointer to identify the variable data set (¶10, “arithmetic logic unit contains an instance of a renew RAM and an instance of the reuse RAM to i) feed the input data and the set of weights into each compute engine and ii) to also store an output result from a calculation from that compute engine”, ¶27, “Renew RAM is loaded with the other set of data either i) weights or ii) input data, which can changed and/or moved around during the calculation session”, ¶28, “use of RAM accommodates this variable set of possibly a lot of data better than a register. The ALU can use a read pointer for the RAM. Note, the read pointer will jump over a calculation for the 3D object each time a sparse weight is indicated by the bit mask”). The same motivation to combine for claim 7 equally applies to the current claim.

With regard to Claim 12, Mital and Ven teach the AI processor of claim 11, wherein the renew random access memory is configured to skip the read pointer over a data object if a sparse weight is indicated by a bit mask (¶25, “compiler … create a bit mask that accompanies input data from input channels … bit mask sent by the scheduler to the ALU can be decoded to identify which weights have values that should be calculated and identify sparse weights where the calculation for that weight can be skipped”, ¶28, “use of RAM accommodates this variable set of possibly a lot of data better than a register. The ALU can use a read pointer for the RAM. Note, the read pointer will jump over a calculation for the 3D object each time a sparse weight is indicated by the bit mask”). The same motivation to combine for claim 11 equally applies to the current claim.

With regard to Claims 13-18: Claim 13 is similar in scope to claim 1, claim 14 to claim 8, claim 15 to claim 9, claim 16 to claim 10, claim 17 to claim 11, and claim 18 to claim 12; each is therefore rejected under a similar rationale.

With regard to Claim 19, Mital and Ven teach the method for processing the data set with the AI processor of claim 17, further comprising: creating the multiple clusters of components to connect to a broadcast bus for the memory manager to broadcast a same instruction to the multiple clusters of components at a same time to evenly divide a computation across the multiple clusters of components so that each cluster of components performs a same computation but on a different portion of the data set (Mital, Fig. 1; Ven, ¶2, “data parallelism where workload is split by the features (or inputs) of the layers. In this technique, each processor can perform all the tasks for a particular batch (or minibatch) of training data. Using an image processor NN as an example, each processor may be assigned to process a respective image”).
With regard to Claim 20, claim 20 is similar in scope to claim 1; therefore it is rejected under a similar rationale.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Mital [US 2020/0293868 A1] in view of Venkataramani et al. [US 2021/0110247 A1, hereinafter Ven], and further in view of Shivarudraiah et al. [US 2016/0092543 A1, hereinafter Shiv].

With regard to Claim 4, Mital and Ven teach the apparatus of claim 1, where the memory manager is configured to have a user selectable (Mital, ¶18, “mode controller and the compiler cooperate to receive a software input from the user on whether to operate the integrated circuit 100 in one of multiple operational modes in order to more efficiently perform calculations for different types of neural network”; the user is able to provide input and interact with the system) a size of data from an AI system using the AI processor that is compared to a size of weights from the AI system using the AI processor (Ven, Fig. 5, 520, ¶¶15-16, “Each layer can then be characterized according to whether they are more feature heavy or weight heavy”), where the [input data] is configured to change the memory manager from moving the data from the AI system a single time into the local memory in the cluster (Mital, Fig. 1, ¶27, “a reuse RAM cooperating with the scheduler to be loaded merely one time per calculation session with a larger amount of data between i) a set of weights and ii) input data from input channels, for the neural network in which the larger amount of data is to be reused multiple times during a given calculation session”, ¶¶29-31) and broadcasting weights over a broadcast bus to the two or more clusters of components (Mital, Fig. 1; Ven, ¶2, “data parallelism where workload is split by the features (or inputs) of the layers … the weights (or kernels) for that layer must be transmitted to each of the processors”) over to moving the weights from the AI system a single time into the local memory in the cluster (Mital, ¶¶19-20, “in mode1 the input data is loaded into the renew RAM and the weights (which are now anticipated as being the largest amount of static data being used in the calculations) are loaded into the reuse RAM”, ¶27, “Reuse RAM gets loaded a single time per calculation session”) and broadcasting the data from the AI system over the broadcast bus to the two or more clusters of components (Mital, Fig. 1, Broadcast Bus, Node Ring, High Speed Data and Cmd Rings).

Mital-Ven does not explicitly teach a user selectable threshold for a size of data, the user selectable threshold. Shiv teaches a user selectable threshold for a size of data, the user selectable threshold (Fig. 9, Abstract, ¶47, “system is highly configurable with additional user preferences and associated splits generators to accommodate various requirements of users or client applications”, ¶57, “if a query is received for data in the table based on a user-defined size enabling the table to be split into multiple ranges for optimal processing, the database table accessor 110 can select a size-based splits generator”, ¶¶112-115, ¶124, “determines one or more ranges of the table in accordance with the size data, and at step 1010 the method selects a size-based splits generator in accordance with one or more of the query data indicating a user preference for a size-based splits generator or the table data indicating a size of the table as having predetermined selected size”).
Mital-Ven and Shiv are analogous art to the claimed invention because they are from a similar field of endeavor: parallel data processing. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Mital-Ven with the resolutions disclosed by Shiv with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Mital-Ven as described above to increase flexibility for diverse AI workloads. This is simply combining prior art elements according to known methods to yield predictable results, and applying a known technique to a known device (method, or product) ready for improvement to yield predictable results (MPEP 2143).

Conclusion

The prior art made of record and not relied upon is considered pertinent to the applicant's disclosure: US Patent Application Publication No. 2018/0293493, filed by Kalamkar et al., which discloses that distributed learning can be performed with model parallelism, data parallelism, or a combination of model and data parallelism. See at least ¶169.

The Examiner has pointed out particular references contained in the prior art of record in the body of this action for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that the applicant, in preparing the response, consider fully the entire references as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the examiner. It is noted that any citation to specific pages, columns, figures, or lines in the prior art references, and any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).

Examiner Notes

Claims 13-19 have been fully rejected for compact prosecution. However, the claims contain contingent limitations; see MPEP 2111.04.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMED ABOU EL SEOUD, whose telephone number is (303) 297-4285. The examiner can normally be reached Monday-Thursday, 9:00am-6:00pm MT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MOHAMED ABOU EL SEOUD/Primary Examiner, Art Unit 2148
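
For context on the §103 combination, the core behavior the examiner maps between the claims and Ven is a size comparison between a layer's data (activations) and its weights that flips the distribution strategy. Below is a hedged Python sketch of that decision logic, based only on the claim language quoted in the action; the names, the default threshold, and the structure are illustrative, not taken from the application or the cited references:

```python
# Illustrative sketch of the claimed threshold behavior (claims 4-7
# as quoted above): keep the larger operand resident in each
# cluster's local memory once per calculation session, and broadcast
# the smaller operand over the broadcast bus.
from dataclasses import dataclass

@dataclass
class LayerWorkload:
    data_bytes: int    # size of data from the AI system (activations)
    weight_bytes: int  # size of the model layer's weights

def plan_distribution(layer: LayerWorkload, threshold: float = 1.0) -> str:
    """threshold stands in for claim 4's user-selectable threshold;
    1.0 reduces to the plain size comparison of claims 5-7."""
    if layer.data_bytes > threshold * layer.weight_bytes:
        # Data-heavy layer: slice the data set evenly across the
        # clusters and broadcast the (smaller) weights/instructions.
        return "slice data across clusters; broadcast weights"
    # Weight-heavy layer: slice weights into channel chunks pinned to
    # clusters and broadcast the (smaller) data set to every cluster.
    return "slice weights into channel chunks; broadcast data"

print(plan_distribution(LayerWorkload(data_bytes=64 << 20, weight_bytes=4 << 20)))
print(plan_distribution(LayerWorkload(data_bytes=2 << 20, weight_bytes=16 << 20)))
```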

Prosecution Timeline

Oct 18, 2022
Application Filed
Sep 06, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602602: SYSTEMS AND METHODS FOR VALIDATING FORECASTING MACHINE LEARNING MODELS
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12578719: PREDICTION OF REMAINING USEFUL LIFE OF AN ASSET USING CONFORMAL MATHEMATICAL FILTERING
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12561565: MODEL DEPLOYMENT AND OPTIMIZATION BASED ON MODEL SIMILARITY MEASUREMENTS
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12461702: METHODS AND SYSTEMS FOR PROPAGATING USER INPUTS TO DIFFERENT DISPLAYS
Granted Nov 04, 2025 (2y 5m to grant)
Patent 12405722: USER INTERFACE DEVICE FOR INDUSTRIAL VEHICLE
Granted Sep 02, 2025 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 38%
With Interview: 77% (+38.7%)
Median Time to Grant: 4y 2m
PTA Risk: Low
Based on 208 resolved cases by this examiner. Grant probability derived from career allow rate.
