Prosecution Insights
Last updated: April 19, 2026
Application No. 17/776,848

Distributed Deep Learning System and Distributed Deep Learning Method

Non-Final OA, §103
Filed: May 13, 2022
Examiner: HONORE, EVEL NMN
Art Unit: 2142
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Nippon Telegraph and Telephone Corporation
OA Round: 1 (Non-Final)
Grant Probability: 39% (At Risk)
OA Rounds: 1-2
To Grant: 4y 5m
With Interview: 85%

Examiner Intelligence

Career Allow Rate: 39% (7 granted / 18 resolved; -16.1% vs TC avg)
Interview Lift: +46.4% for resolved cases with an interview (a strong lift)
Typical Timeline: 4y 5m average prosecution, with 38 applications currently pending
Career History: 56 total applications across all art units
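
These headline figures appear to be simple arithmetic on the counts shown. A quick sanity check, assuming the allow rate is 7/18 and the with-interview figure is the base rate plus the lift read as a percentage-point difference:

```python
# Sanity check of the headline numbers above (assumptions noted in comments).
granted, resolved = 7, 18
career_allow_rate = 100 * granted / resolved
print(f"Career allow rate: {career_allow_rate:.1f}%")  # 38.9%, displayed as 39%

interview_lift_pts = 46.4                               # read as percentage points
with_interview = career_allow_rate + interview_lift_pts
print(f"With interview:    {with_interview:.1f}%")      # 85.3%, displayed as 85%
```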

Statute-Specific Performance

§101: 42.6% (+2.6% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 6.6% (-33.4% vs TC avg)
§112: 1.1% (-38.9% vs TC avg)
TC averages are estimates. Based on career data from 18 resolved cases.
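
Notably, every per-statute delta backs out to the same baseline, which suggests the Tech Center average here is a single ~40% estimate applied across statutes. A quick consistency check, assuming each delta is the examiner's rate minus that baseline in percentage points:

```python
# Back out the implied TC baseline from each (rate, delta) pair shown above.
rates = {"§101": (42.6, +2.6), "§103": (49.7, +9.7),
         "§102": (6.6, -33.4), "§112": (1.1, -38.9)}
baselines = {statute: rate - delta for statute, (rate, delta) in rates.items()}
print(baselines)  # every statute recovers a 40.0% TC baseline
assert all(abs(b - 40.0) < 1e-6 for b in baselines.values())
```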

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the filing on 05/13/2022. Claims 1-8 have been canceled. Claims 9-20 are pending in this case. Claims 9, 14, and 16 are independent claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 9-10, 14, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over AMBARDEKAR et al. (US Pub. No. 2018/0300615 A1), hereinafter AMBARDEKAR, in view of MENG et al. (US Pub. No. 2020/0242455 A1), hereinafter MENG.

With respect to claim 9, AMBARDEKAR discloses:

A distributed deep learning system comprising a plurality of computation nodes mutually connected through a communication network, wherein each of the plurality of computation nodes includes: (In FIG. 10 and paragraph [0103], AMBARDEKAR discloses a distributed network computing environment 1000 in which one or more server computers 1000A can be interconnected via a communications network.)

an arithmetic operation device configured to perform computation for matrix multiplication included in arithmetic operation processing in a neural network, and output a first arithmetic operation result (In paragraph [0057], AMBARDEKAR discloses performing arithmetic operations using one or more operands (i.e., input values) from the input buffer 202 and one or more operands from the weight buffer 204 (i.e., weights). The accumulators 206 accumulate (e.g., sum) previously stored values with incoming values (e.g., the output of arithmetic operations such as the multiplication of input and weight values). In paragraph [0059], AMBARDEKAR discloses that the neuron 105F then performs arithmetic operations (e.g., multiplication) using operands from the input buffer 202 and the weight buffer 204. Results of these operations, which might be referred to herein as partial sums, can be accumulated by a first accumulator 206A.)

a first storage device configured to store the first arithmetic operation result outputted from the arithmetic operation device (In paragraph [0110], AMBARDEKAR discloses a first buffer storing first data for processing by the at least one neuron in the neural network module, wherein the neural network module is configured to perform at least one first arithmetic operation.)

a first reception circuit configured to receive the first arithmetic operation result from the other computation node (In paragraph [0120], AMBARDEKAR discloses accumulating results of the first arithmetic operation with first previously stored results in a first accumulator of the neural network module.)

an addition circuit configured to calculate a second arithmetic operation result that is a sum of the first arithmetic operation result stored in the first storage device and the first arithmetic operation result from the other computation node received by the first reception circuit (In paragraph [0110], AMBARDEKAR discloses a first arithmetic operation by way of the at least one neuron using a first operand obtained from the first buffer and a second operand, and performing at least one second arithmetic operation by way of the at least one neuron using the first operand obtained from the first buffer.)

a third reception circuit configured to receive a notification packet indicating states of the plurality of computation nodes (In paragraph [0117], AMBARDEKAR discloses loading third data into the second buffer and performing a second arithmetic operation by way of the neuron in the neural network module using the first operand and a third operand from the third data stored in the second buffer.)

an operation administration maintenance (OAM) processing circuit configured to make a record, in the notification packet received by the third reception circuit, of whether or not the first arithmetic operation result is outputted from the arithmetic operation device of the own node (In paragraph [0068], AMBARDEKAR discloses that the DNN module checks if there are more kernels to process with the input data in the input buffer 202. If there are, the routine continues to step 310, where the weights for the next kernel are loaded into the weight buffer 204.)

a third transmission circuit configured to transmit the notification packet including the record made by the OAM processing circuit to the other computation node (In paragraph [0117], AMBARDEKAR discloses loading third data into the second buffer and performing a second arithmetic operation by way of the neuron in the neural network module using the first operand and a third operand from the third data stored in the second buffer.)

wherein the OAM processing circuit, depending on the state of the other computation node indicated by the notification packet, causes the first transmission circuit to transmit the first arithmetic operation result stored in the first storage device to the other computation node (In paragraph [0081], AMBARDEKAR discloses that the DNN module checks whether more input data needs to be processed using the weight data stored in the weight buffer. If it does, the process goes to step 510, where the next input data is added to the input buffer. As mentioned earlier, data from the line buffer can be copied to a shadow buffer. This allows new data to be loaded into the line buffer while the neurons are working with the data in the shadow buffer, so the neurons do not have to wait for the new data.)

With respect to claim 9, AMBARDEKAR does not explicitly disclose: a network processing device including: a first transmission circuit configured to transmit the first arithmetic operation result stored in the first storage device to another computation node; a second transmission circuit configured to transmit the second arithmetic operation result to the other computation node; a second reception circuit configured to receive the second arithmetic operation result from each of the other computation node.

However, MENG discloses:

a network processing device including: a first transmission circuit configured to transmit the first arithmetic operation result stored in the first storage device to another computation node (In paragraph [0120], MENG discloses that the primary processing circuit is configured to preprocess data and transfer data and operation instructions between the plurality of secondary processing circuits. In paragraph [0125], MENG discloses that the primary processing circuit includes a first storage unit, a first operation unit, and a first data dependency determination unit.)

a second transmission circuit configured to transmit the second arithmetic operation result to the other computation node (In paragraph [0162], MENG discloses that the plurality of secondary processing circuits are configured to perform operations on received data blocks according to the operation instruction to obtain intermediate results, and to transfer the intermediate results to the primary processing circuit.)

a second reception circuit configured to receive the second arithmetic operation result from each of the other computation node (In paragraph [0162], MENG discloses that the plurality of secondary processing circuits are configured to perform operations on received data blocks according to the operation instruction to obtain intermediate results, and to transfer the intermediate results to the primary processing circuit.)

AMBARDEKAR and MENG are analogous art because both references concern a distributed network computing environment in which one or more server computers can be interconnected via a communications network. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify AMBARDEKAR's performing of arithmetic operations using operands from an input buffer and a weight buffer with MENG's primary processing circuit including a first storage unit, a first operation unit, and a first data dependency determination unit. The motivation for doing so would have been to decrease bandwidth utilization, reduce power consumption, improve neuron multiplier stability, and provide other technical benefits (see [0008] of AMBARDEKAR).

Regarding claim 10, AMBARDEKAR in view of MENG discloses the elements of claim 9. In addition, AMBARDEKAR discloses: The distributed deep learning system according to claim 9, wherein each of the plurality of computation nodes further includes a second storage device configured to store the second arithmetic operation result (In paragraph [0117], AMBARDEKAR discloses a second buffer; loading third data into the second buffer; and performing a second arithmetic operation by way of the neuron in the neural network module using the first operand and a third operand from the third data stored in the second buffer.)
With respect to claim 14, AMBARDEKAR discloses:

A distributed deep learning system comprising a plurality of computation nodes and a collection node mutually connected through a communication network, wherein each of the plurality of computation nodes includes: (In FIG. 10 and paragraph [0103], AMBARDEKAR discloses a distributed network computing environment 1000 in which one or more server computers 1000A can be interconnected via a communications network.)

an arithmetic operation device configured to perform computation for matrix multiplication included in arithmetic operation processing in a neural network, and output a first arithmetic operation result (In paragraph [0057], AMBARDEKAR discloses performing arithmetic operations using one or more operands (i.e., input values) from the input buffer 202 and one or more operands from the weight buffer 204 (i.e., weights). The accumulators 206 accumulate (e.g., sum) previously stored values with incoming values (e.g., the output of arithmetic operations such as the multiplication of input and weight values). In paragraph [0059], AMBARDEKAR discloses that the neuron 105F then performs arithmetic operations (e.g., multiplication) using operands from the input buffer 202 and the weight buffer 204. Results of these operations, which might be referred to herein as partial sums, can be accumulated by a first accumulator 206A.)

a first storage device configured to store the first arithmetic operation result outputted from the arithmetic operation device; and a network processing device including (In paragraph [0110], AMBARDEKAR discloses a first buffer storing first data for processing by the at least one neuron in the neural network module, wherein the neural network module is configured to perform at least one first arithmetic operation.)

a first reception circuit configured to receive, from the collection node, a second arithmetic operation result that is a sum of the first arithmetic operation results computed at the plurality of computation nodes (In paragraph [0120], AMBARDEKAR discloses accumulating results of the first arithmetic operation with first previously stored results in the first accumulator of the neural network module.)

an addition circuit configured to calculate the second arithmetic operation result that is a sum of the first arithmetic operation results received by the fourth reception circuit (In paragraph [0110], AMBARDEKAR discloses a first arithmetic operation by way of the at least one neuron using a first operand obtained from the first buffer and a second operand, and performing at least one second arithmetic operation by way of the at least one neuron using the first operand obtained from the first buffer.)

a third reception circuit configured to receive, from each of the plurality of computation nodes, the notification packet including the record made by the first OAM processing circuit of each of the plurality of computation nodes (In paragraph [0117], AMBARDEKAR discloses loading third data into the second buffer and performing a second arithmetic operation by way of the neuron in the neural network module using the first operand and a third operand from the third data stored in the second buffer.)

a first operation administration maintenance (OAM) processing circuit configured to make a record, in the notification packet received by the second reception circuit, of whether or not the first arithmetic operation result is outputted from the arithmetic operation device of the own node (In paragraph [0081], AMBARDEKAR discloses that the DNN module checks whether more input data needs to be processed using the weight data stored in the weight buffer. If it does, the process goes to step 510, where the next input data is added to the input buffer. As mentioned earlier, data from the line buffer can be copied to a shadow buffer. This allows new data to be loaded into the line buffer while the neurons are working with the data in the shadow buffer, so the neurons do not have to wait for the new data.)

and a first storage device configured to store the second arithmetic operation result received by the first reception circuit (In paragraph [0110], AMBARDEKAR discloses a first buffer storing first data for processing by the at least one neuron in the neural network module, wherein the neural network module is configured to perform at least one first arithmetic operation.)

wherein the first OAM processing circuit, based on an instruction from the collection node, is configured to cause the first transmission circuit to transmit the first arithmetic operation result stored in the first storage device to the collection node (In paragraph [0081], AMBARDEKAR discloses that the DNN module checks whether more input data needs to be processed using the weight data stored in the weight buffer. If it does, the process goes to step 510, where the next input data is added to the input buffer. As mentioned earlier, data from the line buffer can be copied to a shadow buffer. This allows new data to be loaded into the line buffer while the neurons are working with the data in the shadow buffer, so the neurons do not have to wait for the new data.)

wherein the collection node includes a second network processing device including a second OAM processing circuit configured to generate the notification packet (In paragraph [0141], AMBARDEKAR discloses transferring the input neuron gradients of the i-th layer to each secondary processing circuit 102, and multiplying, by each secondary processing circuit 102, scalar data corresponding to the secondary processing circuit in the input neuron gradients.)

a third transmission circuit configured to transmit the generated notification packet to each of the plurality of computation nodes (In paragraph [0117], AMBARDEKAR discloses loading third data into the second buffer and performing a second arithmetic operation by way of the neuron in the neural network module using the first operand and a third operand from the third data stored in the second buffer.)

a fourth reception circuit configured to receive the first arithmetic operation results from the plurality of computation nodes (In FIG. 5 and Col. 14, lines 42-54, Thomas discloses that the first channel segment 530 and last channel segment 545 only connect to corresponding buses in one other channel segment, while the buses in the intermediate channel segments 535 and 540 connect to corresponding buses in two channel segments.)

a fourth transmission circuit configured to transmit the second arithmetic operation result obtained by the addition circuit to the plurality of computation nodes (In paragraph [0132], AMBARDEKAR discloses that each secondary processing circuit includes a second operation unit, a second data dependency determination unit, a second storage unit, and a third storage unit.)

wherein the second OAM processing circuit, depending on the states of the plurality of computation nodes indicated by the respective notification packets, is configured to instruct the plurality of computation nodes to transmit the first arithmetic operation result to the collection node, in order to collect the first arithmetic operation results obtained at the plurality of computation nodes (In paragraph [0081], AMBARDEKAR discloses that the DNN module checks whether more input data needs to be processed using the weight data stored in the weight buffer. If it does, the process goes to step 510, where the next input data is added to the input buffer. As mentioned earlier, data from the line buffer can be copied to a shadow buffer. This allows new data to be loaded into the line buffer while the neurons are working with the data in the shadow buffer, so the neurons do not have to wait for the new data.)

With respect to claim 14, AMBARDEKAR does not explicitly disclose: a first network processing device including: a first transmission circuit configured to transmit the first arithmetic operation result outputted from the arithmetic operation device to the collection node; a second reception circuit configured to receive a notification packet indicating states of the plurality of computation nodes; a second transmission circuit configured to transmit the notification packet including the record made by the first OAM processing circuit to the collection node.

However, MENG discloses:

a first network processing device including: a first transmission circuit configured to transmit the first arithmetic operation result outputted from the arithmetic operation device to the collection node (In paragraph [0120], MENG discloses that the primary processing circuit is configured to preprocess data and transfer data and operation instructions between the plurality of secondary processing circuits. In paragraph [0125], MENG discloses that the primary processing circuit includes a first storage unit, a first operation unit, and a first data dependency determination unit.)

a second reception circuit configured to receive a notification packet indicating states of the plurality of computation nodes (In paragraph [0162], MENG discloses that the plurality of secondary processing circuits are configured to perform operations on received data blocks according to the operation instruction to obtain intermediate results, and to transfer the intermediate results to the primary processing circuit.)

a second transmission circuit configured to transmit the notification packet including the record made by the first OAM processing circuit to the collection node (In paragraph [0162], MENG discloses that the plurality of secondary processing circuits are configured to perform operations on received data blocks according to the operation instruction to obtain intermediate results, and to transfer the intermediate results to the primary processing circuit.)

AMBARDEKAR and MENG are analogous art because both references concern a distributed network computing environment in which one or more server computers can be interconnected via a communications network. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify AMBARDEKAR's performing of arithmetic operations using operands from an input buffer and a weight buffer with MENG's primary processing circuit including a first storage unit, a first operation unit, and a first data dependency determination unit. The motivation for doing so would have been to decrease bandwidth utilization, reduce power consumption, improve neuron multiplier stability, and provide other technical benefits (see [0008] of AMBARDEKAR).

With respect to claim 16, AMBARDEKAR discloses:

A distributed deep learning method, the method performed by a plurality of computation nodes mutually connected through a communication network, the method comprising: (In FIG. 10 and paragraph [0103], AMBARDEKAR discloses a distributed network computing environment 1000 in which one or more server computers 1000A can be interconnected via a communications network.)
performing computation for matrix multiplication included in arithmetic operation processing in a neural network, and outputting a first arithmetic operation result (In paragraph [0057], AMBARDEKAR discloses performing arithmetic operations using one or more operands (i.e., input values) from the input buffer 202 and one or more operands from the weight buffer 204 (i.e., weights). The accumulators 206 accumulate (e.g., sum) previously stored values with incoming values (e.g., the output of arithmetic operations such as the multiplication of input and weight values). In paragraph [0059], AMBARDEKAR discloses that the neuron 105F then performs arithmetic operations (e.g., multiplication) using operands from the input buffer 202 and the weight buffer 204. Results of these operations, which might be referred to herein as partial sums, can be accumulated by a first accumulator 206A.)

storing, in a first storage device, the first arithmetic operation result (In paragraph [0110], AMBARDEKAR discloses a first buffer storing first data for processing by the at least one neuron in the neural network module, wherein the neural network module is configured to perform at least one first arithmetic operation.)

receiving the first arithmetic operation result from the other computation node (In paragraph [0120], AMBARDEKAR discloses accumulating a result of the first arithmetic operation with first previously stored results in a first accumulator of the neural network module.)

receiving a notification packet indicating states of the plurality of computation nodes, and making a record, in the notification packet received, of whether or not the first arithmetic operation result is outputted at the own node (In paragraph [0068], AMBARDEKAR discloses that the DNN module checks if there are more kernels to process with the input data in the input buffer 202. If there are, the routine continues to step 310, where the weights for the next kernel are loaded into the weight buffer 204.)

transmitting the notification packet including the record made to the other computation node (In paragraph [0117], AMBARDEKAR discloses loading third data into the second buffer and performing a second arithmetic operation by way of the neuron in the neural network module using the first operand and a third operand from the third data stored in the second buffer.)

wherein, depending on the state of the other computation node indicated by the notification packet, causing the first arithmetic operation result stored in the first storage device to be transmitted to the other computation node (In paragraph [0081], AMBARDEKAR discloses that the DNN module checks whether more input data needs to be processed using the weight data stored in the weight buffer. If it does, the process goes to step 510, where the next input data is added to the input buffer. As mentioned earlier, data from the line buffer can be copied to a shadow buffer. This allows new data to be loaded into the line buffer while the neurons are working with the data in the shadow buffer, so the neurons do not have to wait for the new data.)

calculating a second arithmetic operation result that is a sum of the first arithmetic operation result stored in the first storage device and the first arithmetic operation result received from the other computation node (In paragraph [0081], AMBARDEKAR discloses that the DNN module checks whether more input data needs to be processed using the weight data stored in the weight buffer. If it does, the process goes to step 510, where the next input data is added to the input buffer. As mentioned earlier, data from the line buffer can be copied to a shadow buffer. This allows new data to be loaded into the line buffer while the neurons are working with the data in the shadow buffer, so the neurons do not have to wait for the new data.)

With respect to claim 16, AMBARDEKAR does not explicitly disclose: transmitting the first arithmetic operation result stored in the first storage device to another computation node; transmitting the second arithmetic operation result to the other computation node; receiving the second arithmetic operation result from the other computation node; calculating a second arithmetic operation result that is a sum of the first arithmetic operation result stored in the first storage device and the first arithmetic operation result received from the other computation node.

However, MENG discloses:

transmitting the first arithmetic operation result stored in the first storage device to another computation node (In paragraph [0120], MENG discloses that the primary processing circuit is configured to preprocess data and transfer data and operation instructions between the plurality of secondary processing circuits. In paragraph [0125], MENG discloses that the primary processing circuit includes a first storage unit, a first operation unit, and a first data dependency determination unit.)

transmitting the second arithmetic operation result to the other computation node, and receiving the second arithmetic operation result from the other computation node (In paragraph [0162], MENG discloses that the plurality of secondary processing circuits are configured to perform operations on received data blocks according to the operation instruction to obtain intermediate results, and to transfer the intermediate results to the primary processing circuit.)

AMBARDEKAR and MENG are analogous art because both references concern a distributed network computing environment in which one or more server computers can be interconnected via a communications network. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify AMBARDEKAR's performing of arithmetic operations using operands from an input buffer and a weight buffer with MENG's primary processing circuit including a first storage unit, a first operation unit, and a first data dependency determination unit. The motivation for doing so would have been to decrease bandwidth utilization, reduce power consumption, improve neuron multiplier stability, and provide other technical benefits (see [0008] of AMBARDEKAR).

Regarding claim 17, AMBARDEKAR in view of MENG discloses the elements of claim 16. In addition, AMBARDEKAR discloses: The distributed deep learning method according to claim 16, further comprising storing, in a second storage device, the second arithmetic operation result. (In paragraph [0117], AMBARDEKAR discloses a second buffer; loading third data into the second buffer; and performing a second arithmetic operation by way of the neuron in the neural network module using the first operand and a third operand from the third data stored in the second buffer.)

Claims 11-13, 15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over AMBARDEKAR in view of MENG, and further in view of Liu et al. (US Patent No. 10,860,917 B2), hereinafter Liu.
Regarding claim 11, AMBARDEKAR in view of MENG discloses the elements of claim 9. AMBARDEKAR in view of MENG does not explicitly disclose: The distributed deep learning system according to claim 9, wherein any one computation node of the plurality of computation nodes is designated as a master node, and a plurality of the other computation nodes are designated as slave nodes that are controlled by the master node. However, Liu discloses the limitation (In Col. 4, lines 15-29, Liu discloses multiple computation modules that further include at least one master computation module and multiple slave computation modules. The controller unit may be configured to read instructions from the instruction caching unit and to decode the instructions into micro-instructions for controlling the operation of the interconnection unit, the master computation module, and the slave computation modules.)

Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of AMBARDEKAR in view of MENG with Liu's master computation circuit and one or more slave computation circuits performing one or more operations, as taught by Liu. The motivation for doing so would have been to effectively improve the support for forward computations of multi-layer artificial neural networks (see Col. 4, lines 15-29 of Liu).

Regarding claim 12, AMBARDEKAR in view of MENG and Liu discloses the elements of claim 11. In addition, AMBARDEKAR discloses: The distributed deep learning system according to claim 11, wherein in the network processing device included in the one computation node, the OAM processing circuit generates the notification packet, the third transmission circuit transmits the generated notification packet to the plurality of the other computation nodes, and when the notification packet including records made by the plurality of the other computation nodes indicates that the first arithmetic operation result is already outputted at each of the plurality of the other computation nodes, the OAM processing circuit causes the addition circuit to compute the second arithmetic operation result that is a sum of the first arithmetic operation results outputted from the respective arithmetic operation devices included in the plurality of the other computation nodes (In paragraph [0081], AMBARDEKAR discloses that the DNN module checks whether more input data needs to be processed using the weight data stored in the weight buffer. If it does, the process goes to step 510, where the next input data is added to the input buffer. As mentioned earlier, data from the line buffer can be copied to a shadow buffer. This allows new data to be loaded into the line buffer while the neurons are working with the data in the shadow buffer, so the neurons do not have to wait for the new data.)

Regarding claim 13, AMBARDEKAR in view of MENG and Liu discloses the elements of claim 11. In addition, AMBARDEKAR discloses: The distributed deep learning system according to claim 11, wherein in the network processing device included in the one computation node, the OAM processing circuit generates specification information that specifies, among the plurality of the other computation nodes, a plurality of specified computation nodes that output a plurality of the first arithmetic operation results required for the addition circuit to calculate the second arithmetic operation result; the network processing device included in the one computation node further includes a fourth transmission circuit configured to transmit the specification information to the plurality of the other computation nodes; the network processing device included in each of the plurality of the other computation nodes further includes a fifth reception circuit configured to receive the specification information; and in the network processing device included in each of the plurality of the other computation nodes, when the own node is included in the plurality of specified computation nodes specified by the specification information, the OAM processing circuit, depending on the states of the plurality of specified computation nodes indicated by the notification packet, causes the first transmission circuit to transmit the first arithmetic operation result stored in the first storage device to another computation node specified by the specification information (In paragraph [0062], AMBARDEKAR discloses that N partial outputs exist in N accumulators after the neurons have processed the contents of the input buffer. In order to process the remainder of the data, the input buffer is then loaded with the next portion of the input data and the weight buffer is loaded with a corresponding set of weight values. The process described above is then repeated and the output values are aggregated using the same N accumulators. This process can be repeated until the entire input data set has been processed.)

Regarding claim 15, AMBARDEKAR in view of MENG discloses the elements of claim 14. AMBARDEKAR in view of MENG does not explicitly disclose: The distributed deep learning system according to claim 14, wherein the plurality of computation nodes and the collection node are included in a star communication network in which each of the plurality of computation nodes and the collection node are mutually connected. However, Liu discloses the limitation (In Col. 4, lines 15-29, Liu discloses multiple computation modules that further include at least one master computation module and multiple slave computation modules. The controller unit may be configured to read instructions from the instruction caching unit and to decode the instructions into micro-instructions for controlling the operation of the interconnection unit, the master computation module, and the slave computation modules.)

Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of AMBARDEKAR in view of MENG with Liu's master computation circuit and one or more slave computation circuits performing one or more operations, as taught by Liu. The motivation for doing so would have been to effectively improve the support for forward computations of multi-layer artificial neural networks (see Col. 4, lines 15-29 of Liu).

Regarding claim 18, AMBARDEKAR in view of MENG discloses the elements of claim 16. AMBARDEKAR in view of MENG does not explicitly disclose: The distributed deep learning method according to claim 16, wherein any one computation node of the plurality of computation nodes is designated as a master node, and a plurality of the other computation nodes are designated as slave nodes that are controlled by the master node. However, Liu discloses the limitation (In Col. 4, lines 15-29, Liu discloses multiple computation modules that further include at least one master computation module and multiple slave computation modules. The controller unit may be configured to read instructions from the instruction caching unit and to decode the instructions into micro-instructions for controlling the operation of the interconnection unit, the master computation module, and the slave computation modules.)

Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of AMBARDEKAR in view of MENG with Liu's master computation circuit and one or more slave computation circuits performing one or more operations, as taught by Liu. The motivation for doing so would have been to effectively improve the support for forward computations of multi-layer artificial neural networks (see Col. 4, lines 15-29 of Liu).

Regarding claim 19, AMBARDEKAR in view of MENG and Liu discloses the elements of claim 18. In addition, AMBARDEKAR discloses: The distributed deep learning method according to claim 18, further comprising: generating the notification packet by the one computation node; transmitting the generated notification packet to the plurality of the other computation nodes; and when the notification packet including records made by the plurality of the other computation nodes indicates that the first arithmetic operation result is already outputted at each of the plurality of the other computation nodes, computing the second arithmetic operation result that is a sum of the first arithmetic operation results outputted from the respective arithmetic operation devices included in the plurality of the other computation nodes (In paragraph [0081], AMBARDEKAR discloses that the DNN module checks whether more input data needs to be processed using the weight data stored in the weight buffer. If it does, the process goes to step 510, where the next input data is added to the input buffer. As mentioned earlier, data from the line buffer can be copied to a shadow buffer. This allows new data to be loaded into the line buffer while the neurons are working with the data in the shadow buffer, so the neurons do not have to wait for the new data.)

Regarding claim 20, AMBARDEKAR in view of MENG and Liu discloses the elements of claim 18. In addition, AMBARDEKAR discloses: The distributed deep learning method according to claim 18, further comprising: generating, by the one computation node, specification information that specifies, among the plurality of the other computation nodes, a plurality of specified computation nodes that output a plurality of the first arithmetic operation results required for the second arithmetic operation result; transmitting, by the one computation node, the specification information to the plurality of the other computation nodes; receiving, by the other computation nodes, the specification information; and when the own node is included in the plurality of specified computation nodes specified by the specification information, and depending on the states of the plurality of specified computation nodes indicated by the notification packet, transmitting the first arithmetic operation result stored in the first storage device to another computation node specified by the specification information (In paragraph [0062], AMBARDEKAR discloses that N partial outputs exist in N accumulators after the neurons have processed the contents of the input buffer. In order to process the remainder of the data, the input buffer is then loaded with the next portion of the input data and the weight buffer is loaded with a corresponding set of weight values. The process described above is then repeated and the output values are aggregated using the same N accumulators. This process can be repeated until the entire input data set has been processed.)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EVEL HONORE, whose telephone number is (703) 756-1179. The examiner can normally be reached Monday-Friday, 8 a.m.-5:30 p.m. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mariela D Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

EVEL HONORE
Examiner, Art Unit 2142

/HAIMEI JIANG/
Primary Examiner, Art Unit 2142
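
The independent claims above describe what amounts to an allreduce-style aggregation protocol: each node computes a partial matrix product, a circulating notification packet records which nodes have produced output, and stored results are forwarded and summed only when the packet shows a peer is ready (claim 14 routes the same exchange through a central collection node instead of node to node). The Python below is a minimal, hypothetical sketch of the claim 9 flow, not the application's implementation; every class, method, and variable name is invented for illustration.

```python
import numpy as np

class ComputationNode:
    """Hypothetical model of one computation node as recited in claim 9."""

    def __init__(self, node_id, inputs, weights):
        self.node_id = node_id
        self.inputs, self.weights = inputs, weights
        self.first_result = None   # held in the "first storage device"
        self.second_result = None  # produced by the "addition circuit"

    def run_arithmetic_device(self):
        # Arithmetic operation device: matrix multiplication on this node's shard.
        self.first_result = self.inputs @ self.weights

    def update_oam_record(self, packet):
        # OAM processing circuit: record in the circulating notification packet
        # whether this node has output its first arithmetic operation result.
        packet[self.node_id] = self.first_result is not None

    def receive_and_add(self, received):
        # First reception circuit + addition circuit: the second result is the
        # sum of the stored first result and the result received from a peer.
        self.second_result = self.first_result + received

def one_aggregation_round(nodes):
    packet = {}                           # notification packet (node states)
    for node in nodes:                    # packet passes from node to node
        node.run_arithmetic_device()
        node.update_oam_record(packet)
    for i, node in enumerate(nodes):
        peer = nodes[(i + 1) % len(nodes)]
        if packet[peer.node_id]:          # transmit only when the peer is ready
            peer.receive_and_add(node.first_result)

rng = np.random.default_rng(0)
nodes = [ComputationNode(i, rng.standard_normal((2, 3)), rng.standard_normal((3, 2)))
         for i in range(4)]
one_aggregation_round(nodes)
print(nodes[1].second_result)  # node 1's partial product plus node 0's
```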

Prosecution Timeline

May 13, 2022
Application Filed
Nov 12, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566942
System and Method for Generating Parametric Activation Functions
2y 5m to grant • Granted Mar 03, 2026
Patent 12547946
Systems and Methods for Field Extraction from Unlabeled Data
2y 5m to grant • Granted Feb 10, 2026
Patent 12547906
Method, Device, and Program Product for Training Model
2y 5m to grant • Granted Feb 10, 2026
Patent 12536156
Updating Metadata Associated with Historic Data
2y 5m to grant • Granted Jan 27, 2026
Patent 12406483
Online Class-Incremental Continual Learning with Adversarial Shapley Value
2y 5m to grant • Granted Sep 02, 2025
Study what changed in these cases to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 39%
With Interview: 85% (+46.4%)
Median Time to Grant: 4y 5m
PTA Risk: Low
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month