Prosecution Insights
Last updated: April 19, 2026
Application No. 17/400,353

METHOD AND APPARATUS OF OPERATING A NEURAL NETWORK

Status: Non-Final OA (§103)
Filed: Aug 12, 2021
Examiner: HONORE, EVEL NMN
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 4 (Non-Final)

Grant Probability: 39% (At Risk)
Expected OA Rounds: 4-5
Time to Grant: 4y 5m
Grant Probability With Interview: 85%

Examiner Intelligence

Career Allow Rate: 39% (7 granted / 18 resolved; -16.1% vs TC avg)
Interview Lift: +46.4% (resolved cases with interview)
Avg Prosecution: 4y 5m (typical timeline)
Currently Pending: 38
Total Applications: 56 (career history, across all art units)

Statute-Specific Performance

§101: 42.6% (+2.6% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 6.6% (-33.4% vs TC avg)
§112: 1.1% (-38.9% vs TC avg)

Based on career data from 18 resolved cases; deltas are measured against an estimated Tech Center average.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is responsive to the Application filed on 12/30/2025. Claims 1-3, 5-8, 10-13, 15-18 and 20-25 are pending in the case. Claims 1, 11 and 24 are independent claims. Claims 4, 9, 14 and 19 have been canceled.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 5-8, 11, 15-18 and 23-25 are rejected under 35 U.S.C. 103 as being unpatentable over Kluchnikov et al. (US Pub No.: 20160092367 A1), hereinafter referred to as Kluchnikov, in view of YANG et al. (US Pub No.: 20200026997 A1), hereinafter referred to as YANG.
With respect to claim 1, Kluchnikov discloses: A processor-implemented method, the method comprising: implementing a neural network by executing parallel neural network operations by first and second processors respectively based on a plurality of first data and a plurality of second data, where the plurality of first data is assigned to be utilized by the first processor in order according to a <first instruction>, the plurality of second data is assigned to be utilized by the second processor in order according to a <second instruction>, and the <first instruction> and the <second instruction> overlap with respect to at least one same stored data (In paragraph [0020], Kluchnikov discloses instruction processing hardware (e.g., a processor having one or more cores to execute instructions) for which it is desirable to utilize the available instruction level parallelism (ILP) as well as execution resources to maximize performance. In paragraph [0035], Kluchnikov discloses that these instructions are executed in the core of a hardware processor connected (e.g., on-chip) with a multiple (e.g., interleaved) bank data cache having two cache access ports and only one physical port per bank. In Fig. 4 and paragraph [0047], Kluchnikov discloses the first instruction and the second instruction accessing the same bank of a multiple bank data cache in the same clock cycle.)

Verifying whether competition occurs between the first data traversal path and the second data traversal path, where a result of the verifying is that competition occurs based on a first operand data of the first data traversal path and a second operand data of the second data traversal path being the same stored data, and in response to the first processor and the second processor being determined to be approaching, according to the first and second data traversal paths, respective executions of the first and second operand data at a same point in time (In Fig.
4 and paragraph [0047], Kluchnikov discloses checking whether both instructions (the first instruction and the second instruction) are set to access the same number of banks. If they are, the instruction that was scheduled first gets priority. If they are not, the instruction that is set to access more banks gets priority.)

Executing a first neural network operation by the first processor using the first operand data in parallel with a second neural network operation by the second processor using the second operand data in response to the result of the verifying being that competition does not occur (In Fig. 4 and paragraph [0045], Kluchnikov discloses that if both instructions can use the same amount of data, access is given to the older instruction (the one that appears first in the program) and the other instruction (Load 2) is made to wait. The signals to redispatch Load 1 and Load 2 and to grant access to them can either trigger these actions or directly cause them.)

Respectively executing the first neural network operation by the first processor and the second neural network operation by the second processor in non-parallel order in response to the result of the verifying being that competition does occur, where the non-parallel order is dependent on a determined priority between the first data traversal path and the second data traversal path (In paragraph [0035], Kluchnikov discloses that each instruction is sent out in a different clock cycle, meaning one instruction per clock cycle. Here, there is no conflict, so both can access the data cache without any changes needed. In another situation, the first and third instructions are sent out at the same time, while the second load is sent out at a different time. Again, there is no conflict, and both can access the data cache without any changes needed.)
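The bank-conflict check that the rejection maps onto the claimed "competition" verification can be sketched as follows. This is a hypothetical model, not code from either reference: the cache geometry (`NUM_BANKS`, `BANK_SIZE`) and all names are illustrative, while the arbitration rule (equal bank counts favor the earlier-scheduled access; otherwise the access touching more banks wins) follows the paraphrase of Kluchnikov Fig. 4 and paragraph [0047] above.

```python
# Hypothetical model of the cited bank-conflict arbitration. Two accesses
# "compete" when they touch at least one common cache bank in the same cycle;
# otherwise they can proceed in parallel.

NUM_BANKS = 8    # assumed interleaved-cache geometry (illustrative)
BANK_SIZE = 64   # assumed bytes per bank line (illustrative)

def banks_accessed(access):
    """Set of banks an access touches, from its start address and width."""
    start = access["addr"] // BANK_SIZE
    return {(start + i) % NUM_BANKS for i in range(access["width"])}

def competition_occurs(a, b):
    """True when the two accesses hit at least one common bank."""
    return bool(banks_accessed(a) & banks_accessed(b))

def dispatch(a, b):
    """'parallel' when there is no conflict; otherwise name the winner.

    On conflict: equal bank counts -> the earlier-scheduled (older) access
    wins; otherwise the access set to touch more banks wins, and the loser
    would be redispatched in a later cycle.
    """
    if not competition_occurs(a, b):
        return "parallel"
    na, nb = len(banks_accessed(a)), len(banks_accessed(b))
    if na == nb:
        winner = a if a["age"] < b["age"] else b
    else:
        winner = a if na > nb else b
    return winner["name"] + " first"

load1 = {"name": "Load 1", "addr": 0,  "width": 1, "age": 0}
load2 = {"name": "Load 2", "addr": 64, "width": 1, "age": 1}
print(dispatch(load1, load2))  # different banks, no competition -> parallel
```

Under this model, two loads to different banks issue in the same cycle, matching the "no competition" branch of the claim mapping; a wider access that overlaps a narrower one is prioritized, matching the cited conflict-resolution logic.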
With respect to claim 1, Kluchnikov does not specifically disclose: a <first instruction> as being the claimed "first data traversal path" or a <second instruction> as being the claimed "second data traversal path". However, YANG discloses: a <first instruction> as being the claimed "first data traversal path" and a <second instruction> as being the claimed "second data traversal path" (In paragraph [0112], YANG discloses a first computing path and first path information for the first task, and a second computing path and second path information for the second task from the memory.) Kluchnikov and YANG are analogous art because both references concern collecting sensor data of an environment.

Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Kluchnikov, which teaches detecting multiple instructions scheduled to access a same bank of a multiple bank data cache in a same clock cycle, with receiving path information indicating a first computing path for the first task and a second computing path for the second task as taught by YANG. The motivation for doing so would have been to improve memory access throughput and/or give higher performance (see [0035] of Kluchnikov).

Regarding claim 5, Kluchnikov in view of YANG disclose the elements of claim 1. In addition, Kluchnikov discloses: The method of claim 1, wherein the implementing of the neural network further comprises setting respective priorities for the first data traversal path and the second data traversal path, and wherein the determined priority between the first data traversal path and the second data traversal path is determined dependent on the set respective priorities (In paragraph [0035], Kluchnikov discloses that instead of giving priority to the first instruction because it comes first in the program, this method gives priority to the second instruction because it can access more data than the first one.)
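The claimed priority scheme mapped above (set respective priorities per data traversal path; on competition, the non-parallel order follows the determined priority) can be sketched as a minimal model. All names and the tie-breaking choice are hypothetical assumptions, not language from the claims or references:

```python
# Hypothetical model of the claimed scheme: non-competing operations run in
# parallel; competing operations (same stored operand) are serialized, with
# the higher-priority data traversal path executing first. The >= tie-break
# is an illustrative assumption.

def execution_order(op_a, op_b, prio_a, prio_b):
    """Issue order for one pair of neural network operations.

    op_a/op_b: identifiers of the stored operand each operation uses.
    prio_a/prio_b: priorities set for the first/second data traversal paths.
    Returns a list of issue steps; operations in the same tuple run in parallel.
    """
    if op_a != op_b:
        # No competition: both operations execute in parallel.
        return [("first", "second")]
    # Competition: serialize, higher-priority traversal path first.
    if prio_a >= prio_b:
        return [("first",), ("second",)]
    return [("second",), ("first",)]
```

For example, two operations on distinct operands issue together in one step, while two operations on the same stored operand issue in two steps ordered by the set priorities.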
Regarding claim 6, Kluchnikov in view of YANG disclose the elements of claim 5. In addition, Kluchnikov discloses: The method of claim 5, wherein the setting of the respective priorities comprises: setting different first priorities for each of the plurality of first data on the first data traversal path (In paragraph [0044], Kluchnikov discloses an instruction (e.g., Load 1) to receive a priority (e.g., width) indication from a select priority request module.)

Setting different second priorities for each of the plurality of second data on the second data traversal path (In paragraph [0044], Kluchnikov discloses an instruction (e.g., Load 2) to receive a priority (e.g., width) indication from a select priority request module.)

Regarding claim 7, Kluchnikov in view of YANG disclose the elements of claim 5. In addition, Kluchnikov discloses: The method of claim 5, wherein the implementing of the neural network further comprises performing the determining of the priority by comparing a first priority, among the set respective priorities, set for the first data traversal path with a second priority, among the set respective priorities, set for the second data traversal path to determine a higher-priority traversal path among the first data traversal path and the second data traversal path (In paragraph [0035], Kluchnikov discloses that, rather than granting priority to the first load because of its earlier position in the program order, this embodiment of the conflict resolution logic would grant access priority to the second instruction and redispatch the first instruction, as the access width of the second instruction is greater than the access width of the first instruction.)
Wherein the respective executing of the first neural network operation by the first processor and the second neural network operation by the second processor in the non-parallel order comprises (In paragraph [0035], Kluchnikov discloses that each instruction can be sent out in a different clock cycle, meaning one instruction can go out per clock cycle. In this situation, there is no conflict, and both instructions can access the data cache without any delays. In another situation, the first and third instructions are sent out at the same time, while the second instruction is sent out at a different time.)

In response to the determined higher-priority traversal path being the first data traversal path, executing the first neural network operation by the first processor before executing the second neural network operation by the second processor (In paragraph [0035], Kluchnikov discloses that the first and third instructions are sent out at the same time, while the second load is sent out at a different time (either earlier or later).)

In response to the determined higher-priority traversal path being the second data traversal path, executing the second neural network operation by the second processor before executing the first neural network operation by the first processor (In paragraph [0035], Kluchnikov discloses that instead of letting the first load go first just because it comes first in the program, this method gives priority to the second instruction. This is because the second instruction can access more data than the first. This approach may help improve memory access speed and overall performance.)

Regarding claim 8, Kluchnikov in view of YANG disclose the elements of claim 6.
In addition, Kluchnikov discloses: The method of claim 6, wherein the implementing of the neural network further comprises performing the determining of the priority by comparing a corresponding first priority, among the set respective priorities, set for the first operand data with a second priority, among the set respective priorities, set for the second operand data (In paragraph [0035], Kluchnikov discloses that, rather than granting priority to the first load because of its earlier position in the program order, this embodiment of the conflict resolution logic would grant access priority to the second instruction and redispatch the first instruction, as the access width of the second instruction is greater than the access width of the first instruction.)

Wherein the respective executing of the first neural network operation by the first processor and the second neural network operation by the second processor in the non-parallel order comprises performing one, dependent on a result of the performing of the determining of the priority (In paragraph [0035], Kluchnikov discloses that each instruction can be sent out in a different clock cycle, meaning one instruction can go out per clock cycle. In this situation, there is no conflict, and both instructions can access the data cache without any delays. In another situation, the first and third instructions are sent out at the same time, while the second instruction is sent out at a different time.)

Executing the first neural network operation by the first processor before executing the second neural network operation by the second processor and executing, in parallel with the execution of the first neural network operation, another second operand data on the second data traversal path subsequent to the second operand data by the second processor (In paragraph [0035], Kluchnikov discloses that the first and third instructions are sent out at the same time, while the second load is sent out at a different time (either earlier or later).)
Executing the second neural network operation by the second processor before executing the first neural network operation by the first processor and executing, in parallel with the execution of the second neural network operation, another first operand data on the first data traversal path subsequent to the first operand data by the first processor (In paragraph [0035], Kluchnikov discloses that instead of letting the first load go first just because it comes first in the program, this method gives priority to the second instruction. This is because the second instruction can access more data than the first. This approach may help improve memory access speed and overall performance.)

With respect to claim 11, Kluchnikov discloses: A computing apparatus, the apparatus comprising: one or more processors configured to: implement a neural network through a control of an execution of parallel neural network operations by first and second processors respectively based on a plurality of first data and a plurality of second data, where the plurality of first data is assigned to be utilized by the first processor in order according to a <first instruction>, the plurality of second data is assigned to be utilized by the second processor in order according to a <second instruction>, and the <first instruction> and the <second instruction> overlap with respect to at least one same stored data (In paragraph [0020], Kluchnikov discloses instruction processing hardware (e.g., a processor having one or more cores to execute instructions) for which it is desirable to utilize the available instruction level parallelism (ILP) as well as execution resources to maximize performance. In paragraph [0035], Kluchnikov discloses that these instructions are executed in the core of a hardware processor connected (e.g., on-chip) with a multiple (e.g., interleaved) bank data cache having two cache access ports and only one physical port per bank. In Fig.
4 and paragraph [0047], Kluchnikov discloses the first instruction and the second instruction accessing the same bank of a multiple bank data cache in the same clock cycle.)

Verifying whether competition occurs between the first data traversal path and the second data traversal path, where a result of the verifying is that competition occurs based on a first operand data of the first data traversal path and a second operand data of the second data traversal path being the same stored data, and in response to the first processor and the second processor being determined to be approaching, according to the first and second data traversal paths, respective executions of the first and second operand data at a same point in time (In Fig. 4 and paragraph [0047], Kluchnikov discloses checking whether both instructions (the first instruction and the second instruction) are set to access the same number of banks. If they are, the instruction that was scheduled first gets priority. If they are not, the instruction that is set to access more banks gets priority.)

A control of an execution of a first neural network operation by the first processor using the first operand data in parallel with a second neural network operation by the second processor using the second operand data in response to the result of the verifying being that competition does not occur (In Fig. 4 and paragraph [0045], Kluchnikov discloses that if both instructions can use the same amount of data, access is given to the older instruction (the one that appears first in the program) and the other instruction (Load 2) is made to wait. The signals to redispatch Load 1 and Load 2 and to grant access to them can either trigger these actions or directly cause them.)
A control of respective execution of the first neural network operation by the first processor and the second neural network operation by the second processor in non-parallel order in response to the result of the verifying being that competition does occur, where the non-parallel order is dependent on a determined priority between the first data traversal path and the second data traversal path (In paragraph [0035], Kluchnikov discloses that each instruction is sent out in a different clock cycle, meaning one instruction per clock cycle. Here, there is no conflict, so both can access the data cache without any changes needed. In another situation, the first and third instructions are sent out at the same time, while the second load is sent out at a different time. Again, there is no conflict, and both can access the data cache without any changes needed.)

With respect to claim 11, Kluchnikov does not specifically disclose: a <first instruction> as being the claimed "first data traversal path" or a <second instruction> as being the claimed "second data traversal path". However, YANG discloses: a <first instruction> as being the claimed "first data traversal path" and a <second instruction> as being the claimed "second data traversal path" (In paragraph [0112], YANG discloses a first computing path and first path information for the first task, and a second computing path and second path information for the second task from the memory.) Kluchnikov and YANG are analogous art because both references concern collecting sensor data of an environment.
Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Kluchnikov, which teaches detecting multiple instructions scheduled to access a same bank of a multiple bank data cache in a same clock cycle, with receiving path information indicating a first computing path for the first task and a second computing path for the second task as taught by YANG. The motivation for doing so would have been to improve memory access throughput and/or give higher performance (see [0035] of Kluchnikov).

Regarding claim 15, Kluchnikov in view of YANG disclose the elements of claim 11. In addition, Kluchnikov discloses: The apparatus of claim 11, wherein the implementation of the neural network further comprises a setting of respective priorities for the first data traversal path and the second data traversal path, and wherein the determined priority between the first data traversal path and the second data traversal path is determined dependent on the set respective priorities (In paragraph [0035], Kluchnikov discloses that instead of giving priority to the first instruction because it comes first in the program, this method gives priority to the second instruction because it can access more data than the first one.)

Regarding claim 16, Kluchnikov in view of YANG disclose the elements of claim 15. In addition, Kluchnikov discloses: The apparatus of claim 15, wherein the setting of the respective priorities comprises: a setting of different first priorities for each of the plurality of first data on the first data traversal path (In paragraph [0044], Kluchnikov discloses an instruction (e.g., Load 1) to receive a priority (e.g., width) indication from a select priority request module.)
A setting of different second priorities for each of the plurality of second data on the second data traversal path (In paragraph [0044], Kluchnikov discloses an instruction (e.g., Load 2) to receive a priority (e.g., width) indication from a select priority request module.)

Regarding claim 17, Kluchnikov in view of YANG disclose the elements of claim 15. In addition, Kluchnikov discloses: The apparatus of claim 15, wherein the implementation of the neural network further comprises a performing of the determining of the priority by a comparing of a first priority, among the set respective priorities, set for the first data traversal path with a second priority, among the set respective priorities, set for the second data traversal path to determine a higher-priority traversal path among the first data traversal path and the second data traversal path (In paragraph [0035], Kluchnikov discloses that, rather than granting priority to the first load because of its earlier position in the program order, this embodiment of the conflict resolution logic would grant access priority to the second instruction and redispatch the first instruction, as the access width of the second instruction is greater than the access width of the first instruction.)

Wherein the control of the respective execution of the first neural network operation by the first processor and the second neural network operation by the second processor in the non-parallel order comprises (In paragraph [0035], Kluchnikov discloses that each instruction can be sent out in a different clock cycle, meaning one instruction can go out per clock cycle. In this situation, there is no conflict, and both instructions can access the data cache without any delays. In another situation, the first and third instructions are sent out at the same time, while the second instruction is sent out at a different time.)
In response to the determined higher-priority traversal path being the first data traversal path, a control of the execution of the first neural network operation by the first processor before the execution of the second neural network operation by the second processor (In paragraph [0035], Kluchnikov discloses that the first and third instructions are sent out at the same time, while the second load is sent out at a different time (either earlier or later).)

In response to the determined higher-priority traversal path being the second data traversal path, a control of the execution of the second neural network operation by the second processor before the execution of the first neural network operation by the first processor (In paragraph [0035], Kluchnikov discloses that instead of letting the first load go first just because it comes first in the program, this method gives priority to the second instruction. This is because the second instruction can access more data than the first. This approach may help improve memory access speed and overall performance.)

Regarding claim 18, Kluchnikov in view of YANG disclose the elements of claim 17. In addition, Kluchnikov discloses: The apparatus of claim 17, wherein the implementation of the neural network further comprises a performing of the determining of the priority by a comparing of a corresponding first priority, among the set respective priorities, set for the first operand data with a second priority, among the set respective priorities, set for the second operand data (In paragraph [0035], Kluchnikov discloses granting priority to the first load because of its earlier position in the program order.
This embodiment of the conflict resolution logic would grant access priority to the second instruction and redispatch the first instruction, as the access width of the second instruction is greater than the access width of the first instruction.)

Wherein the control of the respective execution of the first neural network operation by the first processor and the second neural network operation by the second processor in the non-parallel order comprises a control of a performing of one, dependent on a result of the performing of the determining of the priority (In paragraph [0035], Kluchnikov discloses that each instruction can be sent out in a different clock cycle, meaning one instruction can go out per clock cycle. In this situation, there is no conflict, and both instructions can access the data cache without any delays. In another situation, the first and third instructions are sent out at the same time, while the second instruction is sent out at a different time.)

Execution of the first neural network operation by the first processor before the execution of the second neural network operation by the second processor and an execution, in parallel with the execution of the first neural network operation, of another second operand data on the second data traversal path subsequent to the second operand data by the second processor (In paragraph [0035], Kluchnikov discloses that the first and third instructions are sent out at the same time, while the second load is sent out at a different time (either earlier or later).)
Execution of the second neural network operation by the second processor before the execution of the first neural network operation by the first processor and an execution, in parallel with the execution of the second neural network operation, of another first operand data on the first data traversal path subsequent to the first operand data by the first processor (In paragraph [0035], Kluchnikov discloses that instead of letting the first load go first just because it comes first in the program, this method gives priority to the second instruction. This is because the second instruction can access more data than the first. This approach may help improve memory access speed and overall performance.)

Regarding claim 23, Kluchnikov in view of YANG disclose the elements of claim 11. In addition, Kluchnikov discloses: The apparatus of claim 11, wherein the one or more processors comprise the first and second processors (In Fig. 9 and paragraph [0079], Kluchnikov discloses a first processor 970 and a second processor.)

With respect to claim 24, Kluchnikov discloses: A processor-implemented method, the method comprising: implementing a neural network by executing parallel neural network operations by first and second processors respectively based on a plurality of first data and a plurality of second data, where the plurality of first data is assigned to be utilized by the first processor in order according to a <first instruction>, the plurality of second data is assigned to be utilized by the second processor in order according to a <second instruction>, and the <first instruction> and the <second instruction> overlap with respect to at least one same stored data (In paragraph [0020], Kluchnikov discloses instruction processing hardware (e.g., a processor having one or more cores to execute instructions) for which it is desirable to utilize the available instruction level parallelism (ILP) as well as execution resources to maximize performance.
In paragraph [0035], Kluchnikov discloses that these instructions are executed in the core of a hardware processor connected (e.g., on-chip) with a multiple (e.g., interleaved) bank data cache having two cache access ports and only one physical port per bank. In Fig. 4 and paragraph [0047], Kluchnikov discloses the first instruction and the second instruction accessing the same bank of a multiple bank data cache in the same clock cycle.)

Verifying whether competition occurs between the first data traversal path and the second data traversal path, where a result of the verifying is that competition occurs based on a first operand data of the first data traversal path and a second operand data of the second data traversal path being the same stored data, and in response to the first processor and the second processor being determined to be approaching, according to the first and second data traversal paths, respective executions of the first and second operand data at a same point in time (In Fig. 4 and paragraph [0047], Kluchnikov discloses checking whether both instructions (the first instruction and the second instruction) are set to access the same number of banks. If they are, the instruction that was scheduled first gets priority. If they are not, the instruction that is set to access more banks gets priority.)

In response to the result of the verifying being that competition does not occur: selecting, dependent on a determined set priority between the first data traversal path and the second data traversal path, one between the first neural network operation by the first processor and the second neural network operation by the second processor (In paragraph [0035], Kluchnikov discloses that the first and third instructions are sent out at the same time, while the second load is sent out at a different time (either earlier or later).)
Executing only the selected one between the first neural network operation by the first processor and the second neural network operation by the second processor (In paragraph [0035], Kluchnikov discloses that instead of letting the first load go first just because it comes first in the program, this method gives priority to the second instruction. This is because the second instruction can access more data than the first. This approach may help improve memory access speed and overall performance.)

With respect to claim 24, Kluchnikov does not specifically disclose: a <first instruction> as being the claimed "first data traversal path" or a <second instruction> as being the claimed "second data traversal path". However, YANG discloses: a <first instruction> as being the claimed "first data traversal path" and a <second instruction> as being the claimed "second data traversal path" (In paragraph [0112], YANG discloses a first computing path and first path information for the first task, and a second computing path and second path information for the second task from the memory.) Kluchnikov and YANG are analogous art because both references concern collecting sensor data of an environment.

Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Kluchnikov, which teaches detecting multiple instructions scheduled to access a same bank of a multiple bank data cache in a same clock cycle, with receiving path information indicating a first computing path for the first task and a second computing path for the second task as taught by YANG. The motivation for doing so would have been to improve memory access throughput and/or give higher performance (see [0035] of Kluchnikov).

Regarding claim 25, Kluchnikov in view of YANG disclose the elements of claim 24.
In addition, Kluchnikov discloses: The method of claim 24, further comprising: in response to the result of the verifying being that competition does occur: in response to the selected one being the first neural network operation by the first processor, executing in parallel with the selected one another second neural network operation by the second processor using another second operand data on the second data traversal path subsequent to the second operand data (In paragraph [0035], Kluchnikov discloses that each instruction can be sent out in a different clock cycle, meaning one instruction can go out per clock cycle. In this situation, there is no conflict, and both instructions can access the data cache without any delays. In another situation, the first and third instructions are sent out at the same time, while the second instruction is sent out at a different time.)

In response to the selected one being the second neural network operation by the second processor, executing in parallel with the selected one another first neural network operation by the first processor using another first operand data on the first data traversal path subsequent to the first operand data (In paragraph [0035], Kluchnikov discloses that instead of letting the first load go first just because it comes first in the program, this method gives priority to the second instruction. This is because the second instruction can access more data than the first. This approach may help improve memory access speed and overall performance.)

Claim(s) 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Kluchnikov, in view of YANG, and further in view of Park et al. (US Patent No. 12,265,912 B1), hereinafter referred to as Park.

Regarding claim 2, Kluchnikov in view of YANG disclose the elements of claim 1.
Kluchnikov in view of YANG does not explicitly disclose: The method of claim 1, wherein the implementing of the neural network further comprises selectively skipping a neural network operation of the first processor for a data on the first data traversal path, and/or another neural network operation of the second processor for the data or another data on the second data traversal path.

However, Park discloses the limitation (In Col. 2, lines 45-58, Park discloses the deep neural network training accelerator performing a first operation using a method called mini-batch gradient descent. The input data is either skip data or training data, based on a confidence matrix from the first task. Finally, the first part skips the second task for the skip data and performs the second task for the training data, based on the control signal.)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kluchnikov in view of YANG before them, to include Park’s deep neural network training, to increase training speed and reduce training energy, as provided by the deep neural network training accelerator taught by Park (see Col. 1, lines 54-56).

Regarding claim 12, Kluchnikov in view of YANG discloses the elements of claim 11.

Kluchnikov in view of YANG does not explicitly disclose: The apparatus of claim 11, wherein the implementing of the neural network further comprises selectively skipping a neural network operation of the first processor for a data on the first data traversal path, and/or another neural network operation of the second processor for the data or another data on the second data traversal path.

However, Park discloses the limitation (In Col. 2, lines 45-58, Park discloses the deep neural network training accelerator performing a first operation using a method called mini-batch gradient descent.
The input data is either skip data or training data, based on a confidence matrix from the first task. Finally, the first part skips the second task for the skip data and performs the second task for the training data, based on the control signal.)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kluchnikov in view of YANG before them, to include Park’s deep neural network training, to increase training speed and reduce training energy, as provided by the deep neural network training accelerator taught by Park (see Col. 1, lines 54-56).

Claim(s) 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Kluchnikov, in view of YANG, and further in view of Thakker et al. (US Pub No.: 20210056422 A1), hereinafter referred to as Thakker.

Regarding claim 3, Kluchnikov in view of YANG discloses the elements of claim 1.

Kluchnikov in view of YANG does not explicitly disclose: The method of claim 1, wherein the selectively skipping further comprises: skipping the first neural network operation in response to the first operand data having a value of "0"; or skipping the first neural operation in response to the first operand data having a value within a predetermined range.

However, Thakker discloses the limitation (Examiner selects: “skipping the first neural operation in response to the first operand data having a value within a predetermined range.” In paragraph [0073], Thakker discloses that the skip predictor receives sequential input data, divides the sequential input data into a sequence of input data values (x.sub.1, . . . , x.sub.t, . . . , x.sub.N), each input data value being associated with a different time step for the RNN model, and determines skipping based on a hidden state vector.)
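For illustration only, the value-based skipping recited in claims 3 and 13 (skip the operation when the operand is zero, or when it falls within a predetermined range) can be sketched as below. The function names and the example skip range are hypothetical assumptions, not taken from Thakker or the claims.

```python
# Hypothetical sketch of zero-value and range-based operation skipping.

def should_skip(operand, skip_range=(-1e-3, 1e-3)):
    """True if the operation for this operand can be skipped.

    The skip_range bounds are an illustrative assumption, standing in
    for the claimed "predetermined range."
    """
    if operand == 0:
        return True  # a zero operand contributes nothing to the output
    low, high = skip_range
    return low <= operand <= high  # near-zero values treated as negligible

def dot_with_skipping(weights, activations):
    """Dot product that skips multiply-accumulates for skippable weights."""
    acc = 0.0
    for w, x in zip(weights, activations):
        if should_skip(w):
            continue  # skipped operation: no multiply-accumulate issued
        acc += w * x
    return acc
```

The effect is that zero or near-zero operands never reach the arithmetic unit, which is the performance rationale the rejection attributes to this kind of skipping.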
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kluchnikov in view of YANG before them, to include Thakker’s generating of output sequence data with a skip predictor for a recurrent neural network (RNN). The motivation for doing so would have been to improve the performance of the network (See [0038] of Thakker.)

Regarding claim 13, Kluchnikov in view of YANG discloses the elements of claim 11.

Kluchnikov in view of YANG does not explicitly disclose: The apparatus of claim 11, wherein the selective skipping further comprises: a skipping of the first neural network operation in response to the first operand data having a value of "0"; or the skipping of the first neural operation in response to the first operand data having a value within a predetermined range.

However, Thakker discloses the limitation (Examiner selects: “skipping the first neural operation in response to the first operand data having a value within a predetermined range.” In paragraph [0073], Thakker discloses that the skip predictor receives sequential input data, divides the sequential input data into a sequence of input data values (x.sub.1, . . . , x.sub.t, . . . , x.sub.N), each input data value being associated with a different time step for the RNN model, and determines skipping based on a hidden state vector.)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kluchnikov in view of YANG before them, to include Thakker’s generating of output sequence data with a skip predictor for a recurrent neural network (RNN). The motivation for doing so would have been to improve the performance of the network (See [0038] of Thakker.)

Claim(s) 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kluchnikov, in view of YANG, and further in view of Demaj et al.
(US Pub No.: 20180293441 A1), hereinafter referred to as Demaj.

Regarding claim 10, Kluchnikov in view of YANG discloses the elements of claim 1.

Kluchnikov in view of YANG does not explicitly disclose: The method of claim 1, wherein the first data traversal path and the second data traversal path each have a corresponding predetermined traversal range corresponding to a respective predetermined number of parameters of the neural network or input data of the neural network, and wherein the implementing of the neural network further comprises respectively updating, with another respective predetermined number of the parameters of the neural network or the input data of the neural network, each of the first data traversal path and the second data traversal path, in response to respective completions of traversals of the first data traversal path by the first processor and the second data traversal path by the second processor.

However, Demaj discloses the limitation (In paragraphs [0132]-[0134], Demaj discloses that the first and second processing modules' corresponding attribute has the current value. The first and second processing modules determine an initial confidence index from all the first and second probabilities taken into account along the traversed path. Each first distribution of probabilities and second distribution of probabilities results in a value.)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kluchnikov in view of YANG before them, to include Demaj’s acquiring of current values of the attributes so as to traverse a path within the decision tree and obtain the output of the path. The motivation for doing so would have been to improve the reliability of the classification obtained at the output of a decision tree (See [0013] of Demaj.)

Regarding claim 20, Kluchnikov in view of YANG discloses the elements of claim 11.
Kluchnikov in view of YANG does not explicitly disclose: The apparatus of claim 11, wherein the first data traversal path and the second data traversal path each have a corresponding predetermined traversal range corresponding to a respective predetermined number of parameters of the neural network or input data of the neural network, and wherein the implementation of the neural network further comprises a respective updating, with another respective predetermined number of the parameters of the neural network or the input data of the neural network, of each of the first data traversal path and the second data traversal path, in response to respective completions of traversals of the first data traversal path by the first processor and the second data traversal path by the second processor.

However, Demaj discloses the limitation (In paragraphs [0132]-[0134], Demaj discloses that the first and second processing modules' corresponding attribute has the current value. The first and second processing modules determine an initial confidence index from all the first and second probabilities taken into account along the traversed path. Each first distribution of probabilities and second distribution of probabilities results in a value.)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Kluchnikov in view of YANG before them, to include Demaj’s acquiring of current values of the attributes so as to traverse a path within the decision tree and obtain the output of the path. The motivation for doing so would have been to improve the reliability of the classification obtained at the output of a decision tree (See [0013] of Demaj.)
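For illustration only, the traversal-range updating recited in claims 10 and 20 (each path covers a predetermined number of parameters or input data, and is refilled with the next block once its traversal completes) can be sketched as below. All names are hypothetical assumptions, not taken from the claims or the cited references.

```python
# Hypothetical sketch: each data traversal path is assigned a block of
# parameters whose size is a predetermined traversal range; when a
# processor finishes traversing its block, the path is updated with the
# next block of parameters (or input data).

def next_traversal(params, cursor, traversal_range):
    """Return the next block assigned to a path and the advanced cursor.

    params: flat list of network parameters or input data;
    traversal_range: the predetermined block size.
    """
    block = params[cursor:cursor + traversal_range]
    return block, cursor + len(block)

params = list(range(10))
path1, cur = next_traversal(params, 0, 4)    # first path gets params 0-3
path2, cur = next_traversal(params, cur, 4)  # second path gets params 4-7
```

In this sketch, the cursor plays the role of tracking completed traversals: each completion triggers an update of that path with the next predetermined-size block.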
Response to Arguments

Applicant's arguments filed on 12/30/2025 have been fully considered and are persuasive in part.

Pertaining to the rejection under 101: Applicant’s argument in regard to 101 is persuasive, and the rejection is withdrawn.

Pertaining to the rejections under 103: Applicant’s arguments in regard to the examiner’s rejections under 35 USC 103 are moot in view of the new grounds of rejection.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EVEL HONORE, whose telephone number is (703) 756-1179. The examiner can normally be reached Monday-Friday, 8 a.m.-5:30 p.m.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela D Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

EVEL HONORE
Examiner
Art Unit 2142

/Mariela Reyes/
Supervisory Patent Examiner, Art Unit 2142

Prosecution Timeline

Aug 12, 2021
Application Filed
Aug 21, 2024
Non-Final Rejection — §103
Nov 25, 2024
Response Filed
Mar 13, 2025
Final Rejection — §103
May 27, 2025
Response after Non-Final Action
Aug 01, 2025
Final Rejection — §103
Nov 11, 2025
Response after Non-Final Action
Dec 04, 2025
Examiner Interview Summary
Dec 04, 2025
Applicant Interview (Telephonic)
Jan 22, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566942
System and Method For Generating Parametric Activation Functions
2y 5m to grant Granted Mar 03, 2026
Patent 12547946
SYSTEMS AND METHODS FOR FIELD EXTRACTION FROM UNLABELED DATA
2y 5m to grant Granted Feb 10, 2026
Patent 12547906
METHOD, DEVICE, AND PROGRAM PRODUCT FOR TRAINING MODEL
2y 5m to grant Granted Feb 10, 2026
Patent 12536156
UPDATING METADATA ASSOCIATED WITH HISTORIC DATA
2y 5m to grant Granted Jan 27, 2026
Patent 12406483
ONLINE CLASS-INCREMENTAL CONTINUAL LEARNING WITH ADVERSARIAL SHAPLEY VALUE
2y 5m to grant Granted Sep 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

4-5
Expected OA Rounds
39%
Grant Probability
85%
With Interview (+46.4%)
4y 5m
Median Time to Grant
High
PTA Risk
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
