Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 06/13/2025 has been entered.
DETAILED ACTION
Claims 1-31 are presented for examination.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-31 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1, 9, 17, and 24 are rejected under 35 U.S.C. 101 as directed to an abstract idea without significantly more. Under Step 2A, Prong 1, the limitation "defining a recurrent neural network" recites a mental process, since "defining" is a function that can reasonably be performed in the human mind, with the aid of pen and paper, through observation, evaluation, judgment, and opinion.
Under Prong 2, the additional elements "perform one or more APIs comprising a graph definition and a recurrence attribute" and "generate a first recurrent neural network comprising a number of iterations of the graph definition based, at least in part, on the recurrence attribute" are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a generic computer or generic computer components. Accordingly, the additional elements do not integrate the recited judicial exception into a practical application, and the claim is therefore directed to the judicial exception. See MPEP 2106.05(f).
Under Step 2B, the additional element "perform one or more APIs comprising a graph definition and a recurrence attribute" remains part of the mental process; although the API could be implemented with a generic computer component, no particular computer hardware is described. The additional element "generate a first recurrent neural network comprising a number of iterations of the graph definition based, at least in part, on the recurrence attribute" amounts to mere instructions to apply the mental process under MPEP 2106.05(f), merely generally links the use of the judicial exception to a particular technological environment or field of use, and merely applies the judicial exception. It therefore does not amount to significantly more and cannot provide an inventive concept.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, as discussed above with respect to integration of the abstract idea into a practical application. See MPEP 2106.05(d). Thus, the claim is not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) and further in view of El (US 20190286972 A1).
As to claim 1, Nicol teaches a non-transitory computer readable medium (non-transitory computer readable medium, para[0096], ln 2-3);
generate a first recurrent neural network comprising a number of iterations of the graph definition based, at least in part, on the recurrence attribute (the neural network can include a recurrent neural network (RNN), para[0084], ln 2-6 / The nodes of the data flow graph, where the nodes of the data flow graph can represent neurons, layers, and so on, of a neural network, can compute updates 330. Updates can be accumulated, captured, or otherwise obtained from the nodes of the data flow graph. Updating can include forward-propagation of values within the data flow graph, back-propagation of values within the data flow graph, and the like. The updates can be captured based on iterations such as N iterations 332, averaging, reducing, scaling, compressing, and so on, para[0036], ln 25-40 / The system 1300 can include an updating component 1360. The updating component can include functions and instructions for updating the neural network, based on the N copies of a variable. The updating can include various techniques, where the techniques can include averaging, averaging after a number of iterations within the data flow graph, and so on. The averaging can include averaging the updates resulting from the distributing the N copies of a variable. The updating can be based on a running average of copies of the variable within the data flow graph, para[0095], ln 15-30).
Chung teaches one or more APIs comprising a graph definition and a recurrence attribute (the neural network model may comprise of many layers and each layer may be encoded as matrices or vectors [graph definition] of weights expressed in the form of coefficients or constants that have been obtained via off-line training of a neural network. Programmable hardware logic blocks in the nodes may process the matrices or vectors to perform various operations, including multiply, add, para[0027], ln 1-10 / an LSTM network may comprise a sequence of repeating RNN layers or other types of layers, para[0028] / inside each LSTM layer, the inputs and hidden states may be processed using a combination of vector operations (e.g., dot-product, inner product, or vector addition) and non-linear functions, para[0029], ln 1-6 / The programming model for the nodes may allow for subroutines to take up to 30 runtime arguments. These arguments [recurrence attribute] may be passed into the subroutine through the node header as "auxiliary data." In one example, Aux[0] may be reserved for selecting a subroutine. In one example, one common use for a runtime argument [recurrence attribute] is to set the number of iterations for a given LSTM evaluation. Each subroutine [API] may be a series of API calls that perform matrix and vector operations. Each of these API calls may correspond to an NFU instruction, and when the CSP encounters one of these API calls it may send that instruction to the NFU. In one example, all instructions may act on vectors, para[0062], ln 1-15 / Assuming Subroutine 4 [API] of Table 9 has executed and has stored a vector [graph definition] in global address 0 and the sigmoid of that vector in MFU 0's local address 0, para[0068], ln 7-11).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol with Chung to incorporate the above because this allows the quick loading of the pre-trained neural network model.
El teaches: if performed at least in part by one or more processors, cause the one or more processors to at least: perform one or more APIs comprising a graph definition and a recurrence attribute defining a recurrent neural network; generate a first recurrent neural network (a neural network model includes a plurality of interconnected neural nodes, where each neural node has associated weights and/or bias(es). Each of the neural nodes provides an output as a function of the weights and biases. In some examples, the output is a function of the dot product with the node weights multiplied with its input values plus a bias value. A number of edges connect the NN nodes, in a variety of topologies. In some examples, some of the nodes are recurrent nodes that provide output as a function of input plus a previous output of the node (e.g., gated recurrent unit (GRU) nodes or long short-term memory (LSTM) nodes), para[0020], ln 6-20 / As shown in FIG. 3, a subgraph 320 of the neural network model 310 is identified by a dashed circle. As illustrated, the subgraph 320 includes neural nodes 321-323 and 330-331, para[0062], ln 1-5 / A data scientist can create different neural network models by using different APIs, different numbers of APIs, and interconnecting the APIs in different ways, para[0070], ln 16-20 / selecting a particular API, or otherwise identifying edges and/or nodes that will become part of the subgraph. The API can include marker nodes at the interface of the subgraph. As one example, the marker nodes can be used by a compiler to identify subgraphs for acceleration. As another example, the marker nodes can be predefined nodes of the native format that do not perform operations in the neural network model.
In other words, the marker nodes can be used as identifiers without affecting the execution of the neural network model on the machine learning execution engine, para[0111], ln 12-26/ The subgraph of the neural network model can be identified by determining that the subgraph was instantiated in the source code using an API that defines the subgraph as destined for the neural network accelerator. Additionally or alternatively, the subgraph of the neural network model can be identified based on various properties of the neural network model and/or the subgraph. The properties can include an amount of recurrence, connectivity, and/or parallelism within a given topological region, for example, para[0111], ln 5-16).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol and Chung with El to incorporate the above because this creates different neural network models by using different APIs, different numbers of APIs, and interconnecting the APIs in different ways, and executes a tool flow for compiling, training, installing, and executing a deep neural network graph.
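By way of illustration only (not part of the claim mapping or the record), the claimed concept of an API call that receives a graph definition and a recurrence attribute and generates a recurrent neural network as a number of iterations of that graph definition can be sketched as follows. All function and variable names here are hypothetical and do not appear in the claims or the cited references.

```python
# Hypothetical sketch: an API call takes a graph definition and a
# recurrence attribute, and generates a recurrent neural network as
# N iterations of the same graph definition (weights shared per step).

def define_graph(weight):
    """A one-node graph definition: next state = clamp(weight*h + x)."""
    def step(h, x):
        v = weight * h + x
        return max(-1.0, min(1.0, v))  # bounded activation stand-in
    return step

def build_rnn(graph_definition, recurrence_attribute):
    """Unroll the graph definition for `recurrence_attribute` iterations."""
    def rnn(inputs, h0=0.0):
        h, states = h0, []
        for t in range(recurrence_attribute):   # number of iterations
            h = graph_definition(h, inputs[t])  # same graph each step
            states.append(h)
        return states
    return rnn

step = define_graph(weight=0.5)
rnn = build_rnn(step, recurrence_attribute=3)
states = rnn([1.0, 0.0, 0.0])  # three iterations of the graph definition
```

The sketch only shows the structural idea of unrolling a graph definition under a recurrence attribute; a real implementation would operate on tensors rather than scalars.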
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) and further in view of El (US 20190286972 A1) and further in view of Dufort (US 20190392583 A1).
As to claim 2, Dufort teaches input to the recurrent neural network comprises a tensor, wherein access to the tensor is limited, during an iteration of execution of the recurrent neural network, to a slice of the tensor that corresponds to the iteration (In the first iteration, the neural network 208 merely initializes the internal state for each node included in a spatial lattice of a gated spatiotemporal unit 330 using the output from the third layer 325. Each node includes a vector of values that represent the internal state of the node, and values derived in the image pyramid from the brightness of a block of one or more pixels centered at that node. In each successive iteration, the internal state of each node from the previous iteration is input into the second layer 320 of the neural network 208, via the tensor 322 (H.sub.l.sup.t). The process described above is then repeated starting at the second layer 320, para[0050] / a tensor 322 that holds an internal state for each node in the spatial lattice at resolution l and time step t. As described above, the internal state of each node in the spatial lattice is updated on each time step. The tensor 322 has the dimensions N.sub.lxN.sub.lxC.sub.H. Therefore, there are C.sub.H variables describing each block of one or more pixels of the image at resolution l, para[0041]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, and El with Dufort to incorporate the feature of input to the recurrent neural network comprises a tensor, wherein access to the tensor is limited, during an iteration of execution of the recurrent neural network, to a slice of the tensor that corresponds to the iteration, because this allows information from the internal states of nearest-neighbor nodes to be considered when determining the current internal state of a node.
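As an editorial illustration of the claim 2 limitation (not part of the record), restricting each iteration's view of the input tensor to the slice corresponding to that iteration can be sketched as follows; the names are hypothetical.

```python
# Hypothetical sketch: during iteration t, the recurrent step is given
# only input_tensor[t] (the slice for that iteration), never the whole
# tensor at once.

def run_rnn_over_slices(input_tensor, step, h0=0.0):
    h = h0
    for t in range(len(input_tensor)):
        current_slice = input_tensor[t]  # access limited to this slice
        h = step(h, current_slice)       # step never sees other slices
    return h

# Toy step function: accumulate the sum of each slice into the state.
final = run_rnn_over_slices(
    [[1, 2], [3, 4], [5, 6]],
    step=lambda h, s: h + sum(s),
)
```

The point of the sketch is only that the step function's signature receives one slice per iteration, mirroring the claimed access limitation.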
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) and further in view of El (US 20190286972 A1) in view of Dufort (US 20190392583 A1) and further in view of Bronstein (US 20190318227 A1).
As to claim 3, Bronstein teaches the slice of the tensor is advanced after the iteration (The tensor features 705 are fed into an RNN 821 that produces an incremental update of the tensor 806. The incremental update 806 is added to the current tensor by means of an adder 850. The process is repeated several times, producing each time an improving estimate of the tensor 731, para[0115], ln 7-20).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, El, and Dufort with Bronstein to incorporate the feature of the slice of the tensor is advanced after the iteration because this produces an improving estimate of the tensor with each repetition.
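Purely for illustration (hypothetical names, not from the claims or references), the notion of the slice being advanced after each iteration completes can be sketched as a moving position that steps forward only once the iteration has consumed its slice:

```python
# Hypothetical sketch: the slice position advances only after each
# iteration finishes, so every iteration sees the next contiguous slice.

def sliced_iterations(tensor, slice_len):
    pos, slices = 0, []
    while pos + slice_len <= len(tensor):
        slices.append(tensor[pos:pos + slice_len])  # slice for this iteration
        pos += slice_len  # advance the slice after the iteration
    return slices

windows = sliced_iterations([0, 1, 2, 3, 4, 5], slice_len=2)
```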
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) and further in view of El (US 20190286972 A1) and further in view of DANESH (WO 2018231152 A1).
As to claim 4, Danesh teaches optimize performance of the first recurrent neural network across a plurality of iterations (A fast, recurrent adaptive neural network, such as a Regional Convolutional Neural Network (RCNN) or a Long Short Term Memory network (LSTM), may include an adaptive learning detection algorithm that has been trained to extract, identify and track a unique optical signature amongst the background image, which can bring in high levels of accuracy, reliability, repeatability and speed to execution, para[0078], ln 10-25).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, and El with Danesh to incorporate the feature of optimize performance of the first recurrent neural network across a plurality of iterations because this provides systems and methods for high-speed communication in current satellite technology that address one or more of the above problems.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) and further in view of El (US 20190286972 A1) in view of DANESH (WO 2018231152 A1) and further in view of Bronstein (US 20190318227 A1).
As to claim 5, Bronstein teaches at least one of input or output to the first recurrent neural network comprises a tensor, and wherein the optimization of execution is based at least in part on a compiler assumption that access to the tensor is limited, during an iteration of the execution, to a slice of the tensor (Initial d-dimensional tensor is given in the form of d factors 902, which, together with a set of d geometric domains 701, are provided as input. Each factor and the corresponding geometric domain is fed into a single-domain intrinsic CNN 911, producing the respective factor features 905. The factor features are fed into an RNN 921 that produces an incremental update of the factor 906. The incremental update 906 is added to the current factor by means of an adder 850. The process is repeated several times, producing each time an improving estimate of the factors. The product of the factors by means of a tensor multiplier 930 produces an improving estimate of the tensor 931, para[0116], ln 9-25).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, El, and Danesh with Bronstein to incorporate the feature of at least one of input or output to the recurrent neural network comprises a tensor, and wherein the optimization of execution is based at least in part on a compiler assumption that access to the tensor is limited, during an iteration of the execution, to a slice of the tensor, because this produces an improving estimate of the factors.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) and further in view of Garg (US 10885452).
As to claim 6, Garg teaches the one or more API calls comprise an API call to associate the graph with a recurrence attribute (A number of different types of programmatic interfaces 175 may be implemented by the relationship analyzer [API] in various embodiments, including for example a set of application programming interfaces (APIs), col 5, ln 51-56 / constructing relationship graphs with the entities as nodes and the relationships as edges, identifying inconsistent cycles in the graphs and, at least in some cases, pruning selected edges from the graphs based on a multi-factor optimization function which takes inconsistencies and relationship confidence levels into account. The algorithms may be implemented at one or more computing devices which may be collectively referred to herein as a text analyzer or a relationship analyzer [API], col 2, ln 53-53).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, and El with Garg to incorporate the feature of the one or more API calls comprise an API call to associate the graph with a recurrence attribute because this removes inconsistencies in the graph.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) and further in view of Ward (US 10210860 B1).
As to claim 7, Ward teaches wherein the API, if performed by one or more processors, cause the one or more processors to at least: determine, based at least in part on detection of a function reversing input to the graph, that the recurrent neural network is a bidirectional recurrent neural network; and optimize execution of the bidirectional recurrent network (Training processes 1646, 1648, 1654, 1656, 1658 perform training of a neural network, such as by accepting training data, performing forward propagation through a neural network, and performing backpropagation based on the results. The training processes may be training the same single neural network in parallel or may be training different neural networks. Training manager 1643 manages the training processes on server 1640, and training manager 1653 manages the training processes on server 1650. Training data augmentation system 1642 provides a training data augmentation service to the training processes 1646 and 1648. In an embodiment, the training processes 1646 and 1648 communicate with the training data augmentation system 1642 through an API, col 30, ln 30-50).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, and El with Ward to incorporate the above feature because it is desirable to provide a mechanism for customizing a neural network that has been trained on a general training set for a specific dataset.
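As an illustrative aside (hypothetical names, not drawn from the claims or references), claim 7's concept of detecting a function that reverses the graph's input and then treating the network as bidirectional can be sketched as:

```python
# Hypothetical sketch: scan a graph's operation list for a "reverse"
# op; if one is found, treat the network as bidirectional and process
# the sequence in both directions, combining the results.

def is_bidirectional(ops):
    """Detect a function reversing input to the graph."""
    return any(op == "reverse" for op in ops)

def run(ops, seq, step):
    h_fwd = 0.0
    for x in seq:                 # forward pass
        h_fwd = step(h_fwd, x)
    if not is_bidirectional(ops):
        return h_fwd
    h_bwd = 0.0
    for x in reversed(seq):       # backward pass, only if bidirectional
        h_bwd = step(h_bwd, x)
    return h_fwd + h_bwd          # combine both directions

out = run(["matmul", "reverse", "add"], [1, 2, 3], step=lambda h, x: h + x)
```

A real optimizer would use the detection result to fuse or schedule the two passes; the sketch only shows the detection-then-bidirectional-execution shape.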
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) and further in view of ROUSSEAU (US 20170160342 A1).
As to claim 8, Rousseau teaches the API, if performed by one or more processors, cause the one or more processors to eliminate a concatenation operation (instead of the Proxy 600, a known per se Application Programming Interface (API) may be utilized by computer system 450. Thus, for example, the test OS of test system 100 may instead query the Server for an indication if a candidate group needs to be removed from the execution sequence using an existing API function or any other suitable programmatic mechanism, para[0142]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, and El with Rousseau to incorporate the feature of the API, if performed by one or more processors, cause the one or more processors to eliminate a concatenation operation because this provides a new technique for dynamically modifying an execution sequence of tests in real time.
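For illustration only (hypothetical sketch, not from the record), one common way a concatenation operation can be eliminated is by preallocating the output buffer and writing each iteration's result in place instead of concatenating per iteration:

```python
# Hypothetical sketch: the first version concatenates on every
# iteration; the second preallocates the output once and writes each
# chunk in place, eliminating the concatenation operation.

def with_concat(chunks):
    out = []
    for c in chunks:
        out = out + c  # concatenation each iteration
    return out

def without_concat(chunks, chunk_len):
    out = [0] * (len(chunks) * chunk_len)  # preallocate once
    for i, c in enumerate(chunks):
        out[i * chunk_len:(i + 1) * chunk_len] = c  # in-place write
    return out

a = with_concat([[1, 2], [3, 4]])
b = without_concat([[1, 2], [3, 4]], chunk_len=2)
```

Both produce the same result; the second form is what a compiler-style rewrite that removes concatenations would emit.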
Claims 9 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) and further in view of Walters (US 10379995 B1).
As to claim 9, it is rejected for the same reason as claim 1 above. Nicol teaches one or more processors (one or more processors, para[0096], ln 3-5). In addition, Walters teaches a circuit to at least receive one or more API calls (The programs, modules, or code can also be implemented or replicated as firmware or circuit logic, col 34, ln 11-15 / the machine learning models may include an RNN, a long-short term memory (LSTM) model, or another neural network model. The machine-learning models may be trained using API call data (e.g., API calls and/or API outputs) to predict a routing pathway based on the API call data. In some embodiments, the machine-learning models may be configured to retrieve API call data from a storage (e.g., data 331, database 110, or other data storage) for model training, API testing, or call processing. In some embodiments, the machine-learning models may be configured to receive API call data in real-time as API calls are received and processed by, for example, one or more API systems (e.g., API systems 102a, 102b, 102c), col 13, ln 40-56).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, and El with Walters to incorporate the feature of receive one or more API calls comprising a graph definition and a recurrence attribute because this addresses a need for efficient, unconventional systems that identify problems with API systems.
As to claim 17, it is rejected for the same reason as claim 9 above.
Claims 10, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) in view of Walters (US 10379995 B1) and further in view of Dufort (US 20190392583 A1).
As to claim 10, Dufort teaches input to the recurrent neural network comprises a tensor, wherein access to the tensor is limited, during an iteration of execution of the recurrent neural network, to a slice of the tensor that corresponds to the iteration (In the first iteration, the neural network 208 merely initializes the internal state for each node included in a spatial lattice of a gated spatiotemporal unit 330 using the output from the third layer 325. Each node includes a vector of values that represent the internal state of the node, and values derived in the image pyramid from the brightness of a block of one or more pixels centered at that node. In each successive iteration, the internal state of each node from the previous iteration is input into the second layer 320 of the neural network 208, via the tensor 322 (H.sub.l.sup.t). The process described above is then repeated starting at the second layer 320, para[0050] / a tensor 322 that holds an internal state for each node in the spatial lattice at resolution l and time step t. As described above, the internal state of each node in the spatial lattice is updated on each time step. The tensor 322 has the dimensions N.sub.lxN.sub.lxC.sub.H. Therefore, there are C.sub.H variables describing each block of one or more pixels of the image at resolution l, para[0041]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, El, and Walters with Dufort to incorporate the feature of the recurrent neural network is executed based at least in part on a tensor, wherein access to the tensor is limited, during an iteration of execution of the recurrent neural network, to a slice of the tensor that corresponds to the iteration, because this allows information from the internal states of nearest-neighbor nodes to be considered when determining the current internal state of a node.
As to claim 18, Dufort teaches input to the recurrent neural network comprises a tensor (para[0050], para[0041]) for the same reason as claim 10 above.
As to claim 19, Dufort teaches access to a tensor comprising input to the recurrent neural network is limited during performance to a slice of the tensor that corresponds to a current iteration (para[0050], para[0041]) for the same reason as claim 10 above.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) in view of Dufort (US 20190392583 A1) and further in view of Bronstein (US 20190318227 A1).
As to claim 11, Bronstein teaches the slice of the tensor is advanced after the iteration (The tensor features 705 are fed into an RNN 821 that produces an incremental update of the tensor 806. The incremental update 806 is added to the current tensor by means of an adder 850. The process is repeated several times, producing each time an improving estimate of the tensor 731, para[0115], ln 7-20).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, El, Walters, and Dufort with Bronstein to incorporate the feature of the slice of the tensor is advanced after the iteration because this produces an improving estimate of the tensor with each repetition.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) in view of Walters (US 10379995 B1) and further in view of NAGARAJA (US 20180260498 A1).
As to claim 12, Nagaraja teaches arithmetic logic units (ALUs) to be configured to at least optimize execution of the recurrent neural network across a plurality of iterations (Further, the SIMA type instructions determine an optimal Q-value corresponding to a current state of the reinforcement learning agent, and trigger the reinforcement learning agent to perform generalized policy iteration, and on-policy and off-policy learning methods. Further, the SIMA type instructions, upon execution, approximate a state-value function and a reward function for the current state of the reinforcement learning agent. Further, the SIMA type instructions, when executed by the reinforcement learning processor, train at least one of a deep neural network (DNN) and a recurrent neural network (RNN) using a predetermined learning context, and further trigger the deep neural network or the recurrent neural network for approximating at least one of a reward function and state-value function corresponding to the current state of the reinforcement learning agent, para[0069]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, El, and Walters with Nagaraja to incorporate the feature of the one or more arithmetic logic units (ALUs) to be configured to at least optimize execution of the recurrent neural network across a plurality of iterations because this automates the process of SoC design using application-specific instructions, thereby improving efficiency given the complexities associated with the design and implementation of SoC circuits.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) in view of Walters (US 10379995 B1) in view of NAGARAJA (US 20180260498 A1) and further in view of Bronstein (US 20190318227 A1).
As to claim 13, Bronstein teaches at least one of input or output to the recurrent neural network comprises a tensor, and wherein the optimization of execution is based at least in part on a compiler assumption that access to the tensor is limited, during an iteration of the execution, to a slice of the tensor (Initial d-dimensional tensor is given in the form of d factors 902, which, together with a set of d geometric domains 701, are provided as input. Each factor and the corresponding geometric domain is fed into a single-domain intrinsic CNN 911, producing the respective factor features 905. The factor features are fed into an RNN 921 that produces an incremental update of the factor 906. The incremental update 906 is added to the current factor by means of an adder 850. The process is repeated several times, producing each time an improving estimate of the factors. The product of the factors by means of a tensor multiplier 930 produces an improving estimate of the tensor 931, para[0116], ln 9-25).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, El, Walters, and Nagaraja with Bronstein to incorporate the feature of at least one of input or output to the recurrent neural network comprises a tensor, and wherein the optimization of execution is based at least in part on a compiler assumption that access to the tensor is limited, during an iteration of the execution, to a slice of the tensor, because this produces an improving estimate of the factors.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) in view of Walters (US 10379995 B1) and further in view of Burger (US 20190340499 A1).
As to claim 14, Burger teaches effects of executing functions of the API for defining the recurrent neural network are localized to the functions' respective environments (The modelling framework 131 can be used to define and use a neural network model. As one example, the modelling framework 131 can include pre-defined APIs and/or programming primitives that can be used to specify one or more aspects of the neural network model. The predefined APIs can include both lower-level APIs (e.g., activation functions, cost or error functions, nodes, edges, and tensors) and higher-level APIs (e.g., layers, convolutional neural networks, recurrent neural networks, linear classifiers, and so forth). "Source code" can be used as an input to the modelling framework 131 to define a topology of the graph of a given neural network model. In particular, APIs of the modelling framework 131 can be instantiated and interconnected within the source code to specify a complex neural network model. A data scientist can create different neural network models by using different APIs, different numbers of APIs, and interconnecting the APIs in different ways, para[0045] / Accordingly, there is ample opportunity for improvements in computer hardware and software to implement neural networks, para[0001], ln 12-17).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, El, and Walters with Burger to incorporate the feature of effects of executing functions of the API for defining the recurrent neural network are localized to the functions' respective environments because this enables methods of training and evaluating neural networks.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) in view of Walters (US 10379995 B1) and further in view of Manion (US 20040148287 A1).
As to claim 15, Manion teaches the graph is defined by invocation of one or more functions of an application programming interface, the one or more functions comprising a function to associate the graph with a recurrence attribute (receiving from the application program a peer graph search call having a plurality of call parameters, claim 7, ln 3-9 / The parameters of the peer graph search API include a handle to the graph associated with the query, an XML string representing the search criteria, and a handle to a newly allocated record enumerator used to iterate over each of the records returned from the search, para[0052], ln 3-20).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, El and Walters with Manion to incorporate the feature that the graph is defined by invocation of one or more functions of an application programming interface, the one or more functions comprising a function to associate the graph with a recurrence attribute, because this enables communication and information to be passed to and between the nodes.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) in view of Walters (US 10379995 B1) in view of Tseng (US 20150212990 A1) and further in view of Ward (US 10210860 B1).
As to claim 16, Tseng teaches arithmetic logic units (ALUs) (processor 402 may include one or more arithmetic logic units (ALUs), para[0046], ln 24-30 / processors are further configured to execute the instructions to: receive from the client computing device an application programming interface call to the server computing device, claim 36).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, El and Walters with Tseng to incorporate the above feature because this enables communication and information to be passed to and between the nodes.
Ward teaches wherein the API, if performed by one or more processors, cause the one or more processors to at least: determine, based at least in part on detection of a function reversing input to the graph, that the recurrent neural network is a bidirectional recurrent neural network; and optimize execution of the bidirectional recurrent neural network (Training processes 1644, 1646, 1648, 1654, 1656, 1658 perform training of a neural network such as by accepting training data, performing forward propagation through a neural network, and performing backpropagation based on the results. The training processes may be training the same single neural network in parallel or may be training different neural networks. Training manager 1643 manages the training processes on server 1640, and training manager 1653 manages the training processes on server 1650. Training data augmentation system 1642 provides training data augmentation service to the training processes 1644, 1646, and 1648. In an embodiment, the training processes 1644, 1646, and 1648 communicate with the training data augmentation system 1642 through an API, col 30, ln 30-50).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, El, Walters and Tseng with Ward to incorporate the feature that the API, if performed by one or more processors, causes the one or more processors to at least: determine, based at least in part on detection of a function reversing input to the graph, that the recurrent neural network is a bidirectional recurrent neural network; and optimize execution of the bidirectional recurrent neural network, because it is desirable to provide a mechanism for customizing a neural network that has been trained on a general training set for a specific dataset.
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) in view of Walters (US 10379995 B1) and further in view of DANESH (WO 2018231152 A1).
As to claim 20, Danesh teaches optimize performance of the recurrent neural network across a plurality of iterations (A fast, recurrent adaptive neural network, such as a Regional Convolutional Neural Network (RCNN) or a Long Short Term Memory network (LSTM), may include an adaptive learning detection algorithm that has been trained to extract, identify and track a unique optical signature amongst the background image can bring in high levels of accuracy, reliability, repeatability and speed to execution, para[0078], ln 10-25).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, El and Walters with Danesh to incorporate the feature of optimize performance of the recurrent neural network across a plurality of iterations because this provides systems and methods for high speed communication in current satellite technology that address one or more of the above problems.
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) in view of Walters (US 10379995 B1) and further in view of Bronstein (US 20190318227 A1).
As to claim 21, Bronstein teaches at least one of input or output to the recurrent neural network comprises a tensor, and wherein the optimization of execution is based at least in part on a compiler assumption that access to the tensor is limited, during an iteration of the execution, to a slice of the tensor (Initial d-dimensional tensor is given in the form of d factors 902, which, together with a set of d geometric domains 701 are provided as input. Each factor and the corresponding geometric domain is fed into a single-domain intrinsic CNN 911, producing the respective factor features 905. The factor features are fed into an RNN 921 that produces an incremental update of the factor 906. The incremental update 906 is added to the current factor by means of an adder 850. The process is repeated several times, producing each time an improving estimate of the factors. The product of the factors by means of a tensor multiplier 930 produces an improving estimate of the tensor 931, para[0116], ln 9-25).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, El and Walters with Bronstein to incorporate the feature that at least one of input or output to the recurrent neural network comprises a tensor, and wherein the optimization of execution is based at least in part on a compiler assumption that access to the tensor is limited, during an iteration of the execution, to a slice of the tensor, because this improves the estimate of the factors.
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) in view of Walters (US 10379995 B1) and further in view of Gupta (US 20200160167 A1).
As to claim 22, Gupta teaches concatenation operations on output of iterations of the recurrent neural network are eliminated (During training, this process continues for N iterations, stopping when the recurrent neural network 208 outputs a predicted tag that is the last (Nth) known tag corresponding to the color theme embedding. In situations in which the recurrent neural network 208 has not output the last known tag corresponding to the color theme embedding but all known tags corresponding to the color theme embedding have been input to the recurrent neural network 208 during different iterations, para[0086], ln 1-10).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, El and Walters with Gupta to incorporate the feature that concatenation operations on output of iterations of the recurrent neural network are eliminated, because this provides a corresponding level of scale to encountered demand for the resources that are implemented via the platform.
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) and further in view of Ward (US 10210860 B1).
As to claim 23, Ward teaches wherein the API, if performed by one or more processors, cause the one or more processors to at least: determine, based at least in part on detection of a function reversing input to the graph, that the recurrent neural network is a bidirectional recurrent neural network; and optimize execution of the bidirectional recurrent neural network (Training processes 1644, 1646, 1648, 1654, 1656, 1658 perform training of a neural network such as by accepting training data, performing forward propagation through a neural network, and performing backpropagation based on the results. The training processes may be training the same single neural network in parallel or may be training different neural networks. Training manager 1643 manages the training processes on server 1640, and training manager 1653 manages the training processes on server 1650. Training data augmentation system 1642 provides training data augmentation service to the training processes 1644, 1646, and 1648. In an embodiment, the training processes 1644, 1646, and 1648 communicate with the training data augmentation system 1642 through an API, col 30, ln 30-50).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung and El with Ward to incorporate the feature that the API, if performed by one or more processors, causes the one or more processors to at least: determine, based at least in part on detection of a function reversing input to the graph, that the recurrent neural network is a bidirectional recurrent neural network; and optimize execution of the bidirectional recurrent neural network, because it is desirable to provide a mechanism for customizing a neural network that has been trained on a general training set for a specific dataset.
Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) and further in view of Takagi (US 20100070958 A1).
As to claim 24, it is rejected for the same reasons as claim 1 above. Nicol teaches one or more processors (one or more processors, para[0096], ln 3-5).
El teaches the pattern is detected by generation of a first recurrent neural network comprising a number of iterations of the graph definition based, at least in part, on the recurrence attribute (a neural network model includes a plurality of interconnected neural nodes, where each neural node has associated weights and/or bias(es). Each of the neural nodes provides an output as a function of the weights and biases. In some examples, the output is a function of the dot product with the node weights multiplied with its input values plus a bias value. A number of edges connect the NN nodes, in a variety of topologies. In some examples, some of the nodes are recurrent nodes that provide output as a function of input plus a previous output of the node (e.g., gated recurrent unit (GRUs) nodes or long short-term memory (LSTM) nodes). Generally, subgraphs containing recurrent nodes can be more computationally intensive than similar sized feed-forward subgraphs that have no feedback, para[0020], ln 6-20 / The neural network can be divided into different subgraphs, para[0050], ln 8-10 / The subgraph 320 can be identified in a number of different ways. For example, a compiler can identify the subgraph. As another example, a user can identify a subgraph using a graphical tool, by using one or more predefined application programming interfaces (APIs) to specify the neural network, or by providing markers in a coding language for the neural network to indicate boundaries of the subgraph, para[0064] / As shown in FIG. 3, a subgraph 320 of the neural network model 310 is identified by a dashed circle. As illustrated, the subgraph 320 includes neural nodes 321-323 and 330-331, para[0062], ln 1-5 / A data scientist can create different neural network models by using different APIs, different numbers of APIs, and interconnecting the APIs in different ways, para[0070], ln 16-20 / selecting a particular API, or otherwise identifying edges and/or nodes that will become part of the subgraph. The API can include marker nodes at the interface of the subgraph. As one example, the marker nodes can be used by a compiler to identify subgraphs for acceleration. As another example, the marker nodes can be predefined nodes of the native format that do not perform operations in the neural network model. In other words, the marker nodes can be used as identifiers without affecting the execution of the neural network model on the machine learning execution engine, para[0111], ln 12-26 / The subgraph of the neural network model can be identified by determining that the subgraph was instantiated in the source code using an API that defines the subgraph as destined for the neural network accelerator. Additionally or alternatively, the subgraph of the neural network model can be identified based on various properties of the neural network model and/or the subgraph. The properties can include an amount of recurrence, connectivity, and/or parallelism within a given topological region, for example, para[0111], ln 5-16).
In addition, Takagi teaches detect a pattern in time-series information, based at least in part on one or more API calls comprising a graph definition and a recurrence attribute (when the repeat count has not reached the specified value (No in step S107), due to interdependency by recursive call or mutual recursive call in the functions that form the strongly connected component, the results of the dependency analysis and the schedule in one function need to be employed in the dependency analysis and the schedule in other functions. The repeat count can be set to once or a plurality of times according to the form of the strongly connected component in the function calling graph. For example, when there is a directed side between the functions that form the strongly connected component in the function calling graph, the repeat count may be set to a plurality of times (four times, for example). Further, the repeat count may be set to a plurality of times (four times, for example) also when only one function forms the strongly connected component and this function performs the self recursive call, para[0163]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung and El with Takagi to incorporate the feature of detect a pattern in time-series information, based at least in part on one or more API calls comprising a graph definition and a recurrence attribute, because this enables efficient generation of a parallelized program with shorter parallel execution time.
Claims 25, 26 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) in view of Takagi (US 20100070958 A1) and further in view of Ward (US 10210860 B1).
As to claim 25, Ward teaches the first recurrent neural network is a bidirectional recurrent neural network (Training processes 1644, 1646, 1648, 1654, 1656, 1658 perform training of a neural network such as by accepting training data, performing forward propagation through a neural network, and performing backpropagation based on the results. The training processes may be training the same single neural network in parallel or may be training different neural networks. Training manager 1643 manages the training processes on server 1640, and training manager 1653 manages the training processes on server 1650. Training data augmentation system 1642 provides training data augmentation service to the training processes 1644, 1646, and 1648. In an embodiment, the training processes 1644, 1646, and 1648 communicate with the training data augmentation system 1642 through an API, col 30, ln 30-50).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, El and Takagi with Ward to incorporate the feature that the recurrent neural network is a bidirectional recurrent neural network, because it is desirable to provide a mechanism for customizing a neural network that has been trained on a general training set for a specific dataset.
As to claim 26, Ward teaches input to the bidirectional neural network is ragged (col 11, ln 25-30) for the same reasons as claim 25 above.
As to claim 27, Ward teaches input data to the first recurrent neural network comprises a tensor (col 10, ln 5-15) for the same reasons as claim 25 above.
Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) in view of Takagi (US 20100070958 A1) in view of Ward (US 10210860 B1) and further in view of Bronstein (US 20190318227 A1).
As to claim 28, Bronstein teaches a compiler optimizes pattern detection using the recurrent neural network, based at least in part on optimizing execution of the recurrent neural network based on an assumption, enforced by the compiler, that access to the tensor is limited to a fixed one or more slices of the tensor (Initial d-dimensional tensor is given in the form of d factors 902, which, together with a set of d geometric domains 701 are provided as input. Each factor and the corresponding geometric domain is fed into a single-domain intrinsic CNN 911, producing the respective factor features 905. The factor features are fed into an RNN 921 that produces an incremental update of the factor 906. The incremental update 906 is added to the current factor by means of an adder 850. The process is repeated several times, producing each time an improving estimate of the factors. The product of the factors by means of a tensor multiplier 930 produces an improving estimate of the tensor 931, para[0116], ln 9-25).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, El, Takagi and Ward with Bronstein to incorporate the feature that a compiler optimizes pattern detection using the recurrent neural network, based at least in part on optimizing execution of the recurrent neural network based on an assumption, enforced by the compiler, that access to the tensor is limited to a fixed one or more slices of the tensor, because this improves the estimate of the factors.
Claim 29 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) in view of Takagi (US 20100070958 A1) and further in view of Hashemi (US 20190370632).
As to claim 29, Hashemi teaches input to the first recurrent neural network comprises a trip count associated with the recurrent neural network by an API call (through training, the embedding neural network learns parameters that result in similar program counter address and delta value pairs having similar embeddings. In other words, two embeddings that are close to each other, in a geometric sense, in the high dimensional embedding space, should be programmatically similar. For example, two different program counter addresses might each correspond to a function that regularly calls another, para[0076], ln 1-12).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, El and Takagi with Hashemi to incorporate the feature that input to the recurrent neural network comprises a trip count associated with the recurrent neural network by an API call, because this predicts future memory addresses from which data will be fetched based on a past history of memory accesses.
Claim 30 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) in view of Takagi (US 20100070958 A1) and further in view of Kovvuri (US 20190286973 A1).
As to claim 30, Kovvuri teaches wherein input to the recurrent neural network comprises a control tensor associated with the first recurrent neural network by an API call (The native framework 440 can be used to define and use a neural network model. As one example, the native framework 440 can include pre-defined APIs and/or programming primitives that can be used to specify one or more aspects of the neural network model. The pre-defined APIs can include both lower-level APIs (e.g., activation functions, cost or error functions, nodes, edges, and tensors), para[0078], ln 1-6).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, El and Takagi with Kovvuri to incorporate the feature that input to the recurrent neural network comprises a control tensor associated with the recurrent neural network by an API call, because this evaluates the neural network model, where the hardware accelerator is configured using the configuration information.
Claim 31 is rejected under 35 U.S.C. 103 as being unpatentable over Nicol (US 20190279086 A1) in view of Chung (US 20180247190 A1) in view of El (US 20190286972 A1) in view of Takagi (US 20100070958 A1) and further in view of MENG (CN 107423398 A).
As to claim 31, Meng teaches the first recurrent neural network is multi-layered, and wherein execution of the recurrent neural network is optimized based at least in part on loop fusion (performing closed-loop optimization and closed-loop fusion, finally obtaining a global map of the characteristic points, nodes and paths, see fig. 6, ln 51-60 / S806, sequentially input the feature maps into the memory neural network model, wherein the memory neural network model can comprehensively process the sequence input. The memory neural network model is a recurrent neural network model, and may specifically be an LSTM (Long Short-Term Memory neural network). Specifically, the interactive robot can sequentially transmit each feature map to the memory neural network model for face and feature detection, step 806).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol, Chung, El and Takagi with Meng to incorporate the feature that the recurrent neural network is multi-layered, and wherein execution of the recurrent neural network is optimized based at least in part on loop fusion, because this improves the interaction efficiency with the visitor.
Response to Arguments:
Applicant's arguments filed on 06/13/2025 have been considered but are not persuasive:
A. Applicant argued in substance that:
(1) “ Applicant respectfully submits that the proposed combination of Nurvitadhi and Xue fails to teach each and every feature of claim 1”
B. Examiner respectfully disagrees with Applicant's remarks:
As to point (1), Nicol teaches (the neural network can include a recurrent neural network (RNN), para[0084], ln 2-6 / The nodes of the data flow graph, where the nodes of the data flow graph can represent neurons, layers, and so on, of a neural network, can compute updates 330. Updates can be accumulated, captured, or otherwise obtained from the nodes of the data flow graph. Updating can include forward-propagation of values within the data flow graph, back-propagation of values within the data flow graph, and the like. The updates can be captured based on iterations such as N iterations 332, averaging, reducing, scaling, compressing, and so on, para[0036], ln 25-40 / The system 1300 can include an updating component 1360. The updating component can include functions and instructions for updating the neural network, based on the N copies of a variable. The updating can include various techniques, where the techniques can include averaging, averaging after a number of iterations within the data flow graph, and so on. The averaging can include averaging the updates resulting from the distributing the N copies of a variable. The updating can be based on a running average of copies of the variable within the data flow graph, para[0095], ln 15-30).
Chung teaches (the neural network model may comprise of many layers and each layer may be encoded as matrices or vectors [graph definition] of weights expressed in the form of coefficients or constants that have been obtained via off-line training of a neural network. Programmable hardware logic blocks in the nodes may process the matrices or vectors to perform various operations, including multiply, add, para[0027], ln 1-10 / an LSTM network may comprise a sequence of repeating RNN layers or other types of layers, para[0028] / inside each LSTM layer, the inputs and hidden states may be processed using a combination of vector operations (e.g., dot-product, inner product, or vector addition) and non-linear functions, para[0029], ln 1-6 / The programming model for the nodes may allow for subroutines to take up to the 30 runtime arguments. These arguments [recurrent attribute] may be passed into the subroutine through the node header as "auxiliary data." In one example, Aux[0] may be reserved for selecting a subroutine. In one example, one common use for a runtime argument [recurrent attribute] is to set the number of iterations for a given LSTM evaluation. Each subroutine [API] may be a series of API calls that perform matrix and vector operations. Each of these API calls may correspond to an NFU instruction, and when the CSP encounters one of these API calls it may send that instruction to the NFU. In one example, all instructions may act on vector, para[0062], ln 1-15 / Assuming Subroutine 4 [API] of Table 9 has executed and has stored a vector [graph definition] in global address 0 and the sigmoid of that vector in MFU 0's local address is 0, para[0068], ln 7-11).
El teaches (a neural network model includes a plurality of interconnected neural nodes, where each neural node has associated weights and/or bias(es). Each of the neural nodes provides an output as a function of the weights and biases. In some examples, the output is a function of the dot product with the node weights multiplied with its input values plus a bias value. A number of edges connect the NN nodes, in a variety of topologies. In some examples, some of the nodes are recurrent nodes that provide output as a function of input plus a previous output of the node (e.g., gated recurrent unit (GRUs) nodes or long short-term memory (LSTM) nodes), para[0020], ln 6-20 / As shown in FIG. 3, a subgraph 320 of the neural network model 310 is identified by a dashed circle. As illustrated, the subgraph 320 includes neural nodes 321-323 and 330-331, para[0062], ln 1-5 / A data scientist can create different neural network models by using different APIs, different numbers of APIs, and interconnecting the APIs in different ways, para[0070], ln 16-20 / selecting a particular API, or otherwise identifying edges and/or nodes that will become part of the subgraph. The API can include marker nodes at the interface of the subgraph. As one example, the marker nodes can be used by a compiler to identify subgraphs for acceleration. As another example, the marker nodes can be predefined nodes of the native format that do not perform operations in the neural network model. In other words, the marker nodes can be used as identifiers without affecting the execution of the neural network model on the machine learning execution engine, para[0111], ln 12-26 / The subgraph of the neural network model can be identified by determining that the subgraph was instantiated in the source code using an API that defines the subgraph as destined for the neural network accelerator.
Additionally or alternatively, the subgraph of the neural network model can be identified based on various properties of the neural network model and/or the subgraph. The properties can include an amount of recurrence, connectivity, and/or parallelism within a given topological region, for example, para[0111], ln 5-16).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Nicol and El with Chung to incorporate the above features because this creates different neural network models by using different APIs, different numbers of APIs, and interconnecting the APIs in different ways, and executes a tool flow for compiling, training, installing, and executing a deep neural network graph.
Conclusion
US 20200268252 A1 teaches adjusting parameters during training or with mini-batch gradient descent. Some embodiments may iteratively adjust model parameters by a designated amount, set as a hyper parameter (or model parameter that is not varied during an instance of training as a result of the training set, but which may be changed).
US 20230306739 A1 teaches In at least one embodiment, Open VINO supports neural networks such as convolutional neural networks (CNNs), recurrent and/or attention-based neural networks, and/or various other neural network models. In at least one embodiment, Open VINO supports various software libraries such as OpenCV, OpenCL, and/or variations thereof.
US 20190279086 A1 teaches updating component can include functions and instructions for updating the neural network, based on the N copies of a variable. The updating can include various techniques, where the techniques can include averaging, averaging after a number of iterations within the data flow graph, and so on. The averaging can include averaging the updates resulting from the distributing the N copies of a variable. The updating can be based on a running average of copies of the variable with the data flow graph. Other updating techniques can include averaging two or more sets of updates resulting from the distributing the two or more sets of N copies. The averaging two or more sets of updates can include parallel training of different data.
US 20190286972 A1 teaches that the subgraph of the neural network model can be identified by determining that the subgraph was instantiated in the source code using an API that defines the subgraph as destined for the neural network accelerator. Additionally or alternatively, the subgraph of the neural network model can be identified based on various properties of the neural network model and/or the subgraph. The properties can include an amount of recurrence, connectivity, and/or parallelism within a given topological region, for example.
US 20180247190 A1 teaches that the programming model for the nodes may allow subroutines to take up to 30 runtime arguments. These arguments may be passed into the subroutine through the node header as "auxiliary data." In one example, Aux[0] may be reserved for selecting a subroutine. In one example, a common use for a runtime argument is to set the number of iterations for a given LSTM evaluation. Each subroutine may be a series of API calls that perform matrix and vector operations. Each of these API calls may correspond to an NFU instruction, and when the CSP encounters one of these API calls, it may send that instruction to the NFU.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LECHI TRUONG, whose telephone number is (571) 272-3767. The examiner can normally be reached from 10 AM to 8 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Young Kevin, can be reached at (571) 270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LECHI TRUONG/ Primary Examiner, Art Unit 2194