DETAILED ACTION
Response to Amendment
The amendment filed on 5 December 2025 has been entered.
Claims 1, 3-4, 7-8, 10-11, 14-15, and 17-18 are pending.
Claims 1, 8, and 15 have been amended.
Response to Arguments
Applicant's arguments filed on 5 December 2025 have been fully considered, but they are not persuasive.
Applicant's remarks regarding the rejections of the claims under 35 U.S.C. 103 have been fully considered.
Applicant has amended Claim 1 to further recite processing to assign neural nodes of the neural network across the grouping of servers with each neural node and respective KVS key co-located on a same server and with each neural node having a unique identifier to identify node location within the neural network and location within the grouping of servers. Applicant submits that the cited paragraphs of Meng1 fail to teach this feature, and that the combination of Meng1 and Istvan therefore fails to render Claim 1 obvious.
Applicant’s arguments have been considered, but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 3, 8, 10, 15, 17 are rejected under 35 U.S.C. 103 as being unpatentable over Meng (U.S. Pre-Grant Publication No. 2018/0131516, hereinafter 'Meng1'), in view of Istvan et al. (NPL: "Providing Multi-tenant Services with FPGAs: Case Study on a Key-Value Store", hereinafter 'Istvan'), and further in view of Cui et al. (NPL: "GeePS: scalable deep learning on distributed GPUs with a GPU specialized parameter server", hereinafter 'Cui').
Regarding claim 1 and analogous claims 8 and 15, Meng1 teaches A method for implementing a neural node in a neural network comprising a plurality of neural nodes in a key value store (KVS) cluster, the KVS cluster comprising a grouping of servers that implement a distributed KVS system configured to provide a serverless cloud native compute with a microfunction as a service framework, wherein a respective KVS key is associated with each neural node, the method comprising:
monitoring, by a microfunction runtime environment which checks for updates of KVS keys, a first KVS key for the neural node for an update of an input value to the neural node ([0074] The network devices may detect and record data related to the environment that it monitors, and transmit that data to computing environment 214.; [0104] The grid may add new machines at any time (e.g., initiated from any control node). Upon adding a new node to the grid, the control node may first add the new node to its table of grid nodes. The control node may also then notify every other control node about the new node. The nodes receiving the notification may acknowledge that they have updated their configuration information.; [0183] In block 1510, the processing device determines a number of nodes in a distributed computing environment to be used to process the hashed key-value pairs. In some examples, the processing device can determine the number of nodes by communicating with nodes in the distributed computing environment to [monitoring, by a microfunction runtime environment which checks for updates of KVS keys] determine which nodes are available to process the hashed key-value pairs.; [0192] In block 1802, a processing device receives key-value pairs. In some examples, the processing device can retrieve the key-value pairs from a remote database or a local database. In additional or alternative examples, the processing device can [a first KVS key for the neural node for an update of an input value to the neural node] receive communications, from remote computing devices, that include the key-value pairs.);
executing the microfunction for the neural node by the same server associated with the neural node, the microfunction of the neural node, and the first KVS key on the input value to generate an output value from the neural node, when detecting the update of the input value to the neural node ([0003] A distributed computing environment can include multiple nodes in communication with each other over a network for processing data. Examples of a node can include a computing device, a server, a virtual machine, or any combination of these.; [0005] The organizational process can include [the first KVS key on the input value to generate an output value from the neural node to the neural node] distributing each respective key-value pair in the plurality of key-value pairs to a respective node corresponding to a respective bin into which the respective key-value pair is categorized.; [0047] A [executing the microfunction for the neural node] process can correspond to a method, a [the microfunction of the neural node] function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.; [0165] The machine-learning model(s) can be implemented using a single computing device or multiple computing devices, such as the communications grid computing system 400 discussed above.; [0067] While each device, [by the same server associated with the neural node] server, and system in FIG. 1 is shown as a single device, multiple devices may instead be used.; [0051] In some examples, the node processes the indexed key-value pairs to determine a result associated with the job. Some or all of the nodes in the distributed computing environment can follow this indexing scheme, process their respective indexed key-value pairs, and produce results.); and
writing the output value from the neural node to at least a second neural node located at the same server to indicate an update from the neural node to update a second KVS key associated with the second neural node ([0003] A distributed computing environment can include multiple nodes in communication with each other over a network for processing data. Examples of a node can include a computing device, a server, a virtual machine, or any combination of these.; [0005] The organizational process can include distributing each respective key-value pair in the plurality of key-value pairs to a respective node corresponding to a respective bin into which the respective key-value pair is categorized.; [0165] The machine-learning model(s) can be implemented using a single computing device or multiple computing devices, such as the communications grid computing system 400 discussed above.; [0190] The node can then determine a destination node that corresponds to the particular bin and to which the data (or the hashed version of the data) is to be distributed. The [writing the output value from the neural node to at least a second neural node located at the same server] node can transmit the data (or the hashed version of the data) to the destination node, which can [to indicate an update from the neural node to update a second KVS key associated with the second neural node] receive and store the data (or the hashed version of the data).; [0198] In block 1810, the processing device distributes each hashed key-value pair to the node corresponding to the bin into which the hashed key-value pair was categorized. Distributing the hashed key-value pairs can include maintaining a key-value pair in memory, transmitting a key-value pair to another node, deleting the key-value pair that was transmitted to the other node from memory, or any combination of these.).
Meng1 fails to explicitly teach processing to assign neural nodes of the neural network across the grouping of servers with each neural node and respective KVS key co-located on a same server and with each neural node having a unique identifier to identify node location within the neural network and location within the grouping of servers; and executing the microfunction for the neural node by the same server associated with the neural node, the microfunction of the neural node, and the first KVS key on the input value to generate an output value from the neural node, when detecting the update of the input value to the neural node.
Istvan teaches executing the microfunction for the neural node by the same server associated with the neural node, the microfunction of the neural node, and the first KVS key on the input value to generate an output value from the neural node, when detecting the update of the input value to the neural node ([B. Key-value Stores and Caribou, pg. 120] Almost all distributed data processing applications require either a storage or a caching layer. As a result, key-value stores (KVSs) such as memcached and Redis and object stores such as Amazon S3, are widely used in the cloud. Most KVSs are built around a random access data structure that holds keys and pointers to values. These values can reside on disk or in main memory, and can be of various sizes. While different KVSs may offer different features, they all need to support read and write (get and set) operations to manipulate key-value pairs. Although there is already much work on using FPGAs to accelerate these applications, ranging from partial offloading [10], to standalone solutions [5], [9], these works do not address multi-tenancy as part of the circuit design.; The interface to Caribou consists of operations to read and write (get and set) a [when detecting the update of the input value to the neural node] value corresponding to a key, to [on the input value to generate an output value from the neural node] read the value and apply a filtering operation, or to retrieve all data in the storage (scan) and apply a filtering operation on it. Its implementation is optimized for smaller value accesses (32-512 B), thus the multi-tenant variant has to be able to switch between tenants with high frequency to achieve line-rate performance.); and
Meng1 and Istvan are considered to be analogous to the claimed invention because they are in the same field of machine learning. In view of the teachings of Meng1, it would have been obvious for a person of ordinary skill in the art to apply the teachings of Istvan to Meng1 before the effective filing date of the claimed invention in order to address the key challenges of isolation, in terms of data and performance, and runtime flexibility in dividing the available bandwidth and compute resources among tenants, by using a single, multi-tenant application to service concurrent tenants while, at the same time, utilizing the FPGA resources efficiently (cf. Istvan, [1. Introduction, pg. 119] An alternative way of tackling high utilization is to take a service-centric view, exposing the joint implementation of an application (or the service) to a number of tenants, as sketched in Figure 1. Services such as storage and machine learning are widely used, and accelerating them with FPGAs is attractive both for clients and cloud providers. Benefiting from this approach requires addressing two key challenges: first, isolation between tenants, both in terms of data and performance, and second, runtime flexibility in dividing the available bandwidth and compute resources among tenants. In this work we show how the above challenges can be tackled for a distributed key-value store built with FPGA. We demonstrate how to use a single, multi-tenant application to service concurrent tenants while, at the same time, utilizing the FPGA resources efficiently and offering rich functionality.).
Cui teaches processing to assign neural nodes of the neural network across the grouping of servers with each neural node and respective KVS key co-located on a same server and with each neural node having a unique identifier to identify node location within the neural network and location within the grouping of servers ([2.3 Scaling ML with a parameter server] [processing to assign neural nodes of the neural network across the grouping of servers with each neural node and respective KVS key co-located on a same server] Figure 4 illustrates the basic parameter server architecture. All state shared among application workers (i.e., the model parameters being learned) is kept in distributed shared memory implemented as a specialized key-value store called a “parameter server”. An ML application’s workers process their assigned input data and use simple Read and Update methods to fetch or apply a delta to parameter values, leaving the communication and consistency issues to the parameter server.; [4.3 Parallelizing batched access] GeePS provides a [with each neural node having a unique identifier] key-value store interface to the application, where each parameter row is named by a unique key. When the application issues a read or update operation (for accessing a set of model parameters), it will provide a list of keys for the target rows. GeePS could use a hash map to [to identify node location within the neural network and location within the grouping of servers] map the row keys to the locations where the rows are stored. But, in order to make the batched access be executed by all GPU cores, GeePS will use the following mechanism. Suppose the application updates n rows, each with m floating point values, in one Update operation; it will provide an array of n parameter row updates {{updates[i][j]}_{j=1}^{m}}_{i=1}^{n}, and (provided in PreUpdate) an array of n keys {keys[i]}_{i=1}^{n}. GeePS will use an index with n entries, where each of {index[i]}_{i=1}^{n} stores the location of the cached parameter update. Then, it will do the following data operation for this Update: {{parameters[index[i]][j] += updates[i][j]}_{j=1}^{m}}_{i=1}^{n}. This operation can be executed with all the GPU cores. Moreover, the index can be built just once for each batch of keys, based on the operation sequence gathered as described earlier, and re-used for each instance of the given batch access.);
Meng1, Istvan, and Cui are considered to be analogous to the claimed invention because they are in the same field of machine learning. In view of the teachings of Meng1 and Istvan, it would have been obvious for a person of ordinary skill in the art to apply the teachings of Cui to Meng1 before the effective filing date of the claimed invention in order to support scalable deep learning across GPUs distributed among multiple machines, achieving higher training throughput (cf. Cui, [Abstract] Large-scale deep learning requires huge computational resources to train a multi-layer neural network. Recent systems propose using 100s to 1000s of machines to train networks with tens of layers and billions of connections. While the computation involved can be done more efficiently on GPUs than on more traditional CPU cores, training such networks on a single GPU is too slow and training on distributed GPUs can be inefficient, due to data movement overheads, GPU stalls, and limited GPU memory. This paper describes a new parameter server, called GeePS, that supports scalable deep learning across GPUs distributed among multiple machines, overcoming these obstacles. We show that GeePS enables a state-of-the-art single-node GPU implementation to scale well, such as to 13 times the number of training images processed per second on 16 machines (relative to the original optimized single-node code). Moreover, GeePS achieves a higher training throughput with just four GPU machines than that a state-of-the-art CPU-only system achieves with 108 machines.).
Regarding claim 3, Meng1, as modified by Istvan and Cui, teaches The method of claim 1.
Meng1 teaches further comprising: determining whether all input values to the neural node have been updated before executing the microfunction and writing the output value to the second neural node ([0003] A distributed computing environment can include multiple nodes in communication with each other over a network for processing data. Examples of a node can include a computing device, a server, a virtual machine, or any combination of these.; [0005] The organizational process can include distributing each respective key-value pair in the plurality of key-value pairs to a respective node corresponding to a respective bin into which the respective key-value pair is categorized.; [0047] A process can correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.; [0165] The machine-learning model(s) can be implemented using a single computing device or multiple computing devices, such as the communications grid computing system 400 discussed above.; [0204] The processes discussed above with respect to FIGS. 14-24 can result in stable data-processing, in which the output from a distributed computing environment is consistent for the same set of input key-value pairs, regardless of the number of nodes in the distributed computing environment.; [0208] In other examples, the leaf nodes of the tree structure can [determining whether all input values to the neural node have been updated before executing the microfunction and writing the output value to the second neural node] complete a first pass through all the key-value pairs, followed by the inner nodes of the tree structure further reducing the results produced by the leaf nodes. This process can be repeated until all of the nodes have performed passes on the key-value pairs.).
Meng1, Istvan, and Cui are combinable for the same rationale as set forth above with respect to claim 1.
Regarding claim 10, Meng1, as modified by Istvan and Cui, teaches The network device of claim 8.
Meng1 teaches wherein the microfunction runtime environment is further to determine whether all input values to the neural node have been updated before executing the microfunction and writing the output value to the second neural node ([0003] A distributed computing environment can include multiple nodes in communication with each other over a network for processing data. Examples of a node can include a computing device, a server, a virtual machine, or any combination of these.; [0005] The organizational process can include distributing each respective key-value pair in the plurality of key-value pairs to a respective node corresponding to a respective bin into which the respective key-value pair is categorized.; [0047] A process can correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.; [0165] The machine-learning model(s) can be implemented using a single computing device or multiple computing devices, such as the communications grid computing system 400 discussed above.; [0204] The processes discussed above with respect to FIGS. 14-24 can result in stable data-processing, in which the output from a distributed computing environment is consistent for the same set of input key-value pairs, regardless of the number of nodes in the distributed computing environment.; [0208] In other examples, the leaf nodes of the tree structure can [determining whether all input values to the neural node have been updated before executing the microfunction and writing the output value to the second neural node] complete a first pass through all the key-value pairs, followed by the inner nodes of the tree structure further reducing the results produced by the leaf nodes. This process can be repeated until all of the nodes have performed passes on the key-value pairs.).
Meng1, Istvan, and Cui are combinable for the same rationale as set forth above with respect to claim 1.
Regarding claim 17, Meng1, as modified by Istvan and Cui, teaches The non-transitory computer-readable medium of claim 15.
Meng1 teaches having further instructions stored therein causing the computing system to perform operations further comprising: determining whether all input values to the neural node have been updated before executing the microfunction and writing the output value to the second neural node ([0003] A distributed computing environment can include multiple nodes in communication with each other over a network for processing data. Examples of a node can include a computing device, a server, a virtual machine, or any combination of these.; [0005] The organizational process can include distributing each respective key-value pair in the plurality of key-value pairs to a respective node corresponding to a respective bin into which the respective key-value pair is categorized.; [0047] A process can correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.; [0165] The machine-learning model(s) can be implemented using a single computing device or multiple computing devices, such as the communications grid computing system 400 discussed above.; [0204] The processes discussed above with respect to FIGS. 14-24 can result in stable data-processing, in which the output from a distributed computing environment is consistent for the same set of input key-value pairs, regardless of the number of nodes in the distributed computing environment.; [0208] In other examples, the leaf nodes of the tree structure can [determining whether all input values to the neural node have been updated before executing the microfunction and writing the output value to the second neural node] complete a first pass through all the key-value pairs, followed by the inner nodes of the tree structure further reducing the results produced by the leaf nodes. This process can be repeated until all of the nodes have performed passes on the key-value pairs.).
Meng1, Istvan, and Cui are combinable for the same rationale as set forth above with respect to claim 1.
Claims 4, 7, 11, 14, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Meng1, in view of Istvan, Cui, and Luo (U.S. Pre-Grant Publication No. 2020/0159563, hereinafter 'Luo'), and further in view of Ladwig et al. (NPL: "CumulusRDF: Linked Data Management on Nested Key-Value Stores", hereinafter 'Ladwig').
Regarding claim 4, Meng1, as modified by Istvan and Cui, teaches The method of claim 1.
Meng1 teaches further comprising: assigning the microfunction the first KVS key as a combination of a tenant identifier, a network identifier and a neural node identifier ([0165] The machine-learning model(s) can be implemented using a single computing device or multiple computing devices, such as the communications grid computing system 400 discussed above.; [0021] FIG. 12 is an example of a neural network according to some aspects.; [0103] When a node joins a communications grid (e.g., when the node is powered on or connected to an existing node on the grid or both), the [a neural node identifier] node is assigned (e.g., by an operating system of the grid) a universally unique identifier (UUID). This unique identifier may help other nodes and external entities (devices, users, etc.) to identify the node and distinguish it from other nodes.).
Meng1, as modified by Istvan and Cui, fails to explicitly teach further comprising: assigning the microfunction the first KVS key as a combination of a tenant identifier, a network identifier and a neural node identifier.
Luo teaches further comprising: assigning the microfunction the first KVS key as a combination of a tenant identifier, a network identifier and a neural node identifier ([0111] After receiving the request of GET http://169.254.169.254/openstack/latest/resetpwd_flag, the cloud service computing node queries key=resetpwd_flag in system_metadata in the metadata server. If the key exists, the cloud service computing node returns a corresponding value, and if the key does not exist, the cloud service computing node returns False.; [0118] OpenStack runs a neutron-ns-metadata-proxy component and a neutron-metadata-agent component on a network node. The neutron-ns-metadata-proxy component obtains a router identifier (router-id) and a [a network identifier] network identifier (network-id), and adds the router identifier and the network identifier to the identifier obtaining request. The neutron-metadata-agent component is responsible for adding an instance identifier (instance-id) and a [tenant identifier] tenant identifier (tenant-id) to the identifier obtaining request, and forwarding the received identifier obtaining request to the nova-api-metadata component. The instance identifier is also an identifier of the virtual machine.).
Meng1, Istvan, Cui, and Luo are considered to be analogous to the claimed invention because they are in the same field of machine learning. In view of the teachings of Meng1, Istvan, and Cui, it would have been obvious for a person of ordinary skill in the art to apply the teachings of Luo to Meng1 before the effective filing date of the claimed invention in order to uniquely identify a machine (cf. Luo, [0010] With reference to the first aspect, in a first implementation of the first aspect, the reset password stored in the metadata server is an encrypted reset password, and the configuring, by the virtual machine, the reset password as a password of the virtual machine includes obtaining, by the virtual machine, a universally unique identifier (UUID) of the virtual machine, extracting, by the virtual machine, a salt from the encrypted reset password, generating, by the virtual machine, a key based on the UUID of the virtual machine and the salt, extracting, by the virtual machine, a ciphertext from the encrypted reset password, and decrypting the ciphertext using the key to obtain a plaintext password, and configuring, by the virtual machine, the plaintext password as the password of the virtual machine.).
Ladwig teaches further comprising: assigning the microfunction the first KVS key as a combination of a tenant identifier, a network identifier and a neural node identifier ([3.3 Flat Layout, pg. 35] We base our second storage layout on the standard key-value data model. As columns are stored in a sorted fashion, we can perform range scans and therefore prefix lookups on column keys. We thus store (s, p, o) triples as { s : { po : - } } where s occupies the row-key position, p the column-key position and o the value position. We use a [the first KVS key as a combination of] concatenated po as column key as column keys have to be unique.).
Meng1, Istvan, Cui, Luo, and Ladwig are considered to be analogous to the claimed invention because they are in the same field of machine learning. In view of the teachings of Meng1, Istvan, Cui, and Luo, it would have been obvious for a person of ordinary skill in the art to apply the teachings of Ladwig to Meng1 before the effective filing date of the claimed invention in order to assign a unique column key (cf. Ladwig, [3.3 Flat Layout, pg. 35] We base our second storage layout on the standard key-value data model. As columns are stored in a sorted fashion, we can perform range scans and therefore prefix lookups on column keys. We thus store (s, p, o) triples as { s : { po : - } } where s occupies the row-key position, p the column-key position and o the value position. We use a concatenated po as column key as column keys have to be unique.).
Regarding claim 7, Meng1, as modified by Istvan, Cui, Luo, and Ladwig, teaches The method of claim 4.
Meng1 teaches wherein the tenant identifier, network identifier, and neural node identifier are concatenated to form the first KVS key ([0021] FIG. 12 is an example of a neural network according to some aspects.; [0103] When a node joins a communications grid (e.g., when the node is powered on or connected to an existing node on the grid or both), the [neural node identifier] node is assigned (e.g., by an operating system of the grid) a universally unique identifier (UUID). This unique identifier may help other nodes and external entities (devices, users, etc.) to identify the node and distinguish it from other nodes.).
Luo teaches wherein the tenant identifier, network identifier, and neural node identifier are concatenated to form the first KVS key ([0111] After receiving the request of GET http://169.254.169.254/openstack/latest/resetpwd_flag, the cloud service computing node queries key=resetpwd_flag in system_metadata in the metadata server. If the key exists, the cloud service computing node returns a corresponding value, and if the key does not exist, the cloud service computing node returns False.; [0118] OpenStack runs a neutron-ns-metadata-proxy component and a neutron-metadata-agent component on a network node. The neutron-ns-metadata-proxy component obtains a router identifier (router-id) and a [network identifier] network identifier (network-id), and adds the router identifier and the network identifier to the identifier obtaining request. The neutron-metadata-agent component is responsible for adding an instance identifier (instance-id) and a [tenant identifier] tenant identifier (tenant-id) to the identifier obtaining request, and forwarding the received identifier obtaining request to the nova-api-metadata component. The instance identifier is also an identifier of the virtual machine.).
Ladwig teaches wherein the tenant identifier, network identifier, and neural node identifier are concatenated to form the first KVS key ([3.3 Flat Layout, pg. 35] We base our second storage layout on the standard key-value data model. As columns are stored in a sorted fashion, we can perform range scans and therefore prefix lookups on column keys. We thus store (s, p, o) triples as { s : { po : - } } where s occupies the row-key position, p the column-key position and o the value position. We use a [are concatenated to form the first KVS key] concatenated po as column key as column keys have to be unique.).
Meng1, Istvan, Cui, Luo, and Ladwig are combinable for the same rationale as set forth above with respect to claim 4.
Regarding claim 11, Meng1, as modified by Istvan and Cui, teaches The network device of claim 8.
Meng1 teaches wherein the microfunction runtime environment is further to assign the microfunction the first KVS key as a combination of a tenant identifier, a network identifier and a neural node identifier ([0021] FIG. 12 is an example of a neural network according to some aspects.; [0103] When a node joins a communications grid (e.g., when the node is powered on or connected to an existing node on the grid or both), the [a neural node identifier] node is assigned (e.g., by an operating system of the grid) a universally unique identifier (UUID). This unique identifier may help other nodes and external entities (devices, users, etc.) to identify the node and distinguish it from other nodes.).
Meng1, as modified by Istvan and Cui, fails to explicitly teach wherein the microfunction runtime environment is further configured to assign the microfunction the first KVS key as a combination of a tenant identifier, a network identifier and a neural node identifier.
Luo teaches wherein the microfunction runtime environment is further configured to assign the microfunction the first KVS key as a combination of a tenant identifier, a network identifier and a neural node identifier ([0111] After receiving the request of GET http://169.254.169.254/openstack/latest/resetpwd_flag, the cloud service computing node queries key=resetpwd_flag in system_metadata in the metadata server. If the key exists, the cloud service computing node returns a corresponding value, and if the key does not exist, the cloud service computing node returns False.; [0118] OpenStack runs a neutron-ns-metadata-proxy component and a neutron-metadata-agent component on a network node. The neutron-ns-metadata-proxy component obtains a router identifier (router-id) and a network identifier (network-id), and adds the router identifier and the network identifier to the identifier obtaining request. The neutron-metadata-agent component is responsible for adding an instance identifier (instance-id) and a tenant identifier (tenant-id) to the identifier obtaining request, and forwarding the received identifier obtaining request to the nova-api-metadata component. The instance identifier is also an identifier of the virtual machine.).
Meng1, Istvan, Cui, and Luo are combinable for the same rationale as set forth above with respect to claim 4.
Ladwig teaches wherein the microfunction runtime environment is further configured to assign the microfunction the first KVS key as a combination of a tenant identifier, a network identifier and a neural node identifier ([3.3 Flat Layout, pg. 35] We base our second storage layout on the standard key-value data model. As columns are stored in a sorted fashion, we can perform range scans and therefore prefix lookups on column keys. We thus store (s, p, o) triples as { s : { po : - } } where s occupies the row-key position, p the column-key position and o the value position. We use a concatenated po as column key as column keys have to be unique.).
Meng1, Istvan, Cui, Luo, and Ladwig are combinable for the same rationale as set forth above with respect to claim 4.
Regarding claim 14, Meng1, as modified by Istvan, Cui, Luo, and Ladwig, teaches The network device of claim 11.
Meng1 teaches wherein the tenant identifier, network identifier, and neural node identifier are concatenated to form the first KVS key ([0021] FIG. 12 is an example of a neural network according to some aspects.; [0103] When a node joins a communications grid (e.g., when the node is powered on or connected to an existing node on the grid or both), the node is assigned (e.g., by an operating system of the grid) a universally unique identifier (UUID). This unique identifier may help other nodes and external entities (devices, users, etc.) to identify the node and distinguish it from other nodes.).
Luo teaches wherein the tenant identifier, network identifier, and neural node identifier are concatenated to form the first KVS key ([0111] After receiving the request of GET http://169.254.169.254/openstack/latest/resetpwd_flag, the cloud service computing node queries key=resetpwd_flag in system_metadata in the metadata server. If the key exists, the cloud service computing node returns a corresponding value, and if the key does not exist, the cloud service computing node returns False.; [0118] OpenStack runs a neutron-ns-metadata-proxy component and a neutron-metadata-agent component on a network node. The neutron-ns-metadata-proxy component obtains a router identifier (router-id) and a network identifier (network-id), and adds the router identifier and the network identifier to the identifier obtaining request. The neutron-metadata-agent component is responsible for adding an instance identifier (instance-id) and a tenant identifier (tenant-id) to the identifier obtaining request, and forwarding the received identifier obtaining request to the nova-api-metadata component. The instance identifier is also an identifier of the virtual machine.).
Ladwig teaches wherein the tenant identifier, network identifier, and neural node identifier are concatenated to form the first KVS key ([3.3 Flat Layout, pg. 35] We base our second storage layout on the standard key-value data model. As columns are stored in a sorted fashion, we can perform range scans and therefore prefix lookups on column keys. We thus store (s, p, o) triples as { s : { po : - } } where s occupies the row-key position, p the column-key position and o the value position. We use a concatenated po as column key as column keys have to be unique.).
Meng1, Istvan, Cui, Luo, and Ladwig are combinable for the same rationale as set forth above with respect to claim 4.
Regarding claim 18, Meng1, as modified by Istvan and Cui, teaches The non-transitory computer-readable medium of claim 15.
Meng1 teaches having further instructions stored therein causing the computing system to perform operations further comprising: assigning the microfunction the first KVS key as a combination of a tenant identifier, a network identifier and a neural node identifier ([0021] FIG. 12 is an example of a neural network according to some aspects.; [0103] When a node joins a communications grid (e.g., when the node is powered on or connected to an existing node on the grid or both), the node is assigned (e.g., by an operating system of the grid) a universally unique identifier (UUID). This unique identifier may help other nodes and external entities (devices, users, etc.) to identify the node and distinguish it from other nodes.).
Meng1, as modified by Istvan and Cui, fails to explicitly teach having further instructions stored therein causing the computing system to perform operations further comprising: assigning the microfunction the first KVS key as a combination of a tenant identifier, a network identifier and a neural node identifier.
Luo teaches having further instructions stored therein causing the computing system to perform operations further comprising: assigning the microfunction the first KVS key as a combination of a tenant identifier, a network identifier and a neural node identifier ([0111] After receiving the request of GET http://169.254.169.254/openstack/latest/resetpwd_flag, the cloud service computing node queries key=resetpwd_flag in system_metadata in the metadata server. If the key exists, the cloud service computing node returns a corresponding value, and if the key does not exist, the cloud service computing node returns False.; [0118] OpenStack runs a neutron-ns-metadata-proxy component and a neutron-metadata-agent component on a network node. The neutron-ns-metadata-proxy component obtains a router identifier (router-id) and a network identifier (network-id), and adds the router identifier and the network identifier to the identifier obtaining request. The neutron-metadata-agent component is responsible for adding an instance identifier (instance-id) and a tenant identifier (tenant-id) to the identifier obtaining request, and forwarding the received identifier obtaining request to the nova-api-metadata component. The instance identifier is also an identifier of the virtual machine.).
Meng1, Istvan, Cui, and Luo are combinable for the same rationale as set forth above with respect to claim 4.
Ladwig teaches having further instructions stored therein causing the computing system to perform operations further comprising: assigning the microfunction the first KVS key as a combination of a tenant identifier, a network identifier and a neural node identifier ([3.3 Flat Layout, pg. 35] We base our second storage layout on the standard key-value data model. As columns are stored in a sorted fashion, we can perform range scans and therefore prefix lookups on column keys. We thus store (s, p, o) triples as { s : { po : - } } where s occupies the row-key position, p the column-key position and o the value position. We use a concatenated po as column key as column keys have to be unique.).
Meng1, Istvan, Cui, Luo, and Ladwig are combinable for the same rationale as set forth above with respect to claim 4.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Sun et al. (NPL: “DPS: A DSM-based Parameter Server for Machine Learning”) teaches DPS, a parameter server based on Distributed Shared Memory (DSM) for machine learning.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAGGIE MAIDO whose telephone number is (703) 756-1953. The examiner can normally be reached M-Th: 6am - 4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Huntley can be reached on (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MM/Examiner, Art Unit 2129
/MICHAEL J HUNTLEY/Supervisory Patent Examiner, Art Unit 2129