DETAILED ACTION
This office action is in response to RCE filed on 9/22/2025.
Claims 1 – 4, 10, 13, 16, 18 and 21 are amended.
Claims 1 – 18, 20 and 21 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 9/22/2025 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1 – 9, 16 – 18 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nghiem (US 20170220944), in view of Beloussov et al (US 20170206368, hereinafter Beloussov), and in view of Papadantonakis et al (US 20200412666, hereinafter Papadantonakis), and further in view of Smith et al (US 20200127980, hereinafter Smith).
As per claim 1, Nghiem discloses: An apparatus comprising: [manager] circuitry to:
based on telemetry data of one or more nodes communicatively coupled to the [manager] and network traffic, select one or more processes to execute on the one or more nodes, wherein the one or more processes comprise a source process and at least one target process. (Nghiem [0094]: “The Resource Manager has a built-in scheduler, which allocates resources across all applications based on the applications' resource requirements. (2) The MR Application Master, which negotiates appropriate resource containers from the scheduler and tracks their progress, coordinates and manages each and every instance of MapReduce jobs executed on YARN. (3) The Node Manager, which is responsible for containers, monitors each and every node’s resource usage (CPU, memory, disk, network bandwidth) within YARN”; [0096]: “Step (8), MR AppMaster decides how to run the MapReduce task. Small jobs can be run on the same JVM on a single node as an Uber task. Large jobs request for more resources to be allocated by ResourceManager which gathers information from the heartbeats of NodeManagers to consider data locality in its node allocation. Step (9), MR AppMaster contacts a NodeManager to start a new container for task execution. A YarnChild is launched to run on a separate JVM to isolate user codes from long running system deamons. Step (10), YarnChild retrieves job resources from HDFS. Step (11), YarnChild runs Map task or Reduce task. In every 3 secs, YarnChild sends a progress report to MR AppMaster which aggregates all reports and sends an update directly to the job client. Upon job completion, MR AppMaster and task containers clean up their working states, and terminate themselves to release resources”; [0071]: “All nodes are connected to a switch with a backplane speed of 48 Gbps”.)
Nghiem did not explicitly disclose:
Wherein the [manager] comprises a switch, the switch comprising: an interface to an ingress port; an interface to an egress port; a switch fabric;
wherein the network traffic comprises packets among the one or more nodes;
and select a memory pool to store data generated by the one or more processes.
and wherein the one or more processes are to communicate via a service mesh, and wherein the one or more processes comprise at least one microservice.
Beloussov teaches:
and select a memory pool to store data generated by the one or more processes. (Beloussov [0052])
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Beloussov into that of Nghiem in order to select a memory pool to store data generated by the one or more processes. Beloussov [0052] has shown that the claimed limitations are merely commonly known steps in scheduling the execution of map-reduce-type operations. Applicants have thus merely claimed a combination of known parts in the field to achieve predictable results; the claim is therefore rejected under 35 U.S.C. 103.
Papadantonakis teaches:
Wherein the [manager] comprises a switch, the switch comprising: an interface to an ingress port; an interface to an egress port; a switch fabric, and wherein the one or more processes are to communicate via a service mesh; (Papadantonakis [0044]: “Switch 104 can use ingress system 106 to process received packets from a network. Ingress system 106 can decide which port to transfer received packets or frames to using a table that maps packet characteristics with an associated output port or other calculation. Switch 104 can use egress system 108 to fetch packets from mesh 110, process packets, schedule egress of packets to a network using one or more ports, or drop packets. In addition, egress system 108 can perform packet replication for forwarding of a packet or frame to multiple ports and queuing of packets or frames prior to transfer to an output port”.)
wherein the network traffic comprises of packets among the one or more nodes; (Papadantonakis [0032]: “mesh that provides traffic management in a datacenter, server, rack, blade, inter-component communication within a datacenter, and so forth. For example, north-south traffic or south-north traffic can include traffic that is received an external device (e.g., client, server, and so forth) but can include internal data center traffic (e.g., within a rack, server, between virtual machines, between containers). For example, east-west traffic or west-east traffic can include internal data center traffic (e.g., within a rack, server, between virtual machines, between containers), but can include traffic that is received an external device (e.g., client, server, and so forth).”; [0034]: packets.)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Papadantonakis into that of Nghiem and Beloussov in order to have a switch circuitry comprising: an interface to an ingress port; an interface to an egress port; a switch fabric, and wherein the one or more processes are to communicate via a service mesh. Nghiem [0071] teaches "all nodes are connected to a switch", while Papadantonakis has shown that the claimed ingress and egress ports are merely commonly known parts and functions of a switch in a switch network. Papadantonakis [0002] further teaches "Mesh designs for interconnecting memory or processor cores are well known". Applicants have thus merely claimed a combination of known parts in the field to achieve predictable results; the claim is therefore rejected under 35 U.S.C. 103.
Smith teaches:
and wherein the one or more processes comprise at least one microservice. (Smith [0022] and [0027])
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Smith into that of Nghiem, Beloussov and Papadantonakis in order to have the one or more processes communicate via a service mesh, and wherein the one or more processes comprise at least one microservice. Smith [0022] and [0027] teaches that service meshes are commonly used to implement a microservice architecture; applicants have merely claimed a combination of known parts in the field to achieve predictable results, and the claim is therefore rejected under 35 U.S.C. 103.
As per claim 2, the combination of Nghiem, Beloussov, Papadantonakis and Smith further teach:
The apparatus of claim 1, wherein the telemetry data of the one or more nodes comprises two or more of: current load on the one or more nodes, current expected completion time for process execution on the one or more nodes, and/or expected available resources on the one or more nodes. (Nghiem [0094]: “The Resource Manager has a built-in scheduler, which allocates resources across all applications based on the applications' resource requirements. (2) The MR Application Master, which negotiates appropriate resource containers from the scheduler and tracks their progress, coordinates and manages each and every instance of MapReduce jobs executed on YARN. (3) The Node Manager, which is responsible for containers, monitors each and every node's resource usage (CPU, memory, disk, network bandwidth) within YARN”; [0013]: “Collect necessary preview job performance data from historical runtime performances or sampled executions on the same target production system… the preview job performance data includes runtime, performance or execution time which is equivalent to the completion time of a job”.)
As per claim 3, the combination of Nghiem, Beloussov, Papadantonakis and Smith further teach:
The apparatus of claim 2, wherein the telemetry data of the network traffic comprises: latency and bandwidth of communications among the nodes via the service mesh. (Nghiem [0094]: “The Resource Manager has a built-in scheduler, which allocates resources across all applications based on the applications' resource requirements. (2) The MR Application Master, which negotiates appropriate resource containers from the scheduler and tracks their progress, coordinates and manages each and every instance of MapReduce jobs executed on YARN. (3) The Node Manager, which is responsible for containers, monitors each and every node's resource usage (CPU, memory, disk, network bandwidth) within YARN”; Papadantonakis [0037] – [0039]: latency estimates.)
As per claim 4, the combination of Nghiem, Beloussov, Papadantonakis and Smith further teach:
The apparatus of claim 2, wherein the telemetry data of the one or more nodes comprises telemetry associated with the memory pool and includes one or more of: an available amount of memory space and latency and/or bandwidth between the one or more nodes and the memory pool. (Beloussov [0052])
As per claim 5, the combination of Nghiem, Beloussov, Papadantonakis and Smith further teach:
The apparatus of claim 1, wherein the switch is to determine a graph of the one or more processes and corresponding nodes of the one or more nodes to execute the one or more processes and cause execution of the one or more processes on the corresponding nodes. (Nghiem [0094] and [0096])
As per claim 6, the combination of Nghiem, Beloussov, Papadantonakis and Smith further teach:
The apparatus of claim 1, wherein the one or more processes are to access data from the memory pool. (Beloussov [0052])
As per claim 7, the combination of Nghiem, Beloussov, Papadantonakis and Smith further teach:
The apparatus of claim 1, wherein the switch comprises one or more of: a network interface controller (NIC), SmartNIC, router, forwarding element, infrastructure processing unit (IPU), or data processing unit (DPU). (Nghiem [0191])
As per claim 8, the combination of Nghiem, Beloussov, Papadantonakis and Smith further teach:
The apparatus of claim 1, comprising a host to execute the source process and wherein the host is communicatively coupled to the switch using a network interface device. (Nghiem [0096])
As per claim 9, the combination of Nghiem, Beloussov, Papadantonakis and Smith further teach:
The apparatus of claim 8, comprising a data center that comprises the one or more nodes that execute the one or more processes. (Nghiem [0094] and [0173])
As per claim 16, Nghiem discloses: A method comprising:
a [manager] performing: allocating [resource] accessed and generated by [process] based on telemetry data associated with one or more nodes and causing execution of the [process] on nodes based on the telemetry data associated with the one or more nodes. (Nghiem [0094]: “The Resource Manager has a built-in scheduler, which allocates resources across all applications based on the applications' resource requirements. (2) The MR Application Master, which negotiates appropriate resource containers from the scheduler and tracks their progress, coordinates and manages each and every instance of MapReduce jobs executed on YARN. (3) The Node Manager, which is responsible for containers, monitors each and every node’s resource usage (CPU, memory, disk, network bandwidth) within YARN”; [0096]: “Step (8), MR AppMaster decides how to run the MapReduce task. Small jobs can be run on the same JVM on a single node as an Uber task. Large jobs request for more resources to be allocated by ResourceManager which gathers information from the heartbeats of NodeManagers to consider data locality in its node allocation. Step (9), MR AppMaster contacts a NodeManager to start a new container for task execution. A YarnChild is launched to run on a separate JVM to isolate user codes from long running system deamons. Step (10), YarnChild retrieves job resources from HDFS. Step (11), YarnChild runs Map task or Reduce task. In every 3 secs, YarnChild sends a progress report to MR AppMaster which aggregates all reports and sends an update directly to the job client. Upon job completion, MR AppMaster and task containers clean up their working states, and terminate themselves to release resources”.)
Nghiem did not explicitly disclose:
Wherein the [manager] comprises a switch;
wherein allocating [resources] comprises allocating a memory pool to store data;
wherein the [process] comprises microservices; wherein the microservices communicate using a service mesh;
wherein the switch comprises: an interface to an ingress port, an interface to an egress port, and a switch fabric;
Beloussov teaches:
wherein allocating [resources] comprises allocating a memory pool to store data; (Beloussov [0052])
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Beloussov into that of Nghiem in order to select a memory pool to store data generated by the one or more processes. Beloussov [0052] has shown that the claimed limitations are merely commonly known steps in scheduling the execution of map-reduce-type operations. Applicants have thus merely claimed a combination of known parts in the field to achieve predictable results; the claim is therefore rejected under 35 U.S.C. 103.
Papadantonakis teaches:
Wherein the [manager] comprises a switch; wherein the switch comprises: an interface to an ingress port, an interface to an egress port, and a switch fabric; wherein the process communicates using a service mesh. (Papadantonakis [0044]: “Switch 104 can use ingress system 106 to process received packets from a network. Ingress system 106 can decide which port to transfer received packets or frames to using a table that maps packet characteristics with an associated output port or other calculation. Switch 104 can use egress system 108 to fetch packets from mesh 110, process packets, schedule egress of packets to a network using one or more ports, or drop packets. In addition, egress system 108 can perform packet replication for forwarding of a packet or frame to multiple ports and queuing of packets or frames prior to transfer to an output port”.)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Papadantonakis into that of Nghiem and Beloussov in order to have the switch comprise an interface to an ingress port, an interface to an egress port, and a switch fabric. Nghiem [0071] teaches "all nodes are connected to a switch", while Papadantonakis has shown that the claimed limitations are merely commonly known functions and parts of a switch; the claim is therefore rejected under 35 U.S.C. 103.
Smith teaches:
and wherein the one or more processes comprise at least one microservice. (Smith [0022] and [0027])
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Smith into that of Nghiem, Beloussov and Papadantonakis in order to have the one or more processes communicate via a service mesh, and wherein the one or more processes comprise at least one microservice. Smith [0022] and [0027] teaches that service meshes are commonly used to implement a microservice architecture; applicants have merely claimed a combination of known parts in the field to achieve predictable results, and the claim is therefore rejected under 35 U.S.C. 103.
As per claim 17, the combination of Nghiem, Beloussov, Papadantonakis and Smith further teach:
The method of claim 16, wherein the telemetry data of the one or more nodes comprises two or more of: current load on the one or more nodes, current expected completion time for process execution on the one or more nodes, and/or expected available resources on the one or more nodes. (Nghiem [0094]: “The Resource Manager has a built-in scheduler, which allocates resources across all applications based on the applications' resource requirements. (2) The MR Application Master, which negotiates appropriate resource containers from the scheduler and tracks their progress, coordinates and manages each and every instance of MapReduce jobs executed on YARN. (3) The Node Manager, which is responsible for containers, monitors each and every node's resource usage (CPU, memory, disk, network bandwidth) within YARN”; [0013]: “Collect necessary preview job performance data from historical runtime performances or sampled executions on the same target production system… the preview job performance data includes runtime, performance or execution time which is equivalent to the completion time of a job”.)
As per claim 18, the combination of Nghiem, Beloussov, Papadantonakis and Smith further teach:
The method of claim 16, wherein the telemetry data associated with one or more nodes comprises telemetry associated with the memory pool and includes one or more of: an available amount of memory space and latency and/or bandwidth between one or more nodes and the memory pool. (Beloussov [0052])
As per claim 20, the combination of Nghiem, Beloussov, Papadantonakis and Smith further teach:
The method of claim 16, wherein the microservices are to write data to the memory pool and/or access data from the memory pool. (Beloussov [0052])
Claim(s) 10 – 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nghiem, in view of Beloussov, and further in view of Papadantonakis.
As per claim 10, Nghiem discloses: At least one non-transitory computer-readable medium comprising instructions stored thereon, that if executed by one or more processors, cause the one or more processors to:
offload [network] management and [resource management] to a [manager], wherein the [manager] is to select one or more processes to execute on one or more nodes based on telemetry data of one or more nodes communicatively coupled to the [manager]. (Nghiem [0094]: “The Resource Manager has a built-in scheduler, which allocates resources across all applications based on the applications' resource requirements. (2) The MR Application Master, which negotiates appropriate resource containers from the scheduler and tracks their progress, coordinates and manages each and every instance of MapReduce jobs executed on YARN. (3) The Node Manager, which is responsible for containers, monitors each and every node’s resource usage (CPU, memory, disk, network bandwidth) within YARN”; [0096]: “Step (1), a MapReduce job is submitted to a job client. Step (2), the job client requests for a new application ID from ResourceManager. Step (3), the job client checks HDFS to see whether an output has been created for that input and copy the result from HDFS directly if it exists. Otherwise, the job client copies job resources from HDFS. Step (4), the job is submitted to ResourceManager where a Scheduler allocates resources and an Application Manager monitors progress and status of the job… Step (8), MR AppMaster decides how to run the MapReduce task. Small jobs can be run on the same JVM on a single node as an Uber task. Large jobs request for more resources to be allocated by ResourceManager which gathers information from the heartbeats of NodeManagers to consider data locality in its node allocation. Step (9), MR AppMaster contacts a NodeManager to start a new container for task execution. A YarnChild is launched to run on a separate JVM to isolate user codes from long running system deamons. Step (10), YarnChild retrieves job resources from HDFS. Step (11), YarnChild runs Map task or Reduce task. In every 3 secs, YarnChild sends a progress report to MR AppMaster which aggregates all reports and sends an update directly to the job client. Upon job completion, MR AppMaster and task containers clean up their working states, and terminate themselves to release resources”. Examiner notes that the act of the client submitting a job to the resource manager is the same as the client offloading the management to the resource manager; [0071]: “All nodes are connected to a switch with a backplane speed of 48 Gbps”.)
Nghiem did not explicitly disclose:
Wherein the [manager] comprises a switch;
Wherein the [network] management comprises service mesh management;
Wherein the [resource management] comprises selection of memory pool accessed by services associated with the service mesh;
Wherein the telemetry data comprises network traffic of packets transmitted among the one or more nodes, and wherein the one or more processes are to communicate via a service mesh;
wherein the switch comprises an interface to an ingress port, an interface to an egress port, and a switch fabric;
Beloussov teaches:
Wherein the [resource management] comprises selection of memory pool accessed by [processes] associated with the [network]. (Beloussov [0052])
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Beloussov into that of Nghiem in order to select a memory pool to store data generated by the one or more processes. Beloussov [0052] has shown the claimed limitations are merely commonly known steps in scheduling the execution of map reduce type operations. Applicants have thus merely claimed the combination of known parts in the field to achieve predictable results and is therefore rejected under 35 USC 103.
Papadantonakis teaches:
Wherein the [manager] comprises a switch; wherein the [network] management comprises service mesh management; wherein the switch comprises an interface to an ingress port, an interface to an egress port, and a switch fabric and wherein the one or more processes are to communicate via a service mesh. (Papadantonakis [0044]: “Switch 104 can use ingress system 106 to process received packets from a network. Ingress system 106 can decide which port to transfer received packets or frames to using a table that maps packet characteristics with an associated output port or other calculation. Switch 104 can use egress system 108 to fetch packets from mesh 110, process packets, schedule egress of packets to a network using one or more ports, or drop packets. In addition, egress system 108 can perform packet replication for forwarding of a packet or frame to multiple ports and queuing of packets or frames prior to transfer to an output port”.)
Wherein the telemetry data comprises network traffic of packets transmitted among the one or more nodes; (Papadantonakis [0032]: “mesh that provides traffic management in a datacenter, server, rack, blade, inter-component communication within a datacenter, and so forth. For example, north-south traffic or south-north traffic can include traffic that is received an external device (e.g., client, server, and so forth) but can include internal data center traffic (e.g., within a rack, server, between virtual machines, between containers). For example, east-west traffic or west-east traffic can include internal data center traffic (e.g., within a rack, server, between virtual machines, between containers), but can include traffic that is received an external device (e.g., client, server, and so forth).”)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Papadantonakis into that of Nghiem and Beloussov in order to have a switch circuitry comprising: an interface to an ingress port; an interface to an egress port; a switch fabric, and wherein the one or more processes are to communicate via a service mesh. Nghiem [0071] teaches "all nodes are connected to a switch", while Papadantonakis has shown that the claimed ingress and egress ports are merely commonly known parts and functions of a switch in a switch network. Papadantonakis [0002] further teaches "Mesh designs for interconnecting memory or processor cores are well known". Applicants have thus merely claimed a combination of known parts in the field to achieve predictable results; the claim is therefore rejected under 35 U.S.C. 103.
As per claim 11, the combination of Nghiem, Beloussov and Papadantonakis further teach:
The computer-readable medium of claim 10, wherein offload service mesh management and selection of memory pool accessed by services associated with the service mesh to a switch comprises: cause the switch to, based on telemetry data of one or more nodes and network traffic, select one or more processes to execute on the one or more nodes and select a memory pool to store data generated by the one or more processes. (Nghiem [0094], [0096] and Beloussov [0052].)
As per claim 12, the combination of Nghiem, Beloussov and Papadantonakis further teach:
The computer-readable medium of claim 11, wherein the one or more processes are to write data to the memory pool and/or access data from the memory pool. (Beloussov [0052])
As per claim 13, the combination of Nghiem, Beloussov and Papadantonakis further teach:
The computer-readable medium of claim 11, wherein the telemetry data of the one or more nodes comprises: current load on the one or more nodes, current expected completion time for process execution on the one or more nodes, and expected available resources on the one or more nodes and the telemetry data of the network traffic comprises one or more of: latency and/or bandwidth of communications between the nodes. (Nghiem [0094]: “The Resource Manager has a built-in scheduler, which allocates resources across all applications based on the applications' resource requirements. (2) The MR Application Master, which negotiates appropriate resource containers from the scheduler and tracks their progress, coordinates and manages each and every instance of MapReduce jobs executed on YARN. (3) The Node Manager, which is responsible for containers, monitors each and every node's resource usage (CPU, memory, disk, network bandwidth) within YARN”; [0013]: “Collect necessary preview job performance data from historical runtime performances or sampled executions on the same target production system… the preview job performance data includes runtime, performance or execution time which is equivalent to the completion time of a job”.)
As per claim 14, the combination of Nghiem, Beloussov and Papadantonakis further teach:
The computer-readable medium of claim 11, wherein the telemetry data of one or more nodes comprises telemetry associated with the memory pool and includes one or more of: an available amount of memory space and latency and/or bandwidth between the one or more nodes and the memory pool. (Beloussov [0052])
As per claim 15, the combination of Nghiem, Beloussov and Papadantonakis further teach:
The computer-readable medium of claim 11, wherein the switch is to determine a graph of the one or more processes and corresponding nodes of the one or more nodes to execute the one or more processes and cause execution of the one or more processes on the corresponding nodes. (Nghiem [0094] and [0096])
Claim(s) 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nghiem, Beloussov, Papadantonakis and Smith, and further in view of Bernat et al (US 20190042138, hereinafter Bernat).
As per claim 21, the combination of Nghiem, Beloussov, Papadantonakis and Smith further teach:
The apparatus of claim 1, and wherein the telemetry data comprises expected completion time for process execution on the one or more nodes; (Nghiem [0013]: “Collect necessary preview job performance data from historical runtime performances or sampled executions on the same target production system… the preview job performance data includes runtime, performance or execution time which is equivalent to the completion time of a job”.)
The combination of Nghiem, Beloussov, Papadantonakis and Smith did not teach:
The apparatus of claim 1, wherein the circuitry is to: based on the telemetry data of the one or more nodes and the network traffic, migrate data associated with a process of the one or more processes to a second memory pool for access by the one or more nodes;
However, Bernat teaches:
The apparatus of claim 1, wherein the circuitry is to: based on the telemetry data of the one or more nodes and the network traffic, migrate data associated with a process of the one or more processes to a second memory pool for access by the one or more nodes. (Bernat [0022])
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Bernat into that of Nghiem, Beloussov, Papadantonakis and Smith in order to have the circuitry, based on the telemetry data of the one or more nodes and the network traffic, migrate data associated with a process of the one or more processes to a second memory pool. Bernat has shown that the claimed limitations are merely commonly known functions of a distributed memory system; applicants have thus merely claimed a combination of known parts in the field to achieve predictable results, and the claim is therefore rejected under 35 U.S.C. 103.
Response to Arguments
Applicant's arguments filed 9/22/2025 have been fully considered but they are not persuasive.
Independent claims 1, 10 and 16:
Applicant argued on pages 6 – 7 that “Neither Nghiem, Beloussov, or Papadantonakis teach a switch circuitry that is to select one or more processes to execute on the one or more nodes and select a memory pool to store data generated by the one or more processes based on telemetry data of one or more nodes communicatively coupled to the switch and network traffic of packets among the one or more nodes.”
The examiner disagrees, as the combination of Nghiem, Beloussov and Papadantonakis teaches the claimed limitations in full. More specifically, Nghiem [0094] teaches that a resource manager allocates resources based on applications' resource requirements and that a node manager monitors each node's resource usage, while [0096] teaches allocating map-reduce tasks to nodes based on heartbeats from the node managers. Nghiem did not explicitly teach the manager being a switch, the telemetry further including network traffic comprising packets among the one or more nodes, nor selecting a memory pool to store data generated by the one or more processes.
Those deficiencies are cured through the introduction of Beloussov and Papadantonakis. Beloussov [0052] teaches that intermediate nodes used to store outputs can be selected based on parameters such as available bandwidth, latency, and geographical proximity. It would be obvious to combine Beloussov into that of Nghiem because Beloussov [0052] teaches that such resource allocation is commonly utilized in a map-reduce execution framework. Papadantonakis [0044] teaches a service mesh network comprising a switch, while [0032] teaches that the telemetry of nodes comprises packets between nodes; such a combination would be obvious to one of ordinary skill in the art because Nghiem [0071] teaches that all nodes are connected to a switch while Papadantonakis [0002] teaches that mesh designs for interconnecting memory or processor cores are well known. The combination of Nghiem, Beloussov and Papadantonakis thus teaches the claimed limitations in question.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES M SWIFT whose telephone number is (571)270-7756. The examiner can normally be reached Monday - Friday: 9:30 AM - 7PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Blair, can be reached at (571) 270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHARLES M SWIFT/Primary Examiner, Art Unit 2196