DETAILED ACTION

Claims 1-20 are pending in this application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of copending Application No. 18/090,373 to Soule et al. in view of U.S. Pub. No. 2010/0083289 A1 to Peng et al. This is a provisional nonstatutory double patenting rejection.
The claims of instant Application No. 18/090,356 are compared below with the claims of copending Application No. 18/090,373; claims that are identical in both applications are shown once.

Instant claim 1: An apparatus comprising: a network interface device comprising: packet processing circuitry and circuitry to: execute a first process of partitioned processes to provide a remote procedure call (RPC) interface for a second process, wherein the second process of the partitioned processes comprises a business logic and the partitioned processes comprise resource and deployment definition are based on an Interface Description Language (IDL) and a memory allocation.

'373 claim 1: An apparatus comprising: a network interface device comprising: packet processing circuitry and circuitry to: execute a first process to provide a remote procedure call (RPC) interface for a second process, wherein the second process comprises a business logic, resource and deployment definitions of the first and second processes are based on an Interface Description Language (IDL) and a memory allocation, and the memory allocation among the processes provides share at least one RPC message as at least one formatted object accessible from memory.

Claim 2 (identical in both applications): The apparatus of claim 1, wherein to provide the RPC interface, the first process is to utilize one or more accelerator devices that perform one or more of: data transformation, encryption, reliable transport, load balancing, traffic routing, secure key storage, authentication, and/or observability.

Claim 3 (identical): The apparatus of claim 1, wherein the memory allocation comprises one or more of: arena based memory allocation, non-arena based memory allocation, memory allocation near processing cores, processing requirements for security, observability and data transformation, and/or request and completion queues.

Claim 4 (identical): The apparatus of claim 1, wherein a shepherding layer is to provide communication between the partitioned processes to utilize direct memory access (DMA), shared memory, polling threads, and/or timers.

Claim 5 (identical): The apparatus of claim 1, wherein the first process and the second process are to share a linearized object structure comprising a C++ object with member data references in one or more contiguous memory blocks.

Claim 6 (identical): The apparatus of claim 5, wherein the first service is to cause a network interface device to linearize at least one object and store the linearized at least one object into memory for access by the second service.
Claim 7 (identical): The apparatus of claim 5, comprising circuitry is to perform linearization of the at least one object and transmit the linearized at least one object to memory accessible to the first process.

Claim 8 (identical): The apparatus of claim 1, wherein the network interface device comprises one or more of: a network interface controller (NIC), a remote direct memory access (RDMA)-enabled NIC, SmartNIC, router, switch, forwarding element, infrastructure processing unit (IPU), data processing unit (DPU), accelerator, or network-attached appliance.

Instant claim 9: A non-transitory computer-readable medium comprising instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: a compiler to generate partitioned processes with resource and deployment definition based on Interface Description Language (IDL) and a memory allocation, wherein a first process of the partitioned processes comprises a business logic and a second process of the partitioned processes is to provide a remote procedure call (RPC) interface for the first process.

'373 claim 9: A non-transitory computer-readable medium comprising instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: a compiler to generate first and second processes, wherein the first process comprises a business logic, the second process is to provide a remote procedure call (RPC) interface for the first process, and a memory allocation among the first and second processes permits sharing at least one RPC message as at least one formatted object accessible from memory.

Claim 10 (identical): The computer-readable medium of claim 9, wherein to provide the RPC interface, the second process is to utilize one or more accelerator devices that perform one or more of: data transformation, encryption, reliable transport, load balancing, traffic routing, secure key storage, authentication, and/or observability.

Claim 11 (identical): The computer-readable medium of claim 9, wherein the memory allocation comprises one or more of: arena based memory allocation, non-arena based memory allocation, memory allocation near processing cores, processing requirements for security, observability and data transformation, and/or request and completion queues.
Claim 12 (identical): The computer-readable medium of claim 9, wherein the compiler is to generate a shepherding layer to provide communication between the partitioned processes to utilize direct memory access (DMA), shared memory, polling threads, and/or timers.

Claim 13 (identical): The computer-readable medium of claim 9, wherein the first process and the second process are to share a linearized object structure comprising a C++ object with member data references in one or more contiguous memory blocks.

Claim 14 (identical): The computer-readable medium of claim 13, wherein the first service is to cause a network interface device to linearize at least one object and store the linearized at least one object into memory for access by the second service.

Claim 15 (identical): The computer-readable medium of claim 13, wherein circuitry is to perform linearization of the at least one object and transmit the linearized at least one object to memory accessible to the first process.

Claim 16 (identical): The computer-readable medium of claim 13, wherein the compiler is to generate programming language classes and object access methods for a linearized structure for a software and data structure template for input to the network interface device and circuitry to perform linearization of the at least one object.

Instant claim 17: A method comprising: in a data center: a first process, executed by a server, accessing a second process, executed by a network interface device, wherein the second process provides a remote procedure call (RPC) interface for the first process.

'373 claim 17: A method comprising: in a data center: a first process, executed by a server, accessing a second process, executed by a network interface device, wherein the second process provides a remote procedure call (RPC) interface for the first process and allocating memory to share at least one RPC message as at least one formatted object among the first and second processes.
Instant claim 18: The method of claim 17, comprising: allocating memory to share at least one RPC message as at least one formatted object among the first and second processes, wherein the at least one formatted object comprises a linearized object structure comprising a C++ object with member data references in one or more contiguous memory blocks.

'373 claim 18: The method of claim 17, wherein the at least one formatted object comprises a linearized object structure comprising a C++ object with member data references in one or more contiguous memory blocks.

Instant claim 19: The method of claim 18, comprising: storing the linearized object structure as a C++ object with member data references in one or more contiguous memory blocks.

'373 claim 19: The method of claim 17, comprising: storing the linearized object structure as a C++ object with member data references in one or more contiguous memory blocks.

Claim 20 (identical): The method of claim 18, wherein the second process provides a RPC interface for the first process comprises utilizing one or more accelerator devices that perform one or more of: data transformation, encryption, reliable transport, load balancing, traffic routing, secure key storage, authentication, and/or observability.

Soule is silent with reference to "the memory allocation among the processes provides share at least one RPC message as at least one formatted object accessible from memory." Peng teaches a memory allocation among processes that shares at least one RPC message as at least one formatted object accessible from memory (RPC Channel Memory 206/one or more RPC channels, using direct memory access (DMA) hardware channels) ("… At step 404, the run-time controller 204 stores at least the value(s) of the input argument(s) in each un-registered RPC channel to the context memory 210. That is, the task stub 202 may have allocated one or more RPC channels in response to one or more new task calls between synchronization points. In general, the RPC channel memory 206 may include multiple RPC channels for concurrent active threads of the task 130, some of which have been previously registered by the run-time controller 204, and other(s) of which are un-registered (i.e., for new task calls). The transfer between the RPC channel memory 206 and the context memory 210 may be implemented using direct memory access (DMA) hardware channels between the computer 102 and the hardware accelerator 104. DMA techniques are well known in the art …" paragraph 0032). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Soule with the teaching of Peng because the teaching of Peng would improve the system of Soule by providing a shared RPC channel memory for concurrently sharing data/messages between processes using direct memory access.
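For context on the limitation at issue, the following is a minimal illustrative sketch, not the method of any cited reference or application, of sharing one RPC message as a fixed-layout ("formatted") object directly accessible from memory by two partitioned processes. It assumes POSIX shared memory; the names RpcMessage and /rpc_msg are hypothetical.

```cpp
// Illustrative sketch only: two partitioned processes exchange one RPC
// message as a fixed-layout object placed in POSIX shared memory, so the
// message is read in place rather than re-serialized. Minimal error
// handling; all names are hypothetical.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <atomic>
#include <cstdint>
#include <cstring>
#include <new>

struct RpcMessage {                  // fixed layout shared by both sides
    std::atomic<uint32_t> ready{0};  // 0 = empty, 1 = request posted
    uint32_t method_id{0};
    char payload[4096]{};
};

int main() {
    int fd = shm_open("/rpc_msg", O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, sizeof(RpcMessage)) != 0) return 1;

    // Both processes map the same object; posting a request is a store
    // into the mapping plus a release on the ready flag.
    void* mem = mmap(nullptr, sizeof(RpcMessage), PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) return 1;
    auto* msg = new (mem) RpcMessage{};  // creator constructs in place

    msg->method_id = 7;                  // caller formats the request
    std::strcpy(msg->payload, "ping");
    msg->ready.store(1, std::memory_order_release);

    munmap(mem, sizeof(RpcMessage));
    close(fd);
    return 0;
}
```

A second process mapping /rpc_msg would poll the ready flag and read the same bytes in place, which is the sense in which the message is "accessible from memory" by both processes.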
Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim 17 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Pub. No. 2011/0078798 A1 to Chen et al.

As to claim 17, Chen teaches a method comprising: in a data center (figure 1): a first process, executed by a server (RPC server application 54), accessing a second process (RPC client application 50), executed by a network interface device (Network 160), wherein the second process provides a remote procedure call (RPC) interface for the first process (RPC interfaces) ("… Referring to step 14 in FIG. 1, an RPC program includes RPC interfaces defined in an IDL file. In step 18, the IDL file is compiled using an RPC compiler. The RPC compiler generates client side source files 22, common header files 26, and server side source files 30. The client side source files 22, common header files 26 and a client side main source program 38 are compiled and linked to form an RPC client application 50. Similarly, the server side source files 22, common header files 26 and a server side main source program 42 are compiled and linked to form an RPC server application 54. At this point, the generated RPC client application and server application have no other interaction other than being connected to each other via the network. Further, a user can add business logic for a specified purpose into the client side source files 22 and the server side source files 30, to generate the RPC client application 50 and the RPC server application 54, then these applications can execute the business logic and accomplish the specified purpose …" paragraph 0019).
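To make the compile-and-link flow Chen describes concrete, here is a schematic, hand-written approximation of what an RPC compiler emits from an IDL interface: a client-side stub that marshals the call, and a server-side skeleton that unmarshals it and invokes user-supplied business logic. The Echo names and the byte-buffer "transport" are hypothetical simplifications, not Chen's generated code.

```cpp
// Schematic sketch of RPC-compiler output: a client stub and a server
// skeleton generated from an IDL interface, with business logic added
// separately by the user. Transport is reduced to an in-process buffer.
#include <cstdint>
#include <cstring>
#include <vector>

// --- from the IDL: "interface Echo { int32 add(int32 a, int32 b); }" ---
struct EchoService {                       // server-side business logic hook
    virtual int32_t add(int32_t a, int32_t b) = 0;
    virtual ~EchoService() = default;
};

// --- "generated" client stub: marshals the call into a request buffer ---
std::vector<uint8_t> stub_add(int32_t a, int32_t b) {
    std::vector<uint8_t> req(sizeof a + sizeof b);
    std::memcpy(req.data(), &a, sizeof a);
    std::memcpy(req.data() + sizeof a, &b, sizeof b);
    return req;                            // handed to the transport layer
}

// --- "generated" server skeleton: unmarshals and calls the user's logic ---
int32_t skeleton_add(EchoService& impl, const std::vector<uint8_t>& req) {
    int32_t a, b;
    std::memcpy(&a, req.data(), sizeof a);
    std::memcpy(&b, req.data() + sizeof a, sizeof b);
    return impl.add(a, b);                 // business logic supplied by user
}

// --- user-supplied business logic, as in Chen paragraph 0019 ---
struct EchoImpl : EchoService {
    int32_t add(int32_t a, int32_t b) override { return a + b; }
};

int main() {
    EchoImpl impl;
    return skeleton_add(impl, stub_add(2, 3)) == 5 ? 0 : 1;
}
```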
Claim 17 is rejected under 35 U.S.C. 102(a)(2) as being anticipated by U.S. Pub. No. 2021/0329100 A1 to Knight et al.

As to claim 17, Knight teaches a method comprising: in a data center (Figures 4/7): a first process, executed by a server (Server Application(s) 174), accessing a second process (Client Application 154A/B), executed by a network interface device (Cloud Computing Environment Services 160), wherein the second process provides a remote procedure call (RPC) interface for the first process (gRPC Environment/APIs 208) ("… As illustrated in FIG. 4, in accordance with an embodiment, a microservices environment can enable a client (computer) device 150, having a device hardware 152 and client application 154, to interact with, for example, one or more cloud 160, server 170, 172, 174, database 180, or other cloud or on-premise systems or services… FIG. 7 illustrates the use of a remote procedure call framework, such as a gRPC framework, in a microservices environment, in accordance with an embodiment… As illustrated in FIG. 7, in accordance with an embodiment, a gRPC framework provides an environment or APIs 208 that enables connection of services, with support for advanced features such as load balancing, tracing, health checking, and authentication… Generally described, the gRPC framework uses protocol buffers as an Interface Definition Language (IDL) and message interchange format; and enables definition of a service and methods that can be called remotely. A computer server runs a gRPC server (GrpcServer) to handle client calls, via a local object (stub) at the client, which enables the client application to directly call a method on a server application as if it were a local object. Clients and servers can communicate with each other in a variety of different types of environments and supported languages, such as, for example, Java, Ruby, Android, Swift, or other types of languages …" paragraphs 0031, 0049-0051).
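Knight's stub model corresponds to the standard gRPC C++ client pattern sketched below. It assumes the canonical helloworld.proto from the gRPC examples has been compiled with protoc, so that Greeter, HelloRequest, and HelloReply are generated types; the server address is hypothetical.

```cpp
// Sketch of the client-side gRPC pattern Knight describes: the generated
// local stub lets the client call a server method as if it were local.
// Assumes helloworld.proto has been compiled and a server is listening.
#include <grpcpp/grpcpp.h>
#include "helloworld.grpc.pb.h"

int main() {
    auto channel = grpc::CreateChannel("localhost:50051",
                                       grpc::InsecureChannelCredentials());
    auto stub = helloworld::Greeter::NewStub(channel);

    helloworld::HelloRequest request;
    request.set_name("world");
    helloworld::HelloReply reply;
    grpc::ClientContext context;

    // Looks like a local call; the stub marshals it over the channel.
    grpc::Status status = stub->SayHello(&context, request, &reply);
    return status.ok() ? 0 : 1;
}
```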
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 8-12 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2011/0078798 A1 to Chen et al. in view of U.S. Pub. No. 2010/0083289 A1 to Peng et al.

As to claim 1, Chen teaches an apparatus comprising: a network interface device (Network 160) comprising: packet processing circuitry and circuitry (processing unit) to: execute a first process of partitioned processes (RPC client application 50/RPC server application 54) to provide a remote procedure call (RPC) interface (RPC interfaces) for a second process, wherein the second process of the partitioned processes comprises a business logic (business logic) and the partitioned processes comprise resource and deployment definition are based on an Interface Description Language (IDL) (IDL file) ("… Referring to step 14 in FIG. 1, an RPC program includes RPC interfaces defined in an IDL file. In step 18, the IDL file is compiled using an RPC compiler. The RPC compiler generates client side source files 22, common header files 26, and server side source files 30. The client side source files 22, common header files 26 and a client side main source program 38 are compiled and linked to form an RPC client application 50. Similarly, the server side source files 22, common header files 26 and a server side main source program 42 are compiled and linked to form an RPC server application 54. At this point, the generated RPC client application and server application have no other interaction other than being connected to each other via the network. Further, a user can add business logic for a specified purpose into the client side source files 22 and the server side source files 30, to generate the RPC client application 50 and the RPC server application 54, then these applications can execute the business logic and accomplish the specified purpose …" paragraph 0019).

Chen is silent with reference to a memory allocation. Peng teaches a memory allocation (RPC Channel Memory 206/Step 305) ("… At step 304, where a thread of the task stub 202 is spawned for the new call to the task 130. At step 305, the task stub 202 dynamically allocates an RPC channel in the RPC channel memory 206. That is, each new call to the task 130 spawns a thread of the task stub 202, and each thread of the task stub 202 allocates an RPC channel. Thus, each new active thread of the task 103 is associated with a respective one RPC channel …" paragraph 0028). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Chen with the teaching of Peng because the teaching of Peng would improve the system of Chen by providing a technique for allocating a data structure for storing and transmitting information between processes.
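As a rough model of Peng's paragraph 0028, assuming hypothetical names throughout, each calling thread below allocates its own RPC channel in a shared channel memory, so every active call has a dedicated staging area for its arguments.

```cpp
// Rough sketch of Peng's per-call channel model: each new call runs on its
// own thread, and each thread dynamically allocates one RPC channel in a
// shared channel memory. Names are hypothetical.
#include <cstddef>
#include <mutex>
#include <thread>
#include <unordered_map>
#include <vector>

struct RpcChannel {
    std::vector<std::byte> args;    // input argument values for the call
    std::vector<std::byte> result;  // filled in by the task server
};

class RpcChannelMemory {            // stands in for RPC channel memory 206
    std::mutex mu_;
    std::unordered_map<std::thread::id, RpcChannel> channels_;
public:
    RpcChannel& allocate() {        // one channel per active calling thread
        std::lock_guard<std::mutex> lock(mu_);
        return channels_[std::this_thread::get_id()];
    }
};

RpcChannelMemory g_channel_memory;

void task_stub(std::vector<std::byte> input) {
    RpcChannel& ch = g_channel_memory.allocate();
    ch.args = std::move(input);     // stage arguments for transfer (e.g. DMA)
}

int main() {
    std::thread t1(task_stub, std::vector<std::byte>(16));
    std::thread t2(task_stub, std::vector<std::byte>(32));
    t1.join(); t2.join();
    return 0;
}
```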
As to claim 2, Peng teaches the apparatus of claim 1, wherein to provide the RPC interface, the first process (Processes 212/214) is to utilize one or more accelerator devices (Hardware Accelerator 104) that perform one or more of: data transformation, encryption, reliable transport, load balancing, traffic routing (manage remote procedure calls), secure key storage, authentication, and/or observability ("… The test bench 118 may include one or more processes that call the task 130 in order to communicate with the logic design 118. In the present example, processes 212 and 214 are shown, but the test bench 118 may include more or less processes. The functionality of the task 130 is performed by the task server 208 in the hardware accelerator 104. In the simulation tool 152, the task stub 202 is configured to manage remote procedure calls for communicating with the task server 208. The task stub 202 is defined to be an automatic and time consuming process. In the processes 212 and 214, the simulation tool 152 directs calls to the task 130 to the task stub 202. Each call to the task stub 202 transfers the execution thread of the calling process to the task stub 202. Since multiple processes can call the same task 130, and since a single process can dynamically fork multiple task execution threads, multiple threads of the task stub 202 can be active at the same simulation time. In the present example, the two processes 212 and 214 are actively calling the same task 130 and may become two execution threads of the task stub 202. The simulation tool 152 may manage threading for the task stub 202 …" paragraph 0024). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Chen with the teaching of Peng because the teaching of Peng would improve the system of Chen by providing dedicated hardware components, such as GPUs, TPUs, and FPGAs, that enhance energy efficiency by offloading computational tasks from general-purpose processors to optimize the execution of specific calculations.

As to claim 3, Peng teaches the apparatus of claim 1, wherein the memory allocation comprises one or more of: arena based memory allocation, non-arena based memory allocation, memory allocation near processing cores, processing requirements for security, observability and data transformation, and/or request and completion queues (Step 305) ("… At step 304, where a thread of the task stub 202 is spawned for the new call to the task 130. At step 305, the task stub 202 dynamically allocates an RPC channel in the RPC channel memory 206. That is, each new call to the task 130 spawns a thread of the task stub 202, and each thread of the task stub 202 allocates an RPC channel. Thus, each new active thread of the task 103 is associated with a respective one RPC channel …" paragraph 0028). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Chen with the teaching of Peng because the teaching of Peng would improve the system of Chen by providing a technique for allocating a data structure for storing and transmitting information between processes.

As to claim 4, Peng teaches the apparatus of claim 1, wherein a shepherding layer is to provide communication between the partitioned processes to utilize direct memory access (DMA), shared memory, polling threads, and/or timers (RPC Channel Memory 206/one or more RPC channels, using direct memory access (DMA) hardware channels) ("… At step 404, the run-time controller 204 stores at least the value(s) of the input argument(s) in each un-registered RPC channel to the context memory 210. That is, the task stub 202 may have allocated one or more RPC channels in response to one or more new task calls between synchronization points. In general, the RPC channel memory 206 may include multiple RPC channels for concurrent active threads of the task 130, some of which have been previously registered by the run-time controller 204, and other(s) of which are un-registered (i.e., for new task calls). The transfer between the RPC channel memory 206 and the context memory 210 may be implemented using direct memory access (DMA) hardware channels between the computer 102 and the hardware accelerator 104. DMA techniques are well known in the art …" paragraph 0032). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Chen with the teaching of Peng because the teaching of Peng would improve the system of Chen by providing direct memory access (DMA) that allows hardware subsystems to access main system memory independently of a processor unit.
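The claim 4 limitation, communication through a shepherding layer using shared memory and polling threads, can be pictured with the following minimal sketch; the two sides are modeled as threads, and Mailbox stands in for a shared-memory or DMA-reachable region. All names are hypothetical.

```cpp
// Minimal sketch of a shepherding-layer style transport: a producer posts
// a message into a shared one-slot mailbox, and a polling thread drains it.
#include <atomic>
#include <chrono>
#include <cstdint>
#include <thread>

struct Mailbox {
    std::atomic<bool> full{false};
    uint64_t payload{0};
};

void polling_thread(Mailbox& box, std::atomic<bool>& stop) {
    while (!stop.load(std::memory_order_acquire)) {
        if (box.full.load(std::memory_order_acquire)) {
            uint64_t msg = box.payload;          // consume the message
            (void)msg;
            box.full.store(false, std::memory_order_release);
        }
        std::this_thread::sleep_for(std::chrono::microseconds(10));
    }
}

int main() {
    Mailbox box;
    std::atomic<bool> stop{false};
    std::thread poller(polling_thread, std::ref(box), std::ref(stop));

    box.payload = 42;                            // producer side
    box.full.store(true, std::memory_order_release);

    while (box.full.load(std::memory_order_acquire)) {}  // wait for drain
    stop.store(true, std::memory_order_release);
    poller.join();
    return 0;
}
```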
As to claim 8, Chen teaches the apparatus of claim 1, wherein the network interface device comprises one or more of: a network interface controller (NIC), a remote direct memory access (RDMA)-enabled NIC, SmartNIC, router, switch, forwarding element, infrastructure processing unit (IPU), data processing unit (DPU), accelerator, or network-attached appliance (Network 160).

As to claim 9, Chen teaches a non-transitory computer-readable medium comprising instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: a compiler (RPC compiler) to generate partitioned processes with resource and deployment definition based on Interface Description Language (IDL) (IDL file), wherein a first process of the partitioned processes comprises a business logic (business logic) and a second process of the partitioned processes is to provide a remote procedure call (RPC) interface (RPC interfaces) for the first process (RPC client application 50/RPC server application 54) ("… Referring to step 14 in FIG. 1, an RPC program includes RPC interfaces defined in an IDL file. In step 18, the IDL file is compiled using an RPC compiler. The RPC compiler generates client side source files 22, common header files 26, and server side source files 30. The client side source files 22, common header files 26 and a client side main source program 38 are compiled and linked to form an RPC client application 50. Similarly, the server side source files 22, common header files 26 and a server side main source program 42 are compiled and linked to form an RPC server application 54. At this point, the generated RPC client application and server application have no other interaction other than being connected to each other via the network. Further, a user can add business logic for a specified purpose into the client side source files 22 and the server side source files 30, to generate the RPC client application 50 and the RPC server application 54, then these applications can execute the business logic and accomplish the specified purpose …" paragraph 0019). Chen is silent with reference to a memory allocation. Peng teaches a memory allocation (RPC Channel Memory 206). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Chen with the teaching of Peng because the teaching of Peng would improve the system of Chen by providing a shared RPC channel memory for concurrently sharing data/messages between processes using direct memory access.

As to claims 10-12, see the rejection of claims 2-4, respectively.

As to claim 20, Chen teaches the method of claim 18; however, it is silent with reference to wherein the second process provides a RPC interface for the first process comprises utilizing one or more accelerator devices that perform one or more of: data transformation, encryption, reliable transport, load balancing, traffic routing, secure key storage, authentication, and/or observability. Peng teaches wherein the second process provides a RPC interface for the first process (Processes 212/214) comprises utilizing one or more accelerator devices (Hardware Accelerator 104) that perform one or more of: data transformation, encryption, reliable transport, load balancing, traffic routing (manage remote procedure calls), secure key storage, authentication, and/or observability ("… The test bench 118 may include one or more processes that call the task 130 in order to communicate with the logic design 118. In the present example, processes 212 and 214 are shown, but the test bench 118 may include more or less processes. The functionality of the task 130 is performed by the task server 208 in the hardware accelerator 104. In the simulation tool 152, the task stub 202 is configured to manage remote procedure calls for communicating with the task server 208. The task stub 202 is defined to be an automatic and time consuming process. In the processes 212 and 214, the simulation tool 152 directs calls to the task 130 to the task stub 202. Each call to the task stub 202 transfers the execution thread of the calling process to the task stub 202. Since multiple processes can call the same task 130, and since a single process can dynamically fork multiple task execution threads, multiple threads of the task stub 202 can be active at the same simulation time. In the present example, the two processes 212 and 214 are actively calling the same task 130 and may become two execution threads of the task stub 202. The simulation tool 152 may manage threading for the task stub 202 …" paragraph 0024). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Chen with the teaching of Peng because the teaching of Peng would improve the system of Chen by providing dedicated hardware components, such as GPUs, TPUs, and FPGAs, that enhance energy efficiency by offloading computational tasks from general-purpose processors to optimize the execution of specific calculations.
Claims 5-7, 13-15, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2011/0078798 A1 to Chen et al. in view of U.S. Pub. No. 2010/0083289 A1 to Peng et al. as applied to claims 1, 9 and 13 above, and further in view of WO 2016/051242 A1 to Demchenko et al.

As to claim 5, Chen as modified by Peng teaches the apparatus of claim 1; however, it is silent with reference to wherein the first process and the second process are to share a linearized object structure comprising a C++ object with member data references in one or more contiguous memory blocks. Demchenko teaches wherein the first process and the second process are to share a linearized object structure comprising a C++ object with member data references in one or more contiguous memory blocks (having been serialized, the object may then be transferred to the second process by copying the contiguous block of memory 216 to a contiguous block of memory 226 in the second memory address space 220, the contiguous block of memory 226 being at least as large as the contiguous block of memory 216) ("… In order to render the object susceptible for being transferred to the second memory address space 220, the object must first be serialized, i.e. represented as a contiguous sequence of bytes. According to implementations of the present technology, this is achieved by allocating a contiguous block of memory 216 sufficiently large to contain all of the data of the object, and then copying that data to the contiguous block of memory 216. These one or more copy operations are represented by the two arrows from blocks of memory 212 and 214 to the contiguous block of memory 216… As part of copying the object, the object's references to its constituent Door and Wall objects are updated to refer to the memory addresses within the contiguous block of memory 216 where the Door and Wall objects are stored… Having been serialized, the object may then be transferred to the second process by copying the contiguous block of memory 216 to a contiguous block of memory 226 in the second memory address space 220, the contiguous block of memory 226 being at least as large as the contiguous block of memory 216. In doing so, care must be taken to ensure that the address in the second memory address space 220 (i.e. the address of the contiguous block of memory 226) is the same as the address in the first memory address space 210 where the contiguous block of memory 216 was stored. This preserves the integrity of any absolute references to memory addresses included in the object (e.g. the Building object's references to its constituent Door and Wall objects)… Turning to Figure 3, there is illustrated a first method implementation 300 of the present technology. The method 300 may be carried out, for example, by the processor 112 of computer 110 in the context of the networked computing environment 100 of Figure 1. The method 300 is for transferring an object from a first process to a second process, the first process having a first memory address space and the second process having a second memory address space, the method being executable by a processor of a computing device. The method 300 comprises steps 310 to 330… At step 310, a contiguous block of memory 216 is allocated at an address of the first memory address space 210… At step 320, the object is copied into the contiguous block of memory 216 from one or more other blocks of memory 212, 214 in the first memory address space 210. In some implementations, step 320 may comprise substituting a custom memory allocator for a default memory allocator normally used by a copy function (e.g. a copy method of the object), the custom memory allocator being configured to allocate memory in the contiguous block of memory 216, then executing the copy function in respect of the object, the copy function using the custom memory allocator to allocate memory for a copy of the object in the contiguous block of memory 216. For programs written in C or C++, substitution of the memory allocator can be achieved by overloading the malloc, calloc and new functions so that as new variables are declared and allocated at program execution time, they are written to the contiguous region of memory 216 rather than being distributed across non-contiguous memory locations (e.g. 212, 214) …"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Chen and Peng with the teaching of Demchenko because the teaching of Demchenko would improve the system of Chen and Peng by providing a technique for transferring data between processes using contiguous memory locations so as to allow for seamless data communication.
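Demchenko's serialization step can be illustrated with the short C++ sketch below: a bump allocator stands in for the substituted custom allocator, and the object's member reference is fixed up to point inside the single contiguous block. Building, Arena, and linearize are hypothetical names, not the reference's code.

```cpp
// Illustrative sketch: copy an object and the data its member references
// into one contiguous block, updating the member pointer to an address
// inside that block (cf. Demchenko's step 320 and custom allocator).
#include <cstddef>
#include <cstring>
#include <new>
#include <vector>

struct Building {           // C++ object with a member data reference
    std::size_t name_len;
    const char* name;       // points outside the block before linearization
};

struct Arena {              // bump allocator over one contiguous block
    std::vector<char> block;
    explicit Arena(std::size_t cap) { block.reserve(cap); }  // no realloc
    char* alloc(std::size_t n) {
        std::size_t off = block.size();
        block.resize(off + n);          // stays within reserved capacity
        return block.data() + off;
    }
};

Building* linearize(const Building& src, Arena& arena) {
    auto* dst = new (arena.alloc(sizeof(Building))) Building{};
    char* name_copy = arena.alloc(src.name_len);
    std::memcpy(name_copy, src.name, src.name_len);
    dst->name_len = src.name_len;
    dst->name = name_copy;              // reference now internal to the block
    return dst;
}

int main() {
    const char text[] = "front door";
    Building b{sizeof text, text};
    Arena arena(sizeof(Building) + sizeof text);
    Building* linear = linearize(b, arena);
    // arena.block now holds the object and its member data contiguously.
    // Note that dst->name is an absolute address, which is why Demchenko
    // requires the block to occupy the same address in the receiving space.
    return linear->name_len == sizeof text ? 0 : 1;
}
```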
As to claim 6, Chen as modified by Peng teaches the apparatus of claim 5; however, it is silent with reference to wherein the first service is to cause a network interface device to linearize at least one object and store the linearized at least one object into memory for access by the second service. Demchenko teaches wherein the first service is to cause a network interface device to linearize at least one object and store the linearized at least one object into memory for access by the second service (having been serialized, the object may then be transferred to the second process by copying the contiguous block of memory 216 to a contiguous block of memory 226 in the second memory address space 220, the contiguous block of memory 226 being at least as large as the contiguous block of memory 216) ("… In order to render the object susceptible for being transferred to the second memory address space 220, the object must first be serialized, i.e. represented as a contiguous sequence of bytes. According to implementations of the present technology, this is achieved by allocating a contiguous block of memory 216 sufficiently large to contain all of the data of the object, and then copying that data to the contiguous block of memory 216. These one or more copy operations are represented by the two arrows from blocks of memory 212 and 214 to the contiguous block of memory 216… As part of copying the object, the object's references to its constituent Door and Wall objects are updated to refer to the memory addresses within the contiguous block of memory 216 where the Door and Wall objects are stored… Having been serialized, the object may then be transferred to the second process by copying the contiguous block of memory 216 to a contiguous block of memory 226 in the second memory address space 220, the contiguous block of memory 226 being at least as large as the contiguous block of memory 216. In doing so, care must be taken to ensure that the address in the second memory address space 220 (i.e. the address of the contiguous block of memory 226) is the same as the address in the first memory address space 210 where the contiguous block of memory 216 was stored. This preserves the integrity of any absolute references to memory addresses included in the object (e.g. the Building object's references to its constituent Door and Wall objects)… Turning to Figure 3, there is illustrated a first method implementation 300 of the present technology. The method 300 may be carried out, for example, by the processor 112 of computer 110 in the context of the networked computing environment 100 of Figure 1. The method 300 is for transferring an object from a first process to a second process, the first process having a first memory address space and the second process having a second memory address space, the method being executable by a processor of a computing device. The method 300 comprises steps 310 to 330 …"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Chen and Peng with the teaching of Demchenko because the teaching of Demchenko would improve the system of Chen and Peng by providing a technique for transferring data between processes using contiguous memory locations so as to allow for seamless data communication.
As to claim 7, Chen as modified by Peng teaches the apparatus of claim 5; however, it is silent with reference to circuitry to perform linearization of the at least one object and transmit the linearized at least one object to memory accessible to the first process. Demchenko teaches circuitry to perform linearization of the at least one object and transmit the linearized at least one object to memory accessible to the first process (having been serialized, the object may then be transferred to the second process by copying the contiguous block of memory 216 to a contiguous block of memory 226 in the second memory address space 220, the contiguous block of memory 226 being at least as large as the contiguous block of memory 216) ("… In order to render the object susceptible for being transferred to the second memory address space 220, the object must first be serialized, i.e. represented as a contiguous sequence of bytes. According to implementations of the present technology, this is achieved by allocating a contiguous block of memory 216 sufficiently large to contain all of the data of the object, and then copying that data to the contiguous block of memory 216. These one or more copy operations are represented by the two arrows from blocks of memory 212 and 214 to the contiguous block of memory 216… As part of copying the object, the object's references to its constituent Door and Wall objects are updated to refer to the memory addresses within the contiguous block of memory 216 where the Door and Wall objects are stored… Having been serialized, the object may then be transferred to the second process by copying the contiguous block of memory 216 to a contiguous block of memory 226 in the second memory address space 220, the contiguous block of memory 226 being at least as large as the contiguous block of memory 216. In doing so, care must be taken to ensure that the address in the second memory address space 220 (i.e. the address of the contiguous block of memory 226) is the same as the address in the first memory address space 210 where the contiguous block of memory 216 was stored. This preserves the integrity of any absolute references to memory addresses included in the object (e.g. the Building object's references to its constituent Door and Wall objects)… Turning to Figure 3, there is illustrated a first method implementation 300 of the present technology. The method 300 may be carried out, for example, by the processor 112 of computer 110 in the context of the networked computing environment 100 of Figure 1. The method 300 is for transferring an object from a first process to a second process, the first process having a first memory address space and the second process having a second memory address space, the method being executable by a processor of a computing device. The method 300 comprises steps 310 to 330 …"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Chen and Peng with the teaching of Demchenko because the teaching of Demchenko would improve the system of Chen and Peng by providing a technique for transferring data between processes using contiguous memory locations so as to allow for seamless data communication.

As to claims 13-15, see the rejection of claims 5-7, respectively.
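The address-preservation point quoted above, that absolute references survive only if the block occupies the same address in both address spaces, can be demonstrated with a fork-based sketch: a shared anonymous mapping is inherited at the same address, so an internal pointer remains valid in the "second process". This illustrates the cited principle under stated assumptions; it is not the reference's implementation.

```cpp
// Sketch: when the contiguous block appears at the same address in both
// address spaces, absolute pointers inside the block stay valid. Here the
// mapping created before fork() lands at the same address in parent and
// child. Minimal error handling; hypothetical layout.
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstring>

struct Node {
    const char* text;   // absolute pointer into the same block
    char buf[32];
};

int main() {
    void* mem = mmap(nullptr, sizeof(Node), PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) return 1;

    Node* node = static_cast<Node*>(mem);
    std::strcpy(node->buf, "door");
    node->text = node->buf;             // internal absolute reference

    pid_t pid = fork();
    if (pid == 0) {
        // Child ("second process"): the block sits at the same address,
        // so the absolute pointer still refers to the right bytes.
        _exit(std::strcmp(node->text, "door") == 0 ? 0 : 1);
    }
    int status = 0;
    waitpid(pid, &status, 0);
    return WEXITSTATUS(status);
}
```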
As to claim 18, Peng teaches the method of claim 17, comprising: allocating memory to share at least one RPC message as at least one formatted object among the first and second processes (Step 305) ("… At step 304, where a thread of the task stub 202 is spawned for the new call to the task 130. At step 305, the task stub 202 dynamically allocates an RPC channel in the RPC channel memory 206. That is, each new call to the task 130 spawns a thread of the task stub 202, and each thread of the task stub 202 allocates an RPC channel. Thus, each new active thread of the task 103 is associated with a respective one RPC channel …" paragraph 0028). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Chen with the teaching of Peng because the teaching of Peng would improve the system of Chen by providing a technique for allocating a data structure for storing and transmitting information between processes. Demchenko teaches wherein the at least one formatted object comprises a linearized object structure comprising a C++ object with member data references in one or more contiguous memory blocks (having been serialized, the object may then be transferred to the second process by copying the contiguous block of memory 216 to a contiguous block of memory 226 in the second memory address space 220, the contiguous block of memory 226 being at least as large as the contiguous block of memory 216) ("… In order to render the object susceptible for being transferred to the second memory address space 220, the object must first be serialized, i.e. represented as a contiguous sequence of bytes. According to implementations of the present technology, this is achieved by allocating a contiguous block of memory 216 sufficiently large to contain all of the data of the object, and then copying that data to the contiguous block of memory 216.
These one or more copy operations are represented by the two arrows from blocks of memory 212 and 214 to the contiguous block of memory 216… As part of copying the object, the object's references to its constituent Door and Wall objects are updated to refer to the memory addresses within the contiguous block of memory 216 where the Door and Wall objects are stored… Having been serialized, the object may then be transferred to the second process by copying the contiguous block of memory 216 to a contiguous block of memory 226 in the second memory address space 220, the contiguous block of memory 226 being at least as large as the contiguous block of memory 216. In doing so, care must be taken to ensure that the address in the second memory address space 220 (i.e. the address of the contiguous block of memory 226) is the same as the address in the first memory address space 210 where the contiguous block of memory 216 was stored. This preserves the integrity of any absolute references to memory addresses included in the object (e.g. the Building object's references to its constituent Door and Wall objects)… Turning to Figure 3, there is illustrated a first method implementation 300 of the present technology. The method 300 may be carried out, for example, by the processor 112 of computer 110 in the context of the networked computing environment 100 of Figure 1. The method 300 is for transferring an object from a first process to a second process, the first process having a first memory address space and the second process having a second memory address space, the method being executable by a processor of a computing device. The method 300 comprises steps 310 to 330… At step 310, a contiguous block of memory 216 is allocated at an address of the first memory address space 210… At step 320, the object is copied into the contiguous block of memory 216 from one or more other blocks of memory 212, 214 in the first memory address space 210. In some implementations, step 320 may comprise substituting a custom memory allocator for a default memory allocator normally used by a copy function (e.g. a copy method of the object), the custom memory allocator being configured to allocate memory in the contiguous block of memory 216, then executing the copy function in respect of the object, the copy function using the custom memory allocator to allocate memory for a copy of the object in the contiguous block of memory 216 …").