Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Examiner Notes
Examiner cites particular columns and line numbers in the references as applied to the claims below for convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references cited in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/21/2025 has been entered.
Response to Amendment
The Amendment filed 11/21/2025 has been entered. The amendments to the claims have overcome all objections set forth in the Final Office Action mailed 8/21/2025. Claims 1-35 remain pending in the present application.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 5-12, 18-20, 26-28, 33, and 35 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5, 8-10, 17-18, 25-26, 28, 33, and 35 of copending Application No. 17/720,196 (the reference application) in view of Francini et al. (U.S. Pub. No. 2018/0159965), hereinafter Francini. The claims of the instant application and the claims of the reference application are compared in Table 1 below, with limitations not taught by the reference claims in bold. Unless indicated otherwise, the limitations of the claims of the instant application have been compared to the limitations of the claim of the same number in the reference application.
Regarding claims 1, 10, 18, and 26 of the instant application, claims 1, 10, 18, and 26 of the reference application substantially recite all the limitations of the claims except cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols. However, Francini teaches to cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols (FIG. 1, buffer 123 = “storage”; FIG. 2B, transport layer 224S = “first wireless computing resource”; [0051] – the transport layer 224S may use protocols such as TCP; [0052] – “The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120.”; [0022] – “The buffer 123 is configured to store, in various forms as may be provided or supported at various communication layers of communication protocol stack 124, both data communicated or intended for communication between the MHD 110 and the WAD 120 and data communicated or intended for communication between the WAD 120 and the server 130.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of Francini to improve communication between devices using transport layer connections by supporting use of a networked transport layer socket with an API that communicates data between the networked transport layer socket and the application layer of a communication device via the link layer and physical layer of a communication protocol stack (Francini: [0006], [0063]-[0065] and [0068]).
Regarding claim 6, claim 1 of Application 17/720,196 in view of Francini teaches substantially all of the limitations except that at least one of the one or more transport protocols is a transport layer protocol; and to cause the data to be provided from the storage comprises sending the data from the storage. However, Francini teaches at least one of the one or more transport protocols is a transport layer protocol (Francini: [0052] – “The server 130 transmits the application data to the WAD 120 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection). The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120.”); and to cause the data to be provided from the storage comprises sending the data from the storage ([0052] – “The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120. The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222C of the WAD 120. The reliable link layer 222C of the WAD 120 receives the primitive messages”. The reliable link layer 222C receives the application data in the primitive messages, i.e., the client socket API sends the application data (“the data from the storage”), within primitive messages, to the reliable link layer 222C.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of Francini to improve communication between devices using transport layer connections by supporting use of a networked transport layer socket with an API that communicates data between the networked transport layer socket and the application layer of a communication device via the link layer and physical layer of a communication protocol stack (Francini: [0006], [0063]-[0065] and [0068]).
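For illustration only (not part of the record), the data path Francini describes in [0052] — the client socket API receiving application data from the networked transport layer socket, placing it into primitive messages, and passing those messages to the reliable link layer — can be sketched as follows. All function and field names are hypothetical and chosen only to mirror the cited passage.

```python
# Illustrative sketch of the Francini [0052] data path; names hypothetical.

def receive_from_transport_socket(socket_data: bytes) -> bytes:
    """Application data arriving over the transport layer connection (e.g., TCP)."""
    return socket_data

def wrap_in_primitive(data: bytes) -> dict:
    """Client socket API places the application data into a primitive message."""
    return {"type": "DATA_PRIMITIVE", "payload": data}

def pass_to_link_layer(primitive: dict, link_layer_queue: list) -> None:
    """Reliable link layer receives the primitive message for further handling."""
    link_layer_queue.append(primitive)

# Walk the data through the three steps of the cited passage.
link_layer_queue = []
app_data = receive_from_transport_socket(b"application data")
pass_to_link_layer(wrap_in_primitive(app_data), link_layer_queue)
```

The sketch only traces the hand-off order (transport socket, then API, then link layer); buffering and retransmission behavior of the actual reference are omitted.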
Regarding claim 15, claim 9 of Application 17/720,196 teaches substantially all of the limitations except that information is to be transferred using an application. However, Francini teaches information is to be transferred using an application ([0031] – “the client socket API 215 is an application programming interface that is configured to allow application programs to control and use network sockets."; [0052] – “In FIG. 2B, in a direction of transmission from the application layer 236 of the server 130 toward the application layer 216 of the MHD 110, communication of the application data of the application layer 236 of the server 230 may be performed as follows. The server 130 transmits the application data to the WAD 120 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection). The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120. The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222C of the WAD 120. The reliable link layer 222C of the WAD 120 receives the primitive messages, places the primitive messages into link layer data structures supported by the reliable link layer 222C").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of Francini to improve communication between devices using transport layer connections by supporting use of a networked transport layer socket with an API that communicates data between the networked transport layer socket and the application layer of a communication device via the link layer and physical layer of a communication protocol stack (Francini: [0006], [0063]-[0065] and [0068]). Additionally, it would have been obvious to one of ordinary skill in the art to have applied the functions performed by the processor of claim 9 of Application 17/720,196 to the system of claim 15.
Claims 5, 8-9, 11-12, 19-20, 27-28, 33, and 35 recite additional limitations that are substantially the same or identical to limitations recited in claims 1, 5, 8-10, 17-18, 25-26, 28, 33, and 35 of the reference application as indicated in Table 1, and are rejected as well.
This is a provisional nonstatutory double patenting rejection.
Claims 2 and 24 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 2 and 18-19 of copending Application No. 17/720,196 (the reference application) in view of Francini, and further in view of Fuente et al. (U.S. Pub. No. 2005/0265370), hereinafter Fuente. The claims of the instant application and the claims of the reference application are compared in Table 1 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claim 2 of the instant application, claim 2 of the reference application teaches all of the limitations except the API causes the first wireless computing resource to: send the data stored in the storage to the second wireless computing resource; and to decrement a reference counter. However, Francini teaches the API causes the first wireless computing resource to: send the data stored in the storage to the second wireless computing resource ([0052] – “The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120. The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222C of the WAD 120.”; [0022] – “The buffer 123 is configured to store, in various forms as may be provided or supported at various communication layers of communication protocol stack 124, both data communicated or intended for communication between the MHD 110 and the WAD 120 and data communicated or intended for communication between the WAD 120 and the server 130.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of Francini to improve communication between devices using transport layer connections by supporting use of a networked transport layer socket with an API that communicates data between the networked transport layer socket and the application layer of a communication device via the link layer and physical layer of a communication protocol stack (Francini: [0006], [0063]-[0065] and [0068]).
Claim 2 of the reference application in view of Francini teaches all of the limitations except to decrement a reference counter.
However, Fuente teaches to decrement a reference counter ([0025]-[0026] – “Counter (108) maintains a count of the number of references to the buffer memory (104) by the accessors (106, 110), the count being incremented on each reference and decremented on completion of each accessor's data transmission. [...] Memory manager (114) is adapted to lock buffer memory during write activity, to permit read access to the buffer memory (104) by accessors (106, 110) and to return the buffer memory to a free buffer pool when counter (108) signals that the count has reached zero.").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of Fuente in order to allow the buffer memory to be allocated, pinned and freed without preventing read accesses by multiple accessors that transmit the data stored, which further allows for rapid retransmission of the data stored in the buffer when a transmission fails (Fuente: [0032]).
Regarding claim 24 of the instant application, claims 18-19 of the reference application teach all of the limitations except to decrement a reference counter and if the decremented reference counter holds a value of zero, the allocated buffer is deselected. However, Fuente teaches to decrement a reference counter and if the decremented reference counter holds a value of zero, the allocated buffer is deselected ([0025]-[0026] - "Counter (108) maintains a count of the number of references to the buffer memory (104) by the accessors (106, 110), the count being incremented on each reference and decremented on completion of each accessor's data transmission. [...] Memory manager (114) is adapted to lock buffer memory during write activity, to permit read access to the buffer memory (104) by accessors (106, 110) and to return the buffer memory to a free buffer pool when counter (108) signals that the count has reached zero."; [0032] - "the counter of the preferred embodiment allows the buffer memory to be allocated, "pinned", and freed"; [0042] - "At step (226), a further test is performed to determine whether the count has reached zero or not. If it has not reached zero, this part of the logic process returns to step (228) to be triggered by the next completion. If on any iteration, the count is determined to have reached zero, the memory manager (114) releases the buffer memory (104).").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of Fuente in order to allow the buffer memory to be allocated, pinned and freed without preventing read accesses by multiple accessors that transmit the data stored, which further allows for rapid retransmission of the data stored in the buffer when a transmission fails (Fuente: [0032]).
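For illustration only (not part of the record), the reference-counting scheme Fuente describes in [0025]-[0026] and [0042] — increment the count on each reference, decrement on completion of each accessor's transmission, and return the buffer to a free pool when the count reaches zero — can be sketched as follows. All class and method names are hypothetical.

```python
# Illustrative sketch of the Fuente reference-counted buffer; names hypothetical.

class Buffer:
    """A buffer whose lifetime is governed by a reference count."""
    def __init__(self, pool):
        self._pool = pool
        self._refs = 0

    def acquire(self):
        self._refs += 1  # count incremented on each accessor reference

    def release(self):
        self._refs -= 1  # count decremented on completion of a transmission
        if self._refs == 0:
            self._pool.free(self)  # count reached zero: return to free pool

class BufferPool:
    """Free buffer pool to which fully released buffers are returned."""
    def __init__(self):
        self.free_buffers = []

    def allocate(self):
        return self.free_buffers.pop() if self.free_buffers else Buffer(self)

    def free(self, buf):
        self.free_buffers.append(buf)

# Two accessors reference the same buffer; it is freed only after both finish.
pool = BufferPool()
buf = pool.allocate()
buf.acquire()   # accessor A references the buffer (count = 1)
buf.acquire()   # accessor B references the buffer (count = 2)
buf.release()   # A completes its transmission (count = 1; buffer still held)
buf.release()   # B completes (count = 0; buffer returns to the free pool)
```

This mirrors the cited behavior that the buffer is released only when the counter signals zero, which is also the "deselection" condition addressed for claim 24.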
This is a provisional nonstatutory double patenting rejection.
Claims 3, 7, 13, 17, 22, 23 and 31 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 3, 5, 9-10, 18, and 26 of copending Application No. 17/720,196 (the reference application) in view of Francini, and further in view of Sen et al. (U.S. Pub. No. 2020/0218684), hereinafter Sen. The claims of the instant application and the claims of the reference application are compared in Table 1 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claim 3 of the instant application, claim 3 of the reference application in view of Francini teaches all the limitations except the information is to be transferred based at least on a transport layer used to associate functions of the protocols. However, Sen teaches the information is to be transferred based at least on a transport layer used to associate functions of the protocols ([0033] – “bindings are used to link the device libraries and transport definitions with device drivers and transport protocols, respectively. Bindings involve a process or technique of connecting two or more data elements or entities together. Bindings allows the accelerator library to be bound to multiple protocols, including one or more IX protocols and one or more transport protocols.”; [0034] – “The transport definition is independent of the transport protocols that are used to carry data to remote accelerator resources. […] Each of the transport layers (e.g., transport layers 823, 824 of FIG. 8) may include primitives such as read/write data from/to device; process device command; get/set device properties; and event subscription and notification. The transport layers (e.g., transport layers 823, 824 of FIG. 8) may also have mechanisms to allow scalable and low latency communication, such as […] protocol independent format definition allowing for multiple protocol bindings.” The transport layers 823, 824 have protocol-independent format definitions which use bindings (“a set of associations that correlate” the transport protocols) to connect the transport protocol-independent transport definition to the multiple different transport protocols.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of Sen to use an API that abstracts details of the underlying transport protocols used by a transport layer and translates to a specific transport protocol, in order to provide the benefit of seamless and transparent access to remote resources (Sen: [0034]).
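For illustration only (not part of the record), the binding mechanism Sen describes in [0033]-[0034] — a protocol-independent transport definition linked ("bound") to multiple concrete transport protocols, with the API dispatching through the selected binding — can be sketched as follows. All names are hypothetical.

```python
# Illustrative sketch of Sen's protocol bindings; names hypothetical.

class TransportDefinition:
    """Protocol-independent primitives exposed through one API."""
    def __init__(self):
        self._bindings = {}  # protocol name -> concrete send function

    def bind(self, protocol, send_fn):
        # Associate (bind) the transport definition with a transport protocol.
        self._bindings[protocol] = send_fn

    def send(self, protocol, payload):
        # Dispatch through whichever protocol binding is selected.
        return self._bindings[protocol](payload)

# One definition bound to two transport protocols; callers see the same API.
transport = TransportDefinition()
transport.bind("tcp", lambda p: ("tcp", p))
transport.bind("rdma", lambda p: ("rdma", p))
result = transport.send("rdma", "device-command")
```

The point of the sketch is only the "set of associations that correlate" the protocol-independent definition with multiple protocols; the actual primitives (read/write, device command, properties, events) of the reference are elided.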
Regarding claim 7 of the instant application, claim 5 of the reference application in view of Francini teaches all the limitations except to perform an operation related to the one or more different transport protocols based at least on a corresponding operation of the one or more transport protocols. However, Sen teaches to perform an operation related to the one or more different transport protocols based at least on a corresponding operation of the one or more transport protocols ([0059] – “Referring now to FIG. 6, in use, the computing platform 102 may execute a process 600 for sending an accelerator message to a hardware accelerator 212 or 312. The process 600 begins at operation 602, in which an application on the computing platform 102 determines a message to be sent to a hardware accelerator 212 or 312. […] At operation 604, the application passes the command or function to the accelerator manager 402.”; [0062] – “Referring back to operation 606, if the accelerator manager 402 is to pass the message to a remote hardware accelerator 312, the process 600 proceeds to operation 612, in which the computing platform 102 generates a command capsule based on the message received from the application. […] the command capsule may encapsulate the message in a protocol different from a protocol used by the message.”; [0063] – “At operation 616, the computing platform 102 sends the command capsule to the accelerator sled 104. The computing platform 102 may use any suitable communication protocol, such as TCP, RDMA, RoCE, RoCEvl , RoCEv2, iWARP, etc.” An application on computing platform 102 (a “first wireless computing resource”) passes a message (“corresponding operation”) which uses a protocol (“one or more transport protocols”) – as described in [0062] – to an accelerator manager 402 via an API. 
The accelerator manager causes the computing platform 102 (the “first wireless computing resource”) to generate (“one or more operations”) a command capsule for the message using a different protocol (“different transport protocols”) and to send the command capsule which uses the different protocol to the accelerator sled 104 (a “second wireless computing resource”).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of Sen to use an API that abstracts details of the underlying transport protocols used by a transport layer and translates to a specific transport protocol, in order to provide the benefit of seamless and transparent access to remote resources (Sen: [0034]).
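For illustration only (not part of the record), the encapsulation step Sen describes in [0062]-[0063] — a message received under one protocol being placed into a command capsule that "may encapsulate the message in a protocol different from a protocol used by the message" before transmission to the remote accelerator — can be sketched as follows. All names and fields are hypothetical.

```python
# Illustrative sketch of Sen's command-capsule step; names hypothetical.

def make_command_capsule(message: dict, outer_protocol: str) -> dict:
    """Encapsulate the application's message in a (possibly different) protocol."""
    return {
        "protocol": outer_protocol,   # protocol used on the wire to the sled
        "encapsulated": message,      # original message carried as the payload
    }

# A message using one protocol is wrapped for transport under another.
message = {"protocol": "tcp", "command": "run_kernel"}
capsule = make_command_capsule(message, "rdma")
```

This corresponds to the mapping relied on for claim 7: the operation under the "different" transport protocol (the capsule) is generated based on the corresponding operation under the original protocol (the message).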
Regarding claim 13 of the instant application, claim 10 of the reference application in view of Francini teaches all the limitations except wherein an application associated with the first wireless computing resource calls the API to obtain information regarding the one or more different transport protocols supported by the second wireless computing resource. However, Sen teaches an application ([0079] – “the initiator 822 is an application hosted by the VM 815”) associated with the first wireless computing resource (FIG. 8, computing platform 102; [0022] – “The network 106 (also referred to as a "network fabric 106" or the like) may be embodied as any type of network capable of communicatively connecting the computing platforms 102 and the accelerator sleds 104. […] wireless”) calls the API ([0085] – “the initiator 822 hosted by a VM 815 of a computing platform 102 executes process 900 for establishing a communication session 830 with target accelerator resource(s) via the accelerator manager 502. […] the various messages discussed a being communicated between the initiator 822 and the accelerator resource(s) may be performed according to processes 600-700 of FIGS. 6-7, respectfully.”; [0059] – “the computing platform 102 may execute a process 600 for sending an accelerator message to a hardware accelerator 212 or 312. The process 600 begins at operation 602, in which an application on the computing platform 102 determines a message to be sent to a hardware accelerator 212 or 312. […] At operation 604, the application passes the command or function to the accelerator manager 402.
In the illustrative embodiment, the application passes the command or function with use of an application programming interface”; [0088] – “At operation 906, the initiator 822 generates and sends, to the target accelerator resource(s), a connection establishment request message for a primary connection 831.” Sending a message, e.g., the connection establishment request message, to a target hardware accelerator resource involves an application, e.g., the initiator, passing the message to an API (“calls the API”) as described in process 600.) to obtain information regarding the one or more different transport protocols ([0089] – “In response to the connection establishment request message for the primary connection 831, at operation 906, the initiator 822 receives a connection establishment response message for the primary connection 831 from the target accelerator resource(s). For example, where an RDMA-based protocol is used for the primary connection 831, such as RoCEv2, the target accelerator resource(s) may encapsulate an RDMA acknowledgement (ACK) packet within an Ethernet/IP/UDP packet (including either IPv4 or IPv6) and including suitable destination and source addresses based on the connection establishment request message. In embodiments, the connection establishment response message for the primary connection 831 includes a session ID, which may be included in the header or payload section of the message. The session ID is generated by the target accelerator resource(s) and is discussed in more detail infra. Other suitable information may be included in the connection establishment response message, such as an accelerator resource identifier and/or other protocol specific information.”) supported by the second wireless computing resource (FIG.
8, accelerator sled 104 with accelerator(s) 312 which is the “target hardware accelerator resource(s)”; [0033] – “the accelerator library also uses individual transport definitions to abstract transport-specific details into an API that provides the necessary translation to transport-specific interfaces via corresponding transport protocols. […] the transport definition is used to connect the application (e.g., application 820 of FIG. 8) to a remote hardware accelerator 312 resident in the accelerator sled 104. The transport definition allows applications to send commands and data to a target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) and receive data from the target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3)”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of Sen to use an initiator application that establishes multiple connections with a remote accelerator using different transport protocols over a wireless network, which achieves high-availability goals even when one transport protocol is temporarily unusable (Sen: [0073]).
Regarding claim 17 of the instant application, claim 10 of the reference application in view of Francini teaches all the limitations except wherein the first wireless computing resource is a virtual device. However, Sen teaches wherein the first wireless computing resource is a virtual device ([0021] – “an application or virtual machine (VM) being executed by a processor 202 of the computing platform 102 (see FIG. 2) may access a hardware accelerator 212 or 312 (see FIGS. 2 and 3)”; [0058] – “The accelerator virtualizer 508 is configured to present one physical hardware accelerator 312 as two or more virtual hardware accelerators 312.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of Sen to provide scalability, flexibility, manageability, and utilization, which results in lower operating or overhead costs and/or allow multiple applications to use the same physical hardware accelerator (Sen: [0074] and [0043]).
Regarding claim 22 of the instant application, claim 18 of the reference application in view of Francini teaches all the limitations except wherein the one or more processors are one or more graphics processing units (GPUs). However, Sen teaches wherein the one or more processors are one or more graphics processing units (GPUs) ([0025] – “the processor(s) 202 may include Intel® Core™ based processor(s) and/or Xeon® processor(s); Advanced Micro Devices (AMD) Zen® Core Architecture processor(s), such as Epyc® processor(s), Opteron™ series Accelerated Processing Units (APUs), and/or MxGPUs").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of Sen to perform computing tasks more quickly and efficiently (Sen: [0002], [0016]).
Regarding claim 23 of the instant application, claim 9 of the reference application in view of Francini teaches all the limitations except the API causes the transfer of information without modification to the first wireless computing resource. However, Sen teaches the API causes the transfer of information without modification to the first wireless computing resource ([0047] – “in some embodiments, an application may interact with an accelerator manager 402 of a computing platform 102 a first time and a second time. In such an example, for the first interaction, the accelerator manager 402 may facilitate an interface with a local hardware accelerator 212 and, for the second interaction, the accelerator manager 402 may facilitate an interface with a remote hardware accelerator 312, without any change or requirements in how the application interacts with the accelerator manager 402 between the first interaction and the second interaction."; [0059] – “At operation 604, the application passes the command or function to the accelerator manager 402. In the illustrative embodiment, the application passes the command or function with use of an application programming interface such that the details of communication with the hardware accelerator 212 or 312 are hidden from the associated application”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of Sen to use an API that abstracts details of the underlying transport protocols used by a transport layer and translates to a specific transport protocol, in order to provide the benefit of seamless and transparent access to remote resources (Sen: [0034]). Additionally, it would have been obvious to one of ordinary skill in the art to have applied the functions performed by the processor of claim 9 of Application 17/720,196 to the non-transitory computer-readable medium of claim 23.
Regarding claim 31 of the instant application, claim 26 of the reference application in view of Francini teaches all the limitations except transferring the information using an application that maps the API to an operation related to an operation of the one or more different transport protocols, wherein the application is implemented, at least in part, on a hardware accelerator. However, Sen teaches transferring the information using an application that maps the API to an operation related to an operation of the one or more different transport protocols ([0033] – “The accelerator application interface includes an accelerator library used to access accelerator resources. The accelerator library may be an accelerator specific run-time library (e.g., Open Computing Language (OpenCL), CUDA, Open Programmable Acceleration Engine (OPAE) API, or the like) that provides mapping of application constructs on to a hardware accelerator context. [...] the accelerator library also uses individual transport definitions to abstract transport-specific details into an API that provides the necessary translation to transport-specific interfaces via corresponding transport protocols. [...] Similarly, the transport definition is used to connect the application (e.g., application 820 of FIG. 8) to a remote hardware accelerator 312 resident in the accelerator sled 104. The transport definition allows applications to send commands and data to a target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) and receive data from the target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) without requiring knowledge of the underlying transport protocols (e.g., transport layers 823, 824 of FIG. 8) being used." The API for accessing an accelerator (provided by accelerator manager 402 - see [0047]) translates ("maps") to transport-specific details. [0059] – “Referring now to FIG.
6, in use, the computing platform 102 may execute a process 600 for sending an accelerator message to a hardware accelerator 212 or 312. The process 600 begins at operation 602, in which an application on the computing platform 102 determines a message to be sent to a hardware accelerator 212 or 312. [...] At operation 604, the application passes the command or function to the accelerator manager 402. In the illustrative embodiment, the application passes the command or function with use of an application programming interface such that the details of communication with the hardware accelerator 212 or 312 are hidden from the associated application."; [0062] – “Referring back to operation 606, if the accelerator manager 402 is to pass the message to a remote hardware accelerator 312, the process 600 proceeds to operation 612, in which the computing platform 102 generates a command capsule based on the message received from the application. [...] the command capsule may rearrange or otherwise reorganize the message in preparation for being sent to the accelerator sled 104. In some embodiments the command capsule may encapsulate the message in a protocol different from a protocol used by the message." The accelerator manager (“application”) generates a command capsule corresponding to a transport protocol used by the accelerator sled (“the one or more different transport protocols”), which may be different from the protocol used by the message from the application passed to the accelerator manager using the API.), wherein the application is implemented, at least in part, on a hardware accelerator ([0047] – “The accelerator manager 402 is configured to manage accelerators that an application executed by the processor 202 may interface with. 
In some embodiments, the accelerator manager 402 may implement an application programming interface for accessing an accelerator"; [0054] - "The accelerator manager 502 is configured to manage the hardware accelerators 312 on the accelerator sled 104 and to allow remote interfacing with the hardware accelerators 312 through the host fabric interface 310. The accelerator manager 502 may process message capsules received from and sent to the computing platform 102 and may, based on the content of the message capsules, execute the relevant necessary operations to interface with the hardware accelerators 312, such as reading data from the hardware accelerator 312, writing data to the hardware accelerator 312, executing commands on the hardware accelerator 312, getting and setting properties of the hardware accelerator 312, receiving and processing events or notifications from the acceleration device 312 (such as sending a message capsule to send an interrupt or set a semaphore on the computing platform 102), etc." The "application" = accelerator manager 402/502, accelerator manager 502 is implemented on accelerator sled 104. Accelerator manager provides an API for an application to access an accelerator as indicated in [0047]. Additionally, [0046] states environment 400, including accelerator manager 402, may be embodied on any component(s) of computing platform 102, which may include local hardware accelerator 212 as shown in FIG. 2.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of Sen to provide the benefit of seamless and transparent access to other computing resources (Sen: [0034]), and perform computing tasks faster and more efficiently (Sen: [0002] and [0016]).
This is a provisional nonstatutory double patenting rejection.
Claims 4, 21, and 30 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 6, 23, and 33 of copending Application No. 17/720,196 (the reference application) in view of Francini, and further in view of Hyder et al. (U.S. Patent No. 5,983,274), hereinafter Hyder. The claims of the instant application and the claims of the reference application are compared in Table 1 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claims 4 and 21, claims 6 and 23 of the reference application in view of Francini teach all the limitations except one or more drivers of a transport layer, the one or more drivers used to operate the first and second wireless computing resources. However, Hyder teaches one or more drivers of a transport layer (Claim 1 – “a protocol driver; a device driver; an integrating driver that interfaces with the protocol driver and the device driver using defined APIs”; Col. 2, lines 13-20 – “Because there are different types of transport protocols developed over time by different entities for different reasons, there may be different types of transport protocol drivers acting as software components running on a single host computer system in order to provide the necessary networking capabilities for a given installation. Some common transport protocols include TCP/IP, IPX, AppleTalk®, and others.”), the one or more drivers used to operate the first and second wireless computing resources (Col. 1, lines 60-62 – “link layer implemented by a network card device driver, and the transport and network layers implemented as a transport protocol driver”; Col. 7, line 56-Col. 8, line 3 – “For sending network data from the upper layers 106, the transport protocol driver 100 will allocate a packet data structure from the integrating component 102, fill the data structure with network information and control information according to the present invention, and send it down through the integrating component 102 to the network card device driver 104 for transmitting the network data on the network interface card 108. 
In like manner, for a packet received from the network interface card 108, the network card device driver 104 will allocate a packet data structure from the integrating component 102, fill it with the network data and control information according to the present invention, and send it through the integrating component 102 to the transport protocol driver 100 for communication to the upper layers 106.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of Hyder to allow transport protocol drivers and network card drivers to be developed more efficiently and allow communication with any available transport protocol (Hyder: Col. 3, lines 45-65).
Regarding claim 30, claim 33 of the reference application in view of Francini teaches all the limitations except the API is to be called by the first wireless computing resource. However, Hyder teaches the API is to be called by the first wireless computing resource (Col. 1, lines 60-62 – “data link layer implemented by network card device driver, and the transport and network layers implemented as a transport protocol driver”; Col. 7, lines 38-39 – “Application Programming Interface (API) is a set of subroutines provided by one software component”; Col. 10, lines 34-40 – “The transport protocol driver 100 then sends or transfers the packet to the integrating component 102 at step 140 by making a subroutine call […] the integrating component 102 will send or transfer the packet to the network card device driver at step 142”; Claim 1 – “protocol driver; a device driver; an integrating driver that interfaces with the protocol driver and the device driver using defined APIs”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of Hyder to allow transport protocol drivers and network card drivers to be developed more efficiently and allow communication with any available transport protocol (Hyder: Col. 3, lines 45-65).
This is a provisional nonstatutory double patenting rejection.
Claims 14 and 32 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 10 and 26 of copending Application No. 17/720,196 (the reference application) in view of Francini, and further in view of Lebin et al. (U.S. Pub. No. 2021/0385252), hereinafter Lebin. The claims of the instant application and the claims of the reference application are compared in Table 1 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claims 14 and 32, claims 10 and 26 of the reference application in view of Francini teach all the limitations except wherein the API is embedded within another API. However, Lebin teaches wherein the API is embedded within another API ([0043] - "Each called API server may in turn call additional APIs, and this execution flow can be nested many levels deep." Called APIs may call other APIs.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of Lebin to allow different functionalities of a complex API call to be isolated by splitting it into many API calls (Lebin: [0043]).
This is a provisional nonstatutory double patenting rejection.
Claims 16, 25, and 29 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 10, 18, and 26 of copending Application No. 17/720,196 (the reference application) in view of Francini, and further in view of Young (U.S. Pub. No. 2021/0320850). The claims of the instant application and the claims of the reference application are compared in Table 1 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claim 16, claim 10 of the reference application in view of Francini teaches all the limitations except a network orchestrator configured to identify one or more transport profiles supported by the first wireless computing resource, wherein the network orchestrator is to deploy the second wireless computing resource with the first wireless computing resource. However, Young teaches a network orchestrator configured to identify one or more transport profiles supported by the first wireless computing resource ([0039] – “Service orchestration and transport path management section 305 may include a service orchestrator (SO) 325, an analytics engine 330, a network functions virtual orchestrator (NFVO) 335"; [0043] – “SO 325 may generate and send a message to SIDC 345 that instructs SIDC 345 to identify one or more network infrastructures that are candidates for relocating the current transport path to maintain the SLA for the application service. In one implementation, the message may include service requirements including SLA requirements and/or service profiles associated with the application service."; [0044] – “SIDC 345 may include an infrastructure catalog 370 that stores network infrastructure profiles that provide transport paths for a corresponding particular service profile. In one implementation, infrastructure catalog 370 obtains the network infrastructure profiles from an inventory database 375 which orders the network infrastructure profiles based on a deployment preference value associated with each of the network infrastructure profiles." 
Network infrastructure profiles of network service infrastructures (implemented across 5G-NR "wireless computing resources" such as base stations in RAN 120 - see [0019] and [0026]) provide transport paths, making them "transport profiles".), wherein the network orchestrator is to deploy the second wireless computing resource with the first wireless computing resource ([0044] – “In one implementation, deployment preference values include abstracted "distances" between nodes in a transport path associated with a network infrastructure. For example, PCE 350 may calculate a logical "distance" which may be a function of latency, inversely proportional to bandwidth, inversely proportional to reliability, etc. In one implementation, the calculated distances are based on the build of the network (e.g., size of the circuit), the current usage (e.g., based on monitoring), and expected usage (e.g., based on projected use of yet-to-be deployed services). [...] PCE 350 provides the alternative network service infrastructures to SIDC 345 as candidates for maintaining service availability at SLA requirements in response to a detected and/or projected outage and/or congested network conditions."; [0046]-[0047] – "SIDC 345 may select one or more alternative network service infrastructures and/or sub-infrastructures based on some or all of the above data. [...] NFVO 335 may, based on instructions received from SO 325, deploy the alternative network service infrastructures and/or sub-infrastructures to orchestrate within data transport section 310." The alternative network service infrastructures ("wireless computing resources") that meet the requirements in the profiles are deployed together.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of Young to ensure service availability is maintained in a 5G wireless network (Young: [0044]).
Regarding claims 25 and 29, claims 18 and 26 of the reference application in view of Francini teach all the limitations except wherein the first wireless computing resource has been configured with a transport profile supported by the second wireless computing resource. However, Young teaches the first wireless computing resource has been configured with a transport profile supported by the second wireless computing resource ([0040] – “SLA database 360 may store and maintain service requirement profiles for network customers (e.g., UE 110). Each service requirement profile describes a particular network customer's network service performance requirements.”; [0044] – “SIDC 345 may include an infrastructure catalog 370 that stores network infrastructure profiles that provide transport paths for a corresponding particular service profile. In one implementation, infrastructure catalog 370 obtains the network infrastructure profiles from an inventory database 375 which orders the network infrastructure profiles based on a deployment preference value associated with each of the network infrastructure profiles. […] In one implementation, deployment preference values include abstracted "distances" between nodes in a transport path associated with a network infrastructure. For example, PCE 350 may calculate a logical "distance" which may be a function of latency, inversely proportional to bandwidth, inversely proportional to reliability, etc. In one implementation, the calculated distances are based on the build of the network (e.g., size of the circuit), the current usage (e.g., based on monitoring), and expected usage (e.g., based on projected use of yet-to-be deployed services). [...] 
PCE 350 provides the alternative network service infrastructures to SIDC 345 as candidates for maintaining service availability at SLA requirements in response to a detected and/or projected outage and/or congested network conditions."; [0045] – “SIDC 345 may identify infrastructure design parameters associated with physical and virtual components of a particular network infrastructure. [...] The configuration of the multiple transport networks may include design parameters that detail the physical and virtual configuration of each transport network 320 and how they interconnect."; [0046]-[0047] – “SIDC 345 may select one or more alternative network service infrastructures and/or sub-infrastructures based on some or all of the above data. [...] NFVO 335 may, based on instructions received from SO 325, deploy the alternative network service infrastructures and/or sub-infrastructures to orchestrate within data transport section 310. A transport controller may, based on the instructions from SO 325, initiate configuration of transport networks 320 to support the alternative network service infrastructures and/or sub-infrastructures." Network service infrastructures (implemented across "computing resources" such as base stations in wireless network RAN 120 – see [0008], [0019] and [0026]) which allow UE 110 to wirelessly connect, are configured and deployed to provide transport paths, making them "transport profiles".).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of Young to ensure service availability is maintained in a 5G wireless network (Young: [0044]).
This is a provisional nonstatutory double patenting rejection.
Claim 34 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 27 of copending Application No. 17/720,196 (the reference application) in view of Francini, and further in view of ROY et al. (U.S. Pub. No. 2023/0044165), hereinafter ROY, and DONG et al. (U.S. Pub. No. 2022/0075731), hereinafter DONG. The claims of the instant application and the claims of the reference application are compared in Table 1 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claim 34, claim 27 of the reference application in view of Francini teaches all the limitations except the API does not cause a reference counter to decrement and the storage is further to be used as part of a zero copy buffer method. However, ROY teaches the storage is further to be used as part of a zero copy buffer method ([0048] - "one or more data transfers may potentially be implemented, partially or entirely, with a zero-copy transfer. In some embodiments, performing a zero copy transfer may involve, for example, transferring data between a target and a memory of a client using a memory access protocol (e.g., RDMA). For example, in some embodiments, one or more data transfers may be implemented with a zero-copy transfer by transferring data directly to a memory of a receiving device (e.g., memory 120 illustrated in FIG. 1 and/or buffer 251 illustrated in FIG. 2).").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of ROY to allow for the transfer of data with relatively low overhead and/or latency (ROY: [0021]).
Claim 27 of the reference application in view of Francini and ROY teach all the limitations except the API does not cause a reference counter to decrement. However, DONG teaches the API does not cause a reference counter to decrement ([0057] – “counter logic 210 is configured to perform conditional counter operations on counters in buckets of arrays for the cache(s) being accessed. Conditional counter operations include, without limitation, incrementing a counter, decrementing a counter, and/or maintaining a value of a counter.").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,196 to incorporate the teachings of DONG to allow accesses to memory to be tracked and allow allocated memory to be used more efficiently (DONG: [0024]-[0025]).
This is a provisional nonstatutory double patenting rejection.
Table 1: Claim comparison of the instant application with reference application 17/720,196
Claim
17/720,201 (instant application)
17/720,196
1
One or more processors, comprising: circuitry to, in response to an application programming interface (API) call:
cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols, the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols; and
cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols.
One or more processors, comprising: circuitry to perform an application programming interface (API) to:
allocate a buffer for transferring data between a first computing resource that uses a first data transport protocol and a second computing resource that uses a second, different data transport protocol, wherein the second computing resource is to perform one or more wireless communication operations using data stored in the allocated buffer; and
use an identifier of one or more functions of the first data transport protocol to identify one or more functions of the second, different data transport protocol that is to cause the transfer of the data between the first computing resource and the second computing resource.
“One or more processors, comprising: circuitry to, in response to an application programming interface (API) call:”
“One or more processors, comprising: circuitry to perform an application programming interface (API) to:”
“the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols;”
“a buffer for transferring data between a first computing resource that uses a first data transport protocol and a second computing resource that uses a second, different data transport protocol, wherein the second computing resource is to perform one or more wireless communication operations using data stored in the allocated buffer”
“cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols.”
“one or more functions of the second, different data transport protocol that is to cause the transfer of the data between the first computing resource and the second computing resource.”
2
The one or more processors of claim 1, wherein performance of the API causes the first wireless computing resource to:
send the data stored in the storage to the second wireless computing resource; and
to decrement a reference counter used to indicate when to release the data stored in the storage.
The one or more processors of claim 1, wherein:
the API is further to initialize a reference counter to indicate when to release the buffer.
3
The one or more processors of claim 1, wherein the information is to be transferred based at least on a transport layer used to associate a function of the one or more transport protocols with a corresponding function of the one or more different transport protocols.
The one or more processors of claim 1, wherein:
the API uses the identifier of one or more functions of the first data transport protocol to use one or more libraries to map the one or more functions to the one or more functions of the second, different data transport protocol; and
the one or more functions of the first and second data transport protocols each cause, at least in part, data to be transferred as part of their respective data transport protocols.
4
The one or more processors of claim 1, wherein the circuitry is further to perform the API based at least on one or more drivers of a transport layer, the one or more drivers used to operate the first and second wireless computing resources.
See Claim 6: The one or more processors of claim 1, wherein the API is performed, at least in part, by a third computing resource of a transport layer.
5
The one or more processors of claim 1, wherein performing the API further causes the first wireless computing resource to perform one or more operations associated with the one or more different transport protocols.
The one or more processors of claim 1, wherein performing the API further causes the first computing resource to cause a third computing resource to perform one or more operations associated with the one or more functions of the second, different data transport protocol.
6
The one or more processors of claim 1, wherein at least one of the one or more transport protocols is a transport layer protocol; and to cause the data to be provided from the storage comprises sending the data from the storage.
See Claim 1
7
The one or more processors of claim 1, wherein performing the API is further to cause the first wireless computing resource to perform an operation related to the one or more different transport protocols based at least on a corresponding operation of the one or more transport protocols.
See Claim 5: The one or more processors of claim 1, wherein performing the API further causes the first computing resource to cause a third computing resource to perform one or more operations associated with the one or more functions of the second, different data transport protocol.
8
The one or more processors of claim 1, wherein:
the first and second wireless computing resources are associated with a fifth generation new radio (5G-NR) network protocol stack that includes a first layer, a second layer, and a third layer;
the first wireless computing resource associated with the first layer;
the second wireless computing resource associated with the second layer;
the API associated with the third layer; and
the third layer is located between the first and second layers.
The one or more processors of claim 1, wherein:
the first and second computing resources are associated with a 5G-NR network protocol stack that includes a first layer, a second layer, and a third layer;
the first wireless computing resource associated with the first layer;
the second wireless computing resource associated with the second layer;
the API associated with the third layer; and
the third layer is located between the first and second layers.
9
The one or more processors of claim 1, wherein:
the API is further to transfer information between a first layer and a second layer corresponding to a fifth generation new radio (5G-NR) network protocol, wherein the second wireless computing resource associated with the second layer requests an operation associated with the one or more different transport protocols; and
performance of the API causes the first wireless computing resource associated with the first layer to cause performance of an operation associated with the one or more different transport protocols.
The one or more processors of claim 1, wherein:
the API is further to transfer information between a first layer and a second layer corresponding to a 5G-NR network protocol, wherein the first computing resource is associated with the first layer and requests performance of an operation associated with the second, different data transport protocol; and
performance of the API causes the second computing resource to perform the operation.
10
A system, comprising memory to store instructions that, as a result of execution by one or more processors, cause the system, in response to an application programming interface (API) call, to: cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols, the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols; and
cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols.
A system, comprising memory to store instructions that, as a result of execution by one or more processors, cause the system to:
perform an application programming interface (API) to:
allocate a buffer for transferring data between a first computing resource that uses a first data transport protocol and a second computing resource that uses a second, different data transport protocol, wherein the second computing resource is to perform one or more wireless communication operations using data stored in the allocated buffer; and
use an identifier of one or more functions of the first data transport protocol to identify one or more functions of the second, different data transport protocol that is to cause the transfer of the data between the first computing resource and the second computing resource.
11
The system of claim 10, wherein performance of the API is based, at least in part, on an identification of the one or more different transport protocols associated with the first wireless computing resource.
See Claim 10: “perform an application programming interface (API) to: […] use an identifier of one or more functions of the first data transport protocol to identify one or more functions of the second, different data transport protocol”
12
The system of claim 10, wherein:
the information is to be transferred between a first layer and a second layer of a fifth generation new radio (5G-NR) network protocol stack; and
the first layer and second layer are each associated with a different transport protocol.
See Claim 17: The system of claim 10, wherein:
the first and second computing resources are associated with a 5G-NR network protocol stack that includes a first layer, a second layer, and a third layer;
the first computing resource associated with the first layer;
the second computing resource associated with the second layer;
the API associated with the third layer; and
the third layer located between the first and second layers.
From Claim 10: “transferring data between a first computing resource that uses a first data transport protocol and a second computing resource that uses a second, different data transport protocol”
13
The system of claim 10, wherein an application associated with the first wireless computing resource calls the API to obtain information regarding the one or more different transport protocols supported by the second wireless computing resource.
See Claim 10
14
The system of claim 10, wherein the API is embedded within another API.
See Claim 10
15
The system of claim 10, wherein the information is to be transferred using an application that causes calls from one layer associated with one transport protocol to perform operations in a second layer associated with a second transport protocol.
See Claim 9: The one or more processors of claim 1, wherein:
the API is further to transfer information between a first layer and a second layer corresponding to a 5G-NR network protocol, wherein the first computing resource is associated with the first layer and requests performance of an operation associated with the second, different data transport protocol; and
performance of the API causes the second computing resource to perform the operation.
16
The system of claim 10, further comprising:
a network orchestrator configured to identify one or more transport profiles supported by the first wireless computing resource, wherein the network orchestrator is to deploy the second wireless computing resource with the first wireless computing resource.
See Claim 10
17
The system of claim 10, wherein the first wireless computing resource is a virtual device.
See Claim 10
18
A non-transitory machine-readable medium having stored thereon one or more instructions, which if performed by one or more processors, cause one or more processors to, in response to an application programming interface (API) call, at least:
cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols, the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols; and
cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols.
A machine-readable medium having stored thereon one or more instructions, which if performed by one or more processors, cause one or more processors to at least:
perform an application programming interface (API) to:
allocate a buffer for transferring data between a first computing resource that uses a first data transport protocol and a second computing resource that uses a second, different data transport protocol, wherein the second computing resource is to perform one or more wireless communication operations using data stored in the allocated buffer; and
use an identifier of one or more functions of the first data transport protocol to identify one or more functions of the second, different data transport protocol that is to cause the transfer of the data between the first computing resource and the second computing resource.
19
The non-transitory machine-readable medium of claim 18, wherein performance of the API is based at least on a transport configuration associated with the first wireless computing resource.
See claim 18: “perform an application programming interface (API) to: […] a first computing resource that uses a first data transport protocol”
20
The non-transitory machine-readable medium of claim 18, wherein:
the information is to be transferred between a first layer and a second layer of a fifth generation new radio (5G-NR) network protocol stack using a third layer between the first and second layers that is based, at least in part, on multiple transport protocols.
See Claim 25: The machine-readable medium of claim 18, wherein:
the first and second computing resources are associated with a 5G-NR network protocol stack that includes a first layer, a second layer, and a third layer;
the first computing resource associated with the first layer;
the second computing resource associated with the second layer;
the API associated with the third layer; and
the third layer located between the first and second layers.
From Claim 18: “perform an application programming interface (API) to: […] use an identifier of one or more functions of the first data transport protocol to identify one or more functions of the second, different data transport protocol that is to cause the transfer of the data between the first computing resource and the second computing resource.”
21
The non-transitory machine-readable medium of claim 18, wherein the one or more processors are further to perform the API based at least on one or more drivers of a transport layer, the one or more drivers used to operate the first and second wireless computing resources.
See Claim 23: The machine-readable medium of claim 18, wherein the API is performed, at least in part, by a third computing resource of a transport layer.
22
The non-transitory machine-readable medium of claim 18, wherein the one or more processors are one or more graphics processing units (GPUs).
See Claim 18
23
The non-transitory machine-readable medium of claim 18, wherein:
the first wireless computing resource is to call the API; and
performance of the API causes, at least in part, the first wireless computing resource to transfer information to the second wireless computing resource without modification to the first wireless computing resource.
See Claim 9: The one or more processors of claim 1, wherein:
the API is further to transfer information between a first layer and a second layer corresponding to a 5G-NR network protocol, wherein the first computing resource is associated with the first layer and requests performance of an operation associated with the second, different data transport protocol; and
performance of the API causes the second computing resource to perform the operation.
24
The non-transitory machine-readable medium of claim 18, wherein:
the storage selected is an allocated buffer; and
the API is further to decrement a reference counter associated with the allocated buffer; and
if the decremented reference counter holds a value of zero, the allocated buffer is deselected.
See Claim 19: The machine-readable medium of claim 18, the API is further to initialize a reference counter to indicate when to release the buffer.
From Claim 18: “allocate a buffer for transferring data”
25
The non-transitory machine-readable medium of claim 18, wherein the first wireless computing resource has been configured with a transport profile supported by the second wireless computing resource.
See Claim 18
26
A method comprising:
in response to an application programming interface (API) call:
causing data to be stored in storage of a first wireless computing resource according to one or more transport protocols, the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols; and
causing the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols.
A method comprising:
performing an application programming interface (API) to:
allocate a buffer for transferring data between a first computing resource that uses a first data transport protocol and a second computing resource that uses a second, different data transport protocol, wherein the second computing resource is to perform one or more wireless communication operations using data stored in the allocated buffer; and
use an identifier of one or more functions of the first data transport protocol to identify one or more functions of the second, different data transport protocol that is to cause the transfer of the data between the first computing resource and the second computing resource.
27
The method of claim 26, wherein performance of the API is based at least on a set of associations that correlate one or more functions of the one or more transport protocols with one or more functions of the one or more different transport protocols.
See Claim 28: The method of claim 26, wherein:
the API uses the identifier of one or more functions of the first data transport protocol to use one or more libraries to map the one or more functions to the one or more functions of the second, different data transport protocol; and
the one or more functions of the first and second data transport protocols each cause, at least in part, data to be transferred as part of their respective data transport protocols.
28
The method of claim 26, further comprising identifying the one or more different transport protocols.
See Claim 26: “use an identifier of one or more functions of the first data transport protocol to identify one or more functions of the second, different data transport protocol”
29
The method of claim 26, further comprising:
configuring the first wireless computing resource with transport profiles supported by the second wireless computing resource.
See Claim 26
30
The method of claim 26, wherein the API is to be called by the first wireless computing resource; and
is stored as part of a layer different from another layer comprising the first wireless computing resource.
See Claim 33: The method of claim 26, wherein:
the first and second computing resources are associated with a 5G-NR network protocol stack that includes a first layer, a second layer, and a third layer;
the first computing resource associated with the first layer;
the second computing resource associated with the second layer;
the API associated with the third layer; and
the third layer located between the first and second layers.
31
The method of claim 26, further comprising transferring the information using an application that maps the API to an operation related to an operation of the one or more different transport protocols, wherein the application is implemented, at least in part, on a hardware accelerator.
See Claim 26
32
The method of claim 26, wherein the API is embedded within another API.
See Claim 26
33
The method of claim 26, wherein:
the information is to be transferred between two layers of a fifth generation new radio (5G-NR) network protocol stack, wherein one layer is associated with the one or more transport protocols and the other layer is associated with the one or more different transport protocols; and
the API is located in a third layer.
The method of claim 26, wherein:
the first and second computing resources are associated with a 5G-NR network protocol stack that includes a first layer, a second layer, and a third layer;
the first computing resource associated with the first layer;
the second computing resource associated with the second layer;
the API associated with the third layer; and
the third layer located between the first and second layers.
From Claim 26: “transferring data between a first computing resource that uses a first data transport protocol and a second computing resource that uses a second, different data transport protocol”
34
The method of claim 26, wherein:
performance of the API does not cause a reference counter to decrement;
the reference counter is associated with the storage; and
the storage is further to be used as part of a zero copy buffer method.
See Claim 27: The method of claim 26, wherein the API is further to initialize a reference counter to indicate when to release the buffer.
35
The method of claim 26, wherein the information includes different messages each associated with various transport protocols; and
the information is to be transferred using the API.
The method of claim 26, wherein performance of the API further uses information comprising different messages each associated with a different data transport protocol; and
the information is to be transferred between the first and second computing resources using one transport layer.
Claims 1, 5-6, 8-10, 12, 15, 18-20, 26-27, 33, and 35 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5, 8-10, 17-18, 25-26, 28, and 33-35 of copending Application No. 17/720,199 (the reference application) in view of Francini et al. (U.S. Pub. No. 2018/0159965), hereinafter Francini. The claims of the instant application and the claims of the reference application are compared in Table 2 below, with limitations not taught by the reference claims in bold. Unless indicated otherwise, the limitations of the claims of the instant application have been compared to the limitations of the claim of the same number in the reference application.
Regarding claims 1, 10, 18, and 26 of the instant application, claims 1, 10, 18, and 26 of the reference application substantially recite all the limitations of the claims except cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols and cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols. However, Francini teaches to cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols (Fig. 1, buffer 123 = “storage”; FIG. 2B, transport layer 224S = “first wireless computing resource”; [0051] – the transport layer 224S may use protocols such as TCP; [0052] – “The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120.”; [0022] – “The buffer 123 is configured to store, in various forms as may be provided or supported at various communication layers of communication protocol stack 124, both data communicated or intended for communication between the MHD 110 and the WAD 120 and data communicated or intended for communication between the WAD 120 and the server 130.”) and cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols (FIG. 2B, link layer 222C connected to MHD 110; [0052] – “The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222C of the WAD 120.
The reliable link layer 222C of the WAD 120 receives the primitive messages, places the primitive messages into link layer data structures supported by the reliable link layer 222C […] where the reliable link layer 222C of the WAD 120 is provided using LTE, the reliable link layer 222C may place the primitive messages into PDCP PDUs”; [0050] – “The communication primitives of the client socket API 215 may include primitive rules for controlling manipulation of data (e.g., encapsulation and decapsulation of application data that is sourced by the application layer 216 for transmission toward the server 130, encapsulation and decapsulation of application data that is sourced by server 130 and that is intended for delivery to the application layer 216 of the MHD 110, or the like), primitive messages and associated primitive message formats that are configured to transport data of the application layer 216, or the like, as well as various combinations thereof.” The primitive messages used by the reliable link layer 222C adhere to rules and a format for transporting data (i.e., a “transport protocol”; [0022] – “The buffer 123 is configured to store, in various forms as may be provided or supported at various communication layers of communication protocol stack 124, both data communicated or intended for communication between the MHD 110 and the WAD 120 and data communicated or intended for communication between the WAD 120 and the server 130.” The application data (data communicated between the WAD 120 and server 130 which was stored in buffer 123, e.g., when it is received via the TCP socket at the WAD 120) is placed into the primitive messages, which adhere to a transport protocol as described in [0050], and provided to the reliable link layer 222C.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Francini to improve communication between devices using transport layer connections by supporting use of a networked transport layer socket with an API that communicates data between the networked transport layer socket and the application layer of a communication device via the link layer and physical layer of a communication protocol stack (Francini: [0006], [0063]-[0065] and [0068]).
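For the applicant's convenience, the storage-then-reformat path relied on from Francini above (application data received over one transport protocol is stored, then repackaged as primitive messages for a link layer that uses a different protocol) can be sketched as follows. This is an illustrative sketch only; the function names, the dictionary message format, and the 1500-byte payload limit are hypothetical and do not appear in Francini:

```python
def encapsulate(app_data, max_payload=1500):
    """Split application data received from the transport-layer socket
    into primitive messages for the link layer (format is hypothetical)."""
    return [{"type": "DATA", "payload": app_data[i:i + max_payload]}
            for i in range(0, len(app_data), max_payload)]

def pass_to_link_layer(app_data):
    # Sketch of the client socket API-south path: store the application
    # data, encapsulate it, and hand the resulting primitive messages to
    # the link layer that uses a different protocol.
    buffer = bytearray(app_data)       # data stored in the buffer
    return encapsulate(bytes(buffer))  # primitive messages for the link layer
```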
Regarding claim 6, claim 1 of Application 17/720,199 in view of Francini teaches substantially all of the limitations except that at least one of the one or more transport protocols is a transport layer protocol; and to cause the data to be provided from the storage comprises sending the data from the storage. However, Francini teaches at least one of the one or more transport protocols is a transport layer protocol (Francini: [0052] – “The server 130 transmits the application data to the WAD 120 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection). The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120.”); and to cause the data to be provided from the storage comprises sending the data from the storage ([0052] – “The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120. The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222C of the WAD 120. The reliable link layer 222C of the WAD 120 receives the primitive messages”. The reliable link layer 222C receives the application data in the primitive messages, i.e., the client socket API sends the application data (“the data from the storage”), within primitive messages, to the reliable link layer 222C.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Francini to improve communication between devices using transport layer connections by supporting use of a networked transport layer socket with an API that communicates data between the networked transport layer socket and the application layer of a communication device via the link layer and physical layer of a communication protocol stack (Francini: [0006], [0063]-[0065] and [0068]).
Regarding claim 15, claim 34 of Application 17/720,199 teaches substantially all of the limitations except that information is to be transferred using an application. However, Francini teaches information is to be transferred using an application ([0031] – “the client socket API 215 is an application programming interface that is configured to allow application programs to control and use network sockets."; [0052] – “In FIG. 2B, in a direction of transmission from the application layer 236 of the server 130 toward the application layer 216 of the MHD 110, communication of the application data of the application layer 236 of the server 230 may be performed as follows. The server 130 transmits the application data to the WAD 120 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection). The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120. The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222C of the WAD 120. The reliable link layer 222C of the WAD 120 receives the primitive messages, places the primitive messages into link layer data structures supported by the reliable link layer 222C").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Francini to improve communication between devices using transport layer connections by supporting use of a networked transport layer socket with an API that communicates data between the networked transport layer socket and the application layer of a communication device via the link layer and physical layer of a communication protocol stack (Francini: [0006], [0063]-[0065] and [0068]). Additionally, it would have been obvious to one of ordinary skill in the art to have applied the steps of the method of claim 34 of Application 17/720,199 to the system of claim 15.
Claims 5, 8-9, 12, 19-20, 27, 33, and 35 recite additional limitations that are substantially the same or identical to limitations recited in claims 5, 8-10, 17-18, 25-26, 28, 33, and 35 of the reference application as indicated in Table 2, and are rejected as well.
This is a provisional nonstatutory double patenting rejection.
Claims 2 and 24 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 2 and 18-19 of copending Application No. 17/720,199 (the reference application) in view of Francini, and further in view of Fuente et al. (U.S. Pub. No. 2005/0265370), hereinafter Fuente. The claims of the instant application and the claims of the reference application are compared in Table 2 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claim 2 of the instant application, claim 2 of the reference application teaches all of the limitations except the API causes the first wireless computing resource to: send the data stored in the storage to the second wireless computing resource; and to decrement a reference counter. However, Francini teaches the API causes the first wireless computing resource to: send the data stored in the storage to the second wireless computing resource ([0052] – “The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120. The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222C of the WAD 120.”; [0022] – “The buffer 123 is configured to store, in various forms as may be provided or supported at various communication layers of communication protocol stack 124, both data communicated or intended for communication between the MHD 110 and the WAD 120 and data communicated or intended for communication between the WAD 120 and the server 130.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Francini to improve communication between devices using transport layer connections by supporting use of a networked transport layer socket with an API that communicates data between the networked transport layer socket and the application layer of a communication device via the link layer and physical layer of a communication protocol stack (Francini: [0006], [0063]-[0065] and [0068]).
Claim 2 of the reference application in view of Francini teaches all of the limitations except to decrement a reference counter.
However, Fuente teaches to decrement a reference counter ([0025]-[0026] – “Counter (108) maintains a count of the number of references to the buffer memory (104) by the accessors (106, 110), the count being incremented on each reference and decremented on completion of each accessor's data transmission. [...] Memory manager (114) is adapted to lock buffer memory during write activity, to permit read access to the buffer memory (104) by accessors (106, 110) and to return the buffer memory to a free buffer pool when counter (108) signals that the count has reached zero.").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Fuente in order to allow the buffer memory to be allocated, pinned and freed without preventing read accesses by multiple accessors that transmit the data stored, which further allows for rapid retransmission of the data stored in the buffer when a transmission fails (Fuente: [0032]).
Regarding claim 24 of the instant application, claims 18-19 of the reference application teach all of the limitations except to decrement a reference counter and if the decremented reference counter holds a value of zero, the allocated buffer is deselected. However, Fuente teaches to decrement a reference counter and if the decremented reference counter holds a value of zero, the allocated buffer is deselected ([0025]-[0026] - "Counter (108) maintains a count of the number of references to the buffer memory (104) by the accessors (106, 110), the count being incremented on each reference and decremented on completion of each accessor's data transmission. [...] Memory manager (114) is adapted to lock buffer memory during write activity, to permit read access to the buffer memory (104) by accessors (106, 110) and to return the buffer memory to a free buffer pool when counter (108) signals that the count has reached zero."; [0032] - "the counter of the preferred embodiment allows the buffer memory to be allocated, "pinned", and freed"; [0042] - "At step (226), a further test is performed to determine whether the count has reached zero or not. If it has not reached zero, this part of the logic process returns to step (228) to be triggered by the next completion. If on any iteration, the count is determined to have reached zero, the memory manager (114) releases the buffer memory (104).").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Fuente in order to allow the buffer memory to be allocated, pinned and freed without preventing read accesses by multiple accessors that transmit the data stored, which further allows for rapid retransmission of the data stored in the buffer when a transmission fails (Fuente: [0032]).
This is a provisional nonstatutory double patenting rejection.
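For the applicant's convenience, the reference-counting scheme relied on from Fuente above (the count is incremented on each accessor reference, decremented on completion of each transmission, and the buffer is returned to a free pool when the count reaches zero) can be illustrated as follows. This is a sketch only; the class and method names are hypothetical and appear in neither Fuente nor the claims:

```python
class BufferPool:
    """Hypothetical free-buffer pool corresponding to Fuente's memory manager."""
    def __init__(self):
        self.free_buffers = []

    def allocate(self, size):
        # Reuse a previously freed buffer if available, else create one.
        if self.free_buffers:
            return self.free_buffers.pop()
        return RefCountedBuffer(self, size)

    def free(self, buf):
        # The buffer is "deselected": returned to the free buffer pool.
        self.free_buffers.append(buf)


class RefCountedBuffer:
    def __init__(self, pool, size):
        self.pool = pool
        self.data = bytearray(size)
        self.refcount = 0

    def acquire(self):
        # Count incremented on each accessor reference.
        self.refcount += 1

    def release(self):
        # Count decremented on completion of an accessor's transmission;
        # at zero, the buffer is released back to the pool.
        self.refcount -= 1
        if self.refcount == 0:
            self.pool.free(self)
```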
Claims 3, 7, 11, 13, 17, 22-23, 28, and 31 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 3, 5, 10, 18, 26, and 34 of copending Application No. 17/720,199 (the reference application) in view of Francini, and further in view of Sen et al. (U.S. Pub. No. 2020/0218684), hereinafter Sen. The claims of the instant application and the claims of the reference application are compared in Table 2 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claim 3 of the instant application, claim 3 of the reference application in view of Francini teaches all the limitations except the information is to be transferred based at least on a transport layer used to associate functions of the protocols. However, Sen teaches the information is to be transferred based at least on a transport layer used to associate functions of the protocols ([0033] – “bindings are used to link the device libraries and transport definitions with device drivers and transport protocols, respectively. Bindings involve a process or technique of connecting two or more data elements or entities together. Bindings allows the accelerator library to be bound to multiple protocols, including one or more IX protocols and one or more transport protocols.”; [0034] – “The transport definition is independent of the transport protocols that are used to carry data to remote accelerator resources. […] Each of the transport layers (e.g., transport layers 823, 824 of FIG. 8) may include primitives such as read/write data from/to device; process device command; get/set device properties; and event subscription and notification. The transport layers (e.g., transport layers 823, 824 of FIG. 8) may also have mechanisms to allow scalable and low latency communication, such as […] protocol independent format definition allowing for multiple protocol bindings.” The transport layers 823, 824 have protocol-independent format definitions which use bindings (“a set of associations that correlate” the transport protocols) to connect the transport protocol-independent transport definition to the multiple different transport protocols.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Sen to use an API that abstracts details of the underlying transport protocols used by a transport layer and translates to a specific transport protocol used to provide the benefit of seamless and transparent access to remote resources (Sen: [0034]).
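For the applicant's convenience, the binding mechanism relied on from Sen [0033]-[0034] above (a protocol-independent transport definition bound to multiple concrete transport protocols) can be illustrated as follows. This is a sketch only; the primitive name, protocol labels, and functions are hypothetical and do not appear in Sen:

```python
# Hypothetical protocol-specific implementations (illustrative only).
def send_tcp(data):
    return ("tcp", data)

def send_rdma(data):
    return ("rdma", data)

# Binding table linking the protocol-independent "write" primitive to
# protocol-specific functions, analogous to the bindings of Sen [0033].
BINDINGS = {
    ("write", "tcp"): send_tcp,
    ("write", "rdma"): send_rdma,
}

def transport_call(primitive, protocol, data):
    # Resolve the primitive through the binding for the chosen transport
    # protocol and invoke the bound protocol-specific function.
    return BINDINGS[(primitive, protocol)](data)
```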
Regarding claims 11 and 28, claims 10 and 26 of the reference application in view of Francini teach all the limitations except the API is based, at least in part, on an identification of the one or more different transport protocols associated with the first wireless computing resource. However, Sen teaches the API is based, at least in part, on an identification of the one or more different transport protocols associated with the first wireless computing resource ([0086] – “initiator 822 identifies or determines one or more transport protocols to be used during a communication session 830 with target accelerator resource(s). Any number of transport protocols may be used”; [0022] – “The network 106 (also referred to as a "network fabric 106" or the like) may be embodied as any type of network capable of communicatively connecting the computing platforms 102 and the accelerator sleds 104. […] wireless”; [0032]-[0033] – “The accelerator library uses device libraries to abstract device-specific details into an API that provides the necessary translation to device specific interfaces via corresponding device drivers. According to various embodiments, the accelerator library also uses individual transport definitions to abstract transport-specific details into an API that provides the necessary translation to transport-specific interfaces via corresponding transport protocols. The device library at the computing platform 102 is used to connect the application(s) to one or more local devices, such as a local hardware accelerator 212. Similarly, the transport definition is used to connect the application (e.g., application 820 of FIG. 8) to a remote hardware accelerator 312 resident in the accelerator sled 104. The transport definition allows applications to send commands and data to a target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG.
3) and receive data from the target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) without requiring knowledge of the underlying transport protocols (e.g., transport layers 823, 824 of FIG. 8) being used." In FIG. 8, initiator 822, on computing platform 102 (a “first wireless computing resource”), identifies the transport protocols used to communicate with target accelerator resource(s), such as accelerator sled 104 (a “second wireless computing resource”). Since these protocols are used by a second wireless computing resource, they are analogous to the “different transport protocols”.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Sen to provide scalability, flexibility, manageability, and utilization, which results in lower operating or overhead costs and/or allow multiple applications to use the same physical hardware accelerator (Sen: [0074] and [0043]), and to provide the benefit of seamless and transparent access to remote resources by using an API that abstracts the underlying transport protocols (Sen: [0034]).
Regarding claim 7 of the instant application, claim 5 of the reference application in view of Francini teaches all the limitations except to perform an operation related to the one or more different transport protocols based at least on a corresponding operation of the one or more transport protocols. However, Sen teaches to perform an operation related to the one or more different transport protocols based at least on a corresponding operation of the one or more transport protocols ([0059] – “Referring now to FIG. 6, in use, the computing platform 102 may execute a process 600 for sending an accelerator message to a hardware accelerator 212 or 312. The process 600 begins at operation 602, in which an application on the computing platform 102 determines a message to be sent to a hardware accelerator 212 or 312. […] At operation 604, the application passes the command or function to the accelerator manager 402.”; [0062] – “Referring back to operation 606, if the accelerator manager 402 is to pass the message to a remote hardware accelerator 312, the process 600 proceeds to operation 612, in which the computing platform 102 generates a command capsule based on the message received from the application. […] the command capsule may encapsulate the message in a protocol different from a protocol used by the message.”; [0063] – “At operation 616, the computing platform 102 sends the command capsule to the accelerator sled 104. The computing platform 102 may use any suitable communication protocol, such as TCP, RDMA, RoCE, RoCEv1, RoCEv2, iWARP, etc.” An application on computing platform 102 (a “first wireless computing resource”) passes a message (“corresponding operation”) which uses a protocol (“one or more transport protocols”) – as described in [0062] – to an accelerator manager 402 via an API.
The accelerator manager causes the computing platform 102 (the “first wireless computing resource”) to generate (“one or more operations”) a command capsule for the message using a different protocol (“different transport protocols”) and to send the command capsule which uses the different protocol to the accelerator sled 104 (a “second wireless computing resource”).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Sen to use an API that abstracts details of the underlying transport protocols used by a transport layer and translates to a specific transport protocol used to provide the benefit of seamless and transparent access to remote resources (Sen: [0034]).
Regarding claim 13 of the instant application, claim 10 of the reference application in view of Francini teaches all the limitations except wherein an application associated with the first wireless computing resource calls the API to obtain information regarding the one or more different transport protocols supported by the second wireless computing resource. However, Sen teaches an application ([0079] – “the initiator 822 is an application hosted by the VM 815”) associated with the first wireless computing resource (FIG. 8, computing platform 102; [0022] – “The network 106 (also referred to as a "network fabric 106" or the like) may be embodied as any type of network capable of communicatively connecting the computing platforms 102 and the accelerator sleds 104. […] wireless”) calls the API ([0085] – “the initiator 822 hosted by a VM 815 of a computing platform 102 executes process 900 for establishing a communication session 830 with target accelerator resource(s) via the accelerator manager 502. […] the various messages discussed as being communicated between the initiator 822 and the accelerator resource(s) may be performed according to processes 600-700 of FIGS. 6-7, respectively.”; [0059] – “the computing platform 102 may execute a process 600 for sending an accelerator message to a hardware accelerator 212 or 312. The process 600 begins at operation 602, in which an application on the computing platform 102 determines a message to be sent to a hardware accelerator 212 or 312. […] At operation 604, the application passes the command or function to the accelerator manager 402. 
In the illustrative embodiment, the application passes the command or function with use of an application programming interface”; [0088] – “At operation 906, the initiator 822 generates and sends, to the target accelerator resource(s), a connection establishment request message for a primary connection 831.” Sending a message, e.g., the connection establishment request message, to a target hardware accelerator resource involves an application, e.g., the initiator, passing the message to an API (“calls the API”) as described in process 600.) to obtain information regarding the one or more different transport protocols ([0089] – “In response to the connection establishment request message for the primary connection 831, at operation 906, the initiator 822 receives a connection establishment response message for the primary connection 831 from the target accelerator resource(s). For example, where an RDMA-based protocol is used for the primary connection 831, such as RoCEv2, the target accelerator resource(s) may encapsulate an RDMA acknowledgement (ACK) packet within an Ethernet/IP/UDP packet (including either IPv4 or IPv6) and including suitable destination and source addresses based on the connection establishment request message. In embodiments, the connection establishment response message for the primary connection 831 includes a session ID, which may be included in the header or payload section of the message. The session ID is generated by the target accelerator resource(s) and is discussed in more detail infra. Other suitable information may be included in the connection establishment response message, such as an accelerator resource identifier and/or other protocol specific information.”) supported by the second wireless computing resource (FIG. 
8, accelerator sled 104 with accelerator(s) 312 which is the “target hardware accelerator resource(s)”; [0033] – “the accelerator library also uses individual transport definitions to abstract transport-specific details into an API that provides the necessary translation to transport-specific interfaces via corresponding transport protocols. […] the transport definition is used to connect the application (e.g., application 820 of FIG. 8) to a remote hardware accelerator 312 resident in the accelerator sled 104. The transport definition allows applications to send commands and data to a target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) and receive data from the target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3)”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Sen to use an initiator application that establishes multiple connections with a remote accelerator using different transport protocols over a wireless network, thereby achieving high-availability goals even when one transport protocol is temporarily unusable (Sen: [0073]).
Regarding claim 17 of the instant application, claim 10 of the reference application in view of Francini teaches all the limitations except wherein the first wireless computing resource is a virtual device. However, Sen teaches wherein the first wireless computing resource is a virtual device ([0021] – “an application or virtual machine (VM) being executed by a processor 202 of the computing platform 102 (see FIG. 2) may access a hardware accelerator 212 or 312 (see FIGS. 2 and 3)”; [0058] – “The accelerator virtualizer 508 is configured to present one physical hardware accelerator 312 as two or more virtual hardware accelerators 312.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Sen to provide scalability, flexibility, manageability, and utilization, which results in lower operating or overhead costs and/or allows multiple applications to use the same physical hardware accelerator (Sen: [0074] and [0043]).
Regarding claim 22 of the instant application, claim 18 of the reference application in view of Francini teaches all the limitations except wherein the one or more processors are one or more graphics processing units (GPUs). However, Sen teaches wherein the one or more processors are one or more graphics processing units (GPUs) ([0025] – “the processor(s) 202 may include Intel® Core™ based processor(s) and/or Xeon® processor(s); Advanced Micro Devices (AMD) Zen® Core Architecture processor(s), such as Epyc® processor(s), Opteron™ series Accelerated Processing Units (APUs), and/or MxGPUs”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Sen to perform computing tasks more quickly and efficiently (Sen: [0002], [0016]).
Regarding claim 23 of the instant application, claim 34 of the reference application in view of Francini teaches all the limitations except the API causes the transfer of information without modification to the first wireless computing resource. However, Sen teaches the API causes the transfer of information without modification to the first wireless computing resource ([0047] – “in some embodiments, an application may interact with an accelerator manager 402 of a computing platform 102 a first time and a second time. In such an example, for the first interaction, the accelerator manager 402 may facilitate an interface with a local hardware accelerator 212 and, for the second interaction, the accelerator manager 402 may facilitate an interface with a remote hardware accelerator 312, without any change or requirements in how the application interacts with the accelerator manager 402 between the first interaction and the second interaction."; [0059] – “At operation 604, the application passes the command or function to the accelerator manager 402. In the illustrative embodiment, the application passes the command or function with use of an application programming interface such that the details of communication with the hardware accelerator 212 or 312 are hidden from the associated application”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Sen to use an API that abstracts details of the underlying transport protocols used by a transport layer and translates to a specific transport protocol used to provide the benefit of seamless and transparent access to remote resources (Sen: [0034]). Additionally, it would have been obvious to one of ordinary skill in the art to have applied the steps of the method of claim 34 of Application 17/720,199 to the non-transitory computer-readable medium of claim 23.
Regarding claim 31 of the instant application, claim 26 of the reference application in view of Francini teaches all the limitations except transferring the information using an application that maps the API to an operation related to an operation of the one or more different transport protocols, wherein the application is implemented, at least in part, on a hardware accelerator. However, Sen teaches transferring the information using an application that maps the API to an operation related to an operation of the one or more different transport protocols ([0033] – “The accelerator application interface includes an accelerator library used to access accelerator resources. The accelerator library may be an accelerator specific run-time library (e.g., Open Computing Language (OpenCL), CUDA, Open Programmable Acceleration Engine (OPAE) API, or the like) that provides mapping of application constructs on to a hardware accelerator context. [...] the accelerator library also uses individual transport definitions to abstract transport-specific details into an API that provides the necessary translation to transport-specific interfaces via corresponding transport protocols. [...] Similarly, the transport definition is used to connect the application (e.g., application 820 of FIG. 8) to a remote hardware accelerator 312 resident in the accelerator sled 104. The transport definition allows applications to send commands and data to a target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) and receive data from the target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) without requiring knowledge of the underlying transport protocols (e.g., transport layers 823, 824 of FIG. 8) being used." The API for accessing an accelerator (provided by accelerator manager 402 - see [0047]) translates ("maps") to transport-specific details. [0059] – “Referring now to FIG. 
6, in use, the computing platform 102 may execute a process 600 for sending an accelerator message to a hardware accelerator 212 or 312. The process 600 begins at operation 602, in which an application on the computing platform 102 determines a message to be sent to a hardware accelerator 212 or 312. [...] At operation 604, the application passes the command or function to the accelerator manager 402. In the illustrative embodiment, the application passes the command or function with use of an application programming interface such that the details of communication with the hardware accelerator 212 or 312 are hidden from the associated application."; [0062] – “Referring back to operation 606, if the accelerator manager 402 is to pass the message to a remote hardware accelerator 312, the process 600 proceeds to operation 612, in which the computing platform 102 generates a command capsule based on the message received from the application. [...] the command capsule may rearrange or otherwise reorganize the message in preparation for being sent to the accelerator sled 104. In some embodiments the command capsule may encapsulate the message in a protocol different from a protocol used by the message." The accelerator manager (“application”) generates a command capsule corresponding to a transport protocol used by the accelerator sled (“the one or more different transport protocols”), which may be different from the protocol used by the message from the application passed to the accelerator manager using the API.), wherein the application is implemented, at least in part, on a hardware accelerator ([0047] – “The accelerator manager 402 is configured to manage accelerators that an application executed by the processor 202 may interface with. 
In some embodiments, the accelerator manager 402 may implement an application programming interface for accessing an accelerator”; [0054] - "The accelerator manager 502 is configured to manage the hardware accelerators 312 on the accelerator sled 104 and to allow remote interfacing with the hardware accelerators 312 through the host fabric interface 310. The accelerator manager 502 may process message capsules received from and sent to the computing platform 102 and may, based on the content of the message capsules, execute the relevant necessary operations to interface with the hardware accelerators 312, such as reading data from the hardware accelerator 312, writing data to the hardware accelerator 312, executing commands on the hardware accelerator 312, getting and setting properties of the hardware accelerator 312, receiving and processing events or notifications from the acceleration device 312 (such as sending a message capsule to send an interrupt or set a semaphore on the computing platform 102), etc." The "application" corresponds to accelerator manager 402/502; accelerator manager 502 is implemented on the accelerator sled 104. The accelerator manager provides an API for an application to access an accelerator as indicated in [0047]. Additionally, [0046] states environment 400, including accelerator manager 402, may be embodied on any component(s) of computing platform 102, which may include local hardware accelerator 212 as shown in FIG. 2.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Sen to provide the benefit of seamless and transparent access to other computing resources (Sen: [0034]), and perform computing tasks faster and more efficiently (Sen: [0002] and [0016]).
This is a provisional nonstatutory double patenting rejection.
Claims 4, 21, and 30 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 6, 23, and 33 of copending Application No. 17/720,199 (the reference application) in view of Francini, and further in view of Hyder et al. (U.S. Patent No. 5,983,274), hereinafter Hyder. The claims of the instant application and the claims of the reference application are compared in Table 2 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claims 4 and 21, claims 6 and 23 of the reference application in view of Francini teach all the limitations except one or more drivers of a transport layer, the one or more drivers used to operate the first and second wireless computing resources. However, Hyder teaches one or more drivers of a transport layer (Claim 1 – “a protocol driver; a device driver; an integrating driver that interfaces with the protocol driver and the device driver using defined APIs”; Col. 2, lines 13-20 – “Because there are different types of transport protocols developed over time by different entities for different reasons, there may be different types of transport protocol drivers acting as software components running on a single host computer system in order to provide the necessary networking capabilities for a given installation. Some common transport protocols include TCP/IP, IPX, AppleTalk®, and others.”), the one or more drivers used to operate the first and second wireless computing resources (Col. 1, lines 60-62 – “link layer implemented by a network card device driver, and the transport and network layers implemented as a transport protocol driver”; Col. 7, line 56-Col. 8, line 3 – “For sending network data from the upper layers 106, the transport protocol driver 100 will allocate a packet data structure from the integrating component 102, fill the data structure with network information and control information according to the present invention, and send it down through the integrating component 102 to the network card device driver 104 for transmitting the network data on the network interface card 108. 
In like manner, for a packet received from the network interface card 108, the network card device driver 104 will allocate a packet data structure from the integrating component 102, fill it with the network data and control information according to the present invention, and send it through the integrating component 102 to the transport protocol driver 100 for communication to the upper layers 106.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Hyder to allow transport protocol drivers and network card drivers to be developed more efficiently and allow communication with any available transport protocol (Hyder: Col. 3, lines 45-65).
Regarding claim 30, claim 33 of the reference application in view of Francini teaches all the limitations except the API is to be called by the first wireless computing resource. However, Hyder teaches the API is to be called by the first wireless computing resource (Col. 1, lines 60-62 – “data link layer implemented by a network card device driver, and the transport and network layers implemented as a transport protocol driver”; Col. 7, lines 38-39 – “Application Programming Interface (API) is a set of subroutines provided by one software component”; Col. 10, lines 34-40 – “The transport protocol driver 100 then sends or transfers the packet to the integrating component 102 at step 140 by making a subroutine call […] the integrating component 102 will send or transfer the packet to the network card device driver at step 142”; Claim 1 – “protocol driver; a device driver; an integrating driver that interfaces with the protocol driver and the device driver using defined APIs”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Hyder to allow transport protocol drivers and network card drivers to be developed more efficiently and allow communication with any available transport protocol (Hyder: Col. 3, lines 45-65).
This is a provisional nonstatutory double patenting rejection.
Claims 14 and 32 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 10 and 26 of copending Application No. 17/720,199 (the reference application) in view of Francini, and further in view of Lebin et al. (U.S. Pub. No. 2021/0385252), hereinafter Lebin. The claims of the instant application and the claims of the reference application are compared in Table 2 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claims 14 and 32, claims 10 and 26 of the reference application in view of Francini teach all the limitations except wherein the API is embedded within another API. However, Lebin teaches wherein the API is embedded within another API ([0043] - "Each called API server may in turn call additional APIs, and this execution flow can be nested many levels deep." Called APIs may call other APIs.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Lebin to allow for the isolation of different functionalities of a complex API call to be split into many API calls (Lebin: [0043]).
This is a provisional nonstatutory double patenting rejection.
Claims 16, 25, and 29 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 10, 18, and 26 of copending Application No. 17/720,199 (the reference application) in view of Francini, and further in view of Young (U.S. Pub. No. 2021/0320850). The claims of the instant application and the claims of the reference application are compared in Table 2 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claim 16, claim 10 of the reference application in view of Francini teaches all the limitations except a network orchestrator configured to identify one or more transport profiles supported by the first wireless computing resource, wherein the network orchestrator is to deploy the second wireless computing resource with the first wireless computing resource. However, Young teaches a network orchestrator configured to identify one or more transport profiles supported by the first wireless computing resource ([0039] - “Service orchestration and transport path management section 305 may include a service orchestrator (SO) 325, an analytics engine 330, a network functions virtual orchestrator (NFVO) 335"; [0043] – “SO 325 may generate and send a message to SIDC 345 that instructs SIDC 345 to identify one or more network infrastructures that are candidates for relocating the current transport path to maintain the SLA for the application service. In one implementation, the message may include service requirements including SLA requirements and/or service profiles associated with the application service."; [0044] – “SIDC 345 may include an infrastructure catalog 370 that stores network infrastructure profiles that provide transport paths for a corresponding particular service profile. In one implementation, infrastructure catalog 370 obtains the network infrastructure profiles from an inventory database 375 which orders the network infrastructure profiles based on a deployment preference value associated with each of the network infrastructure profiles." 
Network infrastructure profiles of network service infrastructures (implemented across 5G-NR "wireless computing resources" such as base stations in RAN 120 - see [0019] and [0026]) provide transport paths, making them "transport profiles".), wherein the network orchestrator is to deploy the second wireless computing resource with the first wireless computing resource ([0044] – “In one implementation, deployment preference values include abstracted "distances" between nodes in a transport path associated with a network infrastructure. For example, PCE 350 may calculate a logical "distance" which may be a function of latency, inversely proportional to bandwidth, inversely proportional to reliability, etc. In one implementation, the calculated distances are based on the build of the network (e.g., size of the circuit), the current usage (e.g., based on monitoring), and expected usage (e.g., based on projected use of yet-to-be deployed services). [...] PCE 350 provides the alternative network service infrastructures to SIDC 345 as candidates for maintaining service availability at SLA requirements in response to a detected and/or projected outage and/or congested network conditions."; [0046]-[0047] - "SIDC 345 may select one or more alternative network service infrastructures and/or sub-infrastructures based on some or all of the above data. [...] NFVO 335 may, based on instructions received from SO 325, deploy the alternative network service infrastructures and/or sub-infrastructures to orchestrate within data transport section 310." The alternative network service infrastructures ("wireless computing resources") that meet the requirements in the profiles are deployed together.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Young to ensure that service availability is maintained in a 5G wireless network (Young: [0044]).
Regarding claims 25 and 29, claims 18 and 26 of the reference application in view of Francini teach all the limitations except wherein the first wireless computing resource has been configured with a transport profile supported by the second wireless computing resource. However, Young teaches the first wireless computing resource has been configured with a transport profile supported by the second wireless computing resource ([0040] – “SLA database 360 may store and maintain service requirement profiles for network customers (e.g., UE 110). Each service requirement profile describes a particular network customer's network service performance requirements.”; [0044] – “SIDC 345 may include an infrastructure catalog 370 that stores network infrastructure profiles that provide transport paths for a corresponding particular service profile. In one implementation, infrastructure catalog 370 obtains the network infrastructure profiles from an inventory database 375 which orders the network infrastructure profiles based on a deployment preference value associated with each of the network infrastructure profiles. […] In one implementation, deployment preference values include abstracted "distances" between nodes in a transport path associated with a network infrastructure. For example, PCE 350 may calculate a logical "distance" which may be a function of latency, inversely proportional to bandwidth, inversely proportional to reliability, etc. In one implementation, the calculated distances are based on the build of the network (e.g., size of the circuit), the current usage (e.g., based on monitoring), and expected usage (e.g., based on projected use of yet-to-be deployed services). [...] 
PCE 350 provides the alternative network service infrastructures to SIDC 345 as candidates for maintaining service availability at SLA requirements in response to a detected and/or projected outage and/or congested network conditions."; [0045] – “SIDC 345 may identify infrastructure design parameters associated with physical and virtual components of a particular network infrastructure. [...] The configuration of the multiple transport networks may include design parameters that detail the physical and virtual configuration of each transport network 320 and how they interconnect."; [0046]-[0047] – “SIDC 345 may select one or more alternative network service infrastructures and/or sub-infrastructures based on some or all of the above data. [...] NFVO 335 may, based on instructions received from SO 325, deploy the alternative network service infrastructures and/or sub-infrastructures to orchestrate within data transport section 310. A transport controller may, based on the instructions from SO 325, initiate configuration of transport networks 320 to support the alternative network service infrastructures and/or sub-infrastructures." Network service infrastructures (implemented across "computing resources" such as base stations in the wireless network RAN 120 – see [0008], [0019], and [0026]), to which UE 110 wirelessly connects, are configured and deployed to provide transport paths, making them "transport profiles".).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of Young to ensure that service availability is maintained in a 5G wireless network (Young: [0044]).
This is a provisional nonstatutory double patenting rejection.
Claim 34 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 27 of copending Application No. 17/720,199 (the reference application) in view of Francini, and further in view of ROY et al. (U.S. Pub. No. 2023/0044165), hereinafter ROY, and DONG et al. (U.S. Pub. No. 2022/0075731), hereinafter DONG. The claims of the instant application and the claims of the reference application are compared in Table 2 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claim 34, claim 27 of the reference application in view of Francini teaches all the limitations except the API does not cause a reference counter to decrement and the storage is further to be used as part of a zero copy buffer method. However, ROY teaches the storage is further to be used as part of a zero copy buffer method ([0048] - "one or more data transfers may potentially be implemented, partially or entirely, with a zero-copy transfer. In some embodiments, performing a zero copy transfer may involve, for example, transferring data between a target and a memory of a client using a memory access protocol (e.g., RDMA). For example, in some embodiments, one or more data transfers may be implemented with a zero-copy transfer by transferring data directly to a memory of a receiving device (e.g., memory 120 illustrated in FIG. 1 and/or buffer 251 illustrated in FIG. 2).").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of ROY to allow for the transfer of data with relatively low overhead and/or latency (ROY: [0021]).
Claim 27 of the reference application in view of Francini and ROY teach all the limitations except the API does not cause a reference counter to decrement. However, DONG teaches the API does not cause a reference counter to decrement ([0057] – “counter logic 210 is configured to perform conditional counter operations on counters in buckets of arrays for the cache(s) being accessed. Conditional counter operations include, without limitation, incrementing a counter, decrementing a counter, and/or maintaining a value of a counter.").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,199 to incorporate the teachings of DONG to allow accesses to memory to be tracked and allow allocated memory to be used more efficiently (DONG: [0024]-[0025]).
This is a provisional nonstatutory double patenting rejection.
Table 2: Claim comparison of the instant application with reference application 17/720,199
Claim 1
Instant application (17/720,201): One or more processors, comprising: circuitry to, in response to an application programming interface (API) call: cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols, the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols; and cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols.
Reference application (17/720,199): One or more processors comprising: circuitry to perform an application programming interface (API) to: receive an identifier of a buffer, the buffer to be used to transfer data between a first computing resource using a first data transport protocol and a second computing resource using a second, different data transport protocol, wherein the second computing resource is to perform one or more wireless communication operations using data stored in the buffer; and prevent, using the identifier, deallocation of the memory allocated to the buffer.
Corresponding limitations:
Instant: “One or more processors, comprising: circuitry to, in response to an application programming interface (API) call:”
Reference: “One or more processors comprising: circuitry to perform an application programming interface (API) to:”
Instant: “the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols;”
Reference: “the buffer to be used to transfer data between a first computing resource using a first data transport protocol and a second computing resource using a second, different data transport protocol, wherein the second computing resource is to perform one or more wireless communication operations using data stored in the buffer;”
Claim 2
Instant application (17/720,201): The one or more processors of claim 1, wherein performance of the API causes the first wireless computing resource to: send the data stored in the storage to the second wireless computing resource; and to decrement a reference counter used to indicate when to release the data stored in the storage.
Reference application (17/720,199): The processor of claim 1, wherein: the API is further to initialize a reference counter to indicate when to release the buffer.
Claim 3
Instant application (17/720,201): The one or more processors of claim 1, wherein the information is to be transferred based at least on a transport layer used to associate a function of the one or more transport protocols with a corresponding function of the one or more different transport protocols.
The processor of claim 1, wherein:
the API uses one or more functions of the first data transport protocol to use one or more libraries to map the one or more functions to one or more functions of the second, different data transport protocol; and
the one or more functions of the first and second data transport protocols to each cause, at least in part, a buffer to be allocated as part of their respective data transport protocol.
4
The one or more processors of claim 1, wherein the circuitry is further to perform the API based at least on one or more drivers of a transport layer, the one or more drivers used to operate the first and second wireless computing resources.
See Claim 6: The processor of claim 1, wherein the API is performed, at least in part, by a third computing resource of a transport layer.
5
The one or more processors of claim 1, wherein performing the API further causes the first wireless computing resource to perform one or more operations associated with the one or more different transport protocols.
The processor of claim 1, wherein performing the API further causes the first computing resource to cause a third computing resource to perform one or more wireless communication operations associated with one or more functions of the second, different data transport protocol.
6
The one or more processors of claim 1, wherein at least one of the one or more transport protocols is a transport layer protocol; and to cause the data to be provided from the storage comprises sending the data from the storage.
See Claim 1
7
The one or more processors of claim 1, wherein performing the API is further to cause the first wireless computing resource to perform an operation related to the one or more different transport protocols based at least on a corresponding operation of the one or more transport protocols.
See Claim 5: The processor of claim 1, wherein performing the API further causes the first computing resource to cause a third computing resource to perform one or more wireless communication operations associated with one or more functions of the second, different data transport protocol.
8
The one or more processors of claim 1, wherein:
the first and second wireless computing resources are associated with a fifth generation new radio (5G-NR) network protocol stack that includes a first layer, a second layer, and a third layer;
the first wireless computing resource associated with the first layer;
the second wireless computing resource associated with the second layer;
the API associated with the third layer; and
the third layer is located between the first and second layers.
The processor of claim 1, wherein:
the first and second computing resources are associated with a 5G-NR network protocol stack that includes a first layer, a second layer, and a third layer;
the first computing resource associated with the first layer;
the second computing resource associated with the second layer;
the API associated with the third layer; and
the third layer located between the first and second layers.
9
The one or more processors of claim 1, wherein:
the API is further to transfer information between a first layer and a second layer corresponding to a fifth generation new radio (5G-NR) network protocol, wherein the second wireless computing resource associated with the second layer requests an operation associated with the one or more different transport protocols; and
performance of the API causes the first wireless computing resource associated with the first layer to cause performance of the operation associated with the one or more different transport protocols.
The processor of claim 1, wherein:
the API is further to transfer information between a first layer and a second layer corresponding to a 5G-NR network protocol, wherein the first computing resource is associated with the first layer and requests performance of the wireless communication operation associated with the second, different data transport protocol; and
performance of the API causes the second computing resource to perform the operation.
10
A system, comprising memory to store instructions that, as a result of execution by one or more processors, cause the system, in response to an application programming interface (API) call, to:
cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols, the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols; and
cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols.
A system, comprising memory to store instructions that, as a result of execution by one or more processors, cause the system to:
perform an application programming interface (API) to:
receive an identifier of a buffer, the buffer to be used to transfer data between a first computing resource using a first data transport protocol and a second computing resource using a second, different data transport protocol, wherein the second computing resource is to perform one or more wireless communication operations using data stored in the buffer; and
prevent, using the identifier, deallocation of the memory allocated to the buffer.
11
The system of claim 10, wherein performance of the API is based, at least in part, on an identification of the one or more different transport protocols associated with the first wireless computing resource.
See Claim 10
12
The system of claim 10, wherein:
the information is to be transferred between a first layer and a second layer of a fifth generation new radio (5G-NR) network protocol stack; and
the first layer and second layer are each associated with a different transport protocol.
See Claim 17: The system of claim 10, wherein:
the first and second computing resources are associated with a 5G-NR network protocol stack that includes a first layer, a second layer, and a third layer;
the first computing resource associated with the first layer;
the second computing resource associated with the second layer;
the API associated with the third layer; and
the third layer located between the first and second layers.
From Claim 10: “to transfer data between a first computing resource using a first data transport protocol and a second computing resource using a second, different data transport protocol”
13
The system of claim 10, wherein an application associated with the first wireless computing resource calls the API to obtain information regarding the one or more different transport protocols supported by the second wireless computing resource.
See Claim 10
14
The system of claim 10, wherein the API is embedded within another API.
See Claim 10
15
The system of claim 10, wherein the information is to be transferred using an application that causes calls from one layer associated with one transport protocol to perform operations in a second layer associated with a second transport protocol.
See Claim 34: The method of claim 26, wherein:
performing the API is further to transfer information between a first layer and a second layer corresponding to a 5G-NR network protocol, wherein the first computing resource is associated with the first layer and requests performance of the wireless communication operation associated with the second, different data transport protocol; and
performance of the API causes the second computing resource to perform the wireless communication operation.
16
The system of claim 10, further comprising:
a network orchestrator configured to identify one or more transport profiles supported by the first wireless computing resource, wherein the network orchestrator is to deploy the second wireless computing resource with the first wireless computing resource.
See Claim 10
17
The system of claim 10, wherein the first wireless computing resource is a virtual device.
See Claim 10
18
A non-transitory machine-readable medium having stored thereon one or more instructions, which if performed by one or more processors, cause one or more processors to, in response to an application programming interface (API) call, at least:
cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols, the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols; and
cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols.
A machine-readable medium having stored thereon one or more instructions, which if performed by one or more processors, cause one or more processors to at least:
perform an application programming interface (API) to:
receive an identifier of a buffer, the buffer to be used to transfer data between a first computing resource using a first data transport protocol and a second computing resource using a second, different data transport protocol, wherein the second computing resource is to perform one or more wireless communication operations using data stored in the buffer; and
prevent, using the identifier, deallocation of the memory allocated to the buffer
19
The non-transitory machine-readable medium of claim 18, wherein performance of the API is based at least on a transport configuration associated with the first wireless computing resource.
See Claim 18: “perform an application programming interface (API) to: […] a first computing resource using a first data transport protocol”
20
The non-transitory machine-readable medium of claim 18, wherein:
the information is to be transferred between a first layer and a second layer of a fifth generation new radio (5G-NR) network protocol stack using a third layer between the first and second layers that is based, at least in part, on multiple transport protocols.
See Claim 25: The machine-readable medium of claim 18, wherein:
the first and second computing resources are associated with a 5G-NR network protocol stack that includes a first layer, a second layer, and a third layer;
the first computing resource associated with the first layer;
the second computing resource associated with the second layer;
the API associated with the third layer; and
the third layer located between the first and second layers
From Claim 18: “perform an application programming interface (API) to: […] to transfer data between a first computing resource using a first data transport protocol and a second computing resource using a second, different data transport protocol”
21
The non-transitory machine-readable medium of claim 18, wherein the one or more processors are further to perform the API based at least on one or more drivers of a transport layer, the one or more drivers used to operate the first and second wireless computing resources.
See Claim 23: The machine-readable medium of claim 18, wherein the API is performed, at least in part, by a third computing resource of a transport layer.
22
The non-transitory machine-readable medium of claim 18, wherein the one or more processors are one or more graphics processing units (GPUs).
See Claim 18
23
The non-transitory machine-readable medium of claim 18, wherein:
the first wireless computing resource is to call the API; and
performance of the API causes, at least in part, the first wireless computing resource to transfer information to the second wireless computing resource without modification to the first wireless computing resource.
See Claim 34: The method of claim 26, wherein:
performing the API is further to transfer information between a first layer and a second layer corresponding to a 5G-NR network protocol, wherein the first computing resource is associated with the first layer and requests performance of the wireless communication operation associated with the second, different data transport protocol; and
performance of the API causes the second computing resource to perform the wireless communication operation.
24
The non-transitory machine-readable medium of claim 18, wherein:
the storage selected is an allocated buffer; and
the API is further to decrement a reference counter associated with the allocated buffer; and
if the decremented reference counter holds a value of zero, the allocated buffer is deselected.
See Claim 19: The machine-readable medium of claim 18, wherein the API is further to initialize a reference counter to indicate when to release the buffer.
From Claim 18: “the buffer to be used to transfer data […] memory allocated to the buffer”
25
The non-transitory machine-readable medium of claim 18, wherein the first wireless computing resource has been configured with a transport profile supported by the second wireless computing resource.
See Claim 18
26
A method comprising:
in response to an application programming interface (API) call:
causing data to be stored in storage of a first wireless computing resource according to one or more transport protocols, the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols; and
causing the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols.
A method comprising:
perform an application programming interface (API) to:
receive an identifier of a buffer, the buffer to be used to transfer data between a first computing resource using a first data transport protocol and a second computing resource using a second, different data transport protocol, wherein the second computing resource is to perform one or more wireless communication operations using data stored in the buffer; and
prevent, using the identifier, deallocation of the memory allocated to the buffer.
27
The method of claim 26, wherein performance of the API is based at least on a set of associations that correlate one or more functions of the one or more transport protocols with one or more functions of the one or more different transport protocols.
See Claim 28: The method of claim 26, wherein: the API uses one or more functions of the first data transport protocol to use one or more libraries to map the one or more functions to one or more functions of the second, different data transport protocol; and
the one or more functions of the first and second data transport protocols to each cause, at least in part, a buffer to be allocated as part of their respective data transport protocol.
28
The method of claim 26, further comprising identifying the one or more different transport protocols.
See Claim 26
29
The method of claim 26, further comprising:
configuring the first wireless computing resource with transport profiles supported by the second wireless computing resource.
See Claim 26
30
The method of claim 26, wherein the API is to be called by the first wireless computing resource; and
is stored as part of a layer different from another layer comprising the first wireless computing resource.
See Claim 33: The method of claim 26, wherein:
the first and second computing resources are associated with a 5G-NR network protocol stack that includes a first layer, a second layer, and a third layer;
the first computing resource associated with the first layer;
the second computing resource associated with the second layer;
the API associated with the third layer; and
the third layer located between the first and second layers.
31
The method of claim 26, further comprising transferring the information using an application that maps the API to an operation related to an operation of the one or more different transport protocols, wherein the application is implemented, at least in part, on a hardware accelerator.
See Claim 26
32
The method of claim 26, wherein the API is embedded within another API.
See Claim 26
33
The method of claim 26, wherein:
the information is to be transferred between two layers of a fifth generation new radio (5G-NR) network protocol stack, wherein one layer is associated with the one or more transport protocols and the other layer is associated with the one or more different transport protocols; and
the API is located in a third layer.
The method of claim 26, wherein:
the first and second computing resources are associated with a 5G-NR network protocol stack that includes a first layer, a second layer, and a third layer;
the first computing resource associated with the first layer;
the second computing resource associated with the second layer;
the API associated with the third layer; and
the third layer located between the first and second layers.
From Claim 26: “to transfer data between a first computing resource using a first data transport protocol and a second computing resource using a second, different data transport protocol”
34
The method of claim 26, wherein:
performance of the API does not cause a reference counter to decrement;
the reference counter is associated with the storage; and
the storage is further to be used as part of a zero copy buffer method.
See Claim 27: The method of claim 26, wherein the API is further to initialize a reference counter to indicate when to release the buffer.
35
The method of claim 26, wherein the information includes different messages each associated with various transport protocols; and
the information is to be transferred using the API.
The method of claim 26, wherein performance of the API further uses information comprising different messages each associated with a different data transport protocol; and
the information is to be transferred between the first and second computing resources using one transport layer.
From Claim 26: “perform an application programming interface (API) to: […] to transfer data”
Claims 1-3, 5-12, 14-20, 22-26, 28-29, 31-33, and 35 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-2, 5-10, 12, 14-20, 22-26, 31-33, and 35 of copending Application No. 17/720,203 (the reference application) in view of Francini et al. (U.S. Pub. No. 2018/0159965), hereinafter Francini. The claims of the instant application and the claims of the reference application are compared in Table 3 below, with limitations not taught by the reference claims in bold. Unless indicated otherwise, the limitations of the claims of the instant application have been compared to the limitations of the claim of the same number in the reference application.
Regarding claims 1, 10, 18, and 26 of the instant application, claims 1, 10, 18 and 26 of the reference application substantially recite all the limitations of the claims except cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols and cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols. However, Francini teaches to cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols (Fig. 1, buffer 123 = “storage”; FIG. 2B, transport layer 224S = “first wireless computing resource”; [0051] – the transport layer 224S may use protocols such as TCP; [0052] – “The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120.”; [0022] – “The buffer 123 is configured to store, in various forms as may be provided or supported at various communication layers of communication protocol stack 124, both data communicated or intended for communication between the MHD 110 and the WAD 120 and data communicated or intended for communication between the WAD 120 and the server 130.”) and cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols (FIG. 2B, link layer 222C connected to MHD 110; [0052] – “The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222c of the WAD 120. 
The reliable link layer 222C of the WAD 120 receives the primitive messages, places the primitive messages into link layer data structures supported by the reliable link layer 222C […] where the reliable link layer 222C of the WAD 120 is provided using LTE, the reliable link layer 222C may place the primitive messages into PDCP PDUs”; [0050] – “The communication primitives of the client socket API 215 may include primitive rules for controlling manipulation of data (e.g., encapsulation and decapsulation of application data that is sourced by the application layer 216 for transmission toward the server 130, encapsulation and decapsulation of application data that is sourced by server 130 and that is intended for delivery to the application layer 216 of the MHD 110, or the like), primitive messages and associated primitive message formats that are configured to transport data of the application layer 216, or the like, as well as various combinations thereof.” The primitive messages used by the reliable link layer 222C adhere to rules and a format for transporting data (i.e., a “transport protocol”; [0022] – “The buffer 123 is configured to store, in various forms as may be provided or supported at various communication layers of communication protocol stack 124, both data communicated or intended for communication between the MHD 110 and the WAD 120 and data communicated or intended for communication between the WAD 120 and the server 130.” The application data (data communicated between the WAD 120 and server 130 which was stored in buffer 123, e.g., when it is received via the TCP socket at the WAD 120) is placed into the primitive messages, which adhere to a transport protocol as described in [0050], and provided to the reliable link layer 222C.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,203 to incorporate the teachings of Francini to improve communication between devices using transport layer connections by supporting use of a networked transport layer socket with an API that communicates data between the networked transport layer socket and the application layer of a communication device via the link layer and physical layer of a communication protocol stack (Francini: [0006], [0063]-[0065] and [0068]).
Regarding claim 2 of the instant application, claim 24 of the reference application teaches all the limitations except that the API causes the first wireless computing resource to: send the data stored in the storage to the second wireless computing resource. However, Francini teaches the API causes the first wireless computing resource to: send the data stored in the storage to the second wireless computing resource ([0052] – “The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120. The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222C of the WAD 120.”; [0022] – “The buffer 123 is configured to store, in various forms as may be provided or supported at various communication layers of communication protocol stack 124, both data communicated or intended for communication between the MHD 110 and the WAD 120 and data communicated or intended for communication between the WAD 120 and the server 130.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,203 to incorporate the teachings of Francini to improve communication between devices using transport layer connections by supporting use of a networked transport layer socket with an API that communicates data between the networked transport layer socket and the application layer of a communication device via the link layer and physical layer of a communication protocol stack (Francini: [0006], [0063]-[0065] and [0068]). Additionally, it would have been obvious to one of ordinary skill in the art to have applied the functions of the non-transitory machine-readable medium of claim 24 to the processors of claim 2.
Regarding claim 6, claim 1 of Application 17/720,203 in view of Francini teaches substantially all of the limitations except that at least one of the one or more transport protocols is a transport layer protocol; and to cause the data to be provided from the storage comprises sending the data from the storage. However, Francini teaches at least one of the one or more transport protocols is a transport layer protocol (Francini: [0052] – “The server 130 transmits the application data to the WAD 120 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection). The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120.”); and to cause the data to be provided from the storage comprises sending the data from the storage ([0052] – “The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120. The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222C of the WAD 120. The reliable link layer 222C of the WAD 120 receives the primitive messages”. The reliable link layer 222C receives the application data in the primitive messages, i.e., the client socket API sends the application data (“the data from the storage”), within primitive messages, to the reliable link layer 222C.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,203 to incorporate the teachings of Francini to improve communication between devices using transport layer connections by supporting use of a networked transport layer socket with an API that communicates data between the networked transport layer socket and the application layer of a communication device via the link layer and physical layer of a communication protocol stack (Francini: [0006], [0063]-[0065] and [0068]).
Regarding claim 11, it would have been obvious to one of ordinary skill in the art to have applied the functions performed by the processor of claim 6 to the system of claim 11.
Regarding claim 28, it would have been obvious to one of ordinary skill in the art to have applied the functions performed by the processor of claim 6 to the method of claim 28.
Regarding claim 29, it would have been obvious to one of ordinary skill in the art to have applied the functions of the non-transitory machine-readable medium of claim 25 to the method of claim 29.
Claims 3, 5, 7-9, 12, 14-17, 19-20, 22-25, 31-33, and 35 recite additional limitations that are substantially the same or identical to limitations recited in claims 1-2, 5-9, 12, 14-17, 19-20, 22-25, 31-33, and 35 of the reference application as indicated in Table 3, and are rejected as well.
This is a provisional nonstatutory double patenting rejection.
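For convenience of the applicant, the function-mapping arrangement recited in instant claims 3 and 27 (a set of associations, or library-based mapping, that correlates one or more functions of the one or more transport protocols with one or more functions of the one or more different transport protocols) may be illustrated by the following minimal sketch. All protocol and function names in the sketch are hypothetical and are not drawn from the claims or from the cited references.

```python
# Hypothetical sketch (all names invented for illustration): a transport
# layer that uses a lookup table of associations to map a function of a
# first transport protocol to the corresponding function of a second,
# different transport protocol.

def proto_a_send(data: bytes):
    # function of the first transport protocol
    return ("A-frame", data)

def proto_b_send(data: bytes):
    # corresponding function of the second, different transport protocol
    return ("B-frame", data)

# The "set of associations" correlating functions across the two protocols.
FUNCTION_MAP = {proto_a_send: proto_b_send}

def transport_layer_call(func, data: bytes):
    # Perform the API by invoking the correlated function of the other
    # protocol instead of the one named by the caller.
    return FUNCTION_MAP[func](data)

assert transport_layer_call(proto_a_send, b"x") == ("B-frame", b"x")
```

The sketch merely restates the mapping limitation in executable form; it is not asserted to reflect the actual implementation of any cited reference.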
Claims 13 and 27 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 13 and 26 of copending Application No. 17/720,203 (the reference application) in view of Francini, and further in view of Sen et al. (U.S. Pub. No. 2020/0218684), hereinafter Sen. The claims of the instant application and the claims of the reference application are compared in Table 3 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claim 13 of the instant application, claim 13 of the reference application in view of Francini teaches all the limitations except that an application calls the API to obtain information regarding the one or more different transport protocols. However, Sen teaches an application ([0079] – initiator 822) associated with the first wireless computing resource (FIG. 8, computing platform 102) calls the API ([0059], [0085] and [0088] – Sending a message, e.g., the connection establishment request message, to a target hardware accelerator resource involves an application, e.g., the initiator, passing the message to an API (“calls the API”) as described in process 600.) to obtain information regarding the one or more different transport protocols ([0089] – “In response to the connection establishment request message for the primary connection 831, at operation 906, the initiator 822 receives a connection establishment response message for the primary connection 831 from the target accelerator resource(s). For example, where an RDMA-based protocol is used for the primary connection 831, such as RoCEv2, the target accelerator resource(s) may encapsulate an RDMA acknowledgement (ACK) packet within an Ethernet/IP/UDP packet (including either IPv4 or IPv6) and including suitable destination and source addresses based on the connection establishment request message. In embodiments, the connection establishment response message for the primary connection 831 includes a session ID, which may be included in the header or payload section of the message. The session ID is generated by the target accelerator resource(s) and is discussed in more detail infra. Other suitable information may be included in the connection establishment response message, such as an accelerator resource identifier and/or other protocol specific information.”) supported by the second wireless computing resource (FIG. 8, accelerator sled 104 with accelerator(s) 312 which is the “target hardware accelerator resource(s)” as described in [0033]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,203 to incorporate the teachings of Sen to use an initiator application that establishes multiple connections with a remote accelerator using different transport protocols over a wireless network, thereby achieving high-availability goals even when one transport protocol is temporarily unusable (Sen: [0073]).
Regarding claim 27 of the instant application, claim 26 of the reference application teaches all the limitations except wherein performance of the API is based at least on a set of associations that correlate one or more functions of the one or more transport protocols with one or more functions of the one or more different transport protocols. However, Sen teaches performance of the API is based at least on a set of associations that correlate one or more functions of the one or more transport protocols with one or more functions of the one or more different transport protocols ([0033] – “The accelerator application interface includes an accelerator library used to access accelerator resources. […] The accelerator library uses device libraries to abstract device-specific details into an API that provides the necessary translation to device-specific interfaces via corresponding device drivers. According to various embodiments, the accelerator library also uses individual transport definitions to abstract transport-specific details into an API that provides the necessary translation to transport-specific interfaces via corresponding transport protocols. The device library at the computing platform 102 is used to connect the application(s) to one or more local devices, such as a local hardware accelerator 212. Similarly, the transport definition is used to connect the application (e.g., application 820 of FIG. 8) to a remote hardware accelerator 312 resident in the accelerator sled 104. The transport definition allows applications to send commands and data to a target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) and receive data from the target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) without requiring knowledge of the underlying transport protocols (e.g., transport layers 823, 824 of FIG. 8) being used. 
In some embodiments, bindings are used to link the device libraries and transport definitions with device drivers and transport protocols, respectively. Bindings involve a process or technique of connecting two or more data elements or entities together. Bindings allows the accelerator library to be bound to multiple protocols, including one or more IX protocols and one or more transport protocols.”; [0034] – “The transport definition is independent of the transport protocols that are used to carry data to remote accelerator resources. […] Each of the transport layers (e.g., transport layers 823, 824 of FIG. 8) may include primitives such as read/write data from/to device; process device command; get/set device properties; and event subscription and notification. The transport layers (e.g., transport layers 823, 824 of FIG. 8) may also have mechanisms to allow scalable and low latency communication, such as […] protocol independent format definition allowing for multiple protocol bindings.” The accelerator library comprising an API uses bindings (“a set of associations that correlate” the transport protocols) which connect the accelerator library to multiple different protocols, and specifically connect the transport definition to the multiple different transport protocols.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,203 to incorporate the teachings of Sen to use an API that abstracts details of the underlying transport protocols and translates to a specific transport protocol to provide the benefit of seamless and transparent access to remote resources (Sen: [0034]).
This is a provisional nonstatutory double patenting rejection.
Claims 4, 21, and 30 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 18, and 8 of copending Application No. 17/720,203 (the reference application) in view of Francini, and further in view of Hyder et al. (U.S. Patent No. 5,983,274), hereinafter Hyder. The claims of the instant application and the claims of the reference application are compared in Table 3 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claims 4 and 21, claims 1 and 18 of the reference application in view of Francini teach all the limitations except perform the API based at least on one or more drivers of a transport layer, the one or more drivers used to operate the first and second wireless computing resources. However, Hyder teaches to perform the API based at least on one or more drivers of a transport layer (Claim 1 – “a protocol driver; a device driver; an integrating driver that interfaces with the protocol driver and the device driver using defined APIs”; Col. 2, lines 13-20 – “Because there are different types of transport protocols developed over time by different entities for different reasons, there may be different types of transport protocol drivers acting as software components running on a single host computer system in order to provide the necessary networking capabilities for a given installation. Some common transport protocols include TCP/IP, IPX, AppleTalk®, and others.”), the one or more drivers used to operate the first and second wireless computing resources (Col. 1, lines 60-62 – “link layer implemented by a network card device driver, and the transport and network layers implemented as a transport protocol driver”; Col. 2, lines 20-24 – “Each transport protocol driver will communicate with one or more individual network card device drivers in order to send network data over a communications network and receive incoming packets from the communications network.”; Col. 1, lines 30-37 – “Data that is shared between computers is sent in packets across the physical network connection and read by destination computers. […] As used herein, the term "network data" refers to data or information that is actually transmitted over the communications network between different computers.”; Col. 6, lines 19-20 – “cellular, and other wireless technologies, etc. provide ripe opportunities for exploiting the present invention.”; Col. 7, line 56-Col. 8, line 3 – “For sending network data from the upper layers 106, the transport protocol driver 100 will allocate a packet data structure from the integrating component 102, fill the data structure with network information and control information according to the present invention, and send it down through the integrating component 102 to the network card device driver 104 for transmitting the network data on the network interface card 108. In like manner, for a packet received from the network interface card 108, the network card device driver 104 will allocate a packet data structure from the integrating component 102, fill it with the network data and control information according to the present invention, and send it through the integrating component 102 to the transport protocol driver 100 for communication to the upper layers 106.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,203 to incorporate the teachings of Hyder to allow transport protocol drivers and network card drivers to be developed more efficiently and allow communication with any available transport protocol (Hyder: Col. 3, lines 45-65).
Regarding claim 30, claim 8 of the reference application in view of Francini teaches all the limitations except wherein the API is to be called by the first wireless computing resource. However, Hyder teaches wherein the API is to be called by the first wireless computing resource (Col. 1, lines 60-62 – “data link layer implemented by a network card device driver, and the transport and network layers implemented as a transport protocol driver”; Col. 7, lines 38-39 – “Application Programming Interface (API) is a set of subroutines provided by one software component”; Col. 10, lines 34-40 – “The transport protocol driver 100 then sends or transfers the packet to the integrating component 102 at step 140 by making a subroutine call […] the integrating component 102 will send or transfer the packet to the network card device driver at step 142”; Claim 1 – “protocol driver; a device driver; an integrating driver that interfaces with the protocol driver and the device driver using defined APIs”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,203 to incorporate the teachings of Hyder to allow transport protocol drivers and network card drivers to be developed more efficiently and allow communication with any available transport protocol (Hyder: Col. 3, lines 45-65). Additionally, it would have been obvious to one of ordinary skill in the art to have applied the functions of the processor of claim 8 to the method of claim 30.
This is a provisional nonstatutory double patenting rejection.
Claim 34 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 34 of copending Application No. 17/720,203 (the reference application) in view of Francini, and further in view of DONG et al. (U.S. Pub. No. 2022/0075731), hereinafter DONG. The claims of the instant application and the claims of the reference application are compared in Table 3 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claim 34, claim 34 of the reference application in view of Francini teaches all the limitations except the API does not cause a reference counter to decrement. However, DONG teaches the API does not cause a reference counter to decrement ([0057] – “counter logic 210 is configured to perform conditional counter operations on counters in buckets of arrays for the cache(s) being accessed. Conditional counter operations include, without limitation, incrementing a counter, decrementing a counter, and/or maintaining a value of a counter.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,203 to incorporate the teachings of DONG to allow accesses to memory to be tracked and allow allocated memory to be used more efficiently (DONG: [0024]-[0025]).
This is a provisional nonstatutory double patenting rejection.
Table 3: Claim comparison of the instant application and reference application 17/720,203
Claim
17/720,201 (instant application)
17/720,203
1
One or more processors, comprising: circuitry to, in response to an application programming interface (API) call:
cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols, the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols; and
cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols.
One or more processors comprising: circuitry to, in response to an application programming interface (API) call:
identify a function from a library of a data transport protocol to cause a buffer to be deallocated into a pool from which the buffer was allocated; and
cause the function to be performed by a wireless network computing resource, where the buffer was allocated to transfer information between the wireless network computing resource using the data transport protocol and a different wireless network computing resource using a different data transport protocol.
“One or more processors, comprising: circuitry to, in response to an application programming interface (API) call:”
“One or more processors, comprising: circuitry to, in response to an application programming interface (API) call:”
“the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols;”
“the buffer was allocated to transfer information between the wireless network computing resource using the data transport protocol and a different wireless network computing resource using a different data transport protocol.”
2
The one or more processors of claim 1, wherein performance of the API causes the first wireless computing resource to:
send the data stored in the storage to the second wireless computing resource; and
to decrement a reference counter used to indicate when to release the data stored in the storage.
See Claim 24: The non-transitory machine-readable medium of claim 18, wherein:
memory selected is an allocated buffer; and
performance of the API causes the wireless network computing resource to decrement a reference counter associated with the allocated buffer; and
if the decremented reference counter holds a value of zero, the allocated buffer is deselected.
3
The one or more processors of claim 1, wherein the information is to be transferred based at least on a transport layer used to associate a function of the one or more transport protocols with a corresponding function of the one or more different transport protocols.
See Claim 2: The one or more processors of claim 1, wherein the circuitry is to perform the API by using a transport layer to map commands between wireless computing resources.
From Claim 1: “the buffer was allocated to transfer information between the wireless network computing resource using the data transport protocol and a different wireless network computing resource using a different data transport protocol.”
4
The one or more processors of claim 1, wherein the circuitry is further to perform the API based at least on one or more drivers of a transport layer, the one or more drivers used to operate the first and second wireless computing resources.
See Claim 1
5
The one or more processors of claim 1, wherein performing the API further causes the first wireless computing resource to perform one or more operations associated with the one or more different transport protocols.
The one or more processors of claim 1, wherein performing the API further causes resources the wireless network computing resource to perform one or more operations associated with a plurality of information transmission types associated with different wireless network computing resource.
From Claim 1: “a different wireless network computing resource using a different data transport protocol.”
For clarity of the record, one of ordinary skill in the art would recognize the “different data transport protocol” recited in claim 1 to be one of the “information transmission types” recited in claim 5.
6
The one or more processors of claim 1, wherein at least one of the one or more transport protocols is a transport layer protocol; and
to cause the data to be provided from the storage comprises sending the data from the storage.
See claim 1
7
The processor one or more processors of claim 1, wherein performing the API is further to cause the first wireless computing resource to perform an operation related to the one or more different transport protocols based at least on a corresponding operation of the one or more transport protocols.
The one or more processors of claim 1, wherein performing the API is further to cause of the wireless network computing resource to perform an operation related to a first information transmission type based, at least in part, on a corresponding operation related to a second information transmission type.
For clarity of the record, one of ordinary skill in the art would recognize the “data transport protocol” and “different data transport protocol” recited in claim 1 to be the first and second “information transmission type” recited in claim 7.
8
The one or more processors of claim 1, wherein:
the first and second wireless computing resources are associated with a fifth generation new radio (5G-NR) network protocol stack that includes a first layer, a second layer, and a third layer;
the first wireless computing resource associated with the first layer;
the second wireless computing resource associated with the second layer;
the API associated with the third layer; and
the third layer is located between the first and second layers.
The one or more processors of claim 1, wherein:
the wireless network computing resource and the different wireless network computing resource are associated with a 5G-NR network protocol stack that includes a first layer, a second layer, and a third layer;
a first 5G-NR computing resource of the wireless network computing resource and the different wireless network computing resource is associated with the first layer;
a second 5G-NR computing resource of the wireless network computing resource and the different wireless network computing resource is associated with the second layer;
the API is associated with the third layer; and
the third layer is located between the first and second layers.
9
The one or more processors of claim 1, wherein:
the API is further to transfer information between a first layer and a second layer corresponding to a fifth generation new radio (5G-NR) network protocol, wherein the second wireless computing resource associated with the second layer requests an operation associated with the one or more different transport protocols; and
performance of the API causes the first wireless computing resource associated with the first layer to cause performance of the operation associated with the one or more different transport protocols.
The one or more processors of claim 1, wherein:
the API is further to transfer information between a first layer and a second layer corresponding to a 5G-NR network protocol, wherein the wireless network computing resource is associated with the second layer and requests an operation associated with a first information transmission type; and
performance of the API causes the different wireless network computing resource associated with the first layer to perform an operation associated with a second transmission type.
For clarity of the record, one of ordinary skill in the art would recognize the “data transport protocol” and “different data transport protocol” recited in claim 1 to be the first and second “information transmission type” recited in claim 9.
10
A system, comprising: memory to store instructions that, as a result of execution by one or more processors, cause the system, in response to an application programming interface (API) call, to:
cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols, the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols; and
cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols.
A system, comprising memory to store instructions that, as a result of execution by one or more processors, cause the system to: in response to an application programming interface (API) call:
Identify a function from a library of a data transport protocol to cause a buffer to be deallocated into a pool from which the buffer was allocated; and
cause the function to be performed by a wireless network computing resource, where the buffer was allocated to transfer information between the wireless network computing resource using the data transport protocol and a different wireless network computing resource using a different data transport protocol.
11
The system of claim 10, wherein performance of the API is based, at least in part, on an identification of the one or more different transport protocols associated with the first wireless computing resource.
See Claim 6: The one or more processors of claim 1, wherein the API is based, at least in part, on information identifying an information transmission type associated with the wireless network computing resource or the different wireless network computing resource.
For clarity of the record, one of ordinary skill in the art would recognize the “different data transport protocol” recited in claim 1 to be the “information transmission type” recited in claim 6.
12
The system of claim 10, wherein:
the information is to be transferred between a first layer and a second layer of a fifth generation new radio (5G-NR) network protocol stack; and
the first layer and second layer are each associated with a different transport protocol.
The system of claim 10, wherein:
the information is to be transferred between a first layer and a second layer of a 5G-NR network protocol stack; and
the first layer and second layer are each associated with a different transport protocol.
13
The system of claim 10, wherein an application associated with the first wireless computing resource calls the API to obtain information regarding the one or more different transport protocols supported by the second wireless computing resource.
The system of claim 10, wherein an application associated with resources the wireless network computing resource calls the API and does not have information regarding any transport protocol supported by the different wireless network computing resource.
14
The system of claim 10, wherein the API is embedded within another API.
The system of claim 10, wherein the API is embedded within another API.
15
The system of claim 10, wherein the information is to be transferred using an application that causes calls from one layer associated with one transport protocol to perform operations in a second layer associated with a second transport protocol.
The system of claim 10, wherein the information is to be transferred using an application that causes calls from one layer associated with one transport protocol to perform operations in a second layer associated with a second transport protocol.
16
The system of claim 10, further comprising:
a network orchestrator configured to identify one or more transport profiles supported by the first wireless computing resource, wherein the network orchestrator is to deploy the second wireless computing resource with the first wireless computing resource.
The system of claim 10, further comprising:
a network orchestrator configured to identify one or more transport profiles supported by one of the wireless network computing resource, and the network orchestrator is to deploy the different wireless network computing resource with the wireless network computing resource configured with a transport profile supported by the different wireless computing resource.
17
The system of claim 10, wherein the first wireless computing resource is a virtual device.
The system of claim 10, wherein at least one of the wireless network computing resource and the different wireless network computing resource is a virtual device.
18
A non-transitory machine-readable medium having stored thereon one or more instructions, which if performed by one or more processors, cause one or more processors to, in response to an application programming interface (API) call, at least:
cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols, the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols; and
cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols.
A non-transitory machine-readable medium having stored thereon one or more instructions, which if performed by one or more processors, cause one or more processors to at least:
In response to an application programming interface (API) call:
Identify a function from a library of a data transport protocol to cause a buffer to be deallocated into a pool from which the buffer was allocated; and
cause the function to be performed by a wireless network computing resource, where the buffer was allocated to transfer information between the wireless network computing resource using the data transport protocol and a different wireless network computing resource using a different data transport protocol.
19
The non-transitory machine-readable medium of claim 18, wherein performance of the API is based at least on a transport configuration associated with the first wireless computing resource.
The non-transitory machine-readable medium of claim 18, wherein performance of the API is based, at least in part, on a transport configuration associated with one of the wireless network computing resource and the different wireless network computing resource.
20
The non-transitory machine-readable medium of claim 18, wherein:
the information is to be transferred between a first layer and a second layer of a fifth generation new radio (5G-NR) network protocol stack using a third layer between the first and second layers that is based, at least in part, on multiple transport protocols.
The non-transitory machine-readable medium of claim 18, wherein:
the information is to be transferred between a first layer and a second layer of a 5G-NR network protocol stack using a third layer between the first and second layers that is based, at least in part, on multiple transport protocols.
21
The non-transitory machine-readable medium of claim 18, wherein the one or more processors are further to perform the API based at least on one or more drivers of a transport layer, the one or more drivers used to operate the first and second wireless computing resources.
See Claim 18
22
The non-transitory machine-readable medium of claim 18, wherein the one or more processors are one or more graphics processing units (GPUs).
The non-transitory machine-readable medium of claim 18, wherein the one or more processors are one or more graphics processing units (GPUs).
23
The non-transitory machine-readable medium of claim 18, wherein:
the first wireless computing resource is to call the API; and
performance of the API causes, at least in part, the first wireless computing resource to transfer information to the second wireless computing resource without modification to the first wireless computing resource.
The non-transitory machine-readable medium of claim 18, wherein:
one of the wireless network computing resource is to call the API; and
performance of the API causes, at least in part, the one 5G-NR computing resource to transfer information to the different wireless network computing resource that supports a different transport protocols without modification to the wireless network computing resource.
24
The non-transitory machine-readable medium of claim 18, wherein:
the storage selected is an allocated buffer; and
the API is further to decrement a reference counter associated with the allocated buffer; and
if the decremented reference counter holds a value of zero, the allocated buffer is deselected.
The non-transitory machine-readable medium of claim 18, wherein:
memory selected is an allocated buffer; and
performance of the API causes the wireless network computing resource to decrement a reference counter associated with the allocated buffer; and
if the decremented reference counter holds a value of zero, the allocated buffer is deselected.
25
The non-transitory machine-readable medium of claim 18, wherein the first wireless computing resource has been configured with a transport profile supported by the second wireless computing resource.
The non-transitory machine-readable medium of claim 18, wherein one of the wireless network computing resource and the different wireless network computing resource has been configured with a transport profile supported by a second of the wireless network computing resource and the different wireless network computing resource.
26
A method comprising:
in response to an application programming interface (API) call:
causing data to be stored in storage of a first wireless computing resource according to one or more transport protocols, the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols; and
causing the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols.
A method comprising:
in response to an application programming interface (API) call:
identifying a function from a library of a data transport protocol to cause a buffer to be deallocated into a pool from which the buffer was allocated; and
causing the function to be performed by a wireless network computing resource, where the buffer was allocated to transfer information between the wireless network computing resource using the data transport protocol and a different wireless network computing resource using a different data transport protocol.
27
The method of claim 26, wherein performance of the API is based at least on a set of associations that correlate one or more functions of the one or more transport protocols with one or more functions of the one or more different transport protocols.
See Claim 26
28
The method of claim 26, further comprising identifying the one or more different transport protocols.
See Claim 6: The one or more processors of claim 1, wherein the API is based, at least in part, on information identifying an information transmission type associated with the wireless network computing resource or the different wireless network computing resource.
For clarity of the record, one of ordinary skill in the art would recognize the “different data transport protocol” recited in claim 1 to be the “information transmission type” recited in claim 6.
29
The method of claim 26, further comprising:
configuring the first wireless computing resource with transport profiles supported by the second wireless computing resource.
See Claim 25: The non-transitory machine-readable medium of claim 18, wherein one of the wireless network computing resource and the different wireless network computing resource has been configured with a transport profile supported by a second of the wireless network computing resource and the different wireless network computing resource.
30
The method of claim 26, wherein the API is to be called by the first wireless computing resource; and
is stored as part of a layer different from another layer comprising the first wireless computing resource.
See Claim 8: The one or more processors of claim 1, wherein:
the wireless network computing resource and the different wireless network computing resource are associated with a 5G-NR network protocol stack that includes a first layer, a second layer, and a third layer;
a first 5G-NR computing resource of the wireless network computing resource and the different wireless network computing resource is associated with the first layer;
a second 5G-NR computing resource of the wireless network computing resource and the different wireless network computing resource is associated with the second layer;
the API is associated with the third layer; and
the third layer is located between the first and second layers.
31
The method of claim 26, further comprising transferring the information using an application that maps the API to an operation related to an operation of the one or more different transport protocols, wherein the application is implemented, at least in part, on a hardware accelerator.
The method of claim 26, further comprising transferring the information using an application that maps the API to an operation related to a transport protocol, wherein the application is implemented, at least in part, on a hardware accelerator.
From claim 26: “the wireless network computing resource using the data transport protocol and a different wireless network computing resource using a different data transport protocol.”
For clarity of the record, one of ordinary skill in the art would recognize the “transport protocol” of claim 31 could be the “different data transport protocol” recited in claim 26.
32
The method of claim 26, wherein the API is embedded within another API.
The method of claim 26, wherein the API is embedded within another API.
33
The method of claim 26, wherein:
the information is to be transferred between two layers of a fifth generation new radio (5G-NR) network protocol stack, wherein one layer is associated with the one or more transport protocols and the other layer is associated with the one or more different transport protocols; and
the API is located in a third layer.
The method of claim 26, wherein:
the information is to be transferred between two layers of a 5G-NR network protocol stack, wherein each layer is associated with a different transport protocol; and
the API is located in a third layer.
34
The method of claim 26, wherein:
performance of the API does not cause a reference counter to decrement;
the reference counter is associated with the storage; and
the storage is further to be used as part of a zero copy buffer method.
The method of claim 26, wherein:
performance of the API causes one or more 5G-NR computing resources to decrement a reference counter;
the reference counter is associated with the buffer; and
the buffer selected is further to be used as part of a zero copy buffer method.
35
The method of claim 26, wherein the information includes different messages each associated with various transport protocols; and
the information is to be transferred using the API.
The method of claim 26, wherein the information includes different messages each associated with a different information transmission type; and
the information is to be transferred between the wireless network computing resource and the different wireless network computing resource using one data transport protocol.
From claim 26: “performing a transport layer application programming interface (API) to […] transfer information between a wireless network computing resource using a
data transport protocol and a different wireless network computing resource using a
different data transport protocol.”
For clarity of the record, one of ordinary skill in the art would recognize the “data transport protocol” and “different data transport protocol” recited in claim 26 to be the “different information transmission type” recited in claim 35.
Claims 1, 6-7, 10-11, 14-15, 17-19, 22-23, 25-26, 29, 31-32, and 35 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3, 5, 7, 10, 14-15, 17-19, 22-23, 25-26, 31-32, and 35 of copending Application No. 17/720,205 (the reference application) in view of Francini et al (U.S. Pub. No. 2018/0159965), hereinafter Francini. The claims of the instant application and the claims of the reference application are compared in Table 4 below, with limitations not taught by the reference claims in bold. Unless indicated otherwise, the limitations of the claims of the instant application have been compared to the limitations of the claim of the same number in the reference application.
Regarding claims 1, 10, 18, and 26 of the instant application, claims 1, 10, 18 and 26 of the reference application substantially recite all the limitations of the claims except cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols and cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols. However, Francini teaches to cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols (Fig. 1, buffer 123 = “storage”; FIG. 2B, transport layer 224S = “first wireless computing resource”; [0051] – the transport layer 224S may use protocols such as TCP; [0052] – “The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120.”; [0022] – “The buffer 123 is configured to store, in various forms as may be provided or supported at various communication layers of communication protocol stack 124, both data communicated or intended for communication between the MHD 110 and the WAD 120 and data communicated or intended for communication between the WAD 120 and the server 130.”) and cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols (FIG. 2B, link layer 222C connected to MHD 110; [0052] – “The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222c of the WAD 120. 
The reliable link layer 222C of the WAD 120 receives the primitive messages, places the primitive messages into link layer data structures supported by the reliable link layer 222C […] where the reliable link layer 222C of the WAD 120 is provided using LTE, the reliable link layer 222C may place the primitive messages into PDCP PDUs”; [0050] – “The communication primitives of the client socket API 215 may include primitive rules for controlling manipulation of data (e.g., encapsulation and decapsulation of application data that is sourced by the application layer 216 for transmission toward the server 130, encapsulation and decapsulation of application data that is sourced by server 130 and that is intended for delivery to the application layer 216 of the MHD 110, or the like), primitive messages and associated primitive message formats that are configured to transport data of the application layer 216, or the like, as well as various combinations thereof.” The primitive messages used by the reliable link layer 222C adhere to rules and a format for transporting data (i.e., a “transport protocol”; [0022] – “The buffer 123 is configured to store, in various forms as may be provided or supported at various communication layers of communication protocol stack 124, both data communicated or intended for communication between the MHD 110 and the WAD 120 and data communicated or intended for communication between the WAD 120 and the server 130.” The application data (data communicated between the WAD 120 and server 130 which was stored in buffer 123, e.g., when it is received via the TCP socket at the WAD 120) is placed into the primitive messages, which adhere to a transport protocol as described in [0050], and provided to the reliable link layer 222C.).
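Purely as an illustration of the mechanism the examiner reads out of Francini (application data buffered after receipt over one transport protocol, then encapsulated into primitive messages adhering to a different protocol for the link layer), the flow may be sketched as follows; all names here are hypothetical and form no part of the record:

```python
# Hypothetical sketch of the cited Francini flow: application data received
# over a TCP transport-layer socket is stored in a buffer, then placed into
# primitive messages (e.g., PDCP-style PDUs) for the reliable link layer
# toward the other device. Names are illustrative only.

buffer_123 = []

def receive_over_tcp(application_data):
    # Data arrives at the WAD over the transport-layer (TCP) connection
    # and is stored in the buffer.
    buffer_123.append(application_data)

def pass_to_link_layer():
    # The stored data is encapsulated into primitive messages that adhere
    # to the link layer's own rules and format (the "different" protocol).
    return [{"type": "primitive", "pdu": "PDCP", "payload": d}
            for d in buffer_123]

receive_over_tcp(b"app-data")
pdus = pass_to_link_layer()
```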
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,205 to incorporate the teachings of Francini to improve communication between devices using transport layer connections by supporting use of a networked transport layer socket with an API that communicates data between the networked transport layer socket and the application layer of a communication device via the link layer and physical layer of a communication protocol stack (Francini: [0006], [0063]-[0065] and [0068]).
Regarding claim 6, claim 2 of Application No. 17/720,205 in view of Francini teaches substantially all of the limitations except that at least one of the one or more transport protocols is a transport layer protocol. However, Francini teaches at least one of the one or more transport protocols is a transport layer protocol (Francini: [0052] – “The server 130 transmits the application data to the WAD 120 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection). The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,205 to incorporate the teachings of Francini to improve communication between devices using transport layer connections by supporting use of a networked transport layer socket with an API that communicates data between the networked transport layer socket and the application layer of a communication device via the link layer and physical layer of a communication protocol stack (Francini: [0006], [0063]-[0065] and [0068]).
Regarding claim 11, it would have been obvious to one of ordinary skill in the art to have applied the functions performed by the one or more processors of claim 5 to the system of claim 11.
Regarding claim 29, it would have been obvious to one of ordinary skill in the art to have applied the functions of the non-transitory machine-readable medium of claim 25 to the method of claim 29.
Claims 7, 14-15, 17, 19, 22-23, 25, 31-32, and 35 recite additional limitations that are substantially the same or identical to limitations recited in claims 3, 14-15, 17, 19, 22-23, 25, 31-32, and 35 of the reference application as indicated in Table 4, and are rejected as well.
This is a provisional nonstatutory double patenting rejection.
Claims 2 and 24 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 2 and 24 of copending Application No. 17/720,205 (the reference application) in view of Francini, and further in view of Fuente et al. (U.S. Pub. No. 2005/0265370), hereinafter Fuente. The claims of the instant application and the claims of the reference application are compared in Table 4 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claim 2 of the instant application, claim 2 of the reference application teaches all of the limitations except to decrement a reference counter used to indicate when to release the data stored in the storage. However, Fuente teaches to decrement a reference counter used to indicate when to release the data stored in the storage ([0025]-[0026] – “Counter (108) maintains a count of the number of references to the buffer memory (104) by the accessors (106, 110), the count being incremented on each reference and decremented on completion of each accessor's data transmission. [...] Memory manager (114) is adapted to lock buffer memory during write activity, to permit read access to the buffer memory (104) by accessors (106, 110) and to return the buffer memory to a free buffer pool when counter (108) signals that the count has reached zero.").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,205 to incorporate the teachings of Fuente in order to allow the buffer memory to be allocated, pinned and freed without preventing read accesses by multiple accessors that transmit the data stored, which further allows for rapid retransmission of the data stored in the buffer when a transmission fails (Fuente: [0032]).
Regarding claim 24 of the instant application, claims 18-19 of the reference application teach all of the limitations except to decrement a reference counter and if the decremented reference counter holds a value of zero, the allocated buffer is deselected. However, Fuente teaches to decrement a reference counter and if the decremented reference counter holds a value of zero, the allocated buffer is deselected ([0025]-[0026] – "Counter (108) maintains a count of the number of references to the buffer memory (104) by the accessors (106, 110), the count being incremented on each reference and decremented on completion of each accessor's data transmission. [...] Memory manager (114) is adapted to lock buffer memory during write activity, to permit read access to the buffer memory (104) by accessors (106, 110) and to return the buffer memory to a free buffer pool when counter (108) signals that the count has reached zero."; [0032] – "the counter of the preferred embodiment allows the buffer memory to be allocated, "pinned", and freed"; [0042] – "At step (226), a further test is performed to determine whether the count has reached zero or not. If it has not reached zero, this part of the logic process returns to step (228) to be triggered by the next completion. If on any iteration, the count is determined to have reached zero, the memory manager (114) releases the buffer memory (104).").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,205 to incorporate the teachings of Fuente in order to allow the buffer memory to be allocated, pinned and freed without preventing read accesses by multiple accessors that transmit the data stored, which further allows for rapid retransmission of the data stored in the buffer when a transmission fails (Fuente: [0032]).
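The reference-counting scheme Fuente describes (count incremented on each reference, decremented on completion of each accessor's transmission, buffer returned to a free pool at zero) can be sketched, purely for illustration and with all names hypothetical, as:

```python
# Illustrative sketch (not part of the record) of the buffer
# reference-counting scheme cited from Fuente [0025]-[0026] and [0042].

class RefCountedBuffer:
    def __init__(self, data, free_pool):
        self.data = data
        self.count = 0
        self.free_pool = free_pool
        self.released = False

    def reference(self):
        # Each accessor's reference increments the count.
        self.count += 1

    def complete(self):
        # Completion of an accessor's transmission decrements the count;
        # when the count reaches zero, the buffer is returned to the free
        # pool (the buffer is "deselected" in the language of claim 24).
        self.count -= 1
        if self.count == 0:
            self.released = True
            self.free_pool.append(self)

free_pool = []
buf = RefCountedBuffer(b"payload", free_pool)
buf.reference()   # accessor 1 references the buffer
buf.reference()   # accessor 2 references the buffer
buf.complete()    # accessor 1 done; count is 1, buffer still held
buf.complete()    # accessor 2 done; count is 0, buffer freed to pool
```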
This is a provisional nonstatutory double patenting rejection.
Claims 3, 5, 13 and 27-28 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 13 and 26 of copending Application No. 17/720,205 (the reference application) in view of Francini, and further in view of Sen et al. (U.S. Pub. No. 2020/0218684), hereinafter Sen. The claims of the instant application and the claims of the reference application are compared in Table 4 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claim 3 of the instant application, claim 1 of the reference application in view of Francini teaches all the limitations except wherein the information is to be transferred based at least on a transport layer used to associate a function of the one or more transport protocols with a corresponding function of the one or more different transport protocols. However, Sen teaches the information is to be transferred based at least on a transport layer used to associate a function of the one or more transport protocols with a corresponding function of the one or more different transport protocols. ([0033] – “The accelerator application interface includes an accelerator library used to access accelerator resources. […] The accelerator library uses device libraries to abstract device-specific details into an API that provides the necessary translation to device-specific interfaces via corresponding device drivers. According to various embodiments, the accelerator library also uses individual transport definitions to abstract transport-specific details into an API that provides the necessary translation to transport-specific interfaces via corresponding transport protocols. The device library at the computing platform 102 is used to connect the application(s) to one or more local devices, such as a local hardware accelerator 212. Similarly, the transport definition is used to connect the application (e.g., application 820 of FIG. 8) to a remote hardware accelerator 312 resident in the accelerator sled 104. The transport definition allows applications to send commands and data to a target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) and receive data from the target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) without requiring knowledge of the underlying transport protocols (e.g., transport layers 823, 824 of FIG. 8) being used. 
In some embodiments, bindings are used to link the device libraries and transport definitions with device drivers and transport protocols, respectively. Bindings involve a process or technique of connecting two or more data elements or entities together. Bindings allows the accelerator library to be bound to multiple protocols, including one or more IX protocols and one or more transport protocols.”; [0034] – “The transport definition is independent of the transport protocols that are used to carry data to remote accelerator resources. […] Each of the transport layers (e.g., transport layers 823, 824 of FIG. 8) may include primitives such as read/write data from/to device; process device command; get/set device properties; and event subscription and notification. The transport layers (e.g., transport layers 823, 824 of FIG. 8) may also have mechanisms to allow scalable and low latency communication, such as […] protocol independent format definition allowing for multiple protocol bindings.” The transport layers 823, 824 have protocol-independent format definitions which use bindings (“a set of associations that correlate” the transport protocols) to connect the transport protocol-independent transport definition to the multiple different transport protocols.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,205 to incorporate the teachings of Sen to use an API that abstracts details of the underlying transport protocols of a transport layer and translates to a specific transport protocol to provide the benefit of seamless and transparent access to remote resources (Sen: [0034]).
Regarding claim 5 of the instant application, claim 1 of the reference application in view of Francini teaches all the limitations except wherein performing the API further causes the first wireless computing resource to perform one or more operations associated with the one or more different transport protocols. However, Sen teaches wherein performing the API further causes the first wireless computing resource to perform one or more operations associated with the one or more different transport protocols ([0022] – “The network 106 (also referred to as a "network fabric 106" or the like) may be embodied as any type of network capable of communicatively connecting the computing platforms 102 and the accelerator sleds 104. […] wireless”; [0059] – “Referring now to FIG. 6, in use, the computing platform 102 may execute a process 600 for sending an accelerator message to a hardware accelerator 212 or 312. The process 600 begins at operation 602, in which an application on the computing platform 102 determines a message to be sent to a hardware accelerator 212 or 312. The message may be embodied as an instruction to read or write data, a command to execute a certain function, an instruction to get or set a setting on an accelerator, a control command such as a query regarding the capability of an accelerator queue, and/or any other suitable message. At operation 604, the application passes the command or function to the accelerator manager 402. 
In the illustrative embodiment, the application passes the command or function with use of an application programming interface such that the details of communication with the hardware accelerator 212 or 312 are hidden from the associated application.”; [0062] – “Referring back to operation 606, if the accelerator manager 402 is to pass the message to a remote hardware accelerator 312, the process 600 proceeds to operation 612, in which the computing platform 102 generates a command capsule based on the message received from the application. […] the command capsule may encapsulate the message in a protocol different from a protocol used by the message.”; [0063] – “At operation 616, the computing platform 102 sends the command capsule to the accelerator sled 104. The computing platform 102 may use any suitable communication protocol, such as TCP, RDMA, RoCE, RoCEv1, RoCEv2, iWARP, etc.” An application on computing platform 102 (a “first wireless computing resource”) passes a message which uses a protocol – as described in [0062] – to an accelerator manager 402 via an API. The accelerator manager causes the computing platform 102 (the “first wireless computing resource”) to generate (“one or more operations”) a command capsule for the message using a different protocol (“different transport protocols”) and to send the command capsule which uses the different protocol to the accelerator sled 104 (a “second wireless computing resource”).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,205 to incorporate the teachings of Sen to use an API that abstracts details of the underlying transport protocols and translates to a specific transport protocol to provide the benefit of seamless and transparent access to remote resources (Sen: [0034]).
Regarding claim 13 of the instant application, claim 13 of the reference application in view of Francini teaches all the limitations except an application calls the API to obtain information regarding the one or more different transport protocols. However, Sen teaches an application ([0079] – initiator 822) associated with a first wireless computing resource (FIG. 8, computing platform 102) calls the API ([0059], [0085] and [0088] – Sending a message, e.g., the connection establishment request message, to a target hardware accelerator resource involves an application, e.g., the initiator, passing the message to an API (“calls the API”) as described in process 600.) to obtain information regarding the one or more different transport protocols ([0089] – “In response to the connection establishment request message for the primary connection 831, at operation 906, the initiator 822 receives a connection establishment response message for the primary connection 831 from the target accelerator resource(s). For example, where an RDMA-based protocol is used for the primary connection 831, such as RoCEv2, the target accelerator resource(s) may encapsulate an RDMA acknowledgement (ACK) packet within an Ethernet/IP/UDP packet (including either IPv4 or IPv6) and including suitable destination and source addresses based on the connection establishment request message. In embodiments, the connection establishment response message for the primary connection 831 includes a session ID, which may be included in the header or payload section of the message. The session ID is generated by the target accelerator resource(s) and is discussed in more detail infra. Other suitable information may be included in the connection establishment response message, such as an accelerator resource identifier and/or other protocol specific information.”) supported by the second wireless computing resource (FIG. 8, accelerator sled 104 with accelerator(s) 312 which is the “target hardware accelerator resource(s)” as described in [0033]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,205 to incorporate the teachings of Sen to use an initiator application that establishes multiple connections with a remote accelerator using different transport protocols over a wireless network, which achieves high-availability goals even when one transport protocol is temporarily unusable (Sen: [0073]).
Regarding claim 27 of the instant application, claim 26 of the reference application teaches all the limitations except wherein performance of the API is based at least on a set of associations that correlate one or more functions of the one or more transport protocols with one or more functions of the one or more different transport protocols. However, Sen teaches performance of the API is based at least on a set of associations that correlate one or more functions of the one or more transport protocols with one or more functions of the one or more different transport protocols ([0033] – “The accelerator application interface includes an accelerator library used to access accelerator resources. […] The accelerator library uses device libraries to abstract device-specific details into an API that provides the necessary translation to device-specific interfaces via corresponding device drivers. According to various embodiments, the accelerator library also uses individual transport definitions to abstract transport-specific details into an API that provides the necessary translation to transport-specific interfaces via corresponding transport protocols. The device library at the computing platform 102 is used to connect the application(s) to one or more local devices, such as a local hardware accelerator 212. Similarly, the transport definition is used to connect the application (e.g., application 820 of FIG. 8) to a remote hardware accelerator 312 resident in the accelerator sled 104. The transport definition allows applications to send commands and data to a target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) and receive data from the target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) without requiring knowledge of the underlying transport protocols (e.g., transport layers 823, 824 of FIG. 8) being used. 
In some embodiments, bindings are used to link the device libraries and transport definitions with device drivers and transport protocols, respectively. Bindings involve a process or technique of connecting two or more data elements or entities together. Bindings allows the accelerator library to be bound to multiple protocols, including one or more IX protocols and one or more transport protocols.”; [0034] – “The transport definition is independent of the transport protocols that are used to carry data to remote accelerator resources. […] Each of the transport layers (e.g., transport layers 823, 824 of FIG. 8) may include primitives such as read/write data from/to device; process device command; get/set device properties; and event subscription and notification. The transport layers (e.g., transport layers 823, 824 of FIG. 8) may also have mechanisms to allow scalable and low latency communication, such as […] protocol independent format definition allowing for multiple protocol bindings.” The accelerator library comprising an API uses bindings (“a set of associations that correlate” the transport protocols) which connect the accelerator library to multiple different protocols, and specifically connect the transport definition to the multiple different transport protocols.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,205 to incorporate the teachings of Sen to use an API that abstracts details of the underlying transport protocols and translates to a specific transport protocol to provide the benefit of seamless and transparent access to remote resources (Sen: [0034]).
Regarding claim 28 of the instant application, claim 26 of the reference application teaches all the limitations except identifying the one or more different transport protocols. However, Sen teaches identifying the one or more different transport protocols ([0086] – “initiator 822 identifies or determines one or more transport protocols to be used during a communication session 830 with target accelerator resource(s). Any number of transport protocols may be used”; [0022] – “The network 106 (also referred to as a "network fabric 106" or the like) may be embodied as any type of network capable of communicatively connecting the computing platforms 102 and the accelerator sleds 104. […] wireless”; [0032]-[0033] – applications on computing platforms 102 use an API to send and receive data from accelerator resources, such as accelerator sled 104, without underlying knowledge of the transport protocols. In FIG. 8, initiator 822, on computing platform 102 (a “first wireless computing resource”), identifies the transport protocols used to communicate with target accelerator resource(s), such as accelerator sled 104 (a “second wireless computing resource”). Since these protocols are used by a second wireless computing resource, they are analogous to the “different transport protocols.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,205 to incorporate the teachings of Sen to use an API that abstracts details of the underlying transport protocols and translates to a specific transport protocol to provide the benefit of seamless and transparent access to remote resources (Sen: [0034]).
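The kind of “set of associations” the examiner reads out of Sen (a protocol-independent transport definition bound, via a lookup of bindings, to protocol-specific implementations, so the caller needs no knowledge of the underlying transport protocol) can be sketched for illustration only; the names below are hypothetical:

```python
# Hypothetical sketch of transport bindings as cited from Sen [0033]-[0034]:
# a generic "send" operation dispatches through a binding table to a
# protocol-specific function.

def send_tcp(payload):
    # Protocol-specific send routine (TCP).
    return ("TCP", payload)

def send_rdma(payload):
    # Protocol-specific send routine (RDMA-based).
    return ("RDMA", payload)

# Bindings correlate the generic operation with the protocol-specific
# implementations (the "set of associations").
BINDINGS = {"tcp": send_tcp, "rdma": send_rdma}

def transport_send(protocol, payload):
    # The API performs the generic operation by dispatching through the
    # binding for the identified transport protocol.
    return BINDINGS[protocol](payload)

result = transport_send("rdma", b"command capsule")  # → ("RDMA", b"command capsule")
```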
This is a provisional nonstatutory double patenting rejection.
Claims 4, 21, and 30 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 18, and 33 of copending Application No. 17/720,205 (the reference application) in view of Francini, and further in view of Hyder et al. (U.S. Patent No. 5,983,274), hereinafter Hyder. The claims of the instant application and the claims of the reference application are compared in Table 4 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claims 4 and 21, claims 1 and 18 of the reference application in view of Francini teach all the limitations except perform the API based at least on one or more drivers of a transport layer, the one or more drivers used to operate the first and second wireless computing resources. However, Hyder teaches to perform the API based at least on one or more drivers of a transport layer (Claim 1 – “a protocol driver; a device driver; an integrating driver that interfaces with the protocol driver and the device driver using defined APIs”; Col. 2, lines 13-20 – “Because there are different types of transport protocols developed over time by different entities for different reasons, there may be different types of transport protocol drivers acting as software components running on a single host computer system in order to provide the necessary networking capabilities for a given installation. Some common transport protocols include TCP/IP, IPX, AppleTalk®, and others.”), the one or more drivers used to operate the first and second wireless computing resources (Col. 1, lines 60-62 – “link layer implemented by a network card device driver, and the transport and network layers implemented as a transport protocol driver”; Col. 2, lines 20-24 – “Each transport protocol driver will communicate with one or more individual network card device drivers in order to send network data over a communications network and receive incoming packets from the communications network.”; Col. 1, lines 30-37 – “Data that is shared between computers is sent in packets across the physical network connection and read by destination computers. […] As used herein, the term "network data" refers to data or information that is actually transmitted over the communications network between different computers.”; Col. 6, lines 19-20 – “cellular, and other wireless technologies, etc. provide ripe opportunities for exploiting the present invention.”; Col. 7, line 56-Col. 8, line 3 – “For sending network data from the upper layers 106, the transport protocol driver 100 will allocate a packet data structure from the integrating component 102, fill the data structure with network information and control information according to the present invention, and send it down through the integrating component 102 to the network card device driver 104 for transmitting the network data on the network interface card 108. In like manner, for a packet received from the network interface card 108, the network card device driver 104 will allocate a packet data structure from the integrating component 102, fill it with the network data and control information according to the present invention, and send it through the integrating component 102 to the transport protocol driver 100 for communication to the upper layers 106.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,205 to incorporate the teachings of Hyder to allow transport protocol drivers and network card drivers to be developed more efficiently and allow communication with any available transport protocol (Hyder: Col. 3, lines 45-65).
Regarding claim 30, claim 33 of the reference application in view of Francini teaches all the limitations except wherein the API is to be called by the first wireless computing resource. However, Hyder teaches wherein the API is to be called by the first wireless computing resource (Col. 1, lines 60-62 – “data link layer implemented by network card device driver, and the transport and network layers implemented as a transport protocol driver”; Col. 7, lines 38-39 – “Application Programming Interface (API) is a set of subroutines provided by one software component”; Col. 10, lines 34-40 – “The transport protocol driver 100 then sends or transfers the packet to the integrating component 102 at step 140 by making a subroutine call […] the integrating component 102 will send or transfer the packet to the network card device driver at step 142”; Claim 1 – “protocol driver; a device driver; an integrating driver that interfaces with the protocol driver and the device driver using defined APIs”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,205 to incorporate the teachings of Hyder to allow transport protocol drivers and network card drivers to be developed more efficiently and allow communication with any available transport protocol (Hyder: Col. 3, lines 45-65). Additionally, it would have been obvious to one of ordinary skill in the art to have applied the functions of the processor of claim 8 to the method of claim 30.
This is a provisional nonstatutory double patenting rejection.
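For clarity of the record, the integrating-driver arrangement quoted from Hyder may be sketched as follows. This is an illustrative sketch only; the class and method names are hypothetical and do not appear in Hyder. It models a transport protocol driver and a network card device driver that communicate solely through defined interfaces of an integrating component (cf. Hyder, Claim 1; Col. 7, line 56-Col. 8, line 3):

```python
class IntegratingComponent:
    """Owns the packet data structures and relays them between drivers."""
    def __init__(self):
        self._pool = [dict() for _ in range(4)]  # pre-allocated packet structures

    def allocate_packet(self):
        return self._pool.pop()

    def send_down(self, packet, device_driver):
        device_driver.transmit(packet)            # toward the network card

    def send_up(self, packet, protocol_driver):
        protocol_driver.indicate_receive(packet)  # toward the upper layers


class NetworkCardDriver:
    """Device driver: implements the link layer for one network card."""
    def __init__(self):
        self.transmitted = []

    def transmit(self, packet):
        self.transmitted.append(packet)


class TransportProtocolDriver:
    """Protocol driver (e.g., TCP/IP or IPX); never calls the card driver directly."""
    def __init__(self, integrating, device_driver):
        self.integrating = integrating
        self.device_driver = device_driver
        self.received = []

    def send(self, payload):
        pkt = self.integrating.allocate_packet()  # allocate from the integrator
        pkt["data"] = payload                     # fill with network data
        self.integrating.send_down(pkt, self.device_driver)

    def indicate_receive(self, packet):
        self.received.append(packet)
```

As in the passage quoted above, each driver allocates its packet structure from the integrating component and hands it back for delivery, so neither driver needs knowledge of the other's interface.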
Claims 8-9, 12, 20, and 33 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 8-9, 12, 20, and 33 of copending Application No. 17/720,205 (the reference application) in view of Francini, and further in view of Kaltenberger et al. (NPL Document: OpenAirInterface: Democratizing innovation in the 5G Era), hereinafter Kaltenberger. The claims of the instant application and the claims of the reference application are compared in Table 4 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claims 8-9, 12, 20, and 33 of the instant application, claims 8-9, 12, 20, and 33 of the reference application in view of Francini teach all the limitations except that the wireless network protocol stack is 5G-NR. However, Kaltenberger teaches the wireless network protocol stack is 5G-NR (Page 2 – “5G is also known by Release 15 of 3GPP. This release includes a brand new core network and radio interface, called 5G New Radio (5G- NR). The network has been designed from ground up to support enhanced Mobile BroadBand (eMBB), Ultra-Reliable Low-Latency Communications (URLLC), as well as Massive Machine Type Communications (mMTC) enabling new use cases for a large variety of industries.”; Page 6 – “Control Plane Network Functions in the 5G system architecture are based on the service based architecture. A NF service is one type of capability exposed by a NF (NF Service Producer or Server) to other authorized NF (NF Service Consumer or Client) through a service based inter- face. In other words, NFs communicate with each other via SBI. The protocol stack for the service based interfaces is Application/HTTP2/TLS/TCP/IP/L2”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,205 to incorporate the teachings of Kaltenberger to enable new use cases for a variety of industries (Kaltenberger: Page 2).
This is a provisional nonstatutory double patenting rejection.
Claim 16 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 16 of copending Application No. 17/720,205 (the reference application) in view of Francini, and further in view of Young (U.S. Pub. No. 2021/0320850). The claims of the instant application and the claims of the reference application are compared in Table 4 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claim 16, claim 16 of the reference application in view of Francini teaches all the limitations except wherein the network orchestrator is to deploy the second wireless computing resource with the first wireless computing resource. However, Young teaches wherein the network orchestrator is to deploy the second wireless computing resource with the first wireless computing resource ([0039] - “Service orchestration and transport path management section 305 may include a service orchestrator (SO) 325, an analytics engine 330, a network functions virtual orchestrator (NFVO) 335"; [0043] – “SO 325 may generate and send a message to SIDC 345 that instructs SIDC 345 to identify one or more network infrastructures that are candidates for relocating the current transport path to maintain the SLA for the application service. In one implementation, the message may include service requirements including SLA requirements and/or service profiles associated with the application service."; [0044] – “In one implementation, deployment preference values include abstracted "distances" between nodes in a transport path associated with a network infrastructure. For example, PCE 350 may calculate a logical "distance" which may be a function of latency, inversely proportional to bandwidth, inversely proportional to reliability, etc. In one implementation, the calculated distances are based on the build of the network (e.g., size of the circuit), the current usage (e.g., based on monitoring), and expected usage (e.g., based on projected use of yet-to-be deployed services). [...] PCE 350 provides the alternative network service infrastructures to SIDC 345 as candidates for maintaining service availability at SLA requirements in response to a detected and/or projected outage and/or congested network conditions."; [0046]-[0047] - "SIDC 345 may select one or more alternative network service infrastructures and/or sub-infrastructures based on some or all of the above data. [...] 
NFVO 335 may, based on instructions received from SO 325, deploy the alternative network service infrastructures and/or sub-infrastructures to orchestrate within data transport section 310." The alternative network service infrastructures ("wireless computing resources") that meet the requirements in the profiles are deployed together.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,205 to incorporate the teachings of Young to ensure that service availability is maintained in a 5G wireless network (Young: [0044]).
This is a provisional nonstatutory double patenting rejection.
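For clarity of the record, the orchestration pattern cited from Young may be sketched as follows. This is an illustrative sketch only; the function and field names are hypothetical and do not appear in Young. It models an orchestrator that selects, from a set of candidate resources, one whose transport profiles satisfy the requirements of an already-deployed resource, and deploys the two together:

```python
def deploy_compatible_resource(first_resource, candidates):
    """Return the first candidate sharing a transport profile with
    first_resource, marking it as deployed alongside that resource;
    return None when no candidate satisfies the profile requirements."""
    for candidate in candidates:
        # Profile intersection stands in for the SLA/service-profile
        # matching performed by SIDC 345 and PCE 350 in Young.
        if first_resource["profiles"] & candidate["profiles"]:
            candidate["deployed_with"] = first_resource["name"]
            return candidate
    return None
```

The selected candidate, like Young's alternative network service infrastructures, is deployed together with the existing resource only when the requirements in the profiles are met.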
Claim 34 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 34 of copending Application No. 17/720,205 (the reference application) in view of Francini, and further in view of DONG et al. (U.S. Pub. No. 2022/0075731), hereinafter DONG. The claims of the instant application and the claims of the reference application are compared in Table 4 below, with limitations not taught by the claims of the reference application in bold. Unless indicated otherwise, the claims of the instant application have been compared to the claim of the same number in the reference application.
Regarding claim 34, claim 34 of the reference application in view of Francini teaches all the limitations except the API does not cause a reference counter to decrement; the reference counter is associated with the storage. However, DONG teaches the API does not cause a reference counter to decrement; the reference counter is associated with the storage ([0038] – “Computing system 202 also includes one or more application programming interfaces (APIs) 220 configured to increment and decrement counters in buckets of buffers in a lock-free manner, including for multi-threading accesses"; [0057] – “counter logic 210 is configured to perform conditional counter operations on counters in buckets of arrays for the cache(s) being accessed. Conditional counter operation include, without limitation, incrementing a counter, decrementing a counter, and/or maintaining a value of a counter."; [0064] – “Buckets 702 include respective counters 708, each having a counter, stored and maintained memory 206 of FIG. 2, and that is incremented or decremented by counter logic 210, e.g., via a call to an API". Counters associated with cache ("storage") activity may be incremented and/or maintained based on counter logic which makes API calls to affect the counter increments. Decrementing is listed as an alternative and is therefore not required.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Application No. 17/720,205 to incorporate the teachings of DONG to allow accesses to memory to be tracked and allow allocated memory to be used more efficiently (DONG: [0024]-[0025]).
This is a provisional nonstatutory double patenting rejection.
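For clarity of the record, the distinction relied upon for claim 34 (a transfer that does not decrement the reference counter associated with the storage, consistent with a zero copy buffer method) may be sketched as follows. This is an illustrative sketch only; the names are hypothetical and do not appear in DONG:

```python
class ZeroCopyBuffer:
    """Storage with an associated reference counter; the buffer is
    deselected (freed) only when the counter reaches zero."""
    def __init__(self, data):
        self.data = data
        self.refcount = 1
        self.released = False

    def provide(self):
        """Transfer API: hands out the stored data without copying it
        and without decrementing the reference counter."""
        return self.data  # same underlying storage; refcount unchanged

    def release(self):
        """Explicit release: decrements; at zero, the buffer is deselected."""
        self.refcount -= 1
        if self.refcount == 0:
            self.released = True
```

Consistent with DONG's conditional counter operations, decrementing is only one alternative: the transfer call above maintains the counter value, and only a separate, explicit release decrements it.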
Table 4: Claim comparison of the instant application and reference application 17/720,205
Claim
17/720,201 (instant application)
17/720,205
1
One or more processors, comprising circuitry to, in response to an application programming interface (API) call:
cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols, the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols; and
cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols.
One or more processors comprising: circuitry to, in response to an application programming interface (API) call:
identify one or more functions, from a library of a data transport protocol, corresponding to the API by at least using a mapping between the API and the one or more functions; and
cause performance of the one or more functions to obtain data from a buffer used to transfer information between a plurality wireless computing resources, wherein at least one of the plurality of wireless network computing resources uses a processor that uses the data transport protocol different from at least one other data transport protocol used by a processor of at least one other wireless network computing resource.
“One or more processors, comprising circuitry to, in response to an application programming interface (API) call:”
“One or more processors comprising: circuitry to, in response to an application programming interface (API) call:”
“the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols;”
“a buffer used to transfer information between a plurality wireless computing resources, wherein at least one of the plurality of wireless network computing resources uses a processor that uses the data transport protocol different from at least one other data transport protocol used by a processor of at least one other wireless network computing resource”
2
The one or more processors of claim 1, wherein performance of the API causes the first wireless computing resource to:
send the data stored in the storage to the second wireless computing resource; and
to decrement a reference counter used to indicate when to release the data stored in the storage.
The one or more processors of claim 1, wherein the circuitry is further to, in response to the API call, cause the at least one other wireless computing resource to receive the data stored in the buffer from the at least one of the plurality of wireless computing resources without the at least one other wireless computing resource having information about the data transport protocol used by the at least one of the plurality of wireless computing resources.
For clarity of the record, one of ordinary skill in the art would recognize that the “at least one of the plurality of wireless computing resources” would have to send the data in order for the “at least one other wireless computing resource” to receive the data as recited.
3
The one or more processors of claim 1, wherein the information is to be transferred based at least on a transport layer used to associate a function of the one or more transport protocols with a corresponding function of the one or more different transport protocols.
See Claim 1
4
The one or more processors of claim 1, wherein the circuitry is further to perform the API based at least on one or more drivers of a transport layer, the one or more drivers used to operate the first and second wireless computing resources.
See Claim 1
5
The one or more processors of claim 1, wherein performing the API further causes the first wireless computing resource to perform one or more operations associated with the one or more different transport protocols.
See Claim 1
6
The one or more processors of claim 1, wherein at least one of the one or more transport protocols is a transport layer protocol; and
to cause the data to be provided from the storage comprises sending the data from the storage.
See claim 2: The one or more processors of claim 1, wherein the circuitry is further to, in response to the API call, cause the at least one other wireless computing resource to receive the data stored in the buffer from the at least one of the plurality of wireless computing resources without the at least one other wireless computing resource having information about the data transport protocol used by the at least one of the plurality of wireless computing resources.
For clarity of the record, one of ordinary skill in the art would recognize that the “at least one of the plurality of wireless computing resources” would have to send the data in order for the “at least one other wireless computing resource” to receive the data as recited.
7
The one or more processors of claim 1, wherein performing the API is further to cause the first wireless computing resource to perform an operation related to the one or more different transport protocols based at least on a corresponding operation of the one or more transport protocols.
See claim 3: The one or more processors of claim 1, wherein:
the one or more functions correspond to the data transport protocol; and
the API corresponds to the at least one other data transport protocol.
From claim 1: “identify one or more functions, from a library of a data transport protocol, corresponding to the API by at least using a mapping between the API and the one or more functions; and cause performance of the one or more functions to obtain data from a buffer used to transfer information between a plurality wireless computing resources, wherein at least one of the plurality of wireless network computing resources uses a processor that uses the data transport protocol different from at least one other data transport protocol used by a processor of at least one other wireless network computing resource”
8
The one or more processors of claim 1, wherein:
the first and second wireless computing resources are associated with a fifth generation new radio (5G-NR) network protocol stack that includes a first layer, a second layer, and a third layer;
the first wireless computing resource associated with the first layer;
the second wireless computing resource associated with the second layer;
the API associated with the third layer; and
the third layer is located between the first and second layers.
The one or more processors of claim 1, wherein:
the plurality of wireless computing resources are associated with a wireless network protocol stack that includes a first layer, a second layer, and a third layer;
a first wireless computing resource of the plurality of wireless computing resources
is associated with the first layer;
a second wireless computing resource of the plurality of wireless computing resources is associated with the second layer;
the API is associated with the third layer; and
the third layer is located between the first and second layers.
9
The one or more processors of claim 1, wherein:
the API is further to transfer information between a first layer and a second layer corresponding to a fifth generation new radio (5G-NR) network protocol, wherein the second wireless computing resource associated with the second layer requests an operation associated with the one or more different transport protocols; and
performance of the API causes the first wireless computing resource associated with the first layer to cause performance of the operation associated with the one or more different transport protocols.
The one or more processors of claim 1, wherein the circuitry is further to, in response to the API call:
transfer information between a first layer and a second layer corresponding to a wireless network protocol, wherein one or more of the plurality of wireless computing resources associated with the second layer requests an operation associated with the data transport protocol; and
cause one or more of the plurality of wireless computing resources associated with the first layer to perform an operation associated with the data transport protocol.
10
A system, comprising memory to store instructions that, as a result of execution by one or more processors, cause the system, in response to application programming interface (API) call, to:
cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols, the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols; and
cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols.
A system, comprising memory to store instructions that, as a result of execution by one or more processors, cause the system to, in response to application programming interface (API) call:
identify one or more functions, from a library of a data transport protocol, corresponding to the API by at least using a mapping between the API and the one or more functions; and
cause performance of the one or more functions to obtain data from a buffer used to transfer information between a plurality wireless computing resources, wherein at least one of the plurality of wireless network computing resources uses a processor that uses the data transport protocol different from at least one other data transport protocol used by a processor of at least one other wireless network computing resource.
11
The system of claim 10, wherein performance of the API is based, at least in part, on an identification of the one or more different transport protocols associated with the first wireless computing resource.
See Claim 5: The one or more processors of claim 1, wherein the circuitry is further, in response to the API call, identify, from a plurality of data transport protocols, the data transport protocol.
From Claim 1: “at least one of the plurality of wireless network computing resources uses a processor that uses the data transport protocol”
12
The system of claim 10, wherein:
the information is to be transferred between a first layer and a second layer of a fifth generation new radio (5G-NR) network protocol stack; and
the first layer and second layer are each associated with a different transport protocol.
The system of claim 10, wherein:
the information is to be transferred between a first layer and a second layer of a wireless network protocol stack; and
the first layer and second layer are each associated with a different transport protocol.
13
The system of claim 10, wherein an application associated with the first wireless computing resource calls the API to obtain information regarding the one or more different transport protocols supported by the second wireless computing resource.
The system of claim 10, wherein an application associated with the one or more of the plurality of wireless computing resources calls the API and does not have information regarding any transport protocol supported by the at least one other wireless computing resource.
14
The system of claim 10, wherein the API is embedded within another API.
The system of claim 10, wherein the API is embedded within another API.
15
The system of claim 10, wherein the information is to be transferred using an application that causes calls from one layer associated with one transport protocol to perform operations in a second layer associated with a second transport protocol.
The system of claim 10, wherein the information is to be transferred using an application that causes calls from one layer associated with one transport protocol to perform operations in a second layer associated with a second transport protocol.
16
The system of claim 10, further comprising:
a network orchestrator configured to identify one or more transport profiles supported by the first wireless computing resource, wherein the network orchestrator is to deploy the second wireless computing resource with the first wireless computing resource.
The system of claim 10, further comprising:
a network orchestrator configured to identify one or more transport profiles supported by the at least one of the plurality of wireless computing resources, wherein the network orchestrator is to deploy the at least one other wireless network computing resource configured with a transport profile supported by the at least one of the plurality of wireless computing resources.
17
The system of claim 10, wherein the first wireless computing resource is a virtual device.
The system of claim 10, wherein one or more of the plurality of wireless computing resources is a virtual device.
18
A non-transitory machine-readable medium having stored thereon one or more instructions, which if performed by one or more processors, cause one or more processors to, in response to an application programming interface (API) call, at least:
cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols, the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols; and
cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols.
A non-transitory machine-readable medium having stored thereon one or more instructions, which if performed by one or more processors, cause the one or more processors to at least, in response to an application programming interface (API) call:
identify one or more functions, from a library of a data transport protocol, corresponding to the API by at least using a mapping between the API and the one or more functions; and
cause performance of the one or more functions to obtain data from a buffer used to transfer information between a plurality wireless computing resources, wherein at least one of the plurality of wireless network computing resources uses a processor that uses the data transport protocol different from at least one other data transport protocol used by a processor of at least one other wireless network computing resource.
19
The non-transitory machine-readable medium of claim 18, wherein performance of the API is based at least on a transport configuration associated with the first wireless computing resource.
The non-transitory machine-readable medium of claim 18, wherein performance of the API is based, at least in part, on a transport configuration associated with one of the plurality of wireless computing resources.
20
The non-transitory machine-readable medium of claim 18, wherein:
the information is to be transferred between a first layer and a second layer of a fifth generation new radio (5G-NR) network protocol stack using a third layer between the first and second layers that is based, at least in part, on multiple transport protocols.
The non-transitory machine-readable medium of claim 18, wherein:
the information is to be transferred between a first layer and a second layer of a wireless network protocol stack using a third layer between the first and second layers that is based, at least in part, on multiple transport protocols.
21
The non-transitory machine-readable medium of claim 18, wherein the one or more processors are further to perform the API based at least on one or more drivers of a transport layer, the one or more drivers used to operate the first and second wireless computing resources.
See Claim 18
22
The non-transitory machine-readable medium of claim 18, wherein the one or more processors are one or more graphics processing units (GPUs).
The non-transitory machine-readable medium of claim 18, wherein the one or more processors are one or more graphics processing units (GPUs).
23
The non-transitory machine-readable medium of claim 18, wherein:
the first wireless computing resource is to call the API; and
performance of the API causes, at least in part, the first wireless computing resource to transfer information to the second wireless computing resource without modification to the first wireless computing resource.
The non-transitory machine-readable medium of claim 18, wherein:
the at least one other wireless computing resource is to call the API; and
performance of the API causes, at least in part, the at least one of the plurality of wireless computing resources to transfer information to the at least one other wireless computing resource that supports different transport protocols without modification to the at least one of the plurality of wireless computing resources.
24
The non-transitory machine-readable medium of claim 18, wherein:
the storage selected is an allocated buffer; and
the API is further to decrement a reference counter associated with the allocated buffer; and
if the decremented reference counter holds a value of zero, the allocated buffer is deselected.
The non-transitory machine-readable medium of claim 18, wherein:
the buffer selected is an allocated buffer; and
a reference counter associated with the allocated buffer remains unchanged after performance of the API.
25
The non-transitory machine-readable medium of claim 18, wherein the first wireless computing resource has been configured with a transport profile supported by the second wireless computing resource.
The non-transitory machine-readable medium of claim 18, wherein the at least one other wireless computing resource has been configured with a transport profile supported by the at least one of the plurality of wireless computing resources.
26
A method comprising:
in response to an application programming interface (API) call:
causing data to be stored in storage of a first wireless computing resource according to one or more transport protocols, the storage selected to be used to transfer information between the first wireless computing resource using the one or more transport protocols and a second wireless computing resource using one or more different transport protocols; and
causing the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols.
A method comprising:
in response to an application programming interface (API) call:
identifying one or more functions, from a library of a data transport protocol, corresponding to the API by at least using a mapping between the API and the one or more functions; and
causing performance of the one or more functions to obtain data from a buffer used to transfer information between a plurality wireless computing resources, wherein at least one of the plurality of wireless network computing resources uses a processor that uses the data transport protocol different from at least one other data transport protocol used by a processor of at least one other wireless network computing resource.
27
The method of claim 26, wherein performance of the API is based at least on a set of associations that correlate one or more functions of the one or more transport protocols with one or more functions of the one or more different transport protocols.
See Claim 26
28
The method of claim 26, further comprising identifying the one or more different transport protocols.
See Claim 26
29
The method of claim 26, further comprising:
configuring the first wireless computing resource with transport profiles supported by the second wireless computing resource.
See Claim 25: The non-transitory machine-readable medium of claim 18, wherein the at least one other wireless computing resource has been configured with a transport profile supported by the at least one of the plurality of wireless computing resources.
30
The method of claim 26, wherein the API is to be called by the first wireless computing resource; and
is stored as part of a layer different from another layer comprising the first wireless computing resource.
See Claim 33: The method of claim 26, wherein:
the information is to be transferred between two layers of a wireless network protocol stack, wherein each layer is associated with a different transport protocol; and
the API is located in a third layer.
From Claim 26: “to transfer information between a plurality wireless computing resources, wherein at least one of the plurality of wireless network computing resources uses a processor that uses the data transport protocol different from at least one other data transport protocol used by a processor of at least one other wireless network computing resource”
31
The method of claim 26, further comprising transferring the information using an application that maps the API to an operation related to an operation of the one or more different transport protocols, wherein the application is implemented, at least in part, on a hardware accelerator.
The method of claim 26, further comprising transferring the information using an application that maps the API to an operation related to a transport protocol, wherein the application is implemented, at least in part, on a hardware accelerator.
32
The method of claim 26, wherein the API is embedded within another API.
The method of claim 26, wherein the API is embedded within another API.
33
The method of claim 26, wherein:
the information is to be transferred between two layers of a fifth generation new radio (5G-NR) network protocol stack, wherein one layer is associated with the one or more transport protocols and the other layer is associate with the one or more different transport protocols; and
the API is located in a third layer.
The method of claim 26, wherein:
the information is to be transferred between two layers of a wireless network
protocol stack, wherein each layer is associated with a different transport protocol; and
the API is located in a third layer.
34
The method of claim 26, wherein:
performance of the API does not cause a reference counter to decrement;
the reference counter is associated with the storage; and
the storage is further to be used as part of a zero copy buffer method.
The method of claim 26, wherein the API is to be used as part of a zero copy buffer method.
35
The method of claim 26, wherein the information includes different messages each associated with various transport protocols; and
the information is to be transferred using the API.
The method of claim 26, wherein the information includes different messages each associated with a different information transmission type; and
the information is to be transferred between two of the plurality of wireless computing resources using one transport.
From claim 26: “to transfer information between a plurality wireless computing resources, wherein at least one of the plurality of wireless network computing resources uses a processor that uses the data transport protocol different from at least one other data transport protocol used by a processor of at least one other wireless network computing resource;”
For clarity of the record, one of ordinary skill in the art would recognize the “data transport protocol” and “other data transport protocol” recited in claim 26 to be the “different information transmission type” recited in claim 35.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2, 4-5, 7-9, 11, 13-14, 19, 21, 23-24, 27, 30-35 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 2, 9, 11, 19, 23, 27, and 34 recite the limitation “performance of the API”, claims 4 and 21 recite the limitation “perform the API”, and claims 5 and 7 recite “performing the API”. This phrasing is unclear because an API is typically a library of functions, of which one may be called. Therefore, it is unclear what is meant by the phrases “performance of the API”, “perform the API”, and “performing the API”. Clarification and correction are required.
Claims 2, 4-5, 7-9, 11, 13-14, 19, 21, 23-24, 27, 30-35 recite the limitation "the API". There is insufficient antecedent basis for this limitation in the claims. Independent claims 1, 10, 18, and 26, from which these claims depend, recite “an application programming interface (API) call”; however, it is unclear whether “the API” is referring to the same API call previously recited in the independent claims, a different API call, or the API itself to which the previously recited API call was made.
Claim 11 is also rejected for reciting the limitation "the one or more different transport protocols associated with first wireless computing resource" in lines 2-4. There is insufficient antecedent basis for this limitation in the claim. The one or more different transport protocols recited in claim 10 were associated with the second wireless computing resource.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 6, 10, 15, 18-19, 26, and 35 are rejected under 35 U.S.C. 103 as being unpatentable over Francini et al. (U.S. Pub. No. 2018/0159965), hereinafter Francini, in view of Bach et al. (U.S. Patent No. 5,619,650), hereinafter Bach.
Regarding claim 1, Francini teaches one or more processors, comprising: circuitry to ([0069]-[0070] – “FIG. 5 depicts a high-level block diagram of a computer suitable for use in performing various functions described herein. The computer 500 includes a processor 502 (e.g., a central processing unit (CPU), a processor having a set of processor cores, a processor core of a processor, or the like)”; [0075] – “circuitry that cooperates with the processor to perform various functions.”), (FIG. 2B, server socket API 235 and/or client socket API 215; [0052] – “In FIG. 2B, in a direction of transmission from the application layer 236 of the server 130 toward the application layer 216 of the MHD 110, communication of the application data of the application layer 236 of the server 230 may be performed as follows. The server 130 transmits the application data to the WAD 120 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection).”; [0061] – “The client socket API of the wireless access device communicates data, between the networked client transport layer socket for the mobile host device and the application layer of the mobile host device, based on interaction with the link layer (e.g., receiving data provided by the link layer for communications sourced by the application layer of the mobile host device and providing data for the link layer for communications intended for delivery to the application layer of the mobile host device). The client socket API of the wireless access device provides an interface between the networked client transport layer socket of the server-facing transport layer and the link layer at the wireless access device”. 
The following operations are performed in response to the application layer 236 of the server 130, comprising server socket API 235, sending application data (see [0052]), via the transport layer connection, to the WAD 120; further, the operations are performed using client socket API 215S on the WAD 120.)
cause data to be stored in storage of a first wireless computing resource according to one or more transport protocols (Fig. 1, buffer 123; FIG. 2B, transport layer 224S associated with server 130 is analogous to the “first wireless computing resource”; [0022] – “The buffer 123 is configured to store, in various forms as may be provided or supported at various communication layers of communication protocol stack 124, both data communicated or intended for communication between the MHD 110 and the WAD 120 and data communicated or intended for communication between the WAD 120 and the server 130.”; [0051] – the transport layer 224S may use protocols such as TCP (“one or more transport protocols”); [0052] – “The server 130 transmits the application data to the WAD 120 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection). The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120.” In response to the application layer 236 of server 110 sending application data, the WAD 120 receives data via a TCP connection, which is stored in buffer 123. Receiving the data, e.g., via a TCP socket, necessarily teaches “causing the data to be stored”, e.g. in the buffer 123, at the WAD.), the storage selected to be used to transfer information between ([0022] – “The memory 122 includes a buffer 123 and a communication protocol stack 124, both of which are configured to support communications by the WAD 120. 
The buffer 123 is configured to store, in various forms as may be provided or supported at various communication layers of communication protocol stack 124, both data communicated or intended for communication between the MHD 110 and the WAD 120 and data communicated or intended for communication between the WAD 120 and the server 130.”) the first wireless computing resource using the one or more transport protocols (FIG. 2B, transport layer 224S connected to server 130; [0051] – “networked client transport layer socket (transport layer 224S) of the WAD 120 for transport to the server 130 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection)”; [0031] – “the transport layer and associated transport layer socket are denoted using TCP as it is assumed that the transport layer is based on TCP (although it will be appreciated that other transport layer protocols, such as UDP or the like, may be used).”) and a second wireless computing resource using one or more different transport protocols (FIG. 2B, link layer 222C connected to MHD 110; [0052] – “The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222c of the WAD 120. 
The reliable link layer 222C of the WAD 120 receives the primitive messages, places the primitive messages into link layer data structures supported by the reliable link layer 222C […] where the reliable link layer 222C of the WAD 120 is provided using LTE, the reliable link layer 222C may place the primitive messages into PDCP PDUs”; [0050] – “The communication primitives of the client socket API 215 may include primitive rules for controlling manipulation of data (e.g., encapsulation and decapsulation of application data that is sourced by the application layer 216 for transmission toward the server 130, encapsulation and decapsulation of application data that is sourced by server 130 and that is intended for delivery to the application layer 216 of the MHD 110, or the like), primitive messages and associated primitive message formats that are configured to transport data of the application layer 216, or the like, as well as various combinations thereof.” The primitive messages used by the reliable link layer 222C include rules and a format for transporting data (i.e., a “transport protocol”).); and
cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols ([0052] – “The server 130 transmits the application data to the WAD 120 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection). The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120. The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222C of the WAD 120. The reliable link layer 222C of the WAD 120 receives the primitive messages, places the primitive messages into link layer data structures supported by the reliable link layer 222C”; [0022] – “The buffer 123 is configured to store, in various forms as may be provided or supported at various communication layers of communication protocol stack 124, both data communicated or intended for communication between the MHD 110 and the WAD 120 and data communicated or intended for communication between the WAD 120 and the server 130.” The application data (data communicated between the WAD 120 and server 130 which was stored in buffer 123, e.g., when it is received via the TCP socket at the WAD 120) is placed into the primitive messages, which adhere to a transport protocol as described in [0050], and provided to the reliable link layer 222C.).
Francini fails to expressly teach the operations occurring in response to an application programming interface (API) call.
However, Bach teaches a server sending data towards a client in response to a socket application programming interface (API) call (Col. 9, lines 2-24 – “FIG. 10 illustrates the communication flow when a socket "write()" API command is issued by an application in a host processor. The "write()" API is issued by either a client or server process to pass data across the network to be read by the connected process. […] The application issues a "write()" socket API command with associated data 1006. […] DSM builds the API call and causes a "write()" with the necessary data to be placed on the TCP/IP protocol stack.”; Col. 7, lines 11-13 – “The standard sockets API has a set of commands to establish the application to application socket connection and to transfer data between the connected applications.”). Further, Bach explicitly teaches that a socket API uses a protocol for exchanging data (Col. 2, lines 49-65 – “application program 108 communicates via the network using an Application programming interface (API). The sockets protocol is one of the more prevalent application to application APIs. […] The sockets API defines the format and parameter content of the commands an application program uses to establish communications with another application. It defines the API for both client and server applications and for connection-less and connection-based links. The defined API functions cause the operating system to issue the necessary commands to establish a communications link and to exchange data over that link.”; Col. 1, lines 41-44 – “Communications between client and server application takes place according to a defined network protocol. A protocol is a set of rules and conventions used by the applications participating in a conversation.”).
Francini and Bach are considered to be analogous art to the claimed invention because they are reasonably pertinent to the problem faced by the inventor of using an API to transfer information between computing resources using one or more transport protocols. As noted in the cited portions of Francini, paragraph [0052] teaches the claimed operations occurring in response to the application layer 236 on the server 130 sending application data to the WAD 120 via a transport layer connection, where a server socket API 235 sits between the application server in the application layer 236 and the transport layer 234 (Francini: FIG. 2B). Further, the operations on the WAD 120, including providing the application data received from the transport layer connection to the reliable link layer in primitive messages of the API, are performed using the client socket API-south 215S on the WAD 120 (Francini: [0052]). As taught by Bach, a socket API may be used by an application on a server to send application data through a call to a socket API, e.g., write() (Bach: Col. 9, lines 2-24). Therefore, it would have been obvious to one of ordinary skill in the art that the operations performed in Francini using a server socket API and/or a client socket API would be performed in response to an API call to the server socket API and/or client socket API as taught by Bach. It is well-known in the art that an application programming interface, such as the prevalent sockets API, is used by calling functions or commands of the API, as evidenced by Bach (Bach: Col. 2, lines 49-65). Further, using standard socket API commands as taught by Bach allows programs previously written using the prevalent sockets API to be used without requiring modification (Bach: Col. 5, lines 44-47). Additionally, the methods of Bach provide the benefit of improved performance in large computer systems engaging in network communications (Bach: Col. 3, line 46 - Col. 4, line 5).
Regarding claim 6, the combination of Francini in view of Bach teaches The one or more processors of claim 1, wherein:
at least one of the one or more transport protocols is a transport layer protocol (Francini: [0052] – “The server 130 transmits the application data to the WAD 120 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection). The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120.”); and
to cause the data to be provided from the storage comprises sending the data from the storage ([0052] – “The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120. The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222C of the WAD 120. The reliable link layer 222C of the WAD 120 receives the primitive messages”. The reliable link layer 222C receives the application data in the primitive messages, i.e., the client socket API sends the application data (“the data from the storage”), within primitive messages, to the reliable link layer 222C.).
Regarding claim 10, Francini teaches a system, comprising memory to store instructions that, as a result of execution by one or more processors (FIG. 1, communication system 100 comprises wireless access device 120 with processor 121 and memory 122; [0069]-[0071] – “FIG. 5 depicts a high-level block diagram of a computer suitable for use in performing various functions described herein. The computer 500 includes a processor 502 (e.g., a central processing unit (CPU), a processor having a set of processor cores, a processor core of a processor, or the like) and a memory 504 (e.g., a random access memory (RAM), a read only memory (ROM), or the like). The processor 502 and the memory 504 are communicatively connected. The computer 500 also may include a cooperating element 505. [...] The cooperating element 505 may be a process or set of instructions that can be loaded into the memory 504 and executed by the processor 502 to implement functions as discussed herein."), cause the system, in response to an application programming interface (API) call, to: perform the active functions performed by the one or more processors of claim 1. Accordingly claim 10 is rejected as being unpatentable over Francini in view of Bach for the same reasons presented with respect to claim 1.
Regarding claim 15, the combination of Francini in view of Bach teaches the system of claim 10, wherein the information is to be transferred using an application that causes calls from one layer associated with one transport protocol to perform operations in a second layer associated with a second transport protocol (Francini: [0031] – “the client socket API 215 is an application programming interface that is configured to allow application programs to control and use network sockets."; [0052] – “In FIG. 2B, in a direction of transmission from the application layer 236 of the server 130 toward the application layer 216 of the MHD 110, communication of the application data of the application layer 236 of the server 230 may be performed as follows. The server 130 transmits the application data to the WAD 120 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection). The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120. The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222C of the WAD 120. The reliable link layer 222C of the WAD 120 receives the primitive messages, places the primitive messages into link layer data structures supported by the reliable link layer 222C"; [0051] - "the reliable link layer 212 may place the primitive messages into PDCP protocol data units (PDUs)"; An application server (“application”) in application layer 236 of the server 130 causes application data to be transmitted to the WAD 120 via the transport layer (TCP connection - a first protocol), so that the client socket API-south 215 causes operations (placing data into PDCP PDUs - a second protocol) in the link layer.).
It would have been obvious to one of ordinary skill in the art that the operations performed in Francini using a server socket API and/or a client socket API would be performed in response to an API call to the server socket API and/or client socket API as taught by Bach. It is well-known in the art that an application programming interface, such as the prevalent sockets API, is used by calling functions or commands of the API, as evidenced by Bach (Bach: Col. 2, lines 49-65). Further, using standard socket API commands as taught by Bach allows programs previously written using the prevalent sockets API to be used without requiring modification (Bach: Col. 5, lines 44-47). Additionally, the methods of Bach provide the benefit of improved performance in large computer systems engaging in network communications (Bach: Col. 3, line 46 - Col. 4, line 5).
Regarding claim 18, Francini teaches a non-transitory machine-readable medium having stored thereon one or more instructions, which if performed by one or more processors (FIG. 1, communication system 100 comprises wireless access device 120 with processor 121 and memory 122; [0069]-[0071] – “FIG. 5 depicts a high-level block diagram of a computer suitable for use in performing various functions described herein. The computer 500 includes a processor 502 (e.g., a central processing unit (CPU), a processor having a set of processor cores, a processor core of a processor, or the like) and a memory 504 (e.g., a random access memory (RAM), a read only memory (ROM), or the like). The processor 502 and the memory 504 are communicatively connected. The computer 500 also may include a cooperating element 505. [...] The cooperating element 505 may be a process or set of instructions that can be loaded into the memory 504 and executed by the processor 502 to implement functions as discussed herein (in which case, for example, the cooperating element 505 (including associated data structures) can be stored on a non-transitory computer-readable storage medium, such as a storage device or other storage element (e.g., a magnetic drive, an optical drive, or the like)."), cause one or more processors to, in response to an application programming interface call, at least: perform the active functions performed by the one or more processors of claim 1. Accordingly claim 18 is rejected as being unpatentable over Francini in view of Bach for the same reasons presented with respect to claim 1.
Regarding claim 19, the combination of Francini in view of Bach teaches the non-transitory machine-readable medium of claim 18, wherein performance of the API is based at least on a transport configuration associated with the first wireless computing resource (Francini: [0052] – “The server 130 transmits the application data to the WAD 120 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection). The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120.”; Claim 11 – “networked transport layer socket comprises a Transmission Control Protocol (TCP) socket or a User Datagram Protocol (UDP) socket." The client socket API 215S receives application data from the server 130 via the transport layer connection (e.g., a transmission control protocol (TCP) connection) in the transport layer 224S, which is analogous to the “first wireless computing resource”. Therefore the client socket API on the WAD 120 is performed based, at least in part, on the transport protocol associated with/supported by the transport layer 224S.
For clarity of the record, the Examiner would like to point to [0083] of the specification of the instant application which recites "In at least one embodiment, transport refers to a method and/or protocol for sending data from one computing resource to another. In at least one embodiment, computing resources can support a transport protocol, and when a computing resource is configured to support said transport protocol, said computing resource is referred to as having a transport configuration.").
Regarding claim 26, Francini teaches a method comprising: the active functions performed by the one or more processors of claim 1. Accordingly claim 26 is rejected as being unpatentable over Francini in view of Bach for the same reasons presented with respect to claim 1.
Regarding claim 35, the combination of Francini in view of Bach teaches the method of claim 26, wherein the information includes different messages each associated with various transport protocols (Francini: [0022] – “The memory 122 includes a buffer 123 and a communication protocol stack 124, both of which are configured to support communications by the WAD 120. The buffer 123 is configured to store, in various forms as may be provided or supported at various communication layers of communication protocol stack 124, both data communicated or intended for communication between the MHD 110 and the WAD 120 and data communicated or intended for communication between the WAD 120 and the server 130.” The buffer stores data to be transmitted (“information”) in various forms supported at different layers of communication protocol stack 124 ("messages each associated with various transport protocols"). [0061] – “the wireless access device runs a communication protocol stack. The communication protocol stack includes a transport layer, a client socket API, a link layer (e.g., a reliable link layer), and a physical layer.”; [0030] – “the transport layer and associated transport layer socket are denoted using TCP as it is assumed that the transport layer is based on TCP (although it will be appreciated that other transport layer protocols, such as UDP or the like, may be used)."; [0045] - “the reliable link layer (namely, the reliable link layer 212 of MHD 110 and the reliable link layer 222C of WAD 120) may be divided into three sub-layers which are referred to as the Medium Access Control (MAC) sub-layer, the Radio Link Control (RLC) sub-layer, and the Packet Data Convergence Protocol (PDCP) sub-layer. The PDCP sub-layer is the highest of the sub-layers (i.e., closest to the application layer/farthest from the physical layer)"); and
the information is to be transferred using the API (Francini: [0061] – “The client socket API of the wireless access device communicates data, between the networked client transport layer socket for the mobile host device and the application layer of the mobile host device, based on interaction with the link layer (e.g., receiving data provided by the link layer for communications sourced by the application layer of the mobile host device and providing data for the link layer for communications intended for delivery to the application layer of the mobile host device).”).
Claims 2 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Francini in view of Bach as applied to claims 1 and 18, and further in view of Fuente et al. (U.S. Pub. No. 2005/0265370), hereinafter Fuente.
Regarding claim 2, the combination of Francini in view of Bach teaches the one or more processors of claim 1, wherein performance of the API causes the first wireless computing resource to:
send the data stored in the storage to the second wireless computing resource (Francini: [0052] – “The server 130 transmits the application data to the WAD 120 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection). The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120. The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222C of the WAD 120. The reliable link layer 222C of the WAD 120 receives the primitive messages, places the primitive messages into link layer data structures supported by the reliable link layer 222C (which, as previously discussed, may vary for different types of link layers, such as cellular, WiFi, or the like), and passes the link layer data structures including the primitive messages to the physical layer 221 of the WAD 120. […] The physical layer 221 of the WAD 120 receives the link layer data structures from the reliable link layer 222C of the WAD 120 and transmits the link layer data structures from the WAD 120 toward MHD 110 wirelessly via the wireless communication link 140.”; [0022] – “The buffer 123 is configured to store, in various forms as may be provided or supported at various communication layers of communication protocol stack 124, both data communicated or intended for communication between the MHD 110 and the WAD 120 and data communicated or intended for communication between the WAD 120 and the server 130.”).
Francini in view of Bach fails to teach to decrement a reference counter used to indicate when to release the data stored in the storage.
However, Fuente teaches to decrement a reference counter used to indicate when to release the data stored in the storage ([0025]-[0026] – “Counter (108) maintains a count of the number of references to the buffer memory (104} by the accessors (106, 110}, the count being incremented on each reference and decremented on completion of each accessor's data transmission. [...] Memory manager (114) is adapted to lock buffer memory during write activity, to permit read access to the buffer memory (104) by accessors (106, 110) and to return the buffer memory to a free buffer pool when counter (108) signals that the count has reached zero."
For clarity of the record, the Examiner would like to point to paragraphs [0086] and [0093] of the Specification of the instant application, which recite “buffer_send() causes transport abstraction implementor to decrement a reference counter by one for example, if transport abstraction implementor 230 decrements a reference counter to zero (e.g., ref_count = 0) in response to buffer_send(), then transport abstraction implementor 230 releases an allocated buffer” and “buffer_send() API 352, which causes transport abstraction API implementor to send data over a transport, decrement a reference counter by one, and release an allocated buffer back to a buffer pool 366”, respectively. Thus, as best understood in light of the specification, a reference counter which teaches releasing the buffer storing the data back to a buffer pool when the counter = 0 as taught by Fuente is analogous to a “reference counter used to indicate when to release the data stored in the storage” as recited by the claims. Further, it would have been obvious to one of ordinary skill in the art that releasing the buffer is also releasing the data stored in that buffer.).
Fuente is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using a buffer to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the socket API of Francini in view of Bach such that when the socket API sends the data, a reference counter used to indicate when to release the buffer is decremented as taught by Fuente. Using a reference counter allows the buffer memory to be allocated, pinned and freed without preventing read accesses by multiple accessors that transmit the data stored, which further allows for rapid retransmission of the data stored in the buffer when a transmission fails (Fuente: [0032]).
Regarding claim 24, the combination of Francini in view of Bach teaches the non-transitory machine-readable medium of claim 18, wherein:
the storage selected is an allocated buffer (Francini: FIG. 1, buffer 123 on wireless access device 120; [0022] - "The buffer 123 is configured to store, in various forms as may be provided or supported at various communication layers of communication protocol stack 124, both data communicated or intended for communication between the MHD 110 and the WAD 120 and data communicated or intended for communication between the WAD 120 and the server 130.").
Francini in view of Bach fails to teach the API is further to decrement a reference counter associated with the allocated buffer; and if the decremented reference counter holds a value of zero, the allocated buffer is deselected.
However, Fuente teaches to decrement a reference counter associated with the allocated buffer; and if the decremented reference counter holds a value of zero, the allocated buffer is deselected ([0025]-[0026] - "Counter (108) maintains a count of the number of references to the buffer memory (104) by the accessors (106, 110), the count being incremented on each reference and decremented on completion of each accessor's data transmission. [...] Memory manager (114) is adapted to lock buffer memory during write activity, to permit read access to the buffer memory (104) by accessors (106, 110) and to return the buffer memory to a free buffer pool when counter (108) signals that the count has reached zero."; [0032] - "the counter of the preferred embodiment allows the buffer memory to be allocated, "pinned", and freed"; [0042] - "At step (226), a further test is performed to determine whether the count has reached zero or not. If it has not reached zero, this part of the logic process returns to step (228) to be triggered by the next completion. If on any iteration, the count is determined to have reached zero, the memory manager (114) releases the buffer memory (104).").
Fuente is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using a buffer to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the socket API of Francini in view of Bach such that the socket API taught by Francini in view of Bach manages a reference counter for the buffer used to store data to be transmitted as taught by Fuente. Using a reference counter allows the buffer memory to be allocated, pinned and freed without preventing read accesses by multiple accessors that transmit the data stored, which further allows for rapid retransmission of the data stored in the buffer when a transmission fails (Fuente: [0032]).
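For illustration only, and not as part of the record, the reference-counting mechanism described by Fuente ([0025]-[0026], [0042]) and by the instant Specification ([0086], [0093]) may be sketched as follows. All names in this sketch are hypothetical and are not drawn from any cited reference.

```python
# Hypothetical sketch of a reference-counted buffer pool: a buffer is
# selected (allocated) from a free pool, its counter is incremented per
# accessor, decremented on each send, and the buffer is returned to the
# free pool (deselected) only when the counter reaches zero.

class BufferPool:
    def __init__(self, num_buffers, size):
        self.free = [bytearray(size) for _ in range(num_buffers)]
        self.ref_counts = {}  # id(buffer) -> outstanding references

    def allocate(self):
        buf = self.free.pop()          # select a buffer from the pool
        self.ref_counts[id(buf)] = 1   # one reference held by the allocator
        return buf

    def add_ref(self, buf):
        self.ref_counts[id(buf)] += 1  # another accessor references the buffer

    def buffer_send(self, buf):
        # Send completes: decrement the reference counter by one; when the
        # count reaches zero, release the buffer back to the free pool.
        self.ref_counts[id(buf)] -= 1
        if self.ref_counts[id(buf)] == 0:
            del self.ref_counts[id(buf)]
            self.free.append(buf)      # buffer deselected / returned to pool
            return True                # released
        return False                   # still referenced by another accessor

pool = BufferPool(num_buffers=2, size=16)
b = pool.allocate()
pool.add_ref(b)                       # two accessors now reference the buffer
assert pool.buffer_send(b) is False   # count 2 -> 1: buffer retained
assert pool.buffer_send(b) is True    # count 1 -> 0: buffer released
assert len(pool.free) == 2
```

As in Fuente [0032], keeping the buffer pinned while the count is nonzero permits additional accessors to read (e.g., retransmit) the stored data before the buffer is freed.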
Claims 3, 5, 7, 11, 13, 17, 22-23, 27-28, and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Francini in view of Bach as applied to claims 1, 10, 18, and 26, and further in view of Sen et al. (U.S. Pub. No. 2020/0218684), hereinafter Sen.
Regarding claim 3, the combination of Francini in view of Bach teaches the one or more processors of claim 1, but fails to expressly teach wherein the information is to be transferred based at least on a transport layer used to associate a function of the one or more transport protocols with a corresponding function of the one or more different transport protocols.
However, Sen teaches the information is to be transferred based at least on a transport layer used to associate a function of the one or more transport protocols with a corresponding function of the one or more different transport protocols ([0033] – “The accelerator application interface includes an accelerator library used to access accelerator resources. […] The accelerator library uses device libraries to abstract device-specific details into an API that provides the necessary translation to device-specific interfaces via corresponding device drivers. According to various embodiments, the accelerator library also uses individual transport definitions to abstract transport-specific details into an API that provides the necessary translation to transport-specific interfaces via corresponding transport protocols. The device library at the computing platform 102 is used to connect the application(s) to one or more local devices, such as a local hardware accelerator 212. Similarly, the transport definition is used to connect the application (e.g., application 820 of FIG. 8) to a remote hardware accelerator 312 resident in the accelerator sled 104. The transport definition allows applications to send commands and data to a target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) and receive data from the target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) without requiring knowledge of the underlying transport protocols (e.g., transport layers 823, 824 of FIG. 8) being used. In some embodiments, bindings are used to link the device libraries and transport definitions with device drivers and transport protocols, respectively. Bindings involve a process or technique of connecting two or more data elements or entities together. Bindings allows the accelerator library to be bound to multiple protocols, including one or more IX protocols and one or more transport protocols.”; [0034] – “The transport definition is independent of the transport protocols that are used to carry data to remote accelerator resources. […] Each of the transport layers (e.g., transport layers 823, 824 of FIG. 8) may include primitives such as read/write data from/to device; process device command; get/set device properties; and event subscription and notification. The transport layers (e.g., transport layers 823, 824 of FIG. 8) may also have mechanisms to allow scalable and low latency communication, such as […] protocol independent format definition allowing for multiple protocol bindings.” The transport layers 823, 824 have protocol-independent format definitions which use bindings (“a set of associations that correlate” the transport protocols) to connect the transport protocol-independent transport definition to the multiple different transport protocols.).
Sen is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the methods taught by Francini in view of Bach to include a set of associations that correlate functions of the transport protocols with functions of the different transport protocols as taught by Sen. Using an API that abstracts details of the underlying transport protocols and translates to a specific transport protocol used provides the benefit of seamless and transparent access to remote resources (Sen: [0034]).
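For illustration only, the binding concept described by Sen ([0033]-[0034]) — associating a protocol-independent transport function with corresponding protocol-specific functions — may be sketched as follows. All function and table names here are hypothetical, not taken from Sen or the instant application.

```python
# Hypothetical sketch of a transport abstraction: a binding table associates
# each abstract operation with the corresponding function of each underlying
# transport protocol, so the caller needs no knowledge of the transport used.

def tcp_send(payload):
    return ("tcp", payload)        # stand-in for a TCP-specific send routine

def rdma_send(payload):
    return ("rdma", payload)       # stand-in for an RDMA-specific send routine

# Bindings: abstract operation name -> per-protocol implementations.
BINDINGS = {
    "send": {"tcp": tcp_send, "rdma": rdma_send},
}

def transport_call(op, protocol, payload):
    """Translate a protocol-independent call into a protocol-specific one."""
    return BINDINGS[op][protocol](payload)

# The same abstract "send" is carried by whichever transport is bound.
assert transport_call("send", "tcp", b"data") == ("tcp", b"data")
assert transport_call("send", "rdma", b"data") == ("rdma", b"data")
```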
Regarding claim 5, the combination of Francini in view of Bach teaches the one or more processors of claim 1, but fails to teach wherein performing the API further causes the first wireless computing resource to perform one or more operations associated with the one or more different transport protocols.
However, Sen teaches wherein performing the API further causes the first wireless computing resource to perform one or more operations associated with the one or more different transport protocols ([0022] – “The network 106 (also referred to as a "network fabric 106" or the like) may be embodied as any type of network capable of communicatively connecting the computing platforms 102 and the accelerator sleds 104. […] wireless”; [0059] – “Referring now to FIG. 6, in use, the computing platform 102 may execute a process 600 for sending an accelerator message to a hardware accelerator 212 or 312. The process 600 begins at operation 602, in which an application on the computing platform 102 determines a message to be sent to a hardware accelerator 212 or 312. The message may be embodied as an instruction to read or write data, a command to execute a certain function, an instruction to get or set a setting on an accelerator, a control command such as a query regarding the capability of an accelerator queue, and/or any other suitable message. At operation 604, the application passes the command or function to the accelerator manager 402. In the illustrative embodiment, the application passes the command or function with use of an application programming interface such that the details of communication with the hardware accelerator 212 or 312 are hidden from the associated application.”; [0062] – “Referring back to operation 606, if the accelerator manager 402 is to pass the message to a remote hardware accelerator 312, the process 600 proceeds to operation 612, in which the computing platform 102 generates a command capsule based on the message received from the application. […] the command capsule may encapsulate the message in a protocol different from a protocol used by the message.”; [0063] – “At operation 616, the computing platform 102 sends the command capsule to the accelerator sled 104. The computing platform 102 may use any suitable communication protocol, such as TCP, RDMA, RoCE, RoCEv1, RoCEv2, iWARP, etc.” An application on computing platform 102 (a “first wireless computing resource”) passes a message which uses a protocol – as described in [0062] – to an accelerator manager 402 via an API. The accelerator manager causes the computing platform 102 (the “first wireless computing resource”) to generate (“one or more operations”) a command capsule for the message using a different protocol (“different transport protocols”) and to send the command capsule which uses the different protocol to the accelerator sled 104 (a “second wireless computing resource”).).
Sen is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the API taught by Francini in view of Bach such that the API causes the first wireless computing resource to perform one or more operations associated with the one or more different transport protocols used by the second wireless computing resource as taught by Sen. Using an API that abstracts details of the underlying transport protocols and translates to a specific transport protocol provides the benefit of seamless and transparent access to remote resources (Sen: [0034]).
Regarding claim 7, the combination of Francini in view of Bach teaches the one or more processors of claim 1, but fails to expressly teach wherein performing the API is further to cause the first wireless computing resource to perform an operation related to the one or more different transport protocols based at least on a corresponding operation of the one or more transport protocols.
However, Sen teaches wherein performing the API is further to cause the first wireless computing resource to perform an operation related to the one or more different transport protocols based at least on a corresponding operation of the one or more transport protocols ([0022] – “The network 106 (also referred to as a "network fabric 106" or the like) may be embodied as any type of network capable of communicatively connecting the computing platforms 102 and the accelerator sleds 104. […] wireless”; [0059] – “Referring now to FIG. 6, in use, the computing platform 102 may execute a process 600 for sending an accelerator message to a hardware accelerator 212 or 312. The process 600 begins at operation 602, in which an application on the computing platform 102 determines a message to be sent to a hardware accelerator 212 or 312. The message may be embodied as an instruction to read or write data, a command to execute a certain function, an instruction to get or set a setting on an accelerator, a control command such as a query regarding the capability of an accelerator queue, and/or any other suitable message. At operation 604, the application passes the command or function to the accelerator manager 402. In the illustrative embodiment, the application passes the command or function with use of an application programming interface such that the details of communication with the hardware accelerator 212 or 312 are hidden from the associated application.”; [0062] – “Referring back to operation 606, if the accelerator manager 402 is to pass the message to a remote hardware accelerator 312, the process 600 proceeds to operation 612, in which the computing platform 102 generates a command capsule based on the message received from the application. […] the command capsule may encapsulate the message in a protocol different from a protocol used by the message.”; [0063] – “At operation 616, the computing platform 102 sends the command capsule to the accelerator sled 104. The computing platform 102 may use any suitable communication protocol, such as TCP, RDMA, RoCE, RoCEv1, RoCEv2, iWARP, etc.” An application on computing platform 102 (a “first wireless computing resource”) passes a message (“corresponding operation”) which uses a protocol (“one or more transport protocols”) – as described in [0062] – to an accelerator manager 402 via an API. The accelerator manager causes the computing platform 102 (the “first wireless computing resource”) to generate (“one or more operations”) a command capsule for the message using a different protocol (“different transport protocols”) and to send the command capsule which uses the different protocol to the accelerator sled 104 (a “second wireless computing resource”).).
Sen is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the API taught by Francini in view of Bach such that the API causes the first wireless computing resource to perform one or more operations associated with the one or more different transport protocols used by the second wireless computing resource as taught by Sen. Using an API that abstracts details of the underlying transport protocols and translates to a specific transport protocol provides the benefit of seamless and transparent access to remote resources (Sen: [0034]).
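For illustration only, the command-capsule encapsulation described by Sen ([0062]-[0063]) — carrying a message of one protocol inside a different transport protocol — may be sketched as follows. The field names are hypothetical and do not appear in any cited reference.

```python
# Hypothetical sketch of Sen-style encapsulation: the original message (with
# its own protocol) is carried unmodified as the capsule payload, while the
# outer framing uses a different transport protocol.

def make_capsule(message, outer_protocol):
    # Only the outer protocol changes; the inner message is untouched.
    return {"protocol": outer_protocol, "payload": message}

msg = {"protocol": "native-accelerator", "command": "write", "data": b"\x01"}
capsule = make_capsule(msg, outer_protocol="rdma")
assert capsule["protocol"] == "rdma"                        # outer transport
assert capsule["payload"]["protocol"] == "native-accelerator"  # inner message
```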
Regarding claim 11, the combination of Francini in view of Bach teaches the system of claim 10, but fails to expressly teach wherein performance of the API is based, at least in part, on an identification of the one or more different transport protocols associated with the first wireless computing resource.
However, Sen teaches wherein performance of the API is based, at least in part, on an identification of the one or more different transport protocols associated with the first wireless computing resource ([0086] – “initiator 822 identifies or determines one or more transport protocols to be used during a communication session 830 with target accelerator resource(s). Any number of transport protocols may be used”; [0022] – “The network 106 (also referred to as a "network fabric 106" or the like) may be embodied as any type of network capable of communicatively connecting the computing platforms 102 and the accelerator sleds 104. […] wireless”; In FIG. 8, initiator 822, on computing platform 102 (a “first wireless computing resource”), identifies the transport protocols used to communicate with target accelerator resource(s), such as accelerator sled 104 (a “second wireless computing resource”). Since these protocols are used by a second wireless computing resource, they are analogous to the “different transport protocols”. [0032]-[0033] - "a unified architecture or protocol stack is created that allows an application (e.g., application 820 of FIG. 8) hosted by the computing platform 102 to use both the locally attached accelerator 212 and remotely attached accelerators (e.g., one or more of hardware accelerators 312 of FIG. 3) (collectively referred to as "accelerator resources" or the like) over a network fabric 106. In these embodiments, the application (e.g., application 820 of FIG. 8) may access accelerator resources using an accelerator application interface (e.g., an accelerator API) that provides abstractions for accelerator resources in a compute environment, such as a VM (e.g., VM 815 of FIG. 8). The accelerator application interface includes an accelerator library used to access accelerator resources. The accelerator library may be an accelerator specific run-time library (e.g., Open Computing Language (OpenCL), CUDA, Open Programmable Acceleration Engine (OPAE) API, or the like) that provides mapping of application constructs on to a hardware accelerator context. The accelerator library uses device libraries to abstract device-specific details into an API that provides the necessary translation to device specific interfaces via corresponding device drivers. According to various embodiments, the accelerator library also uses individual transport definitions to abstract transport-specific details into an API that provides the necessary translation to transport-specific interfaces via corresponding transport protocols. The device library at the computing platform 102 is used to connect the application(s) to one or more local devices, such as a local hardware accelerator 212. Similarly, the transport definition is used to connect the application (e.g., application 820 of FIG. 8) to a remote hardware accelerator 312 resident in the accelerator sled 104. The transport definition allows applications to send commands and data to a target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) and receive data from the target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) without requiring knowledge of the underlying transport protocols (e.g., transport layers 823, 824 of FIG. 8) being used." The accelerator application interface (analogous to the claimed "API") comprises an accelerator library containing transport definitions which abstract transport-specific details for communicating with an accelerator into an API. The API provides translation from a protocol-independent transport definition to a transport-specific interface via a corresponding transport protocol (one of the “different transport protocols”) used to send data to the accelerator over a wireless network.).
Sen is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the API taught by Francini in view of Bach such that the API is based on information identifying the different transport protocols associated with one of the wireless computing resources as taught by Sen. Using an API that abstracts details of the underlying transport protocols and translates to a specific transport protocol provides the benefit of seamless and transparent access to remote resources (Sen: [0034]).
Regarding claim 13, the combination of Francini in view of Bach teaches the system of claim 10, but fails to expressly teach wherein an application associated with the first wireless computing resource calls the API to obtain information regarding the one or more different transport protocols supported by the second wireless computing resource.
However, Sen teaches an application ([0079] – “the initiator 822 is an application hosted by the VM 815”) associated with the first wireless computing resource (FIG. 8, computing platform 102; [0022] – “The network 106 (also referred to as a "network fabric 106" or the like) may be embodied as any type of network capable of communicatively connecting the computing platforms 102 and the accelerator sleds 104. […] wireless”) calls the API ([0085] – “the initiator 822 hosted by a VM 815 of a computing platform 102 executes process 900 for establishing a communication session 830 with target accelerator resource(s) via the accelerator manager 502. […] the various messages discussed a being communicated between the initiator 822 and the accelerator resource(s) may be performed according to processes 600-700 of FIGS. 6-7, respectfully.”; [0059] – “the computing platform 102 may execute a process 600 for sending an accelerator message to a hardware accelerator 212 or 312. The process 600 begins at operation 602, in which an application on the computing platform 102 determines a message to be sent to a hardware accelerator 212 or 312. […] At operation 604, the application passes the command or function to the accelerator manager 402. In the illustrative embodiment, the application passes the command or function with use of an application programming interface”; [0088] – “At operation 906, the initiator 822 generates and sends, to the target accelerator resource(s), a connection establishment request message for a primary connection 831.” Sending a message, e.g., the connection establishment request message, to a target hardware accelerator resource involves an application, e.g., the initiator, passing the message to an API (“calls the API”) as described in process 600.) 
to obtain information regarding the one or more different transport protocols ([0089] – “In response to the connection establishment request message for the primary connection 831, at operation 906, the initiator 822 receives a connection establishment response message for the primary connection 831 from the target accelerator resource(s). For example, where an RDMA-based protocol is used for the primary connection 831, such as RoCEv2, the target accelerator resource(s) may encapsulate an RDMA acknowledgement (ACK) packet within an Ethernet/IP/UDP packet (including either IPv4 or IPv6) and including suitable destination and source addresses based on the connection establishment request message. In embodiments, the connection establishment response message for the primary connection 831 includes a session ID, which may be included in the header or payload section of the message. The session ID is generated by the target accelerator resource(s) and is discussed in more detail infra. Other suitable information may be included in the connection establishment response message, such as an accelerator resource identifier and/or other protocol specific information.”) supported by the second wireless computing resource (FIG. 8, accelerator sled 104 with accelerator(s) 312 which is the “target hardware accelerator resource(s)”; [0033] – “the accelerator library also uses individual transport definitions to abstract transport-specific details into an API that provides the necessary translation to transport-specific interfaces via corresponding transport protocols. […] the transport definition is used to connect the application (e.g., application 820 of FIG. 8) to a remote hardware accelerator 312 resident in the accelerator sled 104. The transport definition allows applications to send commands and data to a target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) and receive data from the target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3)”).
Sen is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the teachings of Francini in view of Bach such that an application of the first wireless computing resource calls the API to obtain information regarding the one or more different transport protocols supported by the second wireless computing resource as taught by Sen. Using the initiator application of Sen that establishes multiple connections with a remote accelerator using different transport protocols over a wireless network achieves high-availability goals even when one transport protocol is temporarily unusable (Sen: [0073]).
Regarding claim 17, the combination of Francini in view of Bach teaches the system of claim 10, but fails to expressly teach wherein the first wireless computing resource is a virtual device.
However, Sen teaches wherein the first wireless computing resource is a virtual device ([0021] – “an application or virtual machine (VM) being executed by a processor 202 of the computing platform 102 (see FIG. 2) may access a hardware accelerator 212 or 312 (see FIGS. 2 and 3) in a manner that is transparent to the application or VM. For example, the application may access an application program interface (API), and the API is used to transparently perform the requested function on either a local hardware accelerator 212 or a remote hardware accelerator 312 without requiring any involvement from the underlying application.”; [0022] – “any type of network capable of communicatively connecting the computing platforms 102 and the accelerator sleds 104. […] wireless”; [0033] – “The accelerator application interface includes an accelerator library […] the accelerator library also uses individual transport definitions to abstract transport-specific details into an API that provides the necessary translation to transport-specific interfaces via corresponding transport protocols. […] The transport definition allows applications to send commands and data to a target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) and receive data from the target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) without requiring knowledge of the underlying transport protocols (e.g., transport layers 823, 824 of FIG. 8) being used.”; [0058] – “The accelerator virtualizer 508 is configured to present one physical hardware accelerator 312 as two or more virtual hardware accelerators 312. The accelerator virtualizer 508 may allow for two computing platforms 102 or two processors 202 or threads on the same computing platform 102 to access the same hardware accelerator 312 without any configuration necessary on the part of the computing platform 102. For example, the accelerator manager 502 may send an indication to a computing platform 102 that the accelerator sled 104 has two hardware accelerators 312 available, which are in fact two virtual hardware accelerator 312 that correspond to one physical hardware accelerator 312. The computing platform 102 may provide messages to each of the two virtual hardware accelerators 312, which are processed by the physical hardware accelerator 312 in such a way as to provide the same response as if the commands were being processed on two physical accelerators 312").
Sen is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the first wireless computing resource taught by Francini in view of Bach such that it is a virtual device, such as a virtual machine or virtual hardware accelerator, as taught by Sen. Sen teaches that data may be transferred from an application or virtual machine (a “wireless computing resource”) to remote hardware accelerators (also “wireless computing resources”) on an accelerator sled using an API that abstracts the transport protocols such that tasks may be performed faster and more efficiently (Sen: [0016]), and virtualized hardware accelerators on the accelerator sled allow multiple applications to use the same physical hardware accelerator (Sen: [0043]). Further, virtual machines provide scalability, flexibility, manageability, and utilization, which results in lower operating or overhead costs (Sen: [0074]).
Regarding claim 22, the combination of Francini in view of Bach teaches the non-transitory machine-readable medium of claim 18, but fails to teach wherein the one or more processors are one or more graphics processing units (GPUs).
However, Sen teaches wherein the one or more processors are one or more graphics processing units (GPUs) ([0025] – “the processor(s) 202 may include Intel® Core™ based processor(s) and/or Xeon® processor(s); Advanced Micro Devices (AMD) Zen® Core Architecture processor(s), such as Epyc® processor(s), Opteron™ series Accelerated Processing Units (APUs), and/or MxGPUs"; [0046]-[0047] – “It should be appreciated that, in such embodiments the accelerator manager circuit 402, the local accelerator manager circuit 404, and/or the remote accelerator manager circuit 406, etc., may form a portion of one or more of the processor 202 [...] The accelerator manager 402 is configured to manage accelerators that an application executed by the processor 202 may interface with. In some embodiments, the accelerator manager 402 may implement an application programming interface for accessing an accelerator". The API for accessing accelerator resources is implemented by accelerator manager 402, which may form part of processor 202. As stated in [0025], processor 202 may be an AMD MxGPU.).
Sen is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the teachings of Francini in view of Bach such that the one or more processors that execute instructions to perform an API are one or more GPUs as taught by Sen, since some computing tasks may be performed more quickly and/or efficiently by a GPU (Sen: [0002], [0016]).
Regarding claim 23, the combination of Francini in view of Bach teaches the non-transitory machine-readable medium of claim 18, but fails to expressly teach wherein: the first wireless computing resource is to call the API; and performance of the API causes, at least in part, the first wireless computing resource to transfer information to the second wireless computing resource without modification to the first wireless computing resource.
However, Sen teaches the first wireless computing resource is to call the API ([0022] – “The network 106 (also referred to as a "network fabric 106" or the like) may be embodied as any type of network capable of communicatively connecting the computing platforms 102 and the accelerator sleds 104. […] Network connectivity may be provided to/from the compute devices 102 and accelerator sleds 104 via respective network interface connectors using physical connections, which may be electrical (commonly referred to as a "copper interconnect"), optical, or wireless.”; [0059] - "Referring now to FIG. 6, in use, the computing platform 102 may execute a process 600 for sending an accelerator message to a hardware accelerator 212 or 312. The process 600 begins at operation 602, in which an application on the computing platform 102 determines a message to be sent to a hardware accelerator 212 or 312. [...] At operation 604, the application passes the command or function to the accelerator manager 402. In the illustrative embodiment, the application passes the command or function with use of an application programming interface such that the details of communication with the hardware accelerator 212 or 312 are hidden from the associated application." An application on computing platform 102 (a “first wireless computing resource”) passes information in a message ("call") to the accelerator manager 402 using an API.); and performance of the API causes, at least in part, the first wireless computing resource to transfer information to the second wireless computing resource ([0060] – “At operation 606, if the accelerator manager 402 is to pass the message to a local hardware accelerator 212, the process 600 proceeds to operation 608, in which the accelerator manager 402 passes the message to the local hardware accelerator 212. 
The accelerator manager 402 may pass the message to the hardware accelerator 212 in any suitable manner, such as by sending the message over a bus such as a PCIe bus, a QuickPath interconnect (QPI), a HyperTransport interconnect, etc. The accelerator manager 402 may select a local hardware accelerator 212 or a remote hardware accelerator 312 based on any number of factors"; [0062]-[0063] - "Referring back to operation 606, if the accelerator manager 402 is to pass the message to a remote hardware accelerator 312, the process 600 proceeds to operation 612, in which the computing platform 102 generates a command capsule based on the message received from the application. [...] the command capsule may encapsulate the message in a protocol different from a protocol used by the message. [...] At operation 616, the computing platform 102 sends the command capsule to the accelerator sled 104. The computing platform 102 may use any suitable communication protocol, such as TCP, RDMA, RoCE, RoCEv1, RoCEv2, iWARP, etc." In response to receiving the message using an API from the application on computing platform 102 (“first wireless computing resource”), the accelerator manager 402 passes the message ("transfers information") to a local or remote hardware accelerator (“second wireless computing resource”) using any suitable transport protocol. As stated in [0062], the protocol used to send the message to a remote hardware accelerator may be different from a protocol used by the original message.) without modification to the first wireless computing resource ([0047] – “in some embodiments, an application may interact with an accelerator manager 402 of a computing platform 102 a first time and a second time. 
In such an example, for the first interaction, the accelerator manager 402 may facilitate an interface with a local hardware accelerator 212 and, for the second interaction, the accelerator manager 402 may facilitate an interface with a remote hardware accelerator 312, without any change or requirements in how the application interacts with the accelerator manager 402 between the first interaction and the second interaction." The application is not modified even when the accelerator manager is transferring information to hardware accelerators that use different transport protocols (local vs. remote hardware accelerators).).
Sen is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the API taught by Francini in view of Bach such that the API is called by a computing resource, and performance of the API causes the calling resource to transfer information to other computing resources that support different transport protocols, without modification to the computing resource which called the API, as taught by Sen. Using an API that abstracts details of the underlying transmission types/transport protocols provides the benefit of seamless and transparent access to other computing resources (Sen: [0034]).
Regarding claim 27, the combination of Francini in view of Bach teaches the method of claim 26, but fails to expressly teach wherein performance of the API is based at least on a set of associations that correlate one or more functions of the one or more transport protocols with one or more functions of the one or more different transport protocols.
However, Sen teaches performance of the API is based at least on a set of associations that correlate one or more functions of the one or more transport protocols with one or more functions of the one or more different transport protocols ([0033] – “The accelerator application interface includes an accelerator library used to access accelerator resources. […] The accelerator library uses device libraries to abstract device-specific details into an API that provides the necessary translation to device-specific interfaces via corresponding device drivers. According to various embodiments, the accelerator library also uses individual transport definitions to abstract transport-specific details into an API that provides the necessary translation to transport-specific interfaces via corresponding transport protocols. The device library at the computing platform 102 is used to connect the application(s) to one or more local devices, such as a local hardware accelerator 212. Similarly, the transport definition is used to connect the application (e.g., application 820 of FIG. 8) to a remote hardware accelerator 312 resident in the accelerator sled 104. The transport definition allows applications to send commands and data to a target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) and receive data from the target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) without requiring knowledge of the underlying transport protocols (e.g., transport layers 823, 824 of FIG. 8) being used. In some embodiments, bindings are used to link the device libraries and transport definitions with device drivers and transport protocols, respectively. Bindings involve a process or technique of connecting two or more data elements or entities together. 
Bindings allows the accelerator library to be bound to multiple protocols, including one or more IX protocols and one or more transport protocols.”; [0034] – “The transport definition is independent of the transport protocols that are used to carry data to remote accelerator resources. […] Each of the transport layers (e.g., transport layers 823, 824 of FIG. 8) may include primitives such as read/write data from/to device; process device command; get/set device properties; and event subscription and notification. The transport layers (e.g., transport layers 823, 824 of FIG. 8) may also have mechanisms to allow scalable and low latency communication, such as […] protocol independent format definition allowing for multiple protocol bindings.” The accelerator library comprising an API uses bindings (“a set of associations that correlate” the transport protocols) which connect the accelerator library to multiple different protocols, and specifically connect the transport definition to the multiple different transport protocols.).
Sen is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the method taught by Francini in view of Bach to include a set of associations that correlate functions of the transport protocols with functions of the different transport protocols as taught by Sen. Using an API that abstracts details of the underlying transport protocols and translates to a specific transport protocol used provides the benefit of seamless and transparent access to remote resources (Sen: [0034]).
Regarding claim 28, the combination of Francini in view of Bach teaches the method of claim 26, but fails to expressly teach further comprising identifying the one or more different transport protocols.
However, Sen teaches identifying the one or more different transport protocols ([0086] – “initiator 822 identifies or determines one or more transport protocols to be used during a communication session 830 with target accelerator resource(s). Any number of transport protocols may be used”; [0022] – “The network 106 (also referred to as a "network fabric 106" or the like) may be embodied as any type of network capable of communicatively connecting the computing platforms 102 and the accelerator sleds 104. […] wireless”; [0032]-[0033] – applications on computing platforms 102 use an API to send and receive data from accelerator resources, such as accelerator sled 104, without underlying knowledge of the transport protocols. In FIG. 8, initiator 822, on computing platform 102 (a “first wireless computing resource”), identifies the transport protocols used to communicate with target accelerator resource(s), such as accelerator sled 104 (a “second wireless computing resource”). Since these protocols are used by a second wireless computing resource, they are analogous to the “different transport protocols”. ).
Sen is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the method taught by Francini in view of Bach to include identifying the different transport protocols as taught by Sen. Using an API that abstracts details of the underlying transport protocols and translates to a specific transport protocol used provides the benefit of seamless and transparent access to remote resources (Sen: [0034]).
Regarding claim 31, the combination of Francini in view of Bach teaches the method of claim 26, but fails to expressly teach further comprising transferring the information using an application that maps the API to an operation related to an operation of the one or more different transport protocols, wherein the application is implemented, at least in part, on a hardware accelerator.
However, Sen teaches transferring the information using an application that maps the API to an operation related to an operation of the one or more different transport protocols ([0033] – “The accelerator application interface includes an accelerator library used to access accelerator resources. The accelerator library may be an accelerator specific run-time library (e.g., Open Computing Language (OpenCL), CUDA, Open Programmable Acceleration Engine (OPAE) API, or the like) that provides mapping of application constructs on to a hardware accelerator context. [...] the accelerator library also uses individual transport definitions to abstract transport-specific details into an API that provides the necessary translation to transport-specific interfaces via corresponding transport protocols. [...] Similarly, the transport definition is used to connect the application (e.g., application 820 of FIG. 8) to a remote hardware accelerator 312 resident in the accelerator sled 104. The transport definition allows applications to send commands and data to a target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) and receive data from the target hardware accelerator (e.g., one or more of hardware accelerators 312 of FIG. 3) without requiring knowledge of the underlying transport protocols (e.g., transport layers 823, 824 of FIG. 8) being used." The API for accessing an accelerator (provided by accelerator manager 402 - see [0047]) translates ("maps") to transport-specific details. [0059] – “Referring now to FIG. 6, in use, the computing platform 102 may execute a process 600 for sending an accelerator message to a hardware accelerator 212 or 312. The process 600 begins at operation 602, in which an application on the computing platform 102 determines a message to be sent to a hardware accelerator 212 or 312. [...] At operation 604, the application passes the command or function to the accelerator manager 402. 
In the illustrative embodiment, the application passes the command or function with use of an application programming interface such that the details of communication with the hardware accelerator 212 or 312 are hidden from the associated application."; [0062] – “Referring back to operation 606, if the accelerator manager 402 is to pass the message to a remote hardware accelerator 312, the process 600 proceeds to operation 612, in which the computing platform 102 generates a command capsule based on the message received from the application. [...] the command capsule may rearrange or otherwise reorganize the message in preparation for being sent to the accelerator sled 104. In some embodiments the command capsule may encapsulate the message in a protocol different from a protocol used by the message." The accelerator manager (“application”) generates a command capsule corresponding to a transport protocol used by the accelerator sled (“the one or more different transport protocols”), which may be different from the protocol used by the message from the application passed to the accelerator manager using the API.), wherein the application is implemented, at least in part, on a hardware accelerator ([0047] – “The accelerator manager 402 is configured to manage accelerators that an application executed by the processor 202 may interface with. In some embodiments, the accelerator manager 402 may implement an application programming interface for accessing an accelerator"; [0054] - "The accelerator manager 502 is configured to manage the hardware accelerators 312 on the accelerator sled 104 and to allow remote interfacing with the hardware accelerators 312 through the host fabric interface 310. 
The accelerator manager 502 may process message capsules received from and sent to the computing platform 102 and may, based on the content of the message capsules, execute the relevant necessary operations to interface with the hardware accelerators 312, such as reading data from the hardware accelerator 312, writing data to the hardware accelerator 312, executing commands on the hardware accelerator 312, getting and setting properties of the hardware accelerator 312, receiving and processing events or notifications from the acceleration device 312 (such as sending a message capsule to send an interrupt or set a semaphore on the computing platform 102), etc." The "application" corresponds to the accelerator manager 402/502; accelerator manager 502 is implemented on accelerator sled 104. The accelerator manager provides an API for an application to access an accelerator as indicated in [0047]. Additionally, [0046] states environment 400, including accelerator manager 402, may be embodied on any component(s) of computing platform 102, which may include local hardware accelerator 212 as shown in FIG. 2.
Sen is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the teachings of Francini in view of Bach such that information is transferred using an application implemented at least in part on a hardware accelerator (e.g., accelerator manager), where the application maps the API to an operation related to a transport protocol. Using an API that abstracts details of the underlying transmission types/transport protocols provides the benefit of seamless and transparent access to other computing resources (Sen: [0034]), and using a hardware accelerator may result in certain computing tasks being performed faster and more efficiently (Sen: [0002] and [0016]).
Claims 4, 21, and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Francini in view of Bach as applied to claims 1, 18 and 26, and further in view of Hyder et al. (U.S. Patent No. 5,983,274), hereinafter Hyder.
Regarding claim 4, the combination of Francini in view of Bach teaches the one or more processors of claim 1, but fails to teach wherein the circuitry is further to perform the API based at least on one or more drivers of a transport layer, the one or more drivers used to operate the first and second wireless computing resources.
However, Hyder teaches to perform the API based at least on one or more drivers of a transport layer (Claim 1 – “a protocol driver; a device driver; an integrating driver that interfaces with the protocol driver and the device driver using defined APIs”; Col. 2, lines 13-20 – “Because there are different types of transport protocols developed over time by different entities for different reasons, there may be different types of transport protocol drivers acting as software components running on a single host computer system in order to provide the necessary networking capabilities for a given installation. Some common transport protocols include TCP/IP, IPX, AppleTalk®, and others.”), the one or more drivers used to operate the first and second wireless computing resources (Col. 1, lines 60-62 – “link layer implemented by a network card device driver, and the transport and network layers implemented as a transport protocol driver”; Col. 2, lines 20-24 – “Each transport protocol driver will communicate with one or more individual network card device drivers in order to send network data over a communications network and receive incoming packets from the communications network.”; Col. 1, lines 30-37 – “Data that is shared between computers is sent in packets across the physical network connection and read by destination computers. […] As used herein, the term "network data" refers to data or information that is actually transmitted over the communications network between different computers.”; Col. 6, lines 19-20 – “cellular, and other wireless technologies, etc. provide ripe opportunities for exploiting the present invention.”; Col. 7, line 56-Col. 
8, line 3 – “For sending network data from the upper layers 106, the transport protocol driver 100 will allocate a packet data structure from the integrating component 102, fill the data structure with network information and control information according to the present invention, and send it down through the integrating component 102 to the network card device driver 104 for transmitting the network data on the network interface card 108. In like manner, for a packet received from the network interface card 108, the network card device driver 104 will allocate a packet data structure from the integrating component 102, fill it with the network data and control information according to the present invention, and send it through the integrating component 102 to the transport protocol driver 100 for communication to the upper layers 106.”).
Hyder is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the teachings of Francini in view of Bach such that the API is performed based on one or more drivers of a transport layer that operate the first and second wireless computing resources as taught by Hyder. The layers of the ISO model, including the link layer and transport layer, are implemented using drivers as is well known in the art (Hyder: Col. 1, lines 51-62). Further, the integrating component (an API) taught by Hyder which is performed based on one or more drivers of a transport layer that operate computing resources sending and receiving data over a network provides the benefit of allowing transport protocol drivers and network card drivers to be developed more efficiently, and allowing communication with any available transport protocol (Hyder: Col. 3, lines 45-65).
Regarding claim 21, the combination of Francini in view of Bach teaches the non-transitory machine-readable medium of claim 18, but fails to teach wherein the one or more processors are further to perform the API based at least on one or more drivers of a transport layer, the one or more drivers used to operate the first and second wireless computing resources.
However, Hyder teaches to perform the API based at least on one or more drivers of a transport layer (Claim 1 – “a protocol driver; a device driver; an integrating driver that interfaces with the protocol driver and the device driver using defined APIs”; Col. 2, lines 13-20 – “Because there are different types of transport protocols developed over time by different entities for different reasons, there may be different types of transport protocol drivers acting as software components running on a single host computer system in order to provide the necessary networking capabilities for a given installation. Some common transport protocols include TCP/IP, IPX, AppleTalk®, and others.”), the one or more drivers used to operate the first and second wireless computing resources (Col. 1, lines 60-62 – “link layer implemented by a network card device driver, and the transport and network layers implemented as a transport protocol driver”; Col. 2, lines 20-24 – “Each transport protocol driver will communicate with one or more individual network card device drivers in order to send network data over a communications network and receive incoming packets from the communications network.”; Col. 1, lines 30-37 – “Data that is shared between computers is sent in packets across the physical network connection and read by destination computers. […] As used herein, the term "network data" refers to data or information that is actually transmitted over the communications network between different computers.”; Col. 6, lines 19-20 – “cellular, and other wireless technologies, etc. provide ripe opportunities for exploiting the present invention.”; Col. 7, line 56-Col. 
8, line 3 – “For sending network data from the upper layers 106, the transport protocol driver 100 will allocate a packet data structure from the integrating component 102, fill the data structure with network information and control information according to the present invention, and send it down through the integrating component 102 to the network card device driver 104 for transmitting the network data on the network interface card 108. In like manner, for a packet received from the network interface card 108, the network card device driver 104 will allocate a packet data structure from the integrating component 102, fill it with the network data and control information according to the present invention, and send it through the integrating component 102 to the transport protocol driver 100 for communication to the upper layers 106.”).
Hyder is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the teachings of Francini in view of Bach such that the API is performed based on one or more drivers of a transport layer that operate the first and second wireless computing resources as taught by Hyder. The layers of the ISO model, including the link layer and transport layer, are implemented using drivers as is well known in the art (Hyder: Col. 1, lines 51-62). Further, the integrating component (an API) taught by Hyder which is performed based on one or more drivers of a transport layer that operate computing resources sending and receiving data over a network provides the benefit of allowing transport protocol drivers and network card drivers to be developed more efficiently, and allowing communication with any available transport protocol (Hyder: Col. 3, lines 45-65).
Regarding claim 30, Francini teaches the method of claim 26, wherein the API […] is stored as part of a layer different from another layer comprising the first wireless computing resource (Francini: FIG. 2B, transport layer 224S is the layer comprising the first wireless computing resource, client socket API 215S is a separate layer of the stack; [0022] – “The wireless access device 120 includes a processor 121, a memory 122 […] The memory 122 includes a buffer 123 and a communication protocol stack 124”; [0061] – “the wireless access device runs a communication protocol stack. The communication protocol stack includes a transport layer, a client socket API, a link layer (e.g., a reliable link layer), and a physical layer. […] The client socket API of the wireless access device provides an interface between the networked client transport layer socket of the server-facing transport layer and the link layer at the wireless access device (rather than between an application layer and a transport layer).”).
Francini in view of Bach fails to expressly teach the API is to be called by the first wireless computing resource.
However, Hyder teaches the API is to be called by the first wireless computing resource (Col. 1, lines 60-62 – “data link layer implemented by network card device driver, and the transport and network layers implemented as a transport protocol driver”; Col. 7, lines 38-39 – “Application Programming Interface (API) is a set of subroutines provided by one software component”; Col. 10, lines 34-40 – “The transport protocol driver 100 then sends or transfers the packet to the integrating component 102 at step 140 by making a subroutine call […] the integrating component 102 will send or transfer the packet to the network card device driver at step 142”; Claim 1 – “protocol driver; a device driver; an integrating driver that interfaces with the protocol driver and the device driver using defined APIs”).
Hyder is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the teachings of Francini in view of Bach such that the API is called by the first wireless computing resource, i.e., the transport layer, as taught by Hyder. The methods of Hyder including the integrating component (an API) which is called by a driver of a transport layer (the “first wireless computing resource”) to transfer data to the driver of a link layer to be sent over a network provide the benefit of allowing transport protocol drivers and network card drivers to be developed more efficiently, and allowing communication with any available transport protocol (Hyder: Col. 3, lines 45-65).
Claims 8-9, 12, and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Francini in view of Bach as applied to claims 1, 10, and 26, and further in view of Kaltenberger et al. (NPL Document: OpenAirInterface: Democratizing innovation in the 5G Era), hereinafter Kaltenberger.
Regarding claim 8, the combination of Francini in view of Bach teaches the one or more processors of claim 1, wherein:
the first and second wireless computing resources are associated with a fifth generation (Francini: [0019] – “The communication system 100 includes […] a wireless access device (WAD) 120 […] communication system 100 may be provided using cellular wireless technology (e.g., Third Generation (3G) wireless technology, Fourth Generation (4G) wireless technology such as Long Term Evolution (LTE), Fifth Generation (5G) wireless technology, or the like)”; [0061] – “the wireless access device runs a communication protocol stack. The communication protocol stack includes a transport layer, a client socket API, a link layer (e.g., a reliable link layer), and a physical layer. […] The client socket API of the wireless access device provides an interface between the networked client transport layer socket of the server-facing transport layer and the link layer at the wireless access device (rather than between an application layer and a transport layer).”);
the first wireless computing resource associated with the first layer (Francini: FIG. 2B, first layer = transport layer 224S);
the second wireless computing resource associated with the second layer (Francini: FIG. 2B, second layer = link layer 222C);
the API associated with the third layer (Francini: FIG. 2B, third layer = client socket API 215S); and
the third layer is located between the first and second layers (Francini: [0061] – “The client socket API of the wireless access device provides an interface between the networked client transport layer socket of the server-facing transport layer and the link layer at the wireless access device (rather than between an application layer and a transport layer).” Client socket API 215S provides an interface between link layer 222C and transport layer 224S at the wireless access device 120.).
Francini in view of Bach fails to expressly teach fifth generation new radio (5G-NR).
However, Kaltenberger teaches fifth generation new radio (5G-NR) (Page 2: "5G is also known by Release 15 of 3GPP. This release includes a brand new core network and radio interface, called 5G New Radio (5G-NR)." The 5G release included both the new core network and radio interface, 5G-NR. Page 6: “Control Plane Network Functions in the 5G system architecture are based on the service based architecture. […] The protocol stack for the service based interfaces is Application/HTTP2/TLS/TCP/IP/L2”.)
Kaltenberger is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that the 5G computing resources (e.g., layers of the communication protocol stack of WAD 120) in the communication system provided by 5G wireless technology as taught by Francini (Francini: [0018]) could be considered 5G-NR computing resources as claimed, since 5G-NR was part of the 5G release as taught by Kaltenberger (Kaltenberger: page 2).
Regarding claim 9, the combination of Francini in view of Bach teaches the one or more processors of claim 1, wherein:
the API is further to transfer information between a first layer and a second layer corresponding to a fifth generation (Francini: [0061] – “the wireless access device runs a communication protocol stack. The communication protocol stack includes a transport layer, a client socket API, a link layer (e.g., a reliable link layer), and a physical layer. […] The client socket API of the wireless access device communicates data, between the networked client transport layer socket for the mobile host device and the application layer of the mobile host device, based on interaction with the link layer (e.g., receiving data provided by the link layer for communications sourced by the application layer of the mobile host device and providing data for the link layer for communications intended for delivery to the application layer of the mobile host device). The client socket API of the wireless access device provides an interface between the networked client transport layer socket of the server-facing transport layer and the link layer at the wireless access device”; [0019] – “The communication system 100 may be provided using various types of underlying wireless technologies and, therefore, the MHD 110 and the WAD 120 may be configured to support various types of underlying wireless technologies (e.g., in terms of the implementation of certain layers of the communication protocol stacks 114 and 124 of the MHD 110 and the WAD 120, respectively). For example, communication system 100 may be provided using cellular wireless technology (e.g., Third Generation (3G) wireless technology, Fourth Generation (4G) wireless technology such as Long Term Evolution (LTE), Fifth Generation (5G) wireless technology, or the like)”), wherein the second wireless computing resource associated with the second layer requests an operation associated with the one or more different transport protocols (Francini: [0049] – “In FIG. 
2B, the client socket API 215 is configured such that data of the application layer 216 of the MHD 110 (e.g., sourced by the application layer 216 or intended for delivery to the application layer 216), […] is passed between the application layer 216 of the MHD 110 and the networked client transport layer socket (transport layer 224S) which is hosted within the WAD 120 on behalf of the MHD 110) via the link layer connection 240B that is running over a physical connection between the MHD 110 and the WAD 120”; [0050] – “communication primitives of the client socket API 215 may include primitive rules for controlling manipulation of data (e.g., encapsulation and decapsulation of application data that is sourced by the application layer 216 for transmission toward the server 130, encapsulation and decapsulation of application data that is sourced by server 130 and that is intended for delivery to the application layer 216 of the MHD 110, or the like), primitive messages and associated primitive message formats that are configured to transport data of the application layer 216”; [0051] – “in a direction of transmission from the application layer 216 of the MHD 110 toward the application layer 236 of the server 130, communication of the application data of the application layer 216 of MHD 110 may be performed as follows. The client socket API-north 215N running on MHD 110 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 212 of the MHD 110.” FIG. 
2B – the MHD 110 is connected to the reliable link layer of the WAD (the “second layer”), therefore the MHD 110 can be considered analogous to the “second computing resource associated with the second layer”, including the application client of the MHD 110 which uses the client socket API 215 to send data to the server 130 via the WAD 120 – which can be “requested” by an application as evidenced by the send() socket API call of Bach relied upon in claim 1. The “different transport protocols” include the protocol specifying the format of the socket API primitives as described in [0050] and as evidenced by Bach.); and
performance of the API causes the first wireless computing resource associated with the first layer to cause performance of the operation associated with the one or more different transport protocols (Francini: [0051] – “The physical layer 211 of the MHD 110 receives the link layer data structures from the reliable link layer 212 of the MHD 110 and transmits the link layer data structures from the MHD 110 toward WAD 120 wirelessly via the wireless communication link 140. The physical layer 221 of the WAD 120 receives the link layer data structures of the reliable link layer 212 of the MHD 110 wirelessly via the wireless communication link 140. The physical layer 221 of the WAD 120 provides the link layer data structures of the reliable link layer 212 of the MHD 110 to the reliable link layer 222C of the WAD 120. The reliable link layer 222C of the WAD 120 receives the link layer data structures of the reliable link layer 212 of the MHD 110 from the physical layer 221 of the WAD 120, processes the link layer data structures of the reliable link layer 212 of the MHD 110 to recover the primitive messages included therein (which, as previously discussed, may vary for different types of link layers, such as cellular, WiFi, or the like), and provides the primitive messages to the client socket API-south 215S running on the WAD 120. 
[…]The client socket API-south 215S running on WAD 120 receives the primitive messages from the reliable link layer 222C of WAD 120, extracts the application data of the application layer 216 of MHD 110 from the primitive messages supported by the client socket API 215, and provides the application data of the application layer 216 of MHD 110 to the networked client transport layer socket (transport layer 224S) of the WAD 120 for transport to the server 130 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection).” Performance of the API causes the transport layer socket of transport layer 224S to send data to the server 130 (the operation requested by the application client on MHD 110 that is associated with a send() socket API call as evidenced by Bach).).
Francini in view of Bach fails to expressly teach fifth generation new radio (5G-NR).
However, Kaltenberger teaches fifth generation new radio (5G-NR) (Page 2: "5G is also known by Release 15 of 3GPP. This release includes a brand new core network and radio interface, called 5G New Radio (5G-NR)."; The 5G release included both the new core network and radio interface, 5G-NR; Page 6: “Control Plane Network Functions in the 5G system architecture are based on the service based architecture. […] The protocol stack for the service based interfaces is Application/HTTP2/TLS/TCP/IP/L2”.)
Kaltenberger is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that the 5G computing resources (e.g., layers of the communication protocol stack of WAD 120) in the communication system provided by 5G wireless technology as taught by Francini (Francini: [0018]) could be considered 5G-NR computing resources as claimed, since 5G-NR was part of the 5G release as taught by Kaltenberger (Kaltenberger: page 2).
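For illustration only (this sketch is not code from Bach, Francini, or any other reference of record), the pattern relied upon above — an application requesting a transport operation through a socket-API send() primitive, with the operation then performed by the stack below the API — can be sketched as follows; the socketpair here is merely a stand-in for the API boundary:

```python
import socket

# Purely illustrative: the application "requests" a transport operation by
# invoking the send() socket API primitive; layers below the API boundary
# then carry the operation out. A connected socket pair stands in for that
# boundary.
app_side, transport_side = socket.socketpair()

app_side.send(b"application data")    # the application's API call (cf. send())
received = transport_side.recv(1024)  # the operation performed below the API

app_side.close()
transport_side.close()
```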
Regarding claim 12, the combination of Francini in view of Bach teaches the system of claim 10, wherein:
the information is to be transferred between a first layer (Francini: FIG. 2B, transport layer 224S) and a second layer (Francini: FIG. 2B, link layer 222C) of a fifth generation (Francini: [0019] – “The communication system 100 includes […] a wireless access device (WAD) 120 […] communication system 100 may be provided using cellular wireless technology (e.g., Third Generation (3G) wireless technology, Fourth Generation (4G) wireless technology such as Long Term Evolution (LTE), Fifth Generation (5G) wireless technology, or the like)”; [0061] – “the wireless access device runs a communication protocol stack. The communication protocol stack includes a transport layer, a client socket API, a link layer (e.g., a reliable link layer), and a physical layer. […] The client socket API of the wireless access device provides an interface between the networked client transport layer socket of the server-facing transport layer and the link layer at the wireless access device (rather than between an application layer and a transport layer).”); and
the first layer and second layer are each associated with a different transport protocol (Francini: [0051] – “networked client transport layer socket (transport layer 224S) of the WAD 120 for transport to the server 130 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection)”; [0045] - "where the link layer connection 240B is provided using LTE, the link layer connection 240B is called a radio bearer or simply a bearer and the reliable link layer (namely, the reliable link layer 212 of MHD 110 and the reliable link layer 222C of WAD 120) may be divided into three sub-layers which are referred to as the Medium Access Control (MAC) sub-layer, the Radio Link Control (RLC) sub-layer, and the Packet Data Convergence Protocol (PDCP) sub-layer."; [0052] – “where the reliable link layer 222C of the WAD 120 is provided using LTE, the reliable link layer 222C may place the primitive messages into PDCP PDUs”).
Francini in view of Bach fails to expressly teach fifth generation new radio (5G-NR).
However, Kaltenberger teaches fifth generation new radio (5G-NR) (Page 2: "5G is also known by Release 15 of 3GPP. This release includes a brand new core network and radio interface, called 5G New Radio (5G-NR)."; The 5G release included both the new core network and radio interface, 5G-NR; Page 6: “Control Plane Network Functions in the 5G system architecture are based on the service based architecture. […] The protocol stack for the service based interfaces is Application/HTTP2/TLS/TCP/IP/L2”.)
Kaltenberger is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that the 5G computing resources (e.g., layers of the communication protocol stack of WAD 120) in the communication system provided by 5G wireless technology as taught by Francini (Francini: [0018]) could be considered 5G-NR computing resources as claimed, since 5G-NR was part of the 5G release as taught by Kaltenberger (Kaltenberger: page 2).
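The encapsulation and decapsulation of application data into primitive messages described in Francini's [0050] primitive rules can be sketched, purely for illustration (the two-byte length header below is hypothetical and is not a format disclosed by Francini):

```python
# Purely illustrative encapsulation/decapsulation of application data into a
# "primitive message"; the header format is hypothetical.
def encapsulate(app_data: bytes) -> bytes:
    # prepend a 2-byte big-endian length header to form the primitive message
    return len(app_data).to_bytes(2, "big") + app_data

def decapsulate(message: bytes) -> bytes:
    # strip the header to recover the original application data
    length = int.from_bytes(message[:2], "big")
    return message[2:2 + length]

message = encapsulate(b"payload")
recovered = decapsulate(message)
```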
Regarding claim 33, the combination of Francini in view of Bach teaches the method of claim 26, wherein:
the information is to be transferred between two layers of a fifth generation (Francini: [0019] – “The communication system 100 includes […] a wireless access device (WAD) 120 […] communication system 100 may be provided using cellular wireless technology (e.g., Third Generation (3G) wireless technology, Fourth Generation (4G) wireless technology such as Long Term Evolution (LTE), Fifth Generation (5G) wireless technology, or the like)”; [0052] – “In FIG. 2B, in a direction of transmission from the application layer 236 of the server 130 toward the application layer 216 of the MHD 110, communication of the application data of the application layer 236 of the server 230 may be performed as follows. The server 130 transmits the application data to the WAD 120 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection). The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120. The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222C of the WAD 120."; [0061] – “the wireless access device runs a communication protocol stack. The communication protocol stack includes a transport layer, a client socket API, a link layer (e.g., a reliable link layer), and a physical layer. [...] The client socket API of the wireless access device provides an interface between the networked client transport layer socket of the server-facing transport layer and the link layer at the wireless access device (rather than between an application layer and a transport layer)." FIG. 
2B, "two layers" = transport layer 224S and link layer 222C on the wireless access device 120), wherein one layer is associated with the one or more transport protocols (Francini: [0051] – “networked client transport layer socket (transport layer 224S) of the WAD 120 for transport to the server 130 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection)”) and the other layer is associated with the one or more different transport protocols (Francini: [0052] – “where the reliable link layer 222C of the WAD 120 is provided using LTE, the reliable link layer 222C may place the primitive messages into PDCP PDUs.” The reliable link layer uses the primitive messages, which have rules and a format, i.e., a protocol, for the data as disclosed in [0050], and thus the reliable link layer is associated with the protocol of the primitive messages.); and
the API is located in a third layer (Francini: FIG. 2B, client socket API 215S; [0061] – “the wireless access device runs a communication protocol stack. The communication protocol stack includes a transport layer, a client socket API, a link layer (e.g., a reliable link layer), and a physical layer.”).
Francini in view of Bach fails to expressly teach fifth generation new radio (5G-NR).
However, Kaltenberger teaches fifth generation new radio (5G-NR) (Page 2: "5G is also known by Release 15 of 3GPP. This release includes a brand new core network and radio interface, called 5G New Radio (5G-NR)."; The 5G release included both the new core network and radio interface, 5G-NR; Page 6: “Control Plane Network Functions in the 5G system architecture are based on the service based architecture. […] The protocol stack for the service based interfaces is Application/HTTP2/TLS/TCP/IP/L2”.)
Kaltenberger is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that the 5G computing resources (e.g., layers of the communication protocol stack of WAD 120) in the communication system provided by 5G wireless technology as taught by Francini (Francini: [0018]) could be considered 5G-NR computing resources as claimed, since 5G-NR was part of the 5G release as taught by Kaltenberger (Kaltenberger: page 2).
Claims 14 and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Francini in view of Bach as applied to claims 10 and 26, and further in view of Lebin et al. (U.S. Pub. No. 2021/0385252), hereinafter Lebin.
Regarding claim 14, the combination of Francini in view of Bach teaches the system of claim 10, but fails to teach wherein the API is embedded within another API.
However, Lebin teaches wherein the API is embedded within another API ([0043] - "Each called API server may in turn call additional APIs, and this execution flow can be nested many levels deep." Called APIs may call other APIs.).
Lebin is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using APIs to facilitate an information transfer. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Francini in view of Bach such that the API is embedded within another API as taught by Lebin. Embedding APIs within other APIs allows the different functionalities of a complex API call to be isolated and split into multiple API calls (Lebin: [0043]).
Regarding claim 32, the combination of Francini in view of Bach teaches the method of claim 26, but fails to teach wherein the API is embedded within another API.
However, Lebin teaches wherein the API is embedded within another API ([0043] - "Each called API server may in turn call additional APIs, and this execution flow can be nested many levels deep." Called APIs may call other APIs.).
Lebin is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using APIs to facilitate an information transfer. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Francini in view of Bach such that the API is embedded within another API as taught by Lebin. Embedding APIs within other APIs allows the different functionalities of a complex API call to be isolated and split into multiple API calls (Lebin: [0043]).
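The nested execution flow described in Lebin's [0043] — a called API in turn calling an additional API — can be sketched, purely for illustration (the function names below are hypothetical, not Lebin's):

```python
# Purely illustrative nesting of API calls in the sense of Lebin [0043]:
# the outer API's execution flow invokes a further API, one level deep here,
# though the nesting "can be nested many levels deep."
def inner_api(payload: str) -> str:
    # the nested ("embedded") API call
    return payload.upper()

def outer_api(payload: str) -> str:
    # the called API in turn calls an additional API
    return "outer(" + inner_api(payload) + ")"

result = outer_api("data")
```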
Claims 16, 25, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Francini in view of Bach as applied to claims 10, 18, and 26, and further in view of Young (U.S. Pub. No. 2021/0320850).
Regarding claim 16, the combination of Francini in view of Bach teaches the system of claim 10, but fails to teach the system further comprising: a network orchestrator configured to identify one or more transport profiles supported by the first wireless computing resource, wherein the network orchestrator is to deploy the second wireless computing resource with the first wireless computing resource.
However, Young teaches a network orchestrator configured to identify one or more transport profiles supported by the first wireless computing resource ([0039] - “Service orchestration and transport path management section 305 may include a service orchestrator (SO) 325, an analytics engine 330, a network functions virtual orchestrator (NFVO) 335"; [0043] – “SO 325 may generate and send a message to SIDC 345 that instructs SIDC 345 to identify one or more network infrastructures that are candidates for relocating the current transport path to maintain the SLA for the application service. In one implementation, the message may include service requirements including SLA requirements and/or service profiles associated with the application service."; [0044] – “SIDC 345 may include an infrastructure catalog 370 that stores network infrastructure profiles that provide transport paths for a corresponding particular service profile. In one implementation, infrastructure catalog 370 obtains the network infrastructure profiles from an inventory database 375 which orders the network infrastructure profiles based on a deployment preference value associated with each of the network infrastructure profiles." Network infrastructure profiles of network service infrastructures (implemented across 5G-NR "wireless computing resources" such as base stations in RAN 120 - see [0019] and [0026]) provide transport paths, making them "transport profiles".), wherein the network orchestrator is to deploy the second wireless computing resource with the first wireless computing resource ([0044] – “In one implementation, deployment preference values include abstracted "distances " between nodes in a transport path associated with a network infrastructure. For example, PCE 350 may calculate a logical "distance " which may be a function of latency, inversely proportional to bandwidth, inversely proportional to reliability, etc. 
In one implementation, the calculated distances are based on the build of the network (e.g., size of the circuit), the current usage (e.g., based on monitoring), and expected usage (e.g., based on projected use of yet-to-be deployed services). [...] PCE 350 provides the alternative network service infrastructures to SIDC 345 as candidates for maintaining service availability at SLA requirements in response to a detected and/or projected outage and/or congested network conditions."; [0046]-[0047] - "SIDC 345 may select one or more alternative network service infrastructures and/or sub-infrastructures based on some or all of the above data. [...] NFVO 335 may, based on instructions received from SO 325, deploy the alternative network service infrastructures and/or sub-infrastructures to orchestrate within data transport section 310." The alternative network service infrastructures ("wireless computing resources") that meet the requirements in the profiles are deployed together.)
Young is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of transferring information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Francini in view of Bach to include a network orchestrator configured to identify transport profiles supported by a wireless computing resource and configured to deploy another of the wireless computing resources configured with an identified transport profile as taught by Young. Identifying network infrastructure profiles and deploying alternative network infrastructures corresponding to the identified profiles ensures service availability is maintained in a 5G wireless network (Young: [0044]).
Regarding claim 25, the combination of Francini in view of Bach teaches the non-transitory machine-readable medium of claim 18, but fails to teach wherein the first wireless computing resource has been configured with a transport profile supported by the second wireless computing resource.
However, Young teaches wherein the first wireless computing resource has been configured with a transport profile supported by the second wireless computing resource ([0040] – “SLA database 360 may store and maintain service requirement profiles for network customers (e.g., UE 110). Each service requirement profile describes a particular network customer's network service performance requirements.”; [0044] – “SIDC 345 may include an infrastructure catalog 370 that stores network infrastructure profiles that provide transport paths for a corresponding particular service profile. In one implementation, infrastructure catalog 370 obtains the network infrastructure profiles from an inventory database 375 which orders the network infrastructure profiles based on a deployment preference value associated with each of the network infrastructure profiles. […] In one implementation, deployment preference values include abstracted "distances" between nodes in a transport path associated with a network infrastructure. For example, PCE 350 may calculate a logical "distance" which may be a function of latency, inversely proportional to bandwidth, inversely proportional to reliability, etc. In one implementation, the calculated distances are based on the build of the network (e.g., size of the circuit), the current usage (e.g., based on monitoring), and expected usage (e.g., based on projected use of yet-to-be deployed services). [...] PCE 350 provides the alternative network service infrastructures to SIDC 345 as candidates for maintaining service availability at SLA requirements in response to a detected and/or projected outage and/or congested network conditions."; [0045] – “SIDC 345 may identify infrastructure design parameters associated with physical and virtual components of a particular network infrastructure. [...]
The configuration of the multiple transport networks may include design parameters that detail the physical and virtual configuration of each transport network 320 and how they interconnect."; [0046]-[0047] – “SIDC 345 may select one or more alternative network service infrastructures and/or sub-infrastructures based on some or all of the above data. [...] NFVO 335 may, based on instructions received from SO 325, deploy the alternative network service infrastructures and/or sub-infrastructures to orchestrate within data transport section 310. A transport controller may, based on the instructions from SO 325, initiate configuration of transport networks 320 to support the alternative network service infrastructures and/or sub-infrastructures." Network service infrastructures (implemented across "computing resources" such as base stations in wireless network RAN 120 – see [0008], [0019] and [0026]), which allow UE 110 to connect wirelessly, are configured and deployed to provide transport paths, making them "transport profiles".).
Young is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of transferring information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Francini in view of Bach to include a network orchestrator configured to identify transport profiles supported by a wireless computing resource and configured to deploy another of the wireless computing resources configured with an identified transport profile as taught by Young. Identifying network infrastructure profiles and deploying alternative network infrastructures corresponding to the identified profiles ensures service availability is maintained in a 5G wireless network (Young: [0044]).
Regarding claim 29, the combination of Francini in view of Bach teaches the method of claim 26, but fails to teach further comprising: configuring the first wireless computing resource with transport profiles supported by the second wireless computing resource.
However, Young teaches configuring the first wireless computing resource with transport profiles supported by the second wireless computing resource ([0040] – “SLA database 360 may store and maintain service requirement profiles for network customers (e.g., UE 110). Each service requirement profile describes a particular network customer's network service performance requirements.”; [0044] – “SIDC 345 may include an infrastructure catalog 370 that stores network infrastructure profiles that provide transport paths for a corresponding particular service profile. In one implementation, infrastructure catalog 370 obtains the network infrastructure profiles from an inventory database 375 which orders the network infrastructure profiles based on a deployment preference value associated with each of the network infrastructure profiles. […] In one implementation, deployment preference values include abstracted "distances" between nodes in a transport path associated with a network infrastructure. For example, PCE 350 may calculate a logical "distance" which may be a function of latency, inversely proportional to bandwidth, inversely proportional to reliability, etc. In one implementation, the calculated distances are based on the build of the network (e.g., size of the circuit), the current usage (e.g., based on monitoring), and expected usage (e.g., based on projected use of yet-to-be deployed services). [...] PCE 350 provides the alternative network service infrastructures to SIDC 345 as candidates for maintaining service availability at SLA requirements in response to a detected and/or projected outage and/or congested network conditions."; [0045] – “SIDC 345 may identify infrastructure design parameters associated with physical and virtual components of a particular network infrastructure. [...]
The configuration of the multiple transport networks may include design parameters that detail the physical and virtual configuration of each transport network 320 and how they interconnect."; [0046]-[0047] – “SIDC 345 may select one or more alternative network service infrastructures and/or sub-infrastructures based on some or all of the above data. [...] NFVO 335 may, based on instructions received from SO 325, deploy the alternative network service infrastructures and/or sub-infrastructures to orchestrate within data transport section 310. A transport controller may, based on the instructions from SO 325, initiate configuration of transport networks 320 to support the alternative network service infrastructures and/or sub-infrastructures." Network service infrastructures (implemented across "computing resources" such as base stations in wireless network RAN 120 – see [0008], [0019] and [0026]), which allow UE 110 to connect wirelessly, are configured and deployed to provide transport paths, making them "transport profiles".).
Young is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of transferring information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Francini in view of Bach to include a network orchestrator configured to identify transport profiles supported by a wireless computing resource and configured to deploy another of the wireless computing resources configured with identified transport profiles as taught by Young. Identifying network infrastructure profiles and deploying alternative network infrastructures corresponding to the identified profiles ensures service availability is maintained in a 5G wireless network (Young: [0044]).
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Francini in view of Bach as applied to claim 18, and further in view of Kaltenberger and Welzl et al. (NPL Document: Transport Services: A Modern API for an Adaptive Internet Transport Layer), hereinafter Welzl.
Regarding claim 20, the combination of Francini in view of Bach teaches the non-transitory machine-readable medium of claim 18, wherein:
the information is to be transferred between a first layer and a second layer of a fifth generation (Francini: [0019] – “The communication system 100 includes […] a wireless access device (WAD) 120 […] communication system 100 may be provided using cellular wireless technology (e.g., Third Generation (3G) wireless technology, Fourth Generation (4G) wireless technology such as Long Term Evolution (LTE), Fifth Generation (5G) wireless technology, or the like)”; [0052] – “In FIG. 2B, in a direction of transmission from the application layer 236 of the server 130 toward the application layer 216 of the MHD 110, communication of the application data of the application layer 236 of the server 230 may be performed as follows. The server 130 transmits the application data to the WAD 120 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection). The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120. The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222C of the WAD 120."; [0061] - "the wireless access device runs a communication protocol stack. The communication protocol stack includes a transport layer, a client socket API, a link layer (e.g., a reliable link layer), and a physical layer. [...] The client socket API of the wireless access device provides an interface between the networked client transport layer socket of the server-facing transport layer and the link layer at the wireless access device (rather than between an application layer and a transport layer)." FIG. 2B, "two layers" = transport layer 224S and link layer 222C on the wireless access device 120, third layer = client socket API-south 215S).
Francini in view of Bach fails to expressly teach fifth generation new radio (5G-NR), and that the third layer (the client socket API) is based, at least in part, on multiple transport protocols.
However, Kaltenberger teaches fifth generation new radio (5G-NR) (Page 2: "5G is also known by Release 15 of 3GPP. This release includes a brand new core network and radio interface, called 5G New Radio (5G-NR)."; the 5G release included both the new core network and the radio interface, 5G-NR; Page 6: “Control Plane Network Functions in the 5G system architecture are based on the service based architecture. […] The protocol stack for the service based interfaces is Application/HTTP2/TLS/TCP/IP/L2”.)
Kaltenberger is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API to transfer information between wireless computing resources. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that the 5G computing resources (e.g., layers of the communication protocol stack of WAD 120) in the communication system provided by 5G wireless technology as taught by Francini (Francini: [0018]) could be considered 5G-NR computing resources as claimed, since 5G-NR was part of the 5G release as taught by Kaltenberger (Kaltenberger: page 2).
The combination of Francini in view of Bach and Kaltenberger fails to expressly teach that the third layer (the client socket API) is based, at least in part, on multiple transport protocols.
However, Welzl teaches an API for a layer of a network protocol stack that is based, at least in part, on multiple transport protocols (Abstract - "TAPS defines a new recommended API for the Internet's transport layer. This API gives access to a wide variety of services from various protocols, and it is protocol-independent: the transport layer becomes adaptive, and applications are no longer statically bound to a particular protocol and/or network interface. We give an overview of the TAPS API, and we demonstrate its flexibility and ease of use with an example using a Python-based open source implementation."; Implementations - "PyTAPS is a prototype implementation of a transport system using the specification of the abstract interface by the IETF TAPS working group. PyTAPS supports TCP, UDP, and the use of TLS over TCP.").
Welzl is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of using an API that abstracts over multiple transport protocols. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the client socket API taught by Francini in view of Bach such that the API is based on multiple transport protocols as suggested by Welzl. Welzl suggests the TAPS API, which supports multiple protocols, as a replacement for traditional socket APIs, since the TAPS API allows for more flexibility and switching between transport layer protocols such as TCP and UDP (Welzl: see Performance and Conclusion).
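For context only, the protocol-independent behavior Welzl attributes to TAPS can be illustrated with a short sketch. This is not the PyTAPS code itself; the function names and the "reliable" preference below are hypothetical, and standard Python sockets stand in for the TAPS transport system. The point illustrated is that the application states a requirement rather than naming TCP or UDP, and the API binds the requirement to a concrete transport protocol.

```python
import socket

def open_endpoint(reliable: bool) -> socket.socket:
    """Map an abstract transport requirement onto a concrete protocol.

    The caller never names TCP or UDP; the API selects a stream
    (TCP-style) or datagram (UDP-style) transport from the preference.
    """
    kind = socket.SOCK_STREAM if reliable else socket.SOCK_DGRAM
    return socket.socket(socket.AF_INET, kind)

def demo_unreliable_roundtrip(payload: bytes) -> bytes:
    """Send one datagram over the loopback interface and read it back."""
    rx = open_endpoint(reliable=False)
    rx.bind(("127.0.0.1", 0))              # OS picks a free port
    tx = open_endpoint(reliable=False)
    tx.sendto(payload, rx.getsockname())
    data, _addr = rx.recvfrom(4096)
    tx.close()
    rx.close()
    return data
```

Because both branches present the same `socket` interface, the application code above the API is unchanged whichever protocol is selected, which is the flexibility Welzl describes.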
Claim 34 is rejected under 35 U.S.C. 103 as being unpatentable over Francini in view of Bach as applied to claim 26, and further in view of ROY et al. (U.S. Pub. No. 2023/0044165), hereinafter ROY, and DONG et al. (U.S. Pub. No. 2022/0075731), hereinafter DONG.
Regarding claim 34, the combination of Francini in view of Bach teaches the method of claim 26, but fails to expressly teach wherein: performance of the API does not cause a reference counter to decrement; the reference counter is associated with the storage; and the storage is further to be used as part of a zero copy buffer method.
However, ROY teaches the storage is further to be used as part of a zero copy buffer method ([0048] - "one or more data transfers may potentially be implemented, partially or entirely, with a zero-copy transfer. In some embodiments, performing a zero copy transfer may involve, for example, transferring data between a target and a memory of a client using a memory access protocol (e.g., RDMA), For example, in some embodiments, one or more data transfers may be implemented with a zero-copy transfer by transferring data directly to a memory of a receiving device (e.g., memory 120 illustrated in FIG. 1 and/or buffer 251 illustrated in FIG. 2).").
ROY is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of transferring data between wireless computing resources using an API. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the teachings of Francini in view of Bach such that the storage is further to be used as part of a zero-copy buffer method as taught by ROY. Using a data transfer method such as a zero-copy buffer method as taught by ROY allows for the transfer of data with relatively low overhead and/or latency (ROY: [0021]).
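As an illustration of the zero-copy idea ROY describes (a transfer that avoids copying the payload through user-space buffers), the sketch below uses Python's `socket.sendfile()`, which delegates to the operating system's `sendfile` facility where available and falls back to a plain read/send loop otherwise. The demo function, file contents, and use of a local socket pair are hypothetical and assume a POSIX system; they are not drawn from ROY.

```python
import socket
import tempfile

def zero_copy_send(sock: socket.socket, path: str) -> int:
    """Hand the file to the kernel for transmission.

    socket.sendfile() uses os.sendfile() where the OS supports it, so
    the payload need not pass through a user-space buffer.
    """
    with open(path, "rb") as f:
        return sock.sendfile(f)

def demo() -> tuple[int, bytes]:
    """Round-trip a small payload over a local connected socket pair."""
    payload = b"example application data"
    with tempfile.NamedTemporaryFile() as f:  # reopening by name: POSIX only
        f.write(payload)
        f.flush()
        tx, rx = socket.socketpair()
        sent = zero_copy_send(tx, f.name)
        tx.close()                            # signals EOF to the receiver
        chunks = []
        while chunk := rx.recv(4096):
            chunks.append(chunk)
        rx.close()
    return sent, b"".join(chunks)
```

The receiving side is unaware of how the data was sent, which mirrors ROY's point that zero-copy is a property of the transfer path rather than of the data itself.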
Francini in view of Bach and ROY fails to expressly teach that performance of the API does not cause a reference counter to decrement, wherein the reference counter is associated with the storage.
However, DONG teaches performance of the API does not cause a reference counter to decrement; the reference counter is associated with the storage ([0038] – “Computing system 202 also includes one or more application programming interfaces (APIs) 220 configured to increment and decrement counters in buckets of buffers in a lock-free manner, including for multi-threading accesses"; [0057] – “counter logic 210 is configured to perform conditional counter operations on counters in buckets of arrays for the cache(s) being accessed. Conditional counter operation include, without limitation, incrementing a counter, decrementing a counter, and/or maintaining a value of a counter."; [0064] – “Buckets 702 include respective counters 708, each having a counter, stored and maintained memory 206 of FIG. 2, and that is incremented or decremented by counter logic 210, e.g., via a call to an API". Counters associated with cache ("storage") activity may be incremented and/or maintained based on counter logic which makes API calls to affect the counter increments. Decrementing is listed as an alternative and is therefore not required.).
DONG is considered to be analogous art to the claimed invention because it is reasonably pertinent to the problem faced by the inventor of managing a storage using an API. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the teachings of Francini in view of Bach and ROY such that the API does not cause a reference counter associated with the storage to decrement as taught by DONG. Using counters allows accesses to memory to be tracked and allows allocated memory to be used more efficiently (DONG: [0024]-[0025]).
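The conditional counter operations DONG describes (incrementing, decrementing, or maintaining a counter kept in a bucket, driven through an API call) can be sketched as follows. The class name, operation names, and bucket layout are illustrative only, and a lock stands in for DONG's lock-free design to keep the example short.

```python
import threading

class BucketCounters:
    """Counters kept in buckets, modified only through one API call."""

    def __init__(self, n_buckets: int = 8) -> None:
        self._counts = [0] * n_buckets
        self._lock = threading.Lock()

    def apply(self, bucket: int, op: str) -> int:
        """Conditional counter operation: 'inc', 'dec', or 'keep'.

        'keep' is the 'maintain a value' case: the call completes
        without changing the counter, so this API path never causes
        a decrement.
        """
        with self._lock:
            if op == "inc":
                self._counts[bucket] += 1
            elif op == "dec":
                self._counts[bucket] -= 1
            # op == "keep": leave the value unchanged
            return self._counts[bucket]
```

The sketch shows why decrementing is only an alternative: a caller tracking storage accesses can route every call through `apply()` with `"inc"` or `"keep"` and the associated counter is never decremented.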
Response to Arguments
Applicant’s arguments filed 11/21/2025 regarding the double patenting rejection of claims 1-35 (see pages 12-13 of the Remarks filed 11/21/2025) have been fully considered but they are not persuasive.
Applicant must file a terminal disclaimer as a complete and proper response to a double patenting rejection. Applicant’s statement of filing a terminal disclaimer upon allowance is not a proper response. See MPEP 804. A complete response to a nonstatutory double patenting (NSDP) rejection is either a reply by the Applicant showing that the claims subject to the rejection are patentably distinct from the reference claims or the filing of a terminal disclaimer in accordance with 37 CFR 1.321 in the pending application(s) with a reply to the Office action (see MPEP 1490 for a discussion of terminal disclaimers). Such a response is required even when the nonstatutory double patenting rejection is provisional.
Applicant’s arguments, see page 13 of the remarks, filed 11/21/2025, with respect to the rejection of claims 1, 10, 18 and 26 under 35 U.S.C. 112(b) have been fully considered and are persuasive. The rejection of claims 1, 10, 18, and 26 under 35 U.S.C. 112(b) has been withdrawn.
Specifically, the amendments to claims 1, 10, 18 and 26 overcome the previous rejections. However, claims 2, 4-5, 7-9, 11, 13-14, 19, 21, 23-24, 27, 30-35 remain rejected under 35 U.S.C. 112(b) for the reasons presented above in the section titled Claim Rejections - 35 USC § 112.
Applicant’s arguments with respect to claims 1, 10, 18, and 26 under 35 U.S.C. 102(a)(1) and 102(a)(2) on pages 13-15 of the remarks filed 11/21/2025 have been considered but are not persuasive.
Claims 1, 10, 18 and 26 stand rejected under 35 U.S.C. 103 as being unpatentable over Francini in view of new reference Bach (U.S. Patent No. 5,619,650).
Applicant argues Francini fails to teach “cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols”.
Examiner disagrees. Francini teaches to cause the data to be provided from the storage to the second wireless computing resource using the one or more different transport protocols ([0052] – “The server 130 transmits the application data to the WAD 120 via the transport layer connection 250B between the WAD 120 and the server 130 (illustratively, a TCP connection). The client socket API-south 215S running on WAD 120 receives the application data from the networked client transport layer socket (transport layer 224S) on the WAD 120. The client socket API-south 215S running on WAD 120 places the application data into primitive messages supported by the client socket API 215 and passes the primitive messages including the application data to the reliable link layer 222C of the WAD 120. The reliable link layer 222C of the WAD 120 receives the primitive messages, places the primitive messages into link layer data structures supported by the reliable link layer 222C”; [0022] – “The buffer 123 is configured to store, in various forms as may be provided or supported at various communication layers of communication protocol stack 124, both data communicated or intended for communication between the MHD 110 and the WAD 120 and data communicated or intended for communication between the WAD 120 and the server 130.” The application data (i.e., data communicated between the WAD 120 and server 130, which was stored in buffer 123 according to one or more transport protocols of the transport layer 224S, e.g., when it is received via the TCP socket at the WAD 120) is placed into the primitive messages of the socket API, which adhere to a “transport protocol” as described in [0050], and provided to the reliable link layer 222C.)
For reference, paragraph [0050] of Francini recites – “The communication primitives of the client socket API 215 may include primitive rules for controlling manipulation of data (e.g., encapsulation and decapsulation of application data that is sourced by the application layer 216 for transmission toward the server 130, encapsulation and decapsulation of application data that is sourced by server 130 and that is intended for delivery to the application layer 216 of the MHD 110, or the like), primitive messages and associated primitive message formats that are configured to transport data of the application layer 216, or the like, as well as various combinations thereof.” Since the primitive messages provided to the reliable link layer 222C adhere to rules and a format for transporting data, they use a “transport protocol”.
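The encapsulation Francini describes (placing application data into primitive messages that follow defined rules and a defined message format) can be illustrated with a minimal sketch. The type/length header layout below is hypothetical; Francini does not disclose a concrete primitive-message format.

```python
import struct

# Hypothetical header: 1-byte message type, 2-byte big-endian payload length.
_HEADER = "!BH"
_HEADER_LEN = struct.calcsize(_HEADER)  # 3 bytes

def encapsulate(app_data: bytes, msg_type: int = 1) -> bytes:
    """Place application data into a 'primitive message' with a fixed format."""
    return struct.pack(_HEADER, msg_type, len(app_data)) + app_data

def decapsulate(message: bytes) -> tuple[int, bytes]:
    """Recover the message type and the original application data."""
    msg_type, length = struct.unpack(_HEADER, message[:_HEADER_LEN])
    return msg_type, message[_HEADER_LEN:_HEADER_LEN + length]
```

Because both sides agree on the rules (the header fields) and the format (their order and widths), the message can carry application data across the layer boundary, which is the sense in which the primitive messages transport data.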
Bach further provides evidence that the primitive messages of a socket API as described by Francini adhere to a protocol for transporting data (Col. 2, lines 49-65 – “application program 108 communicates via the network using an Application programming interface (API). The sockets protocol is one of the more prevalent application to application APIs. […] The sockets API defines the format and parameter content of the commands an application program uses to establish communications with another application. It defines the API for both client and server applications and for connection-less and connection-based links. The defined API functions cause the operating system to issue the necessary commands to establish a communications link and to exchange data over that link.”; Col. 1, lines 41-44 – “a protocol is a set of rules and conventions”).
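The sequence Bach describes (sockets API calls causing the operating system to establish a communications link between client and server applications and to exchange data over it) corresponds to the familiar call pattern sketched below using Python's standard `socket` module. The echo server, thread handshake, and message contents are illustrative and not drawn from Bach.

```python
import socket
import threading

def echo_server(state: dict) -> None:
    """Server side: bind, listen, accept, then echo one message back."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))        # OS assigns a free port
    srv.listen(1)
    state["addr"] = srv.getsockname()
    state["ready"].set()              # tell the client where to connect
    conn, _peer = srv.accept()        # the OS completes the link here
    conn.sendall(conn.recv(1024))
    conn.close()
    srv.close()

def exchange(message: bytes) -> bytes:
    """Client side: establish the link, send data, read the reply."""
    state = {"ready": threading.Event()}
    server = threading.Thread(target=echo_server, args=(state,))
    server.start()
    state["ready"].wait()
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(state["addr"])
    cli.sendall(message)
    reply = cli.recv(1024)
    cli.close()
    server.join()
    return reply
```

Note that the application never issues transport commands directly: each API call (`connect`, `sendall`, `recv`) causes the operating system to perform the underlying protocol work, which is the behavior Bach attributes to the defined API functions.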
Accordingly, the combination of Francini in view of Bach teaches all the limitations of claims 1, 10, 18, and 26.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Walkin (U.S. Pub. No. 2018/0063013) teaches connections, including TCP connections, between client and server systems typically utilize send and receive buffers to transmit data, and further teaches transferring data between client and server applications includes copying data into a send buffer, the sending application using a sockets API to indicate to a kernel that there is data to be sent, and sending the contents of the send buffer over the connection to the receive buffer (see [0002] and [0026]).
Dinh et al. (U.S. Pub. No. 2021/0136095) teaches an application programming interface (API) refers to a set of subroutine definitions, protocols, and/or tools for building software, and generally, an API defines communication between software components (see [0018]).
Horie et al. (U.S. Patent No. 8,671,152) teaches a multiprocessor system for emulating RDMA functions, in which a first processor offloads a RDMA packet to a buffer, and a second processor reads the RDMA packet to perform TCP/IP processing thereon and to generate a TCP/IP packet to be transmitted (see Abstract and Claim 4).
Copsey et al. (U.S. Pub. No. 2014/0129731) teaches a method for data transfer utilizing a protocol module using a set of APIs configured to receive data from an application in a first protocol and send the data using a second protocol (see [0073]-[0076]).
Walbeck et al. (U.S. Pub. No. 2006/0248208) teaches a gateway, which provides an API, provides protocol translations from one protocol to another protocol, so that two networks using different protocols can be interconnected (see [0053] and [0061]). It also teaches a plurality of protocols used by the application layer of the OSI model (see [0051]).
Lowekamp et al. (U.S. Pub. No. 2014/0032774) teaches a gateway which provides APIs that translate communications from a first protocol to a second protocol to enable communication between clients that use different protocols (see [0026] and [0050]).
Wikipedia (NPL Document: “Protocol Stack”) teaches a protocol stack is the software implementation of a plurality of individual communication protocols, where each protocol module typically communicates with two others, and thus is thought of as layers in a stack of protocols (see page 1).
Dixon et al. (U.S. Pub. No. 2009/0217294) teaches it is known in the art how to combine multiple API calls into a single API call to perform the same operations as the multiple API call, as multiple API calls may be inefficient (see [0006]).
McCloghrie et al. (U.S. Patent No. 6,286,052) teaches one of ordinary skill in the art would recognize two or more API calls may be combined into a single call, or that any one call may be broken down into multiple calls (see Col. 19, lines 58-63).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNIFER MARIE GUTMAN whose telephone number is (703)756-1572. The examiner can normally be reached M-F: 9:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kevin Young can be reached at 571-270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JENNIFER MARIE GUTMAN/Examiner, Art Unit 2194 /KEVIN L YOUNG/Supervisory Patent Examiner, Art Unit 2194