DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to claims filed on 07/06/2023.
Claims 1-24 are pending.
Claim Objections
Claims 7, 22, and 23 are objected to because of the following informalities: The claims are missing a period at the end. Appropriate correction is required.
Claim 24 is objected to because of the following informalities: “generating first set of multiple source descriptors”, “generating second set of multiple source descriptor”, “generating third set of multiple source descriptor”, “generating fourth set of multiple source descriptor”, and “generating first set of multiple destination descriptors” should read “generating a first set of multiple source descriptors”, “generating a second set of multiple source descriptors”, “generating a third set of multiple source descriptors”, “generating a fourth set of multiple source descriptors”, and “generating a first set of multiple destination descriptors”. Appropriate correction is required.
Claim 8 depends, directly or indirectly, from objected claims and does not resolve the deficiencies thereof and is therefore objected to for at least the same reasons.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation “in response to a determination that the address is in the container data structure, accessing, by the data transform accelerator, the data transform command based on the address, the data transform command in the host computing unit;” in lines 19-21. It is unclear exactly what is meant by “the data transform command in the host computing unit” in this limitation. For the sake of compact prosecution, Examiner will interpret this to mean “in response to a determination that the address is in the container data structure, accessing, by the data transform accelerator, the data transform command in the host computing unit based on the address;”.
Claim 12 recites the limitation “the data transform accelerator” in lines 8-9. There is insufficient antecedent basis for this limitation in the claim. Nowhere prior in the claim is any reference to a data transform accelerator, thus it is unclear what data transform accelerator is being referred to. For clarity this limitation could be changed to “a data transform accelerator”.
Further, claim 12 recites the limitation “the host computing unit” in line 12. There is insufficient antecedent basis for this limitation in the claim. Nowhere prior in the claim is a reference to a host computing unit. Although the limitation of “a host” is present, it is unclear if this is meant to be the same as the host computing unit. For clarity this limitation could be changed to “the host” or the recitation of a host could be changed to “a host computing unit”.
Further, claim 12 recites the limitation “and additional pre-data” in line 16. There is insufficient antecedent basis for this limitation in the claim. Nowhere prior in the claim is any reference to additional pre-data, thus it is unclear what additional pre-data is being referred to. For the sake of compact prosecution, Examiner will interpret this to mean “additional metadata”.
Claim 22 recites the limitation “wherein a plurality of command submission session from a virtual machine each session having its own metadata shared by data transform commands grouped in the session”. It is unclear what is meant by this limitation. For the sake of compact prosecution, Examiner will interpret this to mean “wherein there are a plurality of command submission sessions from a virtual machine, each session having its own metadata shared by data transform commands grouped in the session”.
Claim 23 recites the limitation “wherein a plurality of virtual machine each creating its command submission sessions”. It is unclear what is meant by this limitation. For the sake of compact prosecution, Examiner will interpret this to mean “wherein there are a plurality of virtual machines each creating command submission sessions”.
Claim 24 recites the limitation “to the output buffers” in lines 14-15. There is insufficient antecedent basis for this limitation in the claim. Nowhere prior in the claim is a reference to output buffers. Although the limitation of “an output buffer” is present in claim 12 from which claim 24 depends, it is unclear if this is meant to be the same as the output buffers, and if so, it is unclear whether these buffers are singular or plural. For the sake of compact prosecution, Examiner will interpret this limitation to mean “to the output buffer”.
Claims 2-11 and 13-24 depend, directly or indirectly, from rejected claims and do not resolve the deficiencies thereof and are therefore rejected for at least the same reasons.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-24 are rejected under 35 U.S.C. 101 because the claimed invention recites a judicial exception (an abstract idea), is directed to that judicial exception because it has not been integrated into a practical application, and does not recite significantly more than the judicial exception. Examiner has evaluated the claims under the framework provided in the 2019 Patent Eligibility Guidance published in the Federal Register on 01/07/2019 and has provided such analysis below.
Step 1: Claims 1-11 are directed to a method and fall within the statutory category of process. Claims 12-24 are directed to a host and fall within the statutory category of machine. Therefore, “Are the claims to a process, machine, manufacture or composition of matter?” Yes.
In order to evaluate the Step 2A inquiry “Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?” we must determine, at Step 2A Prong 1, whether the claim recites a law of nature, a natural phenomenon or an abstract idea and, at Step 2A Prong 2, whether the claim recites additional elements that integrate the judicial exception into a practical application.
Step 2A Prong 1:
Claims 1 and 12: The limitation of “determining a communication interface between a host computing unit and a data transform accelerator hosting one or more virtual machines on a host operating system of the host computing unit,”, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can mentally determine a communication interface between a host computing unit and a data transform accelerator. Further, the limitation of “partitioning a collection of container data structures into multiple sets, wherein one set for each virtual function is assigned by a software driver running in the host operating system of host computing unit;”, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can mentally partition a collection of container data structures into multiple sets through mental assignment. This may also be done with pencil and paper. Further, the limitation of “partitioning a memory of the data transform accelerator into multiple partitions,”, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can mentally partition a memory into multiple partitions through mental assignment. This may also be done with pencil and paper.
Further, the limitation of “determining, by the data transform accelerator, an address associated with a data transform command in a container data structure of the collection of container data structures that is in the data transform accelerator;”, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe a data transform command and, based on these observations, can mentally determine an address associated with this command. This may also be done with pencil and paper.
Therefore, Yes, claims 1 and 12 recite a judicial exception.
Step 2A Prong 2:
Claims 1 and 12: The judicial exception is not integrated into a practical application. In particular, the claims recite additional element recitations of “with each virtual machine of the one or more virtual machines using one or more virtual function in the data transform accelerator,”, “wherein the virtual function represents a virtualized instance of a compute resource of the data transform accelerator;”, “wherein one partition is attached to each virtual function in data transform accelerator;”, “with each virtual machine using one or more than one virtual function in the data transform accelerator;”, and “and wherein the output data is the input data after being transformed by the one or more data transform operations”, which are merely recitations of technological environment/field of use (see MPEP § 2106.05(h)) which does not integrate a judicial exception into practical application. Further, the claims recite additional element recitations of “submitting one or more data transform commands for processing by the data transform accelerator device;”, “in response to a determination that the address is in the container data structure, accessing, by the data transform accelerator, the data transform command based on the address, the data transform command in the host computing unit;”, “and configuring, by the data transform accelerator, a data transform pipeline based on the metadata”, “generate container data structures in a memory of the data transform accelerator, the data transform accelerator in data communication with the host or in the memory of the host computing unit, using the software driver in the host operating system of the host computing unit;”, “generate input data in the memory hardware of host computing unit or in the memory of data transform accelerator by software driver in guest operating system of the virtual machine in the host computing unit;”, “generate metadata in the memory of data transform accelerator or in the memory of host computing unit, or in in the memory of the data transform accelerator and in the memory of the host computing unit by software driver in a guest operating system of the virtual machine in the host computing unit;”, “generate pre-data in the memory of data transform accelerator or in the memory of host computing unit, or in both in the memory of the data transform accelerator and in the memory of the host computing unit by software driver in guest operating system of the virtual machine in the host computing unit;”, “generate additional metadata in the memory of data transform accelerator or in the memory of host computing unit, or in both in the memory of the data transform accelerator and in the memory of the host computing unit by software driver in guest operating system of the virtual machine in the host computing unit;”, “reserve an output buffer in the memory hardware or in the memory of data transform accelerator or on the memory of host computing unit;”, “generate a first data transform command in the memory hardware, the first data transform command associated with the input data and the metadata;”, “and update the container data structure with an address of the first data transform command, wherein the address of the first data transform command is accessible by the data transform accelerator,”, “wherein accessing the address of the first data transform command by the data transform accelerator causes the data transform accelerator to obtain the input data, to perform one or more data transform operations on the input data based on the metadata, pre-data and additional pre-data and to transmit output data to the output buffer,”, which are merely recitations of data transmission, gathering, and storage which are insignificant extra solution activity (see MPEP §2106.05(g)) which does not integrate a judicial exception into practical application.
Further, the claims recite additional element recitations of “data processing hardware;”, “one or more than one virtual machine on a host operating system of the host computing unit,”, “one or more software drivers in the host operating system in the host computing unit;”, “one or more software drivers in a guest operating system in the virtual machines in the host computing unit;”, “and memory hardware in communication with the data processing hardware, the memory hardware storing instructions that when executed on the data processing hardware cause the data processing hardware to perform operations comprising:”, which are merely recitations of generic computing components (see MPEP § 2106.05(f)) which does not integrate a judicial exception into practical application.
Therefore, “Do the claims recite additional elements that integrate the judicial exception into a practical application?” No, these additional elements do not integrate the abstract idea into a practical application and they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
After having evaluated the inquiries set forth in Step 2A Prongs 1 and 2, it has been concluded that claims 1 and 12 not only recite a judicial exception but are directed to the judicial exception, as the judicial exception has not been integrated into a practical application.
Step 2B:
Claims 1 and 12: The claims do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than generic computing components, field of use/technological environment, and insignificant extra solution activity which do not amount to significantly more than the abstract idea. Further, the insignificant extra solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory” [MPEP§ 2106.05(d)(II)].
Therefore, “Do the claims recite additional elements that amount to significantly more than the judicial exception?” No, these additional elements, alone or in combination, do not amount to significantly more than the judicial exception.
Having concluded analysis within the provided framework, Claims 1 and 12 do not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to claim 2, the claim recites additional element recitations of “wherein in each memory partition of the data transform accelerator, the method further includes reserving one or more buffers to hold at least one of: metadata, pre-data, or additional metadata for data transform commands grouped together in a command submission session” which are merely recitations of data storage which are insignificant extra solution activity (see MPEP §2106.05(g)) which does not integrate a judicial exception into practical application. Further, the insignificant extra solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory” [MPEP§ 2106.05(d)(II)]. Further, claim 2 does not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claim 2 also fails both Step 2A prong 2, thus the claims are directed to the judicial exception as it has not been integrated into practical application, and fails Step 2B as not amounting to significantly more. Therefore, Claim 2 does not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to claim 3, the claim recites additional element recitations of “wherein obtaining the metadata based on the information in the data transform command comprises obtaining command metadata from a first input buffer in the data transform accelerator or in the memory of the host computing unit” which are merely recitations of data retrieval which are insignificant extra solution activity (see MPEP §2106.05(g)) which does not integrate a judicial exception into practical application. Further, the insignificant extra solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory” [MPEP§ 2106.05(d)(II)]. Further, claim 3 does not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claim 3 also fails both Step 2A prong 2, thus the claims are directed to the judicial exception as it has not been integrated into practical application, and fails Step 2B as not amounting to significantly more. Therefore, Claim 3 does not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to claims 4 and 15, the claims recite additional element recitations of “wherein the command metadata specifies data transform operations to be performed by the data transform pipeline/accelerator” which are merely recitations of technological environment/field of use (see MPEP § 2106.05(h)) which does not integrate a judicial exception into practical application. Further, claims 4 and 15 do not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claims 4 and 15 also fail both Step 2A prong 2, thus the claims are directed to the judicial exception as it has not been integrated into practical application, and fail Step 2B as not amounting to significantly more. Therefore, Claims 4 and 15 do not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to claim 5, the claim recites additional element recitations of “wherein obtaining the metadata based on the information in the data transform command comprises obtaining command pre-data from a second input buffer in the data transform accelerator or in the memory of the host computing unit” which are merely recitations of data retrieval which are insignificant extra solution activity (see MPEP §2106.05(g)) which does not integrate a judicial exception into practical application. Further, the insignificant extra solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory” [MPEP§ 2106.05(d)(II)]. Further, claim 5 does not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claim 5 also fails both Step 2A prong 2, thus the claims are directed to the judicial exception as it has not been integrated into practical application, and fails Step 2B as not amounting to significantly more. Therefore, Claim 5 does not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to claims 6 and 16, the claims recite additional element recitations of “wherein the command pre-data includes at least one of: initialization vector (IV), message authentication code (MAC), Galois/counter mode (GCM) authentication tag, or additional authentication data (AAD)” and “wherein the pre-data includes at least one parameter for the data transform accelerator: initialization vector (IV), message authentication code (MAC), Galois/counter mode (GCM) authentication tag, or additional authentication data (AAD)”, which are merely recitations of technological environment/field of use (see MPEP § 2106.05(h)) which does not integrate a judicial exception into practical application. Further, claims 6 and 16 do not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claims 6 and 16 also fail both Step 2A prong 2, thus the claims are directed to the judicial exception as it has not been integrated into practical application, and fail Step 2B as not amounting to significantly more. Therefore, Claims 6 and 16 do not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to claim 7, the claim recites additional element recitations of “wherein obtaining the metadata based on the information in the data transform command comprises obtaining additional command metadata from a third input buffer in the data transform accelerator or in the memory of host computing unit” which are merely recitations of data retrieval which are insignificant extra solution activity (see MPEP §2106.05(g)) which does not integrate a judicial exception into practical application. Further, the insignificant extra solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory” [MPEP§ 2106.05(d)(II)]. Further, claim 7 does not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claim 7 also fails both Step 2A prong 2, thus the claims are directed to the judicial exception as it has not been integrated into practical application, and fails Step 2B as not amounting to significantly more. Therefore, Claim 7 does not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to claims 8 and 17, the claims recite additional element recitations of “wherein the additional command metadata includes at least one of: a source token, or an action token” which are merely recitations of technological environment/field of use (see MPEP § 2106.05(h)) which does not integrate a judicial exception into practical application. Further, claims 8 and 17 do not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claims 8 and 17 also fail both Step 2A prong 2, thus the claims are directed to the judicial exception as it has not been integrated into practical application, and fail Step 2B as not amounting to significantly more. Therefore, Claims 8 and 17 do not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to claims 9 and 18, the claims recite additional element recitations of “the method further comprising obtaining, by the data transform accelerator, input data based on the information in the data transform command from the host computing unit memory or from the memory of the data transform accelerator” and “wherein the data transform accelerator obtains the input data from the host via the data communication” which are merely recitations of data retrieval which are insignificant extra solution activity (see MPEP §2106.05(g)) which does not integrate a judicial exception into practical application. Further, the insignificant extra solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory” [MPEP§ 2106.05(d)(II)]. Further, claims 9 and 18 do not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claims 9 and 18 also fail both Step 2A prong 2, thus the claims are directed to the judicial exception as it has not been integrated into practical application, and fail Step 2B as not amounting to significantly more. Therefore, Claims 9 and 18 do not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to claim 10, the claim recites additional element recitations of “the method further comprising performing, by the data transform accelerator, one or more data transform operations on the input data using the data transform pipeline” which are merely recitations of generically using a computer as a tool to apply the abstract idea (see MPEP § 2106.05(f)) which does not integrate a judicial exception into practical application. Further, claim 10 does not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claim 10 also fails both Step 2A prong 2, thus the claims are directed to the judicial exception as it has not been integrated into practical application, and fails Step 2B as not amounting to significantly more. Therefore, Claim 10 does not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to claim 11, the claim recites additional element recitations of “the method further comprising transmitting, by the data transform accelerator, output data produced by the data transform pipeline to the host computing unit or to the memory of the data transform accelerator” which are merely recitations of data transmission which are insignificant extra solution activity (see MPEP §2106.05(g)) which does not integrate a judicial exception into practical application. Further, the insignificant extra solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory” [MPEP§ 2106.05(d)(II)]. Further, claim 11 does not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claim 11 also fails both Step 2A prong 2, thus the claims are directed to the judicial exception as it has not been integrated into practical application, and fails Step 2B as not amounting to significantly more. Therefore, Claim 11 does not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to claim 13, the claim recites additional element recitations of “wherein the host is in the data communication with the data transform accelerator based on peripheral component interconnect express (PCIe) standard” which are merely recitations of technological environment/field of use (see MPEP § 2106.05(h)) which does not integrate a judicial exception into practical application. Further, claim 13 does not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claim 13 also fails both Step 2A prong 2, thus the claims are directed to the judicial exception as it has not been integrated into practical application, and fails Step 2B as not amounting to significantly more. Therefore, Claim 13 does not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to claim 14, the claim recites additional element recitations of “wherein generating the first data transform command comprises: generating a first source descriptor pointing to the input data;”, “generating a second source descriptor pointing to the metadata;”, “generating a third source descriptor pointing to the pre-data;”, “generating a fourth source descriptor pointing to the additional metadata;”, and “and generating a first destination descriptor pointing to the output buffer”, which are merely recitations of data storage which are insignificant extra solution activity (see MPEP §2106.05(g)) which does not integrate a judicial exception into practical application. Further, the insignificant extra solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory” [MPEP§ 2106.05(d)(II)]. Further, claim 14 does not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claim 14 also fails both Step 2A prong 2, thus the claims are directed to the judicial exception as it has not been integrated into practical application, and fails Step 2B as not amounting to significantly more. Therefore, Claim 14 does not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to claim 19, the claim recites additional element recitations of “wherein the data transform accelerator obtains the metadata, pre-data, and additional meta-data from an on-chip memory of the data transform accelerator or from the memory of the host computing unit or from the memory of host computing unit and the memory of data transform accelerator”, which are merely recitations of data retrieval which are insignificant extra solution activity (see MPEP §2106.05(g)) which does not integrate a judicial exception into practical application. Further, the insignificant extra solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory” [MPEP§ 2106.05(d)(II)]. Further, claim 19 does not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claim 19 also fails both Step 2A prong 2, thus the claims are directed to the judicial exception as it has not been integrated into practical application, and fails Step 2B as not amounting to significantly more. Therefore, Claim 19 does not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to claims 20 and 21, the claims recite the additional elements of “wherein a second data transform command associates with the metadata, the second data transform command different from the first data transform command” and “wherein a plurality of data transform commands in a command submission session from a virtual machine associate with the metadata, different from the first data transform command”. These are merely recitations of a technological environment/field of use (see MPEP § 2106.05(h)), which do not integrate the judicial exception into a practical application. Claims 20 and 21 recite no further additional elements, and for the same reasons as above regarding integration into a practical application and whether the additional elements amount to significantly more, claims 20 and 21 fail both Step 2A, Prong Two (the claims are directed to the judicial exception because it has not been integrated into a practical application) and Step 2B (the additional elements do not amount to significantly more). Therefore, claims 20 and 21 do not recite patent-eligible subject matter under 35 U.S.C. § 101.
With regard to claim 22, the claim recites the additional element of “wherein a plurality of command submission session from a virtual machine each session having its own metadata shared by data transform commands grouped in the session”. This is merely a recitation of a technological environment/field of use (see MPEP § 2106.05(h)), which does not integrate the judicial exception into a practical application. Claim 22 recites no further additional elements, and for the same reasons as above regarding integration into a practical application and whether the additional elements amount to significantly more, claim 22 fails both Step 2A, Prong Two (the claim is directed to the judicial exception because it has not been integrated into a practical application) and Step 2B (the additional elements do not amount to significantly more). Therefore, claim 22 does not recite patent-eligible subject matter under 35 U.S.C. § 101.
With regard to claim 23, the claim recites the additional element of “wherein a plurality of virtual machine each creating its command submission sessions”. This is merely a recitation of a technological environment/field of use (see MPEP § 2106.05(h)), which does not integrate the judicial exception into a practical application. Claim 23 recites no further additional elements, and for the same reasons as above regarding integration into a practical application and whether the additional elements amount to significantly more, claim 23 fails both Step 2A, Prong Two (the claim is directed to the judicial exception because it has not been integrated into a practical application) and Step 2B (the additional elements do not amount to significantly more). Therefore, claim 23 does not recite patent-eligible subject matter under 35 U.S.C. § 101.
With regard to claim 24, the claim recites the additional elements of “wherein generating the first data transform command comprises: generating first set of multiple source descriptors pointing to one or more input data buffers;”, “generating second set of multiple source descriptor pointing to one or more buffers containing metadata;”, “generating third set of multiple source descriptor pointing to one or more buffers containing pre-data;”, “generating fourth set of multiple source descriptor pointing to one or more buffers containing additional metadata;”, and “generating first set of multiple destination descriptors pointing to the output buffers”. These are merely recitations of data storage, which constitute insignificant extra-solution activity (see MPEP § 2106.05(g)) and do not integrate the judicial exception into a practical application. Further, this insignificant extra-solution activity is well-understood, routine, and conventional in the art: “The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory” [MPEP § 2106.05(d)(II)]. Claim 24 recites no further additional elements, and for the same reasons as above regarding integration into a practical application and whether the additional elements amount to significantly more, claim 24 fails both Step 2A, Prong Two (the claim is directed to the judicial exception because it has not been integrated into a practical application) and Step 2B (the additional elements do not amount to significantly more). Therefore, claim 24 does not recite patent-eligible subject matter under 35 U.S.C. § 101.
Therefore, claims 1-24 do not recite patent-eligible subject matter under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4 and 7-11 are rejected under 35 U.S.C. 103 as being unpatentable over King et al., US 2022/0164237 A1 (hereafter King), in view of Ong et al., US 2023/0096468 A1 (hereafter Ong).
With regard to claim 1, King teaches:
A method comprising: determining a communication interface between a host computing unit and a data transform accelerator “For example, an accelerator among accelerators 742 can provide data compression (DC) capability, cryptography services such as public key encryption (PKE), cipher, hash/authentication capabilities, decryption, or other capabilities or services. In some embodiments, in addition or alternatively, an accelerator among accelerators 742 provides field select controller capabilities as described herein. In some cases, accelerators 742 can be integrated into a CPU socket (e.g., a connector to a motherboard or circuit board that includes a CPU and provides an electrical interface with the CPU)” [King ¶ 59]. “In some examples, processors 806 of IPU 800 can execute one or more processes, applications, VMs, containers, microservices, and so forth that request performance of workloads by one or more of: processors 810, accelerators 820, memory pool 830, and/or servers 840-0 to 840-N. IPU 800 can utilize network interface 802 or one or more device interfaces to communicate with processors 810, accelerators 820, memory pool 830, and/or servers 840-0 to 840-N” [King ¶ 77]. “Bus interface 612 can provide an interface with host device (not depicted). For example, bus interface 612 can be compatible with PCI, PCI Express, PCI-x, Serial ATA, and/or USB compatible interface (although other interconnection standards may be used)” [King ¶ 52].
hosting one or more virtual machines on a host operating system of the host computing unit, “Memory 730 stores and hosts, among other things, operating system (OS) 732 to provide a software platform for execution of instructions in system 700 … The hypervisor can emulate multiple virtual hardware platforms that are isolated from another, allowing virtual machines to run Linux®, Windows® Server, VMware ESXi, and other operating systems on the same underlying physical host” [King ¶ 60,62]. “In some examples, processors 806 of IPU 800 can execute one or more processes, applications, VMs, containers, microservices, and so forth that request performance of workloads by one or more of: processors 810, accelerators 820, memory pool 830, and/or servers 840-0 to 840-N” [King ¶ 77].
partitioning a collection of container data structures into multiple sets, “The parser 222 or 232, in some examples, receives a packet as a formatted collection of bits in a particular order, and parses the packet into its constituent header fields. In some examples, the parser starts from the beginning of the packet and assigns header fields to fields (e.g., data containers) for processing” [King ¶ 20].
submitting one or more data transform commands for processing by the data transform accelerator device; “Example 1 includes one or more examples, and includes a non-transitory computer-readable medium comprising instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: control addition of a program for execution by a programmable packet processing pipeline while maintaining operation of programs currently executing on the programmable packet processing pipeline” [King ¶ 89]. “For example, an accelerator among accelerators 742 can provide data compression (DC) capability, cryptography services such as public key encryption (PKE), cipher, hash/authentication capabilities, decryption, or other capabilities or services” [King ¶ 59]. “MAUs 224 or 234 can perform processing on the packet data. In some examples, MAUs includes a sequence of stages, with a stage including one or more match tables and an action engine. A match table can include a set of match entries against which the packet header fields are matched (e.g., using hash tables), with the match entries referencing action entries. When the packet matches a particular match entry, that particular match entry references a particular action entry which specifies a set of actions to perform on the packet (e.g., sending the packet to a particular port, modifying one or more packet header field values, dropping the packet, mirroring the packet to a mirror buffer, etc.). The action engine of the stage can perform the actions on the packet, which is then sent to the next stage of the MAU” [King ¶ 21].
determining, by the data transform accelerator, an address associated with a data transform command in a container data structure of the collection of container data structures that is in the data transform accelerator; “In some examples, this shared output buffer 254 can store packet data, while references (e.g., pointers) to that packet data are kept in different queues for egress pipeline 230. The egress pipelines can request their respective data from the common data buffer using a queuing policy that is control-plane configurable. When a packet data reference reaches the head of its queue and is scheduled for dequeuing, the corresponding packet data can be read out of the output buffer 254 and into the corresponding egress pipeline 230” [King ¶ 24]. “MAUs 224 or 234 can perform processing on the packet data. In some examples, MAUs includes a sequence of stages, with a stage including one or more match tables and an action engine. A match table can include a set of match entries against which the packet header fields are matched (e.g., using hash tables), with the match entries referencing action entries. When the packet matches a particular match entry, that particular match entry references a particular action entry which specifies a set of actions to perform on the packet (e.g., sending the packet to a particular port, modifying one or more packet header field values, dropping the packet, mirroring the packet to a mirror buffer, etc.)” [King ¶ 21].
in response to a determination that the address is in the container data structure, accessing, by the data transform accelerator, the data transform command based on the address, the data transform command in the host computing unit; “When the packet matches a particular match entry, that particular match entry references a particular action entry which specifies a set of actions to perform on the packet (e.g., sending the packet to a particular port, modifying one or more packet header field values, dropping the packet, mirroring the packet to a mirror buffer, etc.). The action engine of the stage can perform the actions on the packet, which is then sent to the next stage of the MAU” [King ¶ 21, fig. 2].
obtaining, by the data transform accelerator, metadata based on information in the data transform command; “Direct memory access (DMA) engine 652 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa, instead of copying the packet to an intermediate buffer at the host and then using another copy operation from the intermediate buffer to the destination buffer” [King ¶ 51]. “Transmit queue 606 can include data or references to data for transmission by network interface. Receive queue 608 can include data or references to data that was received by network interface from a network. Descriptor queues 620 can include descriptors (metadata) that reference data or packets in transmit queue 606 or receive queue 608” [King ¶ 52].
and configuring, by the data transform accelerator, a data transform pipeline based on the metadata. “Programming a packet processing pipeline is transitioning from configuration by device drivers to configuration by compiled software flows” [King ¶ 1]. “Modular programming of packet processing pipelines can be utilized to configure multiple independent data planes in one or more packet processing pipelines of a network interface device” [King ¶ 14].
King fails to explicitly teach with each virtual machine of the one or more virtual machines using one or more virtual function in the data transform accelerator, wherein the virtual function represents a virtualized instance of a compute resource of the data transform accelerator; partitioning a collection of container data structures into multiple sets, wherein one set for each virtual function is assigned by a software driver running in the host operating system of host computing unit; partitioning a memory of the data transform accelerator into multiple partitions, wherein one partition is attached to each virtual function in data transform accelerator; and configuring, by the data transform accelerator, a data transform pipeline based on the metadata.
However, Ong teaches:
with each virtual machine of the one or more virtual machines using one or more virtual function in the data transform accelerator, “The queues 472 may be referred to as transmit (Tx) queues 472 when transmitting data over a Tx data path and referred to as Rx queues 472 when receiving data over an Rx data path … In a virtualized server, the Rx queues 472 can be assigned either to a VMM or to VMs using SR-IOV. The Rx queues 472 can be assigned to physical functions (PFs) and/or virtual functions (VFs) as needed (which may be represented by the apps/middleware 491 in FIG. 4). The Rx queues 472 assigned to a particular interface function (e.g., PF or VF) can be used for distributing packet processing work to the different processors in a multi-processor system” [Ong ¶ 32].
wherein the virtual function represents a virtualized instance of a compute resource of the data transform accelerator; “Additionally or alternatively, the processor circuitry 1202, acceleration circuitry 1250, memory circuitry 1210, and/or storage circuitry 1220 may be divided into, or otherwise separated into virtualized environments using a suitable virtualization technology, such as, for example, virtual machines (VMs), virtualization containers, and/or the like” [Ong ¶ 132]. “The term "virtual machine" or "VM" at least in some examples refers to a virtualized computation environment that behaves in a same or similar manner as a physical computer and/or a server. The term "hypervisor" at least in some examples refers to a software element that partitions the underlying physical resources of a compute node, creates VMs, manages resources for VMs, and isolates individual VMs from each other” [Ong ¶ 248].
partitioning a collection of container data structures into multiple sets, wherein one set for each virtual function is assigned by a software driver running in the host operating system of host computing unit; “Before being passed to the host platform 490, the packets/frames (container data structure) are stored in an appropriate buffer 411. In various implementations, the Rx frame steering function 422 detects the incoming Rx frames and steers the Rx frames to the appropriate queue 411” [Ong ¶ 47]. “The Rx packets/frames are posted to system (host) memory queues 472 indicated to the HW by descriptors 473 and through the set of queues 472. The descriptors 473 include pointers/addresses 474a that point to or otherwise indicate respective slots (or memory locations of such slots) in the data queues 472” [Ong ¶ 52]. “The queues 472 may be referred to as transmit (Tx) queues 472 when transmitting data over a Tx data path and referred to as Rx queues 472 when receiving data over an Rx data path … In a virtualized server, the Rx queues 472 can be assigned either to a VMM or to VMs using SR-IOV. The Rx queues 472 can be assigned to physical functions (PFs) and/or virtual functions (VFs) as needed (which may be represented by the apps/middleware 491 in FIG. 4). The Rx queues 472 assigned to a particular interface function (e.g., PF or VF) can be used for distributing packet processing work to the different processors in a multi-processor system. In some implementations, on the Rx side, packets are classified by the NIC 468 under OS or VMM control into groups of conversations” [Ong ¶ 32]. “The NIC 468 interacts with applications (apps) and/or middleware 491 operating in/on the host platform 490 via NIC driver/ API 480” [Ong ¶ 29, fig. 4].
partitioning a memory of the data transform accelerator into multiple partitions, wherein one partition is attached to each virtual function in data transform accelerator; “Additionally or alternatively, the processor circuitry 1202, acceleration circuitry 1250, memory circuitry 1210, and/or storage circuitry 1220 may be divided into, or otherwise separated into virtualized environments using a suitable virtualization technology, such as, for example, virtual machines (VMs), virtualization containers, and/or the like” [Ong ¶ 132]. “Before, after, or concurrently with receipt of frames or packets, the host platform 490 allocates chunks, partitions, or other sections of the system memory 470 to be used as respective Rx queues 472, and also allocates chunks, partitions, or other sections of the system memory 470 to be used as respective descriptor rings 471” [Ong ¶ 48].
and configuring, by the data transform accelerator, a data transform pipeline based on the metadata. “The term "data pipeline" or "pipeline" at least in some examples refers to a set of data processing elements (or data processors) connected in series and/or in parallel, where the output of one data processing element is the input of one or more other data processing elements in the pipeline; the elements of a pipeline may be executed in parallel or in time-sliced fashion and/or some amount of buffer storage can be inserted between elements” [Ong ¶ 296]. “After Rx descriptor 473 is prefetched, the address is known to the PE 404 for the transfer (e.g., DMA) operation of a received frame/packet from its corresponding Rx buffer 411” [Ong ¶ 38]. “When a new Rx frame/packet in Rx buffers 411 are ready to be transferred (e.g., DMA operation) to the system memory 470, the destination location of the Rx queues 472 for the transfer (e.g., DMA) operation is obtained from the address field 474a of an Rx descriptor 473” [Ong ¶ 51].
Ong is considered to be analogous art to the claimed invention because it is in the same field of interprogram communication using buffers. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified King to incorporate the teachings of Ong and include: with each virtual machine of the one or more virtual machines using one or more virtual function in the data transform accelerator, wherein the virtual function represents a virtualized instance of a compute resource of the data transform accelerator; partitioning a collection of container data structures into multiple sets, wherein one set for each virtual function is assigned by a software driver running in the host operating system of host computing unit; partitioning a memory of the data transform accelerator into multiple partitions, wherein one partition is attached to each virtual function in data transform accelerator; and configuring, by the data transform accelerator, a data transform pipeline based on the metadata. Doing so would allow for improved system performance. “Different examples of IPUs 1300 discussed herein are capable of supporting one or more processors (such as any of those discussed herein) connected to the IPUs 1300, and enable improved performance, management, security and coordination functions between entities (e.g., cloud service providers), and enable infrastructure offload and/or communications coordination functions. As discussed infra, IPUs 1300 may be integrated with smart NICs and/or storage or memory (e.g., on a same die, system on chip (SoC), or connected dies) that are located at on-premises systems, NANs (e.g., base stations, access points, gateways, network appliances, and/or the like), neighborhood central offices, and so forth” [Ong ¶ 145].
Regarding claim 2, King in view of Ong teaches the method of claim 1, as referenced above. King further teaches the method further includes reserving one or more buffers to hold at least one of: metadata, pre-data, or additional metadata for data transform commands “Transmit queue 606 can include data or references to data for transmission by network interface. Receive queue 608 can include data or references to data that was received by network interface from a network. Descriptor queues 620 can include descriptors (metadata) that reference data or packets in transmit queue 606 or receive queue 608” [King ¶ 52].
King fails to explicitly teach wherein in each memory partition of the data transform accelerator, the method further includes reserving one or more buffers to hold at least one of: metadata, pre-data, or additional metadata for data transform commands grouped together in a command submission session.
However, Ong teaches wherein in each memory partition of the data transform accelerator, the method further includes reserving one or more buffers to hold at least one of: metadata, pre-data, or additional metadata for data transform commands grouped together in a command submission session. “Before, after, or concurrently with receipt of frames or packets, the host platform 490 allocates chunks, partitions, or other sections of the system memory 470 to be used as respective Rx queues 472, and also allocates chunks, partitions, or other sections of the system memory 470 to be used as respective descriptor rings 471” [Ong ¶ 48]. “In some implementations, on the Rx side, packets are classified by the NIC 468 under OS or VMM control into groups of conversations (command submission session). Each group of conversations is assigned its own Rx queue 472 and Rx processor or processor core” [Ong ¶ 32].
Regarding claim 3, King in view of Ong teaches the method of claim 1, as referenced above. King further teaches wherein obtaining the metadata based on the information in the data transform command comprises obtaining command metadata from a first input buffer in the data transform accelerator or in the memory of the host computing unit. “Direct memory access (DMA) engine 652 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa, instead of copying the packet to an intermediate buffer at the host and then using another copy operation from the intermediate buffer to the destination buffer. Memory 610 can be any type of volatile or nonvolatile memory device and can store any queue or instruction used to program network interface 600. Transmit queue 606 can include data or references to data for transmission by network interface. Receive queue 608 can include data or references to data that was received by network interface from a network. Descriptor queues 620 (first input buffer) can include descriptors (metadata) that reference data or packets in transmit queue 606 or receive queue 608” [King ¶ 51-52].
Regarding claim 4, King in view of Ong teaches the method of claim 3, as referenced above. King further teaches wherein the command metadata specifies data transform operations to be performed by the data transform pipeline. “MAUs 224 or 234 can perform processing on the packet data. In some examples, MAUs includes a sequence of stages, with a stage including one or more match tables and an action engine. A match table can include a set of match entries against which the packet header fields are matched (e.g., using hash tables), with the match entries referencing action entries. When the packet matches a particular match entry, that particular match entry references a particular action entry which specifies a set of actions to perform on the packet (e.g., sending the packet to a particular port, modifying one or more packet header field values, dropping the packet, mirroring the packet to a mirror buffer, etc.)” [King ¶ 21].
Regarding claim 7, King in view of Ong teaches the method of claim 1, as referenced above. King fails to teach wherein obtaining the metadata based on the information in the data transform command comprises obtaining additional command metadata from a third input buffer in the data transform accelerator or in the memory of host computing unit.
However, Ong teaches wherein obtaining the metadata based on the information in the data transform command comprises obtaining additional command metadata from a third input buffer in the data transform accelerator or in the memory of host computing unit “In some cases, additional control parameters that cannot fit within the data descriptors 473 are needed to process the packet(s). In these cases, additional context descriptors are used in front of the data descriptors 473. Examples of such context descriptors include transmit segmentation (TSO), FD filter programming (see e.g., [FlowDirector]), and FCoE context programming. Additionally or alternatively, the additional control parameters can be indicated using additional fields within individual descriptors 473. Examples of such fields/parameters include a status information field 474c, a misc field 474d, an OWN bit (0) 475a, and a NEXT bit (N) 475b” [Ong ¶ 39]. “In this example, respective Rx descriptor ring buffers (Rx Desc) 471-M and 471-N (third input buffer) (collectively referred to as "ring buffers 471", "descriptor rings 471", "Rx rings 471", "Rx Desc 471", and/or the like) include descriptors 473 that point to respective memory locations (or slots) in the Rx queues 472 for posting packets that arrive from the network (e.g., over the physical layer transceiver circuitry (PHY) 401)” [Ong ¶ 32, fig. 4].
Regarding claim 8, King in view of Ong teaches the method of claim 7, as referenced above. King fails to teach wherein the additional command metadata includes at least one of: a source token, or an action token.
However, Ong teaches wherein the additional command metadata includes at least one of: a source token, or an action token. “As mentioned previously, the descriptors 473 include pointer 474a and length (source token) 474b pairs that point to locations in the data queues 472, and also include various control fields for data processing” [Ong ¶ 38]. “The host platform 490 then fills or otherwise configures each Rx Desc 471 with the information about its corresponding Rx queue 472 (e.g., address/location 474a of its corresponding frame buffer 472, length 474b of its corresponding frame buffer 472, and/or the like). The length or size of each frame buffer 472 of an Rx Desc 471 may be fixed in size (e.g. through a register on the NIC 468 that is configured by the host platform 490), or has a variable size by setting the length field 474b of the corresponding Rx descriptor 474” [Ong ¶ 48]. “The frame of each packet includes a destination address field (e.g., 6 octets) specifies the station(s) for which the MAC frame is intended, a source address field (e.g., 6 octets) including an address of the station sending the frame, a length/type field (e.g., 2 octets), a (MAC) client data (e.g., 46 to 1500 octets), and a frame check sequence (FCS) field (e.g., 4 octets), and an IPG field (e.g., 12 octets)” [Ong ¶ 61]. The Examiner notes that this interpretation of “source token” is in line with the description given in paragraph 52 of the instant specification.
Regarding claim 9, King in view of Ong teaches the method of claim 1, as referenced above. King further teaches the method further comprising obtaining, by the data transform accelerator, input data based on the information in the data transform command from the host computing unit memory or from the memory of the data transform accelerator. “In some examples, switch fabric 660 can provide routing of packets (input data) from one or more ingress ports for processing prior to egress from switch 654.” [King ¶ 54]. “When the packet matches a particular match entry, that particular match entry references a particular action entry which specifies a set of actions to perform on the packet (e.g., sending the packet to a particular port, modifying one or more packet header field values, dropping the packet, mirroring the packet to a mirror buffer, etc.). The action engine of the stage can perform the actions on the packet, which is then sent to the next stage of the MAU. The deparser 226 or 236 can reconstruct the packet using a packet header vector (PHY) as modified by the MAU 224 or 234 and the payload received directly from the parser 222 or 232” [King ¶ 21-22].
Regarding claim 10, King in view of Ong teaches the method of claim 9, as referenced above. King further teaches the method further comprising performing, by the data transform accelerator, one or more data transform operations on the input data using the data transform pipeline. “When the packet matches a particular match entry, that particular match entry references a particular action entry which specifies a set of actions to perform on the packet (e.g., sending the packet to a particular port, modifying one or more packet header field values, dropping the packet, mirroring the packet to a mirror buffer, etc.). The action engine of the stage can perform the actions on the packet, which is then sent to the next stage of the MAU” [King ¶ 21].
Regarding claim 11, King in view of Ong teaches the method of claim 10, as referenced above. King further teaches the method further comprising transmitting, by the data transform accelerator, output data produced by the data transform pipeline to the host computing unit or to the memory of the data transform accelerator. “After passing through the selected ingress pipeline 220, the packet is sent to the traffic manager 250, where the packet is enqueued and placed in the output buffer 254” [King ¶ 19]. “When a packet data reference reaches the head of its queue and is scheduled for dequeuing, the corresponding packet data (output data) can be read out of the output buffer 254 and into the corresponding egress pipeline 230” [King ¶ 24]. “Direct memory access (DMA) engine 652 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa, instead of copying the packet to an intermediate buffer at the host and then using another copy operation from the intermediate buffer to the destination buffer” [King ¶ 51].
Claims 5-6 and 12-24 are rejected under 35 U.S.C. 103 as being unpatentable over King et al., US 2022/0164237 A1 (hereafter King), in view of Ong et al., US 2023/0096468 A1 (hereafter Ong), and further in view of Pope et al., US 2023/0224261 A1 (hereafter Pope).
Regarding claim 5, King in view of Ong teaches the method of claim 1, as referenced above. King further teaches wherein obtaining the metadata based on the information in the data transform command comprises obtaining command (data) pre-data from a second input buffer in the data transform accelerator or in the memory of the host computing unit. “Direct memory access (DMA) engine 652 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa, instead of copying the packet to an intermediate buffer at the host and then using another copy operation from the intermediate buffer to the destination buffer” [King ¶ 51]. “Transmit queue 606 can include data or references to data for transmission by network interface. Receive queue 608 can include data or references to data that was received by network interface from a network. Descriptor queues 620 (second input buffers) can include descriptors (data) that reference data or packets in transmit queue 606 or receive queue 608” [King ¶ 52].
King in view of Ong fails to explicitly teach command pre-data.
However, Pope teaches command pre-data. “All fragments are processed in-order. The AAD (additional authenticated data) for the packet is provided, in entirety, in the first fragment” [Pope ¶ 540].
Pope is considered to be analogous to the claimed invention because it is in the same field of interprogram communication using buffers. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified King in view of Ong to incorporate the teachings of Pope and include command pre-data. Doing so would support data authentication operations. “The transmit path instance of the cryption offload engine may support encryption and authentication. The receive path instance of the cryption offload engine may support decryption and authentication” [Pope ¶ 528].
Regarding claim 6, King in view of Ong in view of Pope teaches the method of claim 5, as referenced above. King in view of Ong fails to teach wherein the command pre-data includes at least one of: initialization vector (IV), message authentication code (MAC), Galois/counter mode (GCM) authentication tag, or additional authentication data (AAD).
However, Pope teaches wherein the command pre-data includes at least one of: initialization vector (IV), message authentication code (MAC), Galois/counter mode (GCM) authentication tag, or additional authentication data (AAD). “All fragments are processed in-order. The AAD (additional authenticated data) for the packet is provided, in entirety, in the first fragment” [Pope ¶ 540].
Regarding claim 12, King teaches:
A host comprising: data processing hardware; “FIG. 4 depicts an example system. Host 400 can be implemented as a computing platform at least with one or more processors, one or more memory devices, interconnect circuitry, and one or more device interfaces” [King ¶ 29].
one or more than one virtual machine on a host operating system of the host computing unit, “The hypervisor can emulate multiple virtual hardware platforms that are isolated from another, allowing virtual machines to run Linux®, Windows® Server, VMware ESXi, and other operating systems on the same underlying physical host” [King ¶ 62]. “In some examples, processors 806 of IPU 800 can execute one or more processes, applications, VMs, containers, microservices, and so forth that request performance of workloads by one or more of: processors 810, accelerators 820, memory pool 830, and/or servers 840-0 to 840-N” [King ¶ 77].
one or more software drivers in the host operating system in the host computing unit; “In some examples, OS 732 can be Linux®, Windows® Server or personal computer, FreeBSD®, Android®, MacOS®, iOS®, VMware vSphere, openSUSE, RHEL, CentOS, Debian, Ubuntu, or any other operating system. The OS and driver can execute on a processor sold or designed by Intel®, ARM®, AMD®, Qualcomm®, IBM®, Nvidia®, Broadcom®, Texas Instruments®, among others” [King ¶ 64].
and memory hardware in communication with the data processing hardware, the memory hardware storing instructions that when executed on the data processing hardware cause the data processing hardware to perform operations comprising: “Storage 784 can be generically considered to be a "memory," although memory 730 is typically the executing or operating memory to provide instructions to processor 710” [King ¶ 70]. “According to some examples, a computer-readable medium may include a non-transitory storage medium to store or maintain instructions that when executed by a machine, computing device or system, cause the machine, computing device or system to perform methods and/or operations in accordance with the described examples” [King ¶ 80].
generate container data structures in a memory “The parser 222 or 232, in some examples, receives a packet as a formatted collection of bits in a particular order, and parses the packet into its constituent header fields. In some examples, the parser starts from the beginning of the packet and assigns header fields to fields (e.g., data containers) for processing” [King ¶ 20, fig. 4 and 8]. “In some examples, the deparser can construct this packet based on data received along with the PHY that specifies the protocols to include in the packet header, as well as its own stored list of data container locations for possible protocol's header fields” [King ¶ 22].
of the data transform accelerator, “In one example, system 700 includes interface 712 (data transform accelerator) coupled to processor 710, which can represent a higher speed interface or a high throughput interface for system components that needs higher bandwidth connections, such as memory subsystem 720 or graphics interface components 740, or accelerators 742 … Accelerators 742 can be a programmable or fixed function offload engine that can be accessed or used by a processor 710” [King ¶ 58]. “Direct memory access (DMA) engine 652 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa, instead of copying the packet to an intermediate buffer at the host and then using another copy operation from the intermediate buffer to the destination buffer” [King ¶ 51].
generate input data in the memory hardware of host computing unit or in the memory of data transform accelerator by software driver in guest operating system of the virtual machine in the host computing unit; “Direct memory access (DMA) engine 652 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa, instead of copying the packet to an intermediate buffer at the host and then using another copy operation from the intermediate buffer to the destination buffer” [King ¶ 51].
generate metadata in the memory of data transform accelerator or in the memory of host computing unit, or in the memory of the data transform accelerator and in the memory of the host computing unit by software driver in a guest operating system of the virtual machine in the host computing unit; “Direct memory access (DMA) engine 652 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa, instead of copying the packet to an intermediate buffer at the host and then using another copy operation from the intermediate buffer to the destination buffer” [King ¶ 51]. “Transmit queue 606 can include data or references to data for transmission by network interface. Receive queue 608 can include data or references to data that was received by network interface from a network. Descriptor queues 620 can include descriptors (metadata) that reference data or packets in transmit queue 606 or receive queue 608” [King ¶ 52].
generate (data) pre-data in the memory of data transform accelerator or in the memory of host computing unit, or in both the memory of the data transform accelerator and the memory of the host computing unit by software driver in guest operating system of the virtual machine in the host computing unit; “Direct memory access (DMA) engine 652 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa, instead of copying the packet to an intermediate buffer at the host and then using another copy operation from the intermediate buffer to the destination buffer” [King ¶ 51]. “Transmit queue 606 can include data or references to data for transmission by network interface. Receive queue 608 can include data or references to data that was received by network interface from a network. Descriptor queues 620 can include descriptors (data) that reference data or packets in transmit queue 606 or receive queue 608” [King ¶ 52].
generate additional metadata in the memory of data transform accelerator or in the memory of host computing unit, or in both in the memory of the data transform accelerator and in the memory of the host computing unit by software driver in guest operating system of the virtual machine in the host computing unit; “Direct memory access (DMA) engine 652 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa, instead of copying the packet to an intermediate buffer at the host and then using another copy operation from the intermediate buffer to the destination buffer” [King ¶ 51]. “Transmit queue 606 can include data or references to data for transmission by network interface. Receive queue 608 can include data or references to data that was received by network interface from a network. Descriptor queues 620 can include descriptors (metadata) that reference data or packets in transmit queue 606 or receive queue 608” [King ¶ 52].
reserve an output buffer in the memory hardware or in the memory of data transform accelerator or in the memory of host computing unit; “After passing through the selected ingress pipeline 220, the packet is sent to the traffic manager 250, where the packet is enqueued and placed in the output buffer 254. In some examples, the ingress pipeline 220 that processes the packet specifies into which queue the packet is to be placed by the traffic manager 250 (e.g., based on the destination of the packet or a flow identifier of the packet)” [King ¶ 19].
generate a first data transform command in the memory hardware, the first data transform command associated with the input data and the metadata; “When the packet matches a particular match entry, that particular match entry references a particular action entry which specifies a set of actions to perform on the packet (e.g., sending the packet to a particular port, modifying one or more packet header field values, dropping the packet, mirroring the packet to a mirror buffer, etc.). The action engine of the stage can perform the actions on the packet, which is then sent to the next stage of the MAU” [King ¶ 21, fig. 2].
and update the container data structure with an address of the first data transform command, wherein the address of the first data transform command is accessible by the data transform accelerator, “MAC circuitry 616 can be configured to assemble data to be transmitted into packets, that include destination and source addresses along with network control information and error detection hash values” [King ¶ 47]. “The deparser 226 or 236 can reconstruct the packet using a packet header vector (PHV) as modified by the MAU 224 or 234 and the payload received directly from the parser 222 or 232. The deparser can construct a packet that can be sent out over the physical network, or to the traffic manager 250. In some examples, the deparser can construct this packet based on data received along with the PHV that specifies the protocols to include in the packet header, as well as its own stored list of data container locations for possible protocol's header fields” [King ¶ 22].
wherein accessing the address of the first data transform command by the data transform accelerator causes the data transform accelerator to obtain the input data, to perform one or more data transform operations on the input data based on the metadata, “The egress pipelines can request their respective data from the common data buffer using a queuing policy that is control-plane configurable. When a packet data reference reaches the head of its queue and is scheduled for dequeuing, the corresponding packet data can be read out of the output buffer 254 and into the corresponding egress pipeline 230” [King ¶ 24]. “When the packet matches a particular match entry, that particular match entry references a particular action entry which specifies a set of actions to perform on the packet (e.g., sending the packet to a particular port, modifying one or more packet header field values, dropping the packet, mirroring the packet to a mirror buffer, etc.). The action engine of the stage can perform the actions on the packet, which is then sent to the next stage of the MAU” [King ¶ 21, fig. 2]. “Memory 610 can be any type of volatile or nonvolatile memory device and can store any queue or instructions used to program network interface 600. Transmit queue 606 can include data or references to data for transmission by network interface. Receive queue 608 can include data or references to data that was received by network interface from a network. Descriptor queues 620 can include descriptors that reference data or packets in transmit queue 606 or receive queue 608” [King ¶ 52].
and to transmit output data to the output buffer, and wherein the output data is the input data after being transformed by the one or more data transform operations. “The traffic manager 250 can provide a shared buffer that accommodates any queuing delays in the egress pipelines. In some examples, this shared output buffer 254 can store packet data, while references (e.g., pointers) to that packet data are kept in different queues for egress pipeline 230” [King ¶ 24]. “The deparser 226 or 236 can reconstruct the packet using a packet header vector (PHV) as modified by the MAU 224 or 234 and the payload received directly from the parser 222 or 232. The deparser can construct a packet that can be sent out over the physical network, or to the traffic manager 250. In some examples, the deparser can construct this packet based on data received along with the PHV that specifies the protocols to include in the packet header, as well as its own stored list of data container locations for possible protocol's header fields” [King ¶ 22].
King fails to explicitly teach with each virtual machine using one or more than one virtual function in the data transform accelerator; one or more software drivers in a guest operating system in the virtual machines in the host computing unit; the data transform accelerator in data communication with the host or in the memory of the host computing unit, using the software driver in the host operating system of the host computing unit; generate additional metadata in the memory … and additional pre-data.
However, Ong teaches:
with each virtual machine using one or more than one virtual function in the data transform accelerator; “The queues 472 may be referred to as transmit (Tx) queues 472 when transmitting data over a Tx data path and referred to as Rx queues 472 when receiving data over an Rx data path … In a virtualized server, the Rx queues 472 can be assigned either to a VMM or to VMs using SR-IOV. The Rx queues 472 can be assigned to physical functions (PFs) and/or virtual functions (VFs) as needed (which may be represented by the apps/middleware 491 in FIG. 4). The Rx queues 472 assigned to a particular interface function (e.g., PF or VF) can be used for distributing packet processing work to the different processors in a multi-processor system” [Ong ¶ 32].
one or more software drivers in a guest operating system in the virtual machines in the host computing unit; “Additionally or alternatively, the computer program/code 1201, 1211, 1221 can include one or more operating systems (OS) and/or other software to control various aspects of the compute node 1200. The OS can include drivers to control particular devices that are embedded in the compute node 1200, attached to the compute node 1200, and/or otherwise communicatively coupled with the compute node 1200. Example OSs include consumer-based OS, real-time OS (RTOS), hypervisors, and/or the like” [Ong ¶ 124].
the data transform accelerator in data communication with the host or in the memory of the host computing unit, using the software driver in the host operating system of the host computing unit; “The NIC 468 interacts with applications (apps) and/or middleware 491 operating in/on the host platform 490 via NIC driver/ API 480” [Ong ¶ 29].
generate additional metadata in the memory “In some cases, additional control parameters that cannot fit within the data descriptors 473 are needed to process the packet(s). In these cases, additional context descriptors are used in front of the data descriptors 473. Examples of such context descriptors include transmit segmentation (TSO), FD filter programming (see e.g., [FlowDirector]), and FCoE context programming. Additionally or alternatively, the additional control parameters can be indicated using additional fields within individual descriptors 473. Examples of such fields/parameters include a status information field 474c, a misc field 474d, an OWN bit (0) 475a, and a NEXT bit (N) 475b” [Ong ¶ 39].
and additional pre-data “In some cases, additional control parameters that cannot fit within the data descriptors 473 are needed to process the packet(s). In these cases, additional context descriptors are used in front of the data descriptors 473. Examples of such context descriptors include transmit segmentation (TSO), FD filter programming (see e.g., [FlowDirector]), and FCoE context programming. Additionally or alternatively, the additional control parameters can be indicated using additional fields within individual descriptors 473. Examples of such fields/parameters include a status information field 474c, a misc field 474d, an OWN bit (0) 475a, and a NEXT bit (N) 475b” [Ong ¶ 39].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified King to incorporate the teachings of Ong and include with each virtual machine using one or more than one virtual function in the data transform accelerator; one or more software drivers in a guest operating system in the virtual machines in the host computing unit; the data transform accelerator in data communication with the host or in the memory of the host computing unit, using the software driver in the host operating system of the host computing unit; generate additional metadata in the memory … and additional pre-data. Doing so would allow for improved system performance. “Different examples of IPUs 1300 discussed herein are capable of supporting one or more processors (such as any of those discussed herein) connected to the IPUs 1300, and enable improved performance, management, security and coordination functions between entities (e.g., cloud service providers), and enable infrastructure offload and/or communications coordination functions. As discussed infra, IPUs 1300 may be integrated with smart NICs and/or storage or memory (e.g., on a same die, system on chip (SoC), or connected dies) that are located at on-premises systems, NANs (e.g., base stations, access points, gateways, network appliances, and/or the like), neighborhood central offices, and so forth” [Ong ¶ 145].
King in view of Ong fails to explicitly teach command pre-data and pre-data.
However, Pope teaches:
command pre-data. “All fragments are processed in-order. The AAD (additional authenticated data) for the packet is provided, in entirety, in the first fragment” [Pope ¶ 540].
pre-data. “All fragments are processed in-order. The AAD (additional authenticated data) for the packet is provided, in entirety, in the first fragment” [Pope ¶ 540].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified King in view of Ong to incorporate the teachings of Pope and include command pre-data and pre-data. Doing so would support data authentication operations. “The transmit path instance of the cryption offload engine may support encryption and authentication. The receive path instance of the cryption offload engine may support decryption and authentication” [Pope ¶ 528].
Regarding claim 13, King in view of Ong in view of Pope teaches the host of claim 12, as referenced above. King further teaches wherein the host is in data communication with the data transform accelerator based on the peripheral component interconnect express (PCIe) standard. “In an example, system 700 can be implemented using interconnected compute sleds of processors, memories, storages, network interfaces, and other components. High speed interconnects can be used such as PCIe, Ethernet, or optical interconnects (or a combination thereof)” [King ¶ 75].
Regarding claim 14, King in view of Ong in view of Pope teaches the host of claim 12, as referenced above. King further teaches generating a first destination descriptor pointing to the output buffer. “Descriptor queues 620 can include descriptors that reference data or packets in transmit queue 606 or receive queue 608” [King ¶ 52]. “The traffic manager 250 can provide a shared buffer that accommodates any queuing delays in the egress pipelines. In some examples, this shared output buffer 254 can store packet data, while references (e.g., pointers) to that packet data are kept in different queues for egress pipeline 230” [King ¶ 24]. “When the packet matches a particular match entry, that particular match entry references a particular action entry which specifies a set of actions to perform on the packet (e.g., sending the packet to a particular port, modifying one or more packet header field values, dropping the packet, mirroring the packet to a mirror buffer, etc.)” [King ¶ 21].
generating a third source descriptor. “Descriptor queues 620 can include descriptors that reference data or packets in transmit queue 606 or receive queue 608” [King ¶ 52].
King fails to teach wherein generating the first data transform command comprises: generating a first source descriptor pointing to the input data; generating a second source descriptor pointing to the metadata; generating a third source descriptor pointing to the pre-data; generating a fourth source descriptor pointing to the additional metadata.
However, Ong teaches:
wherein generating the first data transform command comprises: generating a first source descriptor pointing to the input data; “The address (addr) field 474a in an Rx descriptor 473 points to locations where data from received packets is/are to be stored or posted. The NIC 468 (or the MAC 405 or the PE 404) will fetch the descriptors 473 from the memory 470 based on a detected change to a tail pointer, parse the descriptors 473 to obtain the pointers 474a to where the data of Rx packets should be stored, and stores those pointers (or the descriptors 473 themselves) in corresponding slots in a descriptor cache in the NIC 468” [Ong ¶ 38]. “Each descriptor 473 (including descriptors 473ml, 473m2, 473m3, and 473nl) is a SW construct that describes packets and/or memory locations where packets (input data) are stored or should be stored. For example, an address 474a (or memory location 474a) where the packet is stored or should be stored, a length 474b (or size 474b) of the packet or memory location, and/or the like” [Ong ¶ 34].
generating a second source descriptor pointing to the metadata; “Examples of the Rx descriptor ring context information include a base address (or base (index) pointer), length of the Rx descriptor rings 471, head (index) pointer, and tail (index) pointer. The base address provides the base/start location of the Rx descriptor ring 471 on the system memory 470. The length specifies the number of Rx descriptors 473 that belong to the Rx descriptor ring 471. The size of an Rx descriptor 473 is typically fixed at 8 bytes or 16 bytes. The head and tail (index) pointers are read/written by both the NIC 468 and the host platform 490 to coordinate which Rx descriptor 473 is being processed by the NIC 468. For example, the head (index) pointer points to the current Rx descriptor 473 (relative to the base address) being processed by the NIC 468, and the tail (index) pointer points to the Rx descriptor 473 (relative to the base address) that has been setup by the host platform 490 for future Rx frame receiving (also referred to as a "head chasing the tail pointer design")” [Ong ¶ 49].
generating a fourth source descriptor pointing to the additional metadata; “In some cases, additional control parameters that cannot fit within the data descriptors 473 are needed to process the packet(s). In these cases, additional context descriptors are used in front of the data descriptors 473” [Ong ¶ 39].
King in view of Ong fails to teach generating a third source descriptor pointing to the pre-data.
However, Pope teaches pointing to the pre-data. “All fragments are processed in-order. The AAD (additional authenticated data) for the packet is provided, in entirety, in the first fragment” [Pope ¶ 540].
Regarding claim 15, King in view of Ong in view of Pope teaches the host of claim 12, as referenced above. King further teaches wherein the metadata specifies data transform operations to be performed by the data transform accelerator. “MAUs 224 or 234 can perform processing on the packet data. In some examples, MAUs includes a sequence of stages, with a stage including one or more match tables and an action engine. A match table can include a set of match entries against which the packet header fields are matched (e.g., using hash tables), with the match entries referencing action entries. When the packet matches a particular match entry, that particular match entry references a particular action entry which specifies a set of actions to perform on the packet (e.g., sending the packet to a particular port, modifying one or more packet header field values, dropping the packet, mirroring the packet to a mirror buffer, etc.)” [King ¶ 21].
Regarding claim 16, King in view of Ong in view of Pope teaches the host of claim 12, as referenced above. King in view of Ong fails to teach wherein the pre-data includes at least one parameter for the data transform accelerator: initialization vector (IV), message authentication code (MAC), Galois/counter mode (GCM) authentication tag, or additional authentication data (AAD).
However, Pope teaches wherein the pre-data includes at least one parameter for the data transform accelerator: initialization vector (IV), message authentication code (MAC), Galois/counter mode (GCM) authentication tag, or additional authentication data (AAD). “All fragments are processed in-order. The AAD (additional authenticated data) for the packet is provided, in entirety, in the first fragment” [Pope ¶ 540].
Regarding claim 17, King in view of Ong in view of Pope teaches the host of claim 12, as referenced above. King fails to teach wherein the additional metadata includes at least one of a source token, or an action token.
However, Ong teaches wherein the additional metadata includes at least one of a source token, or an action token. “As mentioned previously, the descriptors 473 include pointer 474a and length (source token) 474b pairs that point to locations in the data queues 472, and also include various control fields for data processing” [Ong ¶ 38]. “The host platform 490 then fills or otherwise configures each Rx Desc 471 with the information about its corresponding Rx queue 472 (e.g., address/location 474a of its corresponding frame buffer 472, length 474b of its corresponding frame buffer 472, and/or the like). The length or size of each frame buffer 472 of an Rx Desc 471 may be fixed in size (e.g. through a register on the NIC 468 that is configured by the host platform 490), or has a variable size by setting the length field 474b of the corresponding Rx descriptor 474” [Ong ¶ 48]. “The frame of each packet includes a destination address field (e.g., 6 octets) specifies the station(s) for which the MAC frame is intended, a source address field (e.g., 6 octets) including an address of the station sending the frame, a length/type field (e.g., 2 octets), a (MAC) client data (e.g., 46 to 1500 octets), and a frame check sequence (FCS) field (e.g., 4 octets), and an IPG field (e.g., 12 octets)” [Ong ¶ 61 Examiner notes this interpretation of source token is in line with the description given in paragraph 52 of the instant specification].
Regarding claim 18, King in view of Ong in view of Pope teaches the host of claim 12, as referenced above. King further teaches wherein the data transform accelerator obtains the input data from the host via the data communication. “Direct memory access (DMA) engine 652 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa, instead of copying the packet to an intermediate buffer at the host and then using another copy operation from the intermediate buffer to the destination buffer” [King ¶ 51].
Regarding claim 19, King in view of Ong in view of Pope teaches the host of claim 12, as referenced above. King further teaches wherein the data transform accelerator obtains the metadata, pre-data, and additional metadata from an on-chip memory of the data transform accelerator or from the memory of the host computing unit or from the memory of host computing unit and the memory of data transform accelerator. “Direct memory access (DMA) engine 652 can copy a packet header, packet payload, and/or descriptor directly from host memory to the network interface or vice versa, instead of copying the packet to an intermediate buffer at the host and then using another copy operation from the intermediate buffer to the destination buffer” [King ¶ 51].
King fails to explicitly teach obtaining the metadata, pre-data, and additional metadata from an on-chip memory of the data transform accelerator.
However, Ong teaches wherein the data transform accelerator obtains the metadata, pre-data, and additional metadata from an on-chip memory of the data transform accelerator or from the memory of the host computing unit or from the memory of host computing unit and the memory of data transform accelerator. “In some cases, additional control parameters that cannot fit within the data descriptors 473 are needed to process the packet(s). In these cases, additional context descriptors are used in front of the data descriptors 473. Examples of such context descriptors include transmit segmentation (TSO), FD filter programming (see e.g., [FlowDirector]), and FCoE context programming. Additionally or alternatively, the additional control parameters can be indicated using additional fields within individual descriptors 473. Examples of such fields/parameters include a status information field 474c, a misc field 474d, an OWN bit (0) 475a, and a NEXT bit (N) 475b” [Ong ¶ 39]. “The buffers 411 may be implemented using in-package memory circuitry (also referred to as "on-chip memory circuitry", "on-die memory circuitry", or the like) and/or cache device/circuitry” [Ong ¶ 41].
King in view of Ong fails to explicitly teach pre-data.
However, Pope teaches pre-data. “All fragments are processed in-order. The AAD (additional authenticated data) for the packet is provided, in entirety, in the first fragment” [Pope ¶ 540].
Regarding claim 20, King in view of Ong in view of Pope teaches the host of claim 12, as referenced above. King fails to explicitly teach wherein a second data transform command associates with the metadata, the second data transform command different from the first data transform command.
However, Ong teaches wherein a second data transform command associates with the metadata, the second data transform command different from the first data transform command. “The queues 472 may be referred to as transmit (Tx) queues 472 when transmitting data over a Tx data path and referred to as Rx queues 472 when receiving data over an Rx data path … In a virtualized server, the Rx queues 472 can be assigned either to a VMM or to VMs using SR-IOV. The Rx queues 472 can be assigned to physical functions (PFs) and/or virtual functions (VFs) as needed (which may be represented by the apps/middleware 491 in FIG. 4). The Rx queues 472 assigned to a particular interface function (e.g., PF or VF) can be used for distributing packet processing work to the different processors in a multi-processor system. In some implementations, on the Rx side, packets are classified by the NIC 468 under OS or VMM control into groups of conversations (command submission session). Each group of conversations is assigned its own Rx queue 472 and Rx processor or processor core” [Ong ¶ 32]. “In this example, respective Rx descriptor ring buffers (Rx Desc) 471-M and 471-N (collectively referred to as "ring buffers 471", "descriptor rings 471", "Rx rings 471", "Rx Desc 471", and/or the like) include descriptors 473 that point to respective memory locations (or slots) in the Rx queues 472 for posting packets that arrive from the network (e.g., over the physical layer transceiver circuitry (PHY) 401). In this example, Rx Desc 471-M corresponds to Rx queue 472-M and Rx Desc 471-N corresponds to Rx queue 472-N” [Ong ¶ 32, fig. 4]. “For example, for a request to a database application that requires a response, the example IPU 1300 prioritizes its processing to minimize the stalling of the requesting application. In some examples, the IPU 1300 schedules the prioritized message request issuing the event to execute a SQL query database and the example IPU constructs microservices that issue SQL queries and the queries are sent to the appropriate devices or services” [Ong ¶ 155].
Regarding claim 21, King in view of Ong in view of Pope teaches the host of claim 12, as referenced above. King fails to explicitly teach wherein a plurality of data transform commands in a command submission session from a virtual machine associate with the metadata, different from the first data transform command.
However, Ong teaches:
wherein a plurality of data transform commands in a command submission session from a virtual machine associate with the metadata, different from the first data transform command. “The queues 472 may be referred to as transmit (Tx) queues 472 when transmitting data over a Tx data path and referred to as Rx queues 472 when receiving data over an Rx data path … In a virtualized server, the Rx queues 472 can be assigned either to a VMM or to VMs using SR-IOV. The Rx queues 472 can be assigned to physical functions (PFs) and/or virtual functions (VFs) as needed (which may be represented by the apps/middleware 491 in FIG. 4). The Rx queues 472 assigned to a particular interface function (e.g., PF or VF) can be used for distributing packet processing work to the different processors in a multi-processor system. In some implementations, on the Rx side, packets are classified by the NIC 468 under OS or VMM control into groups of conversations (command submission session). Each group of conversations is assigned its own Rx queue 472 and Rx processor or processor core” [Ong ¶ 32]. “In this example, respective Rx descriptor ring buffers (Rx Desc) 471-M and 471-N (collectively referred to as "ring buffers 471", "descriptor rings 471", "Rx rings 471", "Rx Desc 471", and/or the like) include descriptors 473 that point to respective memory locations (or slots) in the Rx queues 472 for posting packets that arrive from the network (e.g., over the physical layer transceiver circuitry (PHY) 401). In this example, Rx Desc 471-M corresponds to Rx queue 472-M and Rx Desc 471-N corresponds to Rx queue 472-N” [Ong ¶ 32, fig. 4]. “For example, for a request to a database application that requires a response, the example IPU 1300 prioritizes its processing to minimize the stalling of the requesting application. In some examples, the IPU 1300 schedules the prioritized message request issuing the event to execute a SQL query database and the example IPU constructs microservices that issue SQL queries and the queries are sent to the appropriate devices or services” [Ong ¶ 155].
Regarding claim 22, King in view of Ong in view of Pope teaches the host of claim 12, as referenced above. King fails to explicitly teach wherein a plurality of command submission session from a virtual machine each session having its own metadata shared by data transform commands grouped in the session.
However, Ong teaches:
wherein a plurality of command submission session from a virtual machine each session having its own metadata shared by data transform commands grouped in the session “The queues 472 may be referred to as transmit (Tx) queues 472 when transmitting data over a Tx data path and referred to as Rx queues 472 when receiving data over an Rx data path … In a virtualized server, the Rx queues 472 can be assigned either to a VMM or to VMs using SR-IOV. The Rx queues 472 can be assigned to physical functions (PFs) and/or virtual functions (VFs) as needed (which may be represented by the apps/middleware 491 in FIG. 4). The Rx queues 472 assigned to a particular interface function (e.g., PF or VF) can be used for distributing packet processing work to the different processors in a multi-processor system. In some implementations, on the Rx side, packets are classified by the NIC 468 under OS or VMM control into groups of conversations (command submission session). Each group of conversations is assigned its own Rx queue 472 and Rx processor or processor core” [Ong ¶ 32]. “In this example, respective Rx descriptor ring buffers (Rx Desc) 471-M and 471-N (collectively referred to as "ring buffers 471", "descriptor rings 471", "Rx rings 471", "Rx Desc 471", and/or the like) include descriptors 473 that point to respective memory locations (or slots) in the Rx queues 472 for posting packets that arrive from the network (e.g., over the physical layer transceiver circuitry (PHY) 401). In this example, Rx Desc 471-M corresponds to Rx queue 472-M and Rx Desc 471-N corresponds to Rx queue 472-N” [Ong ¶ 32, fig. 4]. “For example, for a request to a database application that requires a response, the example IPU 1300 prioritizes its processing to minimize the stalling of the requesting application. In some examples, the IPU 1300 schedules the prioritized message request issuing the event to execute a SQL query database and the example IPU constructs microservices that issue SQL queries and the queries are sent to the appropriate devices or services” [Ong ¶ 155].
Regarding claim 23, King in view of Ong in view of Pope teaches the host of claim 12, as referenced above. King fails to explicitly teach wherein a plurality of virtual machine each creating its command submission sessions.
However, Ong teaches:
wherein a plurality of virtual machine each creating its command submission sessions “The queues 472 may be referred to as transmit (Tx) queues 472 when transmitting data over a Tx data path and referred to as Rx queues 472 when receiving data over an Rx data path … In a virtualized server, the Rx queues 472 can be assigned either to a VMM or to VMs using SR-IOV. The Rx queues 472 can be assigned to physical functions (PFs) and/or virtual functions (VFs) as needed (which may be represented by the apps/middleware 491 in FIG. 4). The Rx queues 472 assigned to a particular interface function (e.g., PF or VF) can be used for distributing packet processing work to the different processors in a multi-processor system. In some implementations, on the Rx side, packets are classified by the NIC 468 under OS or VMM control into groups of conversations (command submission session). Each group of conversations is assigned its own Rx queue 472 and Rx processor or processor core” [Ong ¶ 32].
Regarding claim 24, King in view of Ong in view of Pope teaches the host of claim 12, as referenced above. King further teaches generating first set of multiple destination descriptors pointing to the output buffers. “Descriptor queues 620 can include descriptors that reference data or packets in transmit queue 606 or receive queue 608” [King ¶ 52]. “The traffic manager 250 can provide a shared buffer that accommodates any queuing delays in the egress pipelines. In some examples, this shared output buffer 254 can store packet data, while references (e.g., pointers) to that packet data are kept in different queues for egress pipeline 230” [King ¶ 24]. “When the packet matches a particular match entry, that particular match entry references a particular action entry which specifies a set of actions to perform on the packet (e.g., sending the packet to a particular port, modifying one or more packet header field values, dropping the packet, mirroring the packet to a mirror buffer, etc.)” [King ¶ 21].
generating third set of multiple source descriptor “Descriptor queues 620 can include descriptors that reference data or packets in transmit queue 606 or receive queue 608” [King ¶ 52].
King fails to teach wherein generating the first data transform command comprises: generating first set of multiple source descriptors pointing to one or more input data buffers; generating second set of multiple source descriptor pointing to one or more buffers containing metadata; pointing to one or more buffers containing pre-data; generating fourth set of multiple source descriptor pointing to one or more buffers containing additional metadata.
However, Ong teaches:
wherein generating the first data transform command comprises: generating first set of multiple source descriptors pointing to one or more input data buffers; “The address (addr) field 474a in an Rx descriptor 473 points to locations where data from received packets is/are to be stored or posted. The NIC 468 (or the MAC 405 or the PE 404) will fetch the descriptors 473 from the memory 470 based on a detected change to a tail pointer, parse the descriptors 473 to obtain the pointers 474a to where the data of Rx packets should be stored, and stores those pointers (or the descriptors 473 themselves) in corresponding slots in a descriptor cache in the NIC 468” [Ong ¶ 38]. “Each descriptor 473 (including descriptors 473ml, 473m2, 473m3, and 473nl) is a SW construct that describes packets and/or memory locations where packets (input data) are stored or should be stored. For example, an address 474a (or memory location 474a) where the packet is stored or should be stored, a length 474b (or size 474b) of the packet or memory location, and/or the like” [Ong ¶ 34]. “In this example, respective Rx descriptor ring buffers (Rx Desc) 471-M and 471-N (collectively referred to as "ring buffers 471", "descriptor rings 471", "Rx rings 471", "Rx Desc 471", and/or the like) include descriptors 473 that point to respective memory locations (or slots) in the Rx queues 472 for posting packets that arrive from the network (e.g., over the physical layer transceiver circuitry (PHY) 401). In this example, Rx Desc 471-M corresponds to Rx queue 472-M and Rx Desc 471-N corresponds to Rx queue 472-N” [Ong ¶ 32, fig. 4].
generating second set of multiple source descriptor pointing to one or more buffers containing metadata; “Examples of the Rx descriptor ring context information include a base address (or base (index) pointer), length of the Rx descriptor rings 471, head (index) pointer, and tail (index) pointer. The base address provides the base/start location of the Rx descriptor ring 471 on the system memory 470. The length specifies the number of Rx descriptors 473 that belong to the Rx descriptor ring 471. The size of an Rx descriptor 473 is typically fixed at 8 bytes or 16 bytes. The head and tail (index) pointers are read/written by both the NIC 468 and the host platform 490 to coordinate which Rx descriptor 473 is being processed by the NIC 468. For example, the head (index) pointer points to the current Rx descriptor 473 (relative to the base address) being processed by the NIC 468, and the tail (index) pointer points to the Rx descriptor 473 (relative to the base address) that has been setup by the host platform 490 for future Rx frame receiving (also referred to as a "head chasing the tail pointer design")” [Ong ¶ 49]. “In this example, respective Rx descriptor ring buffers (Rx Desc) 471-M and 471-N (collectively referred to as "ring buffers 471", "descriptor rings 471", "Rx rings 471", "Rx Desc 471", and/or the like) include descriptors 473 that point to respective memory locations (or slots) in the Rx queues 472 for posting packets that arrive from the network (e.g., over the physical layer transceiver circuitry (PHY) 401). In this example, Rx Desc 471-M corresponds to Rx queue 472-M and Rx Desc 471-N corresponds to Rx queue 472-N” [Ong ¶ 32, fig. 4].
pointing to one or more buffers containing pre-data; “Examples of the Rx descriptor ring context information include a base address (or base (index) pointer), length of the Rx descriptor rings 471, head (index) pointer, and tail (index) pointer. The base address provides the base/start location of the Rx descriptor ring 471 on the system memory 470. The length specifies the number of Rx descriptors 473 that belong to the Rx descriptor ring 471. The size of an Rx descriptor 473 is typically fixed at 8 bytes or 16 bytes. The head and tail (index) pointers are read/written by both the NIC 468 and the host platform 490 to coordinate which Rx descriptor 473 is being processed by the NIC 468. For example, the head (index) pointer points to the current Rx descriptor 473 (relative to the base address) being processed by the NIC 468, and the tail (index) pointer points to the Rx descriptor 473 (relative to the base address) that has been setup by the host platform 490 for future Rx frame receiving (also referred to as a "head chasing the tail pointer design")” [Ong ¶ 49].
generating fourth set of multiple source descriptor pointing to one or more buffers containing additional metadata; “In some cases, additional control parameters that cannot fit within the data descriptors 473 are needed to process the packet(s). In these cases, additional context descriptors are used in front of the data descriptors 473” [Ong ¶ 39]. “In this example, respective Rx descriptor ring buffers (Rx Desc) 471-M and 471-N (collectively referred to as "ring buffers 471", "descriptor rings 471", "Rx rings 471", "Rx Desc 471", and/or the like) include descriptors 473 that point to respective memory locations (or slots) in the Rx queues 472 for posting packets that arrive from the network (e.g., over the physical layer transceiver circuitry (PHY) 401). In this example, Rx Desc 471-M corresponds to Rx queue 472-M and Rx Desc 471-N corresponds to Rx queue 472-N” [Ong ¶ 32, fig. 4].
King in view of Ong fails to teach pre-data.
However, Pope teaches pre-data; “All fragments are processed in-order. The AAD (additional authenticated data) for the packet is provided, in entirety, in the first fragment” [Pope ¶ 540].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARI F RIGGINS whose telephone number is (571)272-2772. The examiner can normally be reached Monday-Friday 7:00AM-4:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets can be reached at (571) 272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.F.R./Examiner, Art Unit 2197
/BRADLEY A TEETS/Supervisory Patent Examiner, Art Unit 2197