Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to the claims filed 5/16/2025. Claims 1-20 are pending. Claims 1, 15, and 20, each directed to a machine, are independent.
Response to Arguments
Applicant’s arguments, see page 9, filed 8/20/2025, with respect to the rejection(s) of claim(s) 1, 4-6, 13-15, and 18-20 under 35 U.S.C. 102(a) as anticipated by Farley et al., US 2024/0022605, have been fully considered and are persuasive. Farley does not explicitly disclose: “enable an authorized application to program one or more functions of the requested subset of resources of the DPU.” Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made over Farley et al., US 2024/0022605 (filed July 2022), in view of Goel et al., US 2020/0314012 (filed 2020).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Independent claims 1, 15, and 20 now require: “enable an authorized application to program one or more functions of the requested subset of resources of the DPU.”
It is unclear what this clause requires, if anything.
Notably, claims 1, 15, and 20 already require a separate programming: “program a second DPU resource management circuit to generate auditing data”. The act of enabling a programming is not itself the act of programming, and it is not clear that any action need be performed to enable programming. Additionally, claim 8 further requires that the programming is actually performed, meaning that claim 1, and by implication claims 15 and 20, cannot require “programming of the one or more functions”.
Dependent claims 2-14 and 16-19 are rejected due to their dependency on claims 1 and 15, respectively.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-6, 13-15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Farley et al., US 2024/0022605 (filed July 2022), in view of Goel et al., US 2020/0314012 (filed 2020).
As to claims 1, 15, and 20, Farley discloses a machine comprising:
a Data Processing Unit (DPU) comprising one or more circuits that route packets within a communications network; and a first DPU resource management circuit comprising: (“the SCP device 406 may be replaced by the DPU device described herein while remaining within the scope of the present disclosure, with that DPU device provided by BLUEFIELD® DPU devices available from NVIDIA® Corporation of Santa Clara, California, United States, DPU devices available from FUNGIBLE® Inc. of Santa Clara, California, United States, and/or other DPU devices known in the art.” Farley ¶ 27. See also ¶¶ 29-30. “functionality described herein may be enabled on the DPU devices discussed above, as well as other devices with similar functionality,” Farley ¶ 32)
a processor; memory in electronic communication with the processor; and instructions stored in the memory, the instructions being executable by the processor to: (see Farley Fig. 1 and ¶ 30)
establish an interface for managing resources of the DPU that are available for use by applications of an attached device; (“the SCP device 702 may identify a client identifier for the client system 712, and associate that client identifier with the LCS provided for that client system 712 using the resource devices 704a-704c and 706.” Farley ¶ 60. The Application being a service provided to the client. SCP database discussed in Farley ¶ 61. “the identities of the resource devices utilized to provide the LCS may be stored in a database along with the rules or other information that define the polic(ies) for the operation of that LCS.” Farley ¶ 64)
access stored registration information that indicates which resources of the DPU are owned by each application; (“any LCS instruction received from a client device system by the SCP device 702 may result in the SCP device 702 retrieving the client identifier associated with that LCS from its SCP database,” Farley ¶ 61)
program a second DPU resource management circuit to generate auditing data based on the registration information; and (“the SCP devices described herein may provide a “trusted” orchestrator device that operates as a Root-of-Trust (RoT) for their corresponding resource devices/systems, to provide an intent management engine for managing the workload intents discussed below, to perform telemetry generation and/or reporting operations for their corresponding resource devices/systems,” Farley ¶ 30)
monitor usage of the resources of the DPU based on the auditing data generated by the second DPU resource management circuit, the auditing data indicating whether a particular application is authorized or not authorized to use a requested subset of the resources of the DPU. (“the enforcement of the LCS policies at block 810 may include allowing the LCS provisioning administrator device 708 to access statistical information that is generated by the resource devices 704a-704c and 706 during their operation to provide the LCSs 900 and 902, while preventing the LCS provisioning administrator device 708 from accessing any client data generated by the resource devices 704a-704c and 706 during their operation to provide the LCSs 900 and 902 (e.g., client data generated at the instruction of the client devices 710a and 712a).” Farley ¶ 72. Note further examples in Farley ¶¶ 73-83)
enable an authorized application (“SCP device 702 may identify a respective LCS policy for each LCS being provided to the client systems 710 and 712, respectively. For example, an LCS policy for an LCS may include an LCS security policy that defines access to different functionality associated with that LCS,” Farley ¶ 68. See also Farley ¶ 71)
Farley does not explicitly disclose: to program one or more functions of the requested subset of resources of the DPU.
Goel discloses:
to program one or more functions of the requested subset of resources of the DPU. (“CPU 34 may offload other software procedures or functions to DPU 32A to be executed by processing cores of DPU 32A. Furthermore, CPU 34 may offload software procedures or functions to GPU 36 via DPU 32A (e.g., computer graphics processes). In this manner, DPU 32A represents a dynamically programmable processing unit that can execute software instructions, as well as provide hardware implementations of various procedures or functions for data-processing tasks,” Goel ¶ 42. “software programs executable on CPU 34 can perform instructions to offload some or all data-intensive processing tasks associated with the software program to DPU 32A.... use function or procedure calls associated with the hardware implementations of various processes of DPU 32A to perform these functions, and when CPU 34 executes the software program, CPU 34 offloads performance of these functions/procedures to DPU 32A.” Goel ¶ 41).
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Farley with Goel by providing the CPU-to-DPU application offloading of Goel for the client system/DPU processing of Farley. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Farley with Goel in order to offload data-processing-intensive tasks and free the application processors for computing-intensive tasks (Goel ¶ 20).
As to claims 4 and 18, Farley in view of Goel discloses a machine of claims 1, 15, and 20 and further discloses:
wherein the first DPU resource management circuit monitors usage of the resources of the DPU by: detecting usage of one or more resources of the DPU by an application; and
determining whether the application has ownership of the one or more resources of the DPU based on the auditing data. (“the enforcement of the LCS policies at block 810 may include allowing the LCS provisioning administrator device 708 to access statistical information that is generated by the resource devices 704a-704c and 706 during their operation to provide the LCSs 900 and 902, while preventing the LCS provisioning administrator device 708 from accessing any client data generated by the resource devices 704a-704c and 706 during their operation to provide the LCSs 900 and 902 (e.g., client data generated at the instruction of the client devices 710a and 712a).” Farley ¶ 72. Note further examples in Farley ¶¶ 73-83)
As to claims 5 and 19, Farley in view of Goel discloses a machine of claims 4, 15, and 20 and further discloses:
wherein the first DPU resource management circuit monitors the usage of the resources of the DPU by: (“any of the LCS security policies, LCS QoS policies, LCS lifecycle management policies, and/or other LCS policies discussed above may be applied to the resource devices 704a-704c and 706 included in the resource slice 902a” Farley ¶ 71. “allowing for isolation of that subset of resource devices in order to enforce security policies, monitoring and reporting of the operation of that subset of resource devices in order to enforce QoS policies” Farley ¶ 54)
generating reporting data in response to determining, from the auditing data, the application does not have the ownership of the one or more resources of the DPU, wherein the reporting data indicates a violation associated with the usage of the one or more resources of the DPU by the application. (“the enforcement of the LCS policies at block 810 may provide for the generation of alerts based on LCS policies such as, for example, when violations of LCS security policies occur, based on compliance with (or drift from) LCS QoS policies, in response to the performance of lifecycle management operations according the LCS lifecycle management policies, etc.” Farley ¶ 77)
As to claim 6, Farley in view of Goel discloses a machine of claims 4, 15, and 20 and further discloses:
intercepting the usage of the one or more resources of the DPU by the application in response to determining the application does not have the ownership of the one or more resources of the DPU. (“the enforcement of the LCS policies at block 810 may include allowing the LCS provisioning administrator device 708 to access statistical information that is generated by the resource devices 704a-704c and 706 during their operation to provide the LCSs 900 and 902, while preventing the LCS provisioning administrator device 708 from accessing any client data generated by the resource devices 704a-704c and 706 during their operation to provide the LCSs 900 and 902 (e.g., client data generated at the instruction of the client devices 710a and 712a).” Farley ¶ 72. Note further examples in Farley ¶¶ 73-83)
As to claim 13, Farley in view of Goel discloses a machine of claims 1, 15, and 20 and further discloses:
wherein the applications comprise at least one of: one or more applications executable on the DPU; and
one or more applications executable on a server. (“allow a user of the client device 202 to express a “workload intent” that describes the general requirements of a workload that user would like to perform (e.g., “I need an LCS with 10 gigahertz (Ghz) of processing power and 8 gigabytes (GB) of memory capacity for an application requiring 20 terabytes (TB) of high-performance protected-object-storage for use with a hospital-compliant network”, or “I need an LCS for a machine-learning environment requiring Tensorflow processing with 3 TB s of Accelerator PMEM memory capacity”). As will be appreciated by one of skill in the art in possession of the present disclosure, the workload intent discussed above may be provided to one of the LCS provisioning subsystems 206a-206c, and may be satisfied using resource systems that are included within that LCS provisioning subsystem, or satisfied using resource systems that are included across the different LCS provisioning subsystems 206a-206c.” Farley ¶ 35. DPU/server resources.)
As to claim 14, Farley in view of Goel discloses a machine of claims 1, 15, and 20 and further discloses:
wherein at least one of the first DPU resource management circuit and the second DPU resource management circuit are external to the DPU. (“an SCP device 702 that is coupled to the resource devices 704a-704c, but as discussed above may instead be provided by a DPU device and/or other orchestrator devices” Farley ¶ 48)
Claims 2, 3, 7-11, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Farley et al., US 2024/0022605 (filed July 2022), in view of Goel et al., US 2020/0314012 (filed 2020), and Pyla et al., US 2023/0065444 (filed August 2021).
As to claims 2 and 16, Farley in view of Goel discloses a machine of claims 1, 15, and 20 but does not disclose:
generate a catalog data structure of the resources of the DPU, wherein the catalog data structure comprises a set of entries, and each entry in the set of the entries corresponds to at least one of:
respective resources of the DPU; and
one or more functions provided by the respective resources of the DPU.
Pyla discloses:
generate a catalog data structure of the resources of the DPU, (Pyla ¶¶ 41-42) wherein the catalog data structure comprises a set of entries, and each entry in the set of the entries corresponds to at least one of: (“with the capability to “discover,” review, register, and inventory all physical hardware connected to all TOR switches across data center 208.” Pyla ¶ 76)
respective resources of the DPU; and
one or more functions provided by the respective resources of the DPU. (“As part of this process, the data center administrator can review physical metadata of hardware, automatically identified during the discovery process, and, optionally, provide additional metadata in the form of key-value pairs to describe those assets. The key-value tags may be assigned and scoped at an individual object level or at an aggregate object type level. Such tags may be of various types, including automatic metadata types and/or custom metadata types. Automatic metadata tags may be information keys automatically generated by controller 270 and are not modifiable. For example, for compute servers, metadata such as manufacturer name, model number, processor family, number of sockets, cores, and amount of memory may fall under this category. Custom metadata tags are information keys and values created and managed by data center administrators to describe business and administrative concerns. For example, for compute servers, a key-value pair called “Cost center” and “Engineering department” could be added to assets for internal bookkeeping. In such an example, once all hardware assets are initialized, resources are ready for use. The capacity across all storage arrays in the deployment may be aggregated into a liquid pool. This storage aggregation generally happens at a low layer.” Pyla ¶ 76. Resources being cores/memory and functions being manufacturer name, model number or custom metadata tags.)
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Farley in view of Goel with Pyla by including a discovery phase whereby cloud resources are discovered and inventoried. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Farley in view of Goel with Pyla in order to obtain the data required for the resource assignment of Farley ¶ 60.
As to claims 3 and 17, Farley in view of Goel in view of Pyla discloses a machine of claims 2, 16, and 20 and further discloses:
the catalog data structure comprises one or more templates; and
each of the one or more templates corresponds to one or more resources of the DPU and one or more functionalities of the DPU. (“Automatic metadata tags may be information keys automatically generated by controller 270 and are not modifiable. For example, for compute servers, metadata such as manufacturer name, model number, processor family, number of sockets, cores, and amount of memory may fall under this category. Custom metadata tags are information keys and values created and managed by data center administrators to describe business and administrative concerns. For example, for compute servers, a key-value pair called “Cost center” and “Engineering department” could be added to assets for internal bookkeeping. In such an example, once all hardware assets are initialized, resources are ready for use. The capacity across all storage arrays in the deployment may be aggregated into a liquid pool. This storage aggregation generally happens at a low layer.” Pyla ¶ 76.)
As to claim 7, Farley in view of Goel discloses a machine of claims 2, 15, and 20 and further discloses:
… controls access to the resources by: receiving a request for the resources of the DPU; and (“the LCS policy for the LCS may be identified and enforced on each of the subset of resource devices in a “per-slice” manner, allowing for isolation of that subset of resource devices in order to enforce security policies, monitoring and reporting of the operation of that subset of resource devices in order to enforce QoS policies” Farley ¶ 54.)
allocating the resources of the DPU to an application in response to verifying the request (“the SCP device 702 may identify a client identifier for the client system 712, and associate that client identifier with the LCS provided for that client system 712 using the resource devices 704a-704c and 706.” Farley ¶ 60. The Application being a service provided to the client. SCP database discussed in Farley ¶ 61. “the identities of the resource devices utilized to provide the LCS may be stored in a database along with the rules or other information that define the polic(ies) for the operation of that LCS.” Farley ¶ 64)
Farley in view of Goel does not disclose:
wherein the second DPU resource management circuit accesses a catalog data structure of the resources of the DPU and
Pyla discloses:
wherein the second DPU resource management circuit accesses a catalog data structure of the resources of the DPU and
(“with the capability to “discover,” review, register, and inventory all physical hardware connected to all TOR switches across data center 208.” Pyla ¶ 76)
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Farley in view of Goel with Pyla by including a discovery phase whereby cloud resources are discovered and inventoried. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Farley in view of Goel with Pyla in order to obtain the data required for the resource assignment of Farley ¶ 60.
As to claim 8, Farley in view of Goel in view of Pyla discloses a machine of claim 7 and further discloses:
consume a first portion of the resources of the DPU; wherein the one or more functions of the requested subset of the resources of the DPU are programmed (Goel ¶¶ 41 and 42) using the first portion of the resources of the DPU. (“the LCS policy for the LCS may be identified and enforced on each of the subset of resource devices in a “per-slice” manner, allowing for isolation of that subset of resource devices in order to enforce security policies, monitoring and reporting of the operation of that subset of resource devices in order to enforce QoS policies” Farley ¶ 54. See also Farley Fig. 11)
As to claim 9, Farley in view of Goel in view of Pyla discloses a machine of claim 7 and further discloses:
wherein the request comprises data indicating: runtime information associated with the resources of the DPU; and one or more attributes of the resources of the DPU. (“the LCS policy for the LCS may be identified and enforced on each of the subset of resource devices in a “per-slice” manner, allowing for isolation of that subset of resource devices in order to enforce security policies, monitoring and reporting of the operation of that subset of resource devices in order to enforce QoS policies, defining lifecycle management operations that may be performed on the LCS and/or the resource devices used to provide it in order to enforce lifecycle management policies” Farley ¶ 54. Requested LCS policy specifying QoS. “allow a user of the client device 202 to express a “workload intent” that describes the general requirements of a workload that user would like to perform (e.g., “I need an LCS with 10 gigahertz (Ghz) of processing power and 8 gigabytes (GB) of memory capacity for an application requiring 20 terabytes (TB) of high-performance protected-object-storage for use with a hospital-compliant network”, or “I need an LCS for a machine-learning environment requiring Tensorflow processing with 3 TB s of Accelerator PMEM memory capacity”). As will be appreciated by one of skill in the art in possession of the present disclosure, the workload intent discussed above may be provided to one of the LCS provisioning subsystems 206a-206c, and may be satisfied using resource systems that are included within that LCS provisioning subsystem, or satisfied using resource systems that are included across the different LCS provisioning subsystems 206a-206c.” Farley ¶ 35. Specifying policies and acceptance thereof.)
As to claim 10, Farley in view of Goel in view of Pyla discloses a machine of claim 9 and further discloses:
the runtime information comprises:
an indication of ownership, by the application, of the resources of the DPU; and
an operational state associated with the application and the resources of the DPU; and (“the SCP device 702 may identify a client identifier for the client system 712, and associate that client identifier with the LCS provided for that client system 712 using the resource devices 704a-704c and 706.” Farley ¶ 60. SCP database discussed in Farley ¶ 61. “the identities of the resource devices utilized to provide the LCS may be stored in a database along with the rules or other information that define the polic(ies) for the operation of that LCS.” Farley ¶ 64. Ownership being association with the client identifier. Operational state being the assigned resources.)
the one or more attributes comprise one or more functions provided by the resources of the DPU. (“allow a user of the client device 202 to express a “workload intent” that describes the general requirements of a workload that user would like to perform (e.g., “I need an LCS with 10 gigahertz (Ghz) of processing power and 8 gigabytes (GB) of memory capacity for an application requiring 20 terabytes (TB) of high-performance protected-object-storage for use with a hospital-compliant network”, or “I need an LCS for a machine-learning environment requiring Tensorflow processing with 3 TB s of Accelerator PMEM memory capacity”). As will be appreciated by one of skill in the art in possession of the present disclosure, the workload intent discussed above may be provided to one of the LCS provisioning subsystems 206a-206c, and may be satisfied using resource systems that are included within that LCS provisioning subsystem, or satisfied using resource systems that are included across the different LCS provisioning subsystems 206a-206c.” Farley ¶ 35. Specifying policies and acceptance thereof.)
As to claim 11, Farley in view of Goel in view of Pyla discloses a machine of claim 7, and further discloses:
further comprising a data structure comprising runtime information associated with the resources of the DPU, wherein the runtime information comprises: runtime resource usage associated with the resources of the DPU; and assignment information corresponding to the resources and the application. (“any of the LCS security policies, LCS QoS policies, LCS lifecycle management policies, and/or other LCS policies discussed above may be applied to the resource devices 704a-704c and 706 included in the resource slice 902a” Farley ¶ 71. “allowing for isolation of that subset of resource devices in order to enforce security policies, monitoring and reporting of the operation of that subset of resource devices in order to enforce QoS policies” Farley ¶ 54. QoS being runtime information along with the client id of Farley ¶ 60)
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Farley et al., US 2024/0022605 (filed July 2022), in view of Goel et al., US 2020/0314012 (filed 2020), and Kelly et al., US 2023/0091753 (filed September 2021).
As to claim 12, Farley in view of Goel discloses a machine of claims 1, 15, and 20 and further discloses:
the system comprises an orchestration platform associated with the applications; (“the SCP devices described herein may provide a “trusted” orchestrator device that operates as a Root-of-Trust (RoT) for their corresponding resource devices/systems” Farley ¶ 30) and …
Farley in view of Goel does not disclose:
the applications comprise one or more containerized applications.
Kelly discloses:
the applications comprise one or more containerized applications. (“Such activities by the DPU may be performed in relation to any executing workload on a node (e.g., virtual machines, containers, etc.) and/or on behalf of the node itself, or any portion thereof. As an example, a DPU may be all or any portion of a Smart Network Interface Controller (SmartNIC).” Kelly ¶ 20)
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Farley in view of Goel with Kelly by accommodating all types of applications, including containerized applications. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Farley in view of Goel with Kelly in order to support multiple workload types, thereby accommodating different architectures and expanding the possible customer base for the provided services.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892, particularly:
Billa, US 2021/0097082, discloses data processing units having DFA/NFA hardware accelerators.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL W CHAO whose telephone number is (571)272-5165. The examiner can normally be reached M, W-F 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rupal Dharia can be reached at (571) 272-3880. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL W CHAO/ Primary Examiner, Art Unit 2492