DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 9, 20 and 25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 9, the claim recites “allocating a portion of the disaggregated memory resources at one or more compute locations without interleaving based on the workload requirements.” First, it is unclear how a portion of the memory resources is allocated at one or more compute locations. While resources can be allocated to compute locations, it is unclear how memory resources are to be allocated at compute locations. Second, the claim depends from claim 1, which previously recites the memory pool to cause the disaggregated memory resources among the compute locations to host data based on the interleaving arrangement. Claim 9 thus appears to contradict the interleaving of claim 1 by now stating that the allocation is to occur without interleaving.
Claims 20 and 25 also recite the limitation identified above and, thus, are rejected under 35 U.S.C. 112(b) for the same reasons as claim 9.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3-4, 9-12, 14-15, 20-24 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bruno et al. (US 2023/0401099 A1), hereinafter Bruno et al., in view of Tavallaei et al. (US 11210218 B1), hereinafter Tavallaei et al.
Regarding claim 1, Bruno et al. teaches a method for configuring interleaving in a memory pool established in an edge computing arrangement (an engine orchestrates the placement/movement of workloads among nodes in an edge infrastructure Paragraph [0033]), comprising:
discovering disaggregated memory resources at respective compute locations, the compute locations connected to one another via at least one interconnect (distributed nodes are associated with node attributes that include resources such as CPU cores, installed RAM, available RAM/storage, etc. Paragraphs [0021], [0024]);
identifying workload requirements for use of the compute locations by respective workloads, the workloads provided by client devices to the compute locations via a network (the workload attributes describe the requirements of the workload Paragraph [0023], where the workloads are placed/moved among nodes that are part of one or more cloud networks Paragraphs [0032]-[0033]);
determining an … arrangement for a memory pool that fulfills the workload requirements, the … arrangement to distribute data for the respective workloads among the disaggregated memory resources at the respective compute locations (a pool of nodes are configured to receive workloads through their associated workload queue Paragraphs [0045]-[0046] by determining which available nodes satisfy the attributes/requirements of the workload Paragraph [0048]); and
configuring the memory pool for use by the client devices of the network, the memory pool to cause the disaggregated memory resources among the compute locations to host data based on the … arrangement (each of the nodes in the pool are configured with a workload queue Paragraph [0046] which is placed with data based on its CPU/storage resources Paragraph [0044]).
Bruno et al. does not appear to teach, however, Tavallaei et al. teaches that the arrangement is an interleaving arrangement (memory addresses may be interleaved between a plurality of physical memory units of a disaggregated memory pool, Column 10, Lines 32-47).
The disclosures of Bruno et al. and Tavallaei et al., hereinafter BT, are analogous art to the claimed invention because they are in the same field of distributed processing in a memory system. Because both references of BT teach allocating workloads to disaggregated memory resources, it would have been obvious to one of ordinary skill in the art to substitute one specific type of arrangement for another to achieve the predictable result of availability of a memory pool in the particular type of arrangement as disclosed by Tavallaei et al., in this case, an interleaving arrangement (KSR, MPEP 2143).
Regarding claim 3, BT teaches all of the features with respect to claim 1 as outlined above.
Tavallaei et al. further teaches wherein the method is performed by a networked processing unit, and wherein the method further comprises: implementing, at the networked processing unit, the interleaving arrangement among the disaggregated memory resources by configuration of respective networked processing units at the respective compute locations (aspects of the logic subsystem may be virtualized and executed by remotely-accessible, networked computing devices that include processors configured in a cloud-computing configuration, Column 11, Lines 4-24).
Regarding claim 4, BT teaches all of the features with respect to claim 1 as outlined above.
Bruno et al. further teaches wherein the workload requirements are identified based on one or more of: a latency measurement for use of compute resources at the respective compute locations; an estimation of an availability of acceleration resources for current workloads in the network; a prediction of an availability of acceleration resources for future workloads in the network; a latency measurement for communications in the network; an estimation of current traffic in the network; or a prediction of bandwidth or load requirements in the network (workload attributes describe the requirements of the workload which include storage requirements, core requirements and latency requirements Paragraphs [0022]-[0024]).
Regarding claim 9, BT teaches all of the features with respect to claim 1 as outlined above.
Bruno et al. teaches allocating a portion of the disaggregated memory resources at one or more compute locations without interleaving based on the workload requirements (each of the nodes in the pool are configured with a workload queue Paragraph [0046] and determining which available nodes satisfy the attributes/requirements of the workload Paragraph [0048] which is placed with data based on its CPU/storage resources Paragraph [0044]).
Regarding claim 10, BT teaches all of the features with respect to claim 1 as outlined above.
Tavallaei et al. further teaches storing data in the memory pool according to the interleaving arrangement; and retrieving data in the memory pool according to the interleaving arrangement (memory addresses corresponding to an allocation size may be interleaved Column 10, Lines 32-47, where this interleaving allows for the retrieval of data in parallel based on the interleaving Column 6, Lines 38-48).
Regarding claim 11, BT teaches all of the features with respect to claim 1 as outlined above.
Bruno et al. further teaches determining an updated interleaving arrangement; and reconfiguring the memory pool for use by the client devices, based on the updated interleaving arrangement (node attributes are updated which helps to place/orchestrate new workloads (i.e., updated workloads) more effectively Paragraph [0058]. Note that Tavallaei et al. explicitly teaches that the arrangement is an interleaving arrangement Column 6, Lines 38-48).
Claims 12 and 23 are rejected under 35 USC 103 for the same reasons as claim 1, as outlined above.
Regarding claim 12, BT teaches a device, comprising: a networked processing unit (aspects of the logic subsystem may be virtualized and executed by remotely-accessible, networked computing devices that include processors configured in a cloud-computing configuration, Tavallaei et al., Column 11, Lines 4-24); and a storage medium including instructions embodied thereon (a storage medium having instructions executable by processors, Bruno et al., Paragraph [0148]), wherein the instructions, when executed by the networked processing unit, configure the networked processing unit to: perform the method of claim 1.
Regarding claim 23, Bruno et al. teaches a non-transitory machine-readable storage medium comprising information representative of instructions, wherein the instructions, when executed by processing circuitry (non-transitory medium having stored instructions executable by one or more hardware processors Paragraphs [0148]-[0149]), cause the processing circuitry to: perform the method of claim 1.
Regarding claim 14, BT teaches all of the features with respect to claim 12 as outlined above.
Tavallaei et al. further teaches wherein the instructions further configure the networked processing unit to: provide commands to respective networked processing units at the respective compute locations, to cause the respective networked processing units to implement the interleaving arrangement among the disaggregated memory resources (the logical system is configured to execute instructions (i.e., commands) Column 11, Lines 4-23).
Claims 15 and 24 are rejected under 35 USC 103 for the same reasons as claim 4, as outlined above.
Claim 20 is rejected under 35 USC 103 for the same reasons as claim 9, as outlined above.
Claim 21 is rejected under 35 USC 103 for the same reasons as claim 10, as outlined above.
Claim 22 is rejected under 35 USC 103 for the same reasons as claim 11, as outlined above.
Claim(s) 2 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over BT in view of Wang et al. (US 2015/0350089 A1) hereinafter Wang et al.
Regarding claim 2, BT teaches all of the features with respect to claim 1 as outlined above.
BT does not appear to explicitly teach, however, Wang et al. teaches wherein the method is performed by a network switch, and wherein the method further comprises: processing requests, at the network switch, for the use of the memory pool by the client devices of the network (a network switch is interfaced with a common memory pool for request processing and a plurality of client interfaces to send requests and receive responses Paragraphs [0036]-[0038]).
The disclosures of BT and Wang et al., hereinafter BTW, are analogous art to the claimed invention because they are in the same field of endeavor of workload distribution and execution using a network switch.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of BTW before them, to modify the teachings of BT to include the teachings of Wang et al., since both references teach executing client requests. This amounts to applying a known technique (a network switch performing the methods, such as processing requests, Paragraphs [0036]-[0038] of Wang et al.) to a known device (distributing workloads based on workload requirements, of Bruno et al.) ready for improvement to yield predictable results (requests are processed by the network switch, Paragraphs [0036]-[0038] of Wang et al.) (KSR, MPEP 2143).
Claim 13 is rejected under 35 USC 103 for the same reasons as claim 2, as outlined above.
Claim(s) 5-7 and 16-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over BT in view of Guim Bernat et al. (US 2020/0136906 A1) hereinafter Guim Bernat et al.
Regarding claim 5, BT teaches all of the features with respect to claim 1 as outlined above.
BT does not appear to explicitly teach, however, Guim Bernat et al. teaches wherein the respective compute locations correspond to processing hardware at respective base stations (Fig. 1 depicts a plurality of base stations which are augmented with compute and acceleration resources Paragraphs [0034], [0047]), and wherein the client devices connect to the network via one or more of the respective base stations (edge resource nodes are located at or in communication with a base station of a network, and each resource node includes processing/storage capabilities for the client compute nodes via wired or wireless connections Paragraph [0047]).
The disclosures of BT and Guim Bernat et al., hereinafter BTG, are analogous art to the claimed invention because they are in the same field of endeavor of workload execution and distribution and/or edge computing.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of BTG before them, to modify the teachings of BT to include the teachings of Guim Bernat et al., since both references teach executing client requests. This amounts to applying a known technique (using the base stations to connect client devices to the network, Paragraph [0047] of Guim Bernat et al.) to a known device (distributing workloads based on workload requirements, of Bruno et al.) ready for improvement to yield predictable results (client devices are connected to the network via a base station, Paragraph [0047] of Guim Bernat et al.) (KSR, MPEP 2143).
Regarding claim 6, BT teaches all of the features with respect to claim 1 as outlined above.
BT does not appear to explicitly teach, however, Guim Bernat et al. teaches wherein one or more of the respective compute locations include acceleration resources, and wherein the disaggregated memory resources are mapped to the acceleration resources (a mapping logic maps virtual partitions associated with memory resources and acceleration components to various partitions Paragraph [0141]).
The disclosures of BT and Guim Bernat et al., hereinafter BTG, are analogous art to the claimed invention because they are in the same field of endeavor of workload execution and distribution and/or edge computing.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of BTG before them, to modify the teachings of BT to include the teachings of Guim Bernat et al., since both references teach executing client requests. This amounts to applying a known technique (virtual partitions are associated with memory resources and acceleration resources, Paragraph [0141] of Guim Bernat et al.) to a known device (distributing workloads based on workload requirements, of Bruno et al.) ready for improvement to yield predictable results (memory resources are mapped to acceleration resources inside partitions, Paragraph [0141] of Guim Bernat et al.) (KSR, MPEP 2143).
Regarding claim 7, BTG teaches all of the features with respect to claim 6 as outlined above.
Guim Bernat et al. further teaches wherein the disaggregated memory resources are connected to the acceleration resources via a Compute Express Link (CXL) interconnect (acceleration components can be added using Compute Express Link (CXL) Paragraph [0122]).
Claim 16 is rejected under 35 USC 103 for the same reasons as claim 5, as outlined above.
Claim 17 is rejected under 35 USC 103 for the same reasons as claim 6, as outlined above.
Claim 18 is rejected under 35 USC 103 for the same reasons as claim 7, as outlined above.
Allowable Subject Matter
Claims 8 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claim 8, “categorizing memory bandwidth available at the disaggregated memory resources into multiple categories; wherein the interleaving arrangement is determined using the multiple categories,” is not taught by the prior art. The closest prior art is BT in further view of Lin (US 2022/00466670 A1). Lin generally teaches that the use of interleaved mapping causes resources to be allocated such that they spread across a bandwidth that defines resource allocation for the user equipment. However, Lin is silent with regard to interleaving based on the available memory bandwidth at the disaggregated resources being categorized into different categories or levels.
Claim 19 recites substantially similar subject matter as that of claim 8, thus, would be allowable for at least the same reasons.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Guo et al. (US 2019/0116128 A1) teaches dynamically allocating computing resources in an edge computing center.
Ananthanarayanan et al. (US 2022/0400085 A1) teaches collecting capacity and usage data for computing and network resources at an edge computing network. Based on the collected information, workloads are distributed.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JANE W BENNER whose telephone number is (571)270-0067. The examiner can normally be reached Mon - Thurs (8 AM - 5 PM).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, REGINALD BRAGDON can be reached at (571) 272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JANE W. BENNER
Primary Examiner
Art Unit 2131
/JANE W BENNER/Primary Examiner, Art Unit 2139