Prosecution Insights
Last updated: April 19, 2026
Application No. 18/108,517

POOLING VOLATILE MEMORY RESOURCES WITHIN A COMPUTING SYSTEM

Final Rejection §103
Filed: Feb 10, 2023
Examiner: WAI, ERIC CHARLES
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 2 (Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (529 granted / 644 resolved; +27.1% vs TC avg, above average)
Interview Lift: +27.2% among resolved cases with interview (strong)
Avg Prosecution: 3y 9m typical; 27 applications currently pending
Career History: 671 total applications across all art units

Statute-Specific Performance

§101: 15.7% (-24.3% vs TC avg)
§103: 50.0% (+10.0% vs TC avg)
§102: 11.4% (-28.6% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)
Tech Center averages are estimates; based on career data from 644 resolved cases.
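The dashboard's exact methodology is not disclosed, but the "vs TC avg" deltas can be inverted to recover the implied Tech Center baseline for each statute. A minimal sketch, using only the figures shown in the table above:

```python
# Examiner allowance rate after each rejection type, and the reported
# delta vs. the Tech Center 2100 average (all values in percent).
examiner = {"101": 15.7, "103": 50.0, "102": 11.4, "112": 14.4}
delta_vs_tc = {"101": -24.3, "103": +10.0, "102": -28.6, "112": -25.6}

# Implied TC baseline = examiner rate minus the reported delta.
tc_avg = {s: round(examiner[s] - delta_vs_tc[s], 1) for s in examiner}
print(tc_avg)  # every statute implies the same ~40.0% baseline
```

All four deltas back out to a 40.0% baseline, which suggests the dashboard compares against a single TC-wide average estimate rather than per-statute averages.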

Office Action

§103
DETAILED ACTION

Claims 1-29 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 7-15, 18, 26-27, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Saby et al. (US PG Pub No. 2024/0126601 A1) in view of Reddy (US Pat No. 11,960,723).

Regarding claim 1, Saby teaches a system comprising: at least one first processor associated with a first portion of pooled volatile memory ([0033], wherein processors are associated with a volatile memory pool 106; Fig 2, Volatile Memory Pool 212), the pooled volatile memory comprising a second portion not associated with the at least one first processor ([0058-0060], wherein the cloud resource manager (CRM) manages the volatile memory pools and receives requests for memory allocations, i.e. it is inherent that the entire memory pool is not associated with the requesting server/processor); and allocate at least a third portion of the second portion of the pooled volatile memory to the at least one first processor ([0056-60]; Claim 10).
Saby does not explicitly teach a second processor for performing the allocating, wherein the second portion and the third portion of the pooled volatile memory are not associated with the second processor. Saby is silent regarding the underlying hardware for carrying out the functions. Reddy teaches the use of a memory controller for managing the allocation of memory pools wherein the memory controller is a processor (col 2 lines 2-35). The processor of the memory controller is not associated with the second and third portions of memory and is only used for the assignment of memory (col 2 lines 25-46). It would have been obvious to one of ordinary skill in the art before the effective filing date to utilize a hardware processor for implementing the allocation of memory. One would be motivated by the desire to offload the allocation to a dedicated memory controller to increase security (col 8 lines 24-27).

Regarding claim 3, Saby teaches wherein the second processor is to maintain a memory map of the pooled volatile memory ([0017]), the at least one first processor is to request access to data stored in the pooled volatile memory, and the second processor is to use the memory map to locate the data ([0020]).

Regarding claim 4, Saby teaches wherein the second processor is to allocate the third portion of the pooled volatile memory to the at least one first processor in response to a request to store data originating at least in part from an application being performed by the at least one first processor ([0056]).

Regarding claim 7, Saby further teaches: a device associated with the second portion of the pooled volatile memory and not including the at least one first processor (Fig 2, wherein multiple servers exist to utilize the pooled volatile memory pool).
Regarding claim 8, Saby further teaches: a third processor associated with the second portion of the pooled volatile memory (Fig 2, wherein multiple servers exist to utilize the pooled volatile memory pool).

Regarding claim 9, Sindhu teaches wherein the second processor is to manage input and output ("IO") to and from the at least one first processor (abstract; [0006]; [0008]).

Regarding claim 10, Saby teaches wherein the second portion comprises volatile memory that is remote with respect to the at least one first processor ([0033], wherein processors are associated with a volatile memory pool 106; Fig 2, Volatile Memory Pool 212; [0020]).

Regarding claim 11, Saby teaches wherein the first portion is to be utilized by at least one tenant, and the second processor is to allocate a fourth portion of the second portion of the pooled volatile memory to one or more tenants (Fig 2, wherein multiple servers exist to utilize the pooled volatile memory pool).

Regarding claims 12, 14-15, and 18, they are the processor claims of claims 1, 3-4, and 9 above. Therefore, they are rejected for the same reasons as claims 1, 3-4, and 9 above.

Regarding claim 26, Saby teaches a method comprising: receiving, using, and causing. Saby does not explicitly teach a receiving processor for performing the receiving and transferring. Saby is silent regarding the underlying hardware for carrying out the functions of the cloud resource manager (CRM). However, Sindhu teaches using a DPU for performing memory allocation and freeing ([0075]; [0089]). It would have been obvious to one of ordinary skill in the art before the effective filing date to utilize a hardware processor for implementing the allocation of memory.

Regarding claim 27, Sindhu teaches wherein the receiving processor is a data processing unit ("DPU") ([0075]; [0089]).
Regarding claim 29, Saby teaches wherein the receiving processor and the requesting processor are components of a distributed computing environment comprising a data center or a multi-cloud environment, and the receiving processor receives the request from the requesting processor through the distributed computing environment (Figs 1-2).

Claims 2 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Saby et al. (US PG Pub No. 2024/0126601 A1) in view of Reddy (US Pat No. 11,960,723), further in view of Sindhu et al. (US PG Pub No. 2019/0012278 A1).

Regarding claim 2, Saby and Reddy do not teach further comprising: a data processing unit ("DPU") comprising the second processor. Sindhu teaches a data processing unit ("DPU") comprising the second processor ([0075]; [0089]). It would have been obvious to one of ordinary skill in the art before the effective filing date to utilize a DPU for implementing the allocation of memory. One would be motivated by the desire to use a DPU to support specialized network-on-chip (NoC) fabrics for inter-processor communication as taught by Sindhu ([0075]).

Regarding claim 13, it is the processor claim of claim 2 above. Therefore, it is rejected for the same reasons as claim 2 above.

Claims 5-6, 16-17, 19, 22-25, and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Saby et al. (US PG Pub No. 2024/0126601 A1) in view of Reddy (US Pat No. 11,960,723), further in view of Zhang et al. (US PG Pub No. 2022/0263913 A1).

Regarding claim 5, Saby and Reddy do not teach further comprising: a subsystem comprising the second processor and a switch to transfer data from the third portion of the pooled volatile memory to the at least one first processor or a fourth portion of the pooled volatile memory. Zhang teaches coupling multiple host systems together using a network switch where requests and responses are communicated with the data center cluster using the memory pool (abstract; [0134]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize a switch to transfer data from the third portion of the pooled volatile memory to the first processor. One would be motivated by the desire to utilize commonly used networking components to enable multiple hosts to communicate with each other.

Regarding claim 6, Zhang teaches wherein the switch is a Compute Express Link ("CXL") switch (abstract; [0134]).

Regarding claims 16-17, they are the processor claims of claims 5-6 above. Therefore, they are rejected for the same reasons as claims 5-6 above.

Regarding claim 19, Saby teaches a method comprising: receiving and allocating. Saby does not teach using a first processor for the receiving and allocating, wherein the second portion of the aggregated volatile memory is not associated with the first processor. Saby is silent regarding the underlying hardware for carrying out the functions. Reddy teaches the use of a memory controller for managing the allocation of memory pools wherein the memory controller is a processor (col 2 lines 2-35). The processor of the memory controller is not associated with the second portion of memory and is only used for the assignment of memory (col 2 lines 25-46). It would have been obvious to one of ordinary skill in the art before the effective filing date to utilize a hardware processor for implementing the allocation of memory. One would be motivated by the desire to offload the allocation to a dedicated memory controller to increase security (col 8 lines 24-27).

Saby and Reddy do not teach causing a subsystem, using the first processor, comprising one or more circuits, and a switch to transfer the data from the second processor to the second portion of the aggregated volatile memory.
Zhang teaches coupling multiple host systems together using a network switch where requests and responses are communicated with the data center cluster using the memory pool (abstract; [0134]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize a switch to transfer data from the third portion of the pooled volatile memory to the first processor. One would be motivated by the desire to utilize commonly used networking components to enable multiple hosts to communicate with each other.

Regarding claim 22, Saby teaches wherein the DPU receives the request from the requesting processor through a data center or a multi-cloud environment ([0056-60]; Figs 1-2).

Regarding claim 23, Saby teaches further comprising: updating a memory map for the aggregated volatile memory to record the allocation of the second portion of the aggregated volatile memory to the requesting processor ([0017]; [0020]).

Regarding claim 24, Zhang teaches wherein the switch is a Compute Express Link ("CXL") switch (abstract; [0134]).

Regarding claim 25, Saby teaches allocating at least a third portion of the aggregated volatile memory to a different processor, the third portion not being directly accessible by the different processor ([0056]).

Regarding claim 28, Saby and Reddy do not teach wherein the receiving processor causes a Compute Express Link ("CXL") switch to transfer the data. Zhang teaches coupling multiple host systems together using a network switch, such as a Compute Express Link ("CXL") switch, where requests and responses are communicated with the data center cluster using the memory pool (abstract; [0134]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize a switch to transfer data from the third portion of the pooled volatile memory to the first processor.
One would be motivated by the desire to utilize commonly used networking components to enable multiple hosts to communicate with each other.

Claims 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Saby et al. (US PG Pub No. 2024/0126601 A1) in view of Reddy (US Pat No. 11,960,723), further in view of Zhang et al. (US PG Pub No. 2022/0263913 A1), further in view of Sindhu et al. (US PG Pub No. 2019/0012278 A1).

Regarding claim 20, Saby, Reddy, and Zhang do not teach wherein the receiving, allocating, and causing are performed by a data processing unit. Sindhu teaches a data processing unit ("DPU") comprising the second processor ([0075]; [0089]). It would have been obvious to one of ordinary skill in the art before the effective filing date to utilize a DPU for implementing the allocation of memory. One would be motivated by the desire to use a DPU to support specialized network-on-chip (NoC) fabrics for inter-processor communication as taught by Sindhu ([0075]).

Regarding claim 21, Sindhu teaches the DPU comprises the subsystem ([0075]; [0089]).

Response to Arguments

Applicant's arguments with respect to claims 1-29 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC C WAI whose telephone number is (571) 270-1012. The examiner can normally be reached Monday-Friday, 9-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aimee Li, can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Eric C Wai/
Primary Examiner, Art Unit 2195

Prosecution Timeline

Feb 10, 2023
Application Filed
Sep 06, 2025
Non-Final Rejection — §103
Nov 07, 2025
Applicant Interview (Telephonic)
Nov 10, 2025
Examiner Interview Summary
Nov 25, 2025
Response Filed
Mar 09, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602261
CONTAINER SCHEDULING ACCORDING TO PREEMPTING A SET OF PREEMPTABLE CONTAINERS DEPLOYED IN A CLUSTER
2y 5m to grant; granted Apr 14, 2026
Patent 12602248
METHOD AND DEVICE OF LAUNCHING AN APPLICATION IN BACKGROUND
2y 5m to grant; granted Apr 14, 2026
Patent 12585498
SYSTEM AND METHOD FOR RESOURCE MANAGEMENT IN DYNAMIC SYSTEMS
2y 5m to grant; granted Mar 24, 2026
Patent 12585503
UNIFIED RESOURCE MANAGEMENT ARCHITECTURE FOR WORKLOAD SCHEDULERS
2y 5m to grant; granted Mar 24, 2026
Patent 12579001
REINFORCEMENT LEARNING SPACE STATE PRUNING USING RESTRICTED BOLTZMANN MACHINES
2y 5m to grant; granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 99% (+27.2%)
Median Time to Grant: 3y 9m
PTA Risk: Moderate
Based on 644 resolved cases by this examiner. Grant probability derived from career allow rate.
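The footnote states that grant probability is derived from the career allow rate, and with the counts shown above that is a one-line division. A minimal sketch (the 99% with-interview figure depends on per-interview case counts not shown on this page, so it is not reproduced here):

```python
# Career totals shown on this page: 529 granted out of 644 resolved.
granted, resolved = 529, 644
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 82.1%, reported on the page as 82%
```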
