Prosecution Insights
Last updated: April 19, 2026
Application No. 18/943,133

TECHNIQUES FOR IMPLEMENTING REMOTE DIRECT MEMORY ACCESS THROUGH A DATA PROCESSING UNIT

Non-Final OA — §102
Filed: Nov 11, 2024
Examiner: CHOU, ALAN S
Art Unit: 2451
Tech Center: 2400 — Computer Networks
Assignee: Nvidia Corporation
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
With Interview: 89%

Examiner Intelligence

Career Allow Rate: 75%, above average (478 granted / 636 resolved; +17.2% vs TC avg)
Interview Lift: +13.7% on resolved cases with interview (moderate)
Typical Timeline: 3y 2m average prosecution
Career History: 651 total applications across all art units; 15 currently pending

Statute-Specific Performance

§101: 11.3% (-28.7% vs TC avg)
§103: 48.1% (+8.1% vs TC avg)
§102: 24.3% (-15.7% vs TC avg)
§112: 3.9% (-36.1% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 636 resolved cases.

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented for examination.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-5, 7-15, and 17-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Xu et al., U.S. Patent Application Publication Number 2024/0388545 A1 (hereinafter Xu).

As per claims 1, 11, and 20, Xu discloses a computer-implemented method for processing requests to access a shared storage system, the method comprising: receiving a first storage request (see gateway device receiving request from client on page 26 section [0416] and Figure 12) from a proxy driver (see application 101, or proxy driver as claimed, generating a RDMA write request, or first storage request as claimed, on page 9 section [0148] and Figure 6) executing on a host node (see client 100, or host node as claimed, sending RDMA requests on page 9 section [0148] and Figure 6), wherein the first storage request indicates a location within the shared storage system (see RDMA write request packet include a logical address of a memory space of an RDMA storage node 200, or location within the shared storage system as claimed, on page 9 section [0148]) and a first address range associated with the host node (see logical address range on page 31 section [0493] and see client, or host node, address is sent to the gateway device on page 22 section [0353]); converting the first storage request to a second storage request that indicates the location (see converting the request packet into RDMA requests, or second storage request as claimed, corresponding to destination address on page 26 section [0416]) and a second address range associated with a proxy node (see gateway device as proxy node or DPU as claimed on storing address information on page 22 section [0353] and Figure 10 and storing logical address corresponding to memory space on page 27 section [0426] and Figure 14); and transmitting the second storage request to a storage driver (see application 201, or storage driver as claimed, executes on RDMA storage node 200, or shared storage system as claimed, on page 9 section [0148] and Figure 6) that executes on the proxy node and is associated with the shared storage system (see sending the converted RDMA request packet, or second storage request as claimed, to RDMA storage node, or shared storage system as claimed, on page 26 section [0416]), wherein the storage driver invokes a remote direct memory access (RDMA) data transfer operation between the shared storage system and the host node to fulfill the first storage request (see RDMA storage node, or shared storage system as claimed, perform the read/write operation request for the client, or host node as claimed, on page 26 section [0416]).

As per claims 2 and 12, Xu discloses the computer-implemented method of claim 1, wherein converting the first storage request to the second storage request comprises mapping the first address range to the second address range (see mapping address information between client and the storage device at the gateway device on page 22 section [0353]).
As per claims 3 and 13, Xu discloses the computer-implemented method of claim 1, wherein converting the first storage request to the second storage request comprises generating a shadow buffer (see gateway device operate a cache, or shadow buffer as claimed, for performing data read/write operation on pages 26-27 section [0420] and Figure 13) that resides on the proxy node (see gateway device as proxy node as claimed on storing address information on page 22 section [0353] and Figure 10 and storing logical address corresponding to memory space on page 27 section [0426] and Figure 14) and represents a host buffer that resides on the host node and corresponds to the first address range (see buffer 102, or host buffer as claimed, at client 100, or host node as claimed, on page 9 section [0148] and Figure 6).

As per claims 4 and 14, Xu discloses the computer-implemented method of claim 1, wherein transmitting the second storage request to the storage driver comprises causing a virtual file system (see gateway device, or proxy node as claimed, virtualizes the RDMA memory space and sets up address space conversion on page 27 section [0430]) executing on the proxy node to route the second storage request to the storage driver based on the location (see sending the converted RDMA request packet, or second storage request as claimed, to RDMA storage node, or shared storage system as claimed, on page 26 section [0416]).
As per claims 5 and 15, Xu discloses the computer-implemented method of claim 1, wherein invoking the RDMA data transfer operation comprises converting the second storage request to an RDMA request that indicates the location (see RDMA write request packet include a logical address of a memory space of an RDMA storage node 200, or location within the shared storage system as claimed, on page 9 section [0148]), the first address range (see gateway device, or proxy node as claimed, virtualizes the RDMA memory space and sets up address space conversion on page 27 section [0430]), and a remote key that exposes at least a portion of a physical memory included in the host node to the shared storage system (see remote key for access permission to memory on page 9 section [0149]).

As per claims 7 and 17, Xu discloses the computer-implemented method of claim 1, wherein invoking the RDMA data transfer operation comprises mapping (see gateway device, or proxy node as claimed, virtualizes the RDMA memory space and sets up address space conversion to destination address on page 27 section [0430]) the second address range to the first address range (see RDMA write request packet include a logical address of a memory space of an RDMA storage node 200, or location within the shared storage system as claimed, on page 9 section [0148]) and a remote key that exposes at least a portion of a physical memory included in the host node to the shared storage system (see remote key for access permission to memory on page 9 section [0149]).

As per claims 8 and 18, Xu discloses the computer-implemented method of claim 1, wherein invoking the RDMA data transfer operation comprises causing data to be copied from the shared storage system to the host node in accordance with the first storage request (see fulfilling RDMA write request to copy data to host node on page 9 section [0148]).
As per claim 9, Xu discloses the computer-implemented method of claim 1, wherein the first storage request is issued by a software application executing on either a central processing unit included in the host node or a graphics processing unit included in the host node (see certain RDMA operation requires CPU to operate on page 9 section [0150] and see CPU usually performs the task of addressing on page 17 section [0265]).

As per claim 10, Xu discloses the computer-implemented method of claim 1, wherein the shared storage system comprises at least one of shared file storage, shared block storage, or object storage (see memory storage space allocated in pages on page 14 section [0223]).

As per claim 19, Xu discloses the one or more non-transitory computer readable media of claim 11, wherein the second storage request is issued by a proxy application that resides in a user space of the DPU (see gateway device, or DPU as claimed, runs support function, or proxy application as claimed, to convert request to second storage requests in RDMA instructions on page 16 section [0254]).

Allowable Subject Matter

Claims 6 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Ameling et al., U.S. Patent Number 11,962,434 B2: proxy translating request (see section [0039]).
Gum Bernat et al., U.S. Patent Application Publication Number 2023/0375994 A1: proxy for request (see section [0023]) and remote direct memory access (see section [0044]).
Chandrasekaran et al., U.S. Patent Application Publication Number 2023/0244522 A1: proxy to convert request format (see section [0050]) and remote direct memory access (see section [0166]).
Zhang et al., U.S. Patent Application Publication Number 2023/0239326 A1: RDMA (see section [0068]) and communication proxy converting access request (see section [0166]).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAN S CHOU, whose telephone number is (571) 272-5779. The examiner can normally be reached Monday-Friday, 9:00-5:00 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chris L Parry, can be reached at (571) 272-8328. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALAN S CHOU/
Primary Examiner, Art Unit 2451
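For readers less familiar with the technology at issue, the flow recited in claim 1 (a proxy driver on a host node issues a storage request; a proxy node, here a DPU, converts the host address range to a proxy-local shadow range; a storage driver then invokes an RDMA transfer) can be sketched in a few lines. This is a hypothetical illustration only; the names `StorageRequest`, `ProxyNode`, and `storage_driver` are invented for this sketch and do not come from the application or from Xu.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class StorageRequest:
    location: str   # location within the shared storage system
    addr_base: int  # start of the associated address range
    addr_len: int   # length of the address range
    node: str       # node the range belongs to ("host" or "proxy")

class ProxyNode:
    """Hypothetical DPU-side converter: maps each host address range to a
    proxy-local shadow range before handing the request to the storage driver."""

    def __init__(self) -> None:
        self._shadow: dict[tuple[int, int], int] = {}  # (host base, len) -> proxy base
        self._next_base = 0x1000

    def convert(self, first: StorageRequest) -> StorageRequest:
        key = (first.addr_base, first.addr_len)
        if key not in self._shadow:
            # Allocate a shadow range once per distinct host range.
            self._shadow[key] = self._next_base
            self._next_base += first.addr_len
        # Second storage request: same location, proxy-local address range.
        return replace(first, addr_base=self._shadow[key], node="proxy")

def storage_driver(second: StorageRequest) -> str:
    # In a real system this would post an RDMA read/write between the shared
    # storage and the host's physical memory, exposed via a remote key.
    return f"RDMA transfer for {second.location} using proxy range {hex(second.addr_base)}"
```

As a usage sketch: `proxy.convert(StorageRequest("/vol0/blk42", 0x7F000000, 4096, "host"))` yields a second request with the same location but a proxy-side address range, which `storage_driver` would then fulfill via RDMA. The dispute in the §102 rejection is whether Xu's gateway-device address conversion discloses exactly this host-to-proxy range mapping.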

Prosecution Timeline

Nov 11, 2024: Application Filed
Feb 19, 2026: Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598189: CONTENT COLLABORATION SYSTEM HAVING ACCESS CONTROLS FOR PUBLIC ACCESS TO DIGITAL CONTENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598270: GENERATING AND PROVIDING IN-MEETING COACHING FOR VIDEO CALLS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596761: Systems and methods for generating and utilizing lookalike Uniform Resource Locators (URLs) (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598224: MOBILE PEER-TO-PEER NETWORKS AND RELATED APPLICATIONS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12562992: PROXY STATE SIGNALING FOR NETWORK OPTIMIZATIONS (granted Feb 24, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75% (89% with interview, a +13.7% lift)
Median Time to Grant: 3y 2m
PTA Risk: Low

Based on 636 resolved cases by this examiner. Grant probability derived from career allow rate.
