Prosecution Insights
Last updated: April 19, 2026
Application No. 19/172,409

TRANSPARENT REMOTE MEMORY ACCESS OVER NETWORK PROTOCOL

Status: Non-Final OA (§103)
Filed: Apr 07, 2025
Examiner: CHEN, WUJI
Art Unit: 2449
Tech Center: 2400 — Computer Networks
Assignee: Enfabrica Corporation
OA Round: 3 (Non-Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 71% (above average) — 170 granted / 239 resolved; +13.1% vs TC avg
Interview Lift: +37.8% (strong), for resolved cases with interview
Typical Timeline: 3y 1m average prosecution; 26 applications currently pending
Career History: 265 total applications across all art units
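The headline figures above can be cross-checked with a little arithmetic: the 71% career allow rate follows from the raw counts (170 granted of 239 resolved), and the "+13.1% vs TC avg" delta implies the Tech Center baseline. A minimal sketch, with a hypothetical helper name (`allow_rate` is not from the dashboard):

```python
# Hypothetical helper: reproduce the dashboard's career allow-rate figure
# from the raw counts shown above (170 granted of 239 resolved cases).
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage, rounded to one decimal place."""
    return round(100 * granted / resolved, 1)

rate = allow_rate(170, 239)          # matches the displayed 71% (71.1)
tc_delta = 13.1                      # "+13.1% vs TC avg" from the dashboard
tc_avg = round(rate - tc_delta, 1)   # implied Tech Center average: 58.0
print(rate, tc_avg)
```

The same subtraction recovers the baseline the dashboard compares against, assuming the delta is expressed in percentage points.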

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 65.6% (+25.6% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)
Black line = Tech Center average estimate. Based on career data from 239 resolved cases.
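The per-statute deltas above can be inverted to recover the implied Tech Center baselines (the "black line"). A hypothetical reconstruction, assuming each delta is in percentage points relative to the TC average for that statute:

```python
# Hypothetical reconstruction of the implied Tech Center averages from the
# statute-specific rates and their "vs TC avg" deltas shown above.
stats = {            # statute: (rate %, delta vs TC avg in points)
    "101": (5.5, -34.5),
    "103": (65.6, +25.6),
    "102": (9.5, -30.5),
    "112": (10.9, -29.1),
}
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(tc_avg)  # every statute implies the same 40.0% baseline
```

Notably, all four deltas imply the identical 40.0% baseline, suggesting the dashboard compares each statute against a single common anchor rather than a per-statute TC average.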

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is in response to the communication filed on 12/4/2025. Claims 1-20 are pending. Claims 1, 3, 4, 9, 11, 13, 14 and 19 have been amended.

Response to Arguments

Applicant's arguments filed on 12/4/2025 with respect to claims 1-20 have been fully considered but they are not persuasive. In the communication filed, Applicant argues in substance that:

a. Regarding claims 1 and 11, Applicant argues (Remarks, pages 6-7): "Shpiner appears to relate to 'methods and systems for scheduling packet transmission in a switch for reducing cache-miss events at a destination network node.' Shpiner at para. [0001]. Shpiner describes 'packet 100 comprising a header 104 and a payload 108,' and header 104 'comprises various fields such as flow identifier 112 and an attribute 116,' where both the terms 'cache key' and 'attribute' refer to 'the header field numbered 116.' Id. at para. [0056]. Shpiner further describes 'the attribute in the packet's header that is used by the destination network node as the cache key comprises a destination virtual address to be translated (using a respective cached context item) into a physical address in system memory 44 for storing payload 108 of the received packet.' Id. at para. [0057]. Shpiner, however, does not teach or suggest 'the request payload comprising a virtual memory address.' Rather, Shpiner's 'attribute' is included in 'the packet's header.' Shpiner also fails to teach or suggest 'the request payload comprising ... a memory access request,' and 'the memory access request' includes 'a memory read request or a memory write,' as recited in amended independent claim 1."

In response to argument [a], the Examiner respectfully disagrees.
Shpiner teaches this limitation at [0056], Fig. 2: "the packets received in switch 24A include packets such as packet 100 comprising a header 104 and a payload 108. Header 104 comprises various fields such as flow identifier 112 and an attribute 116 to be used in the destination network node as a cache key. In the description that follows, the terms 'cache key' and 'attribute' (which is used by the destination network node as the cache key) are used interchangeably, and both terms refer to the header field numbered 116 in the figure." And at [0057]: "the cache key specified by attribute 116 comprises a key that is used at the destination network adapter for accessing, in cache memory 64, the context item required for processing packet 100 by protocol processor 60. In an embodiment, the attribute in the packet's header that is used by the destination network node as the cache key comprises a destination virtual address to be translated (using a respective cached context item) into a physical address in system memory 44 for storing payload 108 of the received packet. In other embodiments, the attribute that is used by the destination network node as the cache key comprises a destination QP number or a RDMA message number."

[Examiner notes: the destination virtual address interprets to be the network address of a destination SFA associated with the remote memory. The request is a memory write request because it writes/stores payload 108 of the received packet into the system memory.]

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/4/2025 has been entered.
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

1. Claims 1-3, 7, 10, 11-13, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Greenfield (US 9049265 B1) in view of Shpiner (US 20190173810 A1).

With respect to independent claims:

Regarding claim 1, a method for providing memory access, the method comprising: receiving, at a destination server fabric adapter (SFA) from a source SFA coupled to a server (Greenfield, col.5, lines 45-65; col.6, lines 1-16; FIGs.1-4; both the machines 110a-n and the client 120 may each have a network interface controller 116 for network communications. The InfiniBand network 130 conveys direct memory access (DMA) requests 122 from a client 120 to a machine 110 (storage server). At the machine 110, a DMA-capable InfiniBand network interface controller (NIC) 116 performs reads and writes of the storage resource 114 (e.g., DRAM). DMA uses zero-copy, OS-bypass to provide high throughput, low latency access to data (e.g., 4 GB/s of bandwidth and 5 microsecond latency). [Examiner notes: the DMA-capable InfiniBand network interface controller (NIC) 116 is equivalent to the destination server fabric adapter.]);

translating, at the destination SFA, the virtual memory address into a physical memory address of a local memory associated with the destination SFA (Greenfield, col.7, lines 55-65; FIG.4; both the storage controller 115 and the network interface controller 116 can translate (e.g., map) the same virtual address 174 (e.g., pointer) to the same physical address 182, such as that of a buffer 180); and

accessing, by the destination SFA, memory at the physical memory address according to the memory access request (Greenfield, col.5, lines 30-40; FIG.4; upon receiving a client request 122 for access to data, the server process 118 issues a DMA command 190 to the storage resources 114 to load the data 10 to a memory location 182 in the DMA memory region 180. [Examiner notes: a memory location is equivalent to a memory address.]).

Greenfield fails to teach a request message comprising a request header and a request payload, the request header comprising a network address of the destination SFA, and the request payload comprising a virtual memory address and a memory access request comprising a memory read request or a memory write request. Shpiner, however, in the same field of computer networking, teaches a request message comprising a request header and a request payload (Shpiner, [0056]; the packets received in switch 24A include packets such as packet 100 comprising a header 104 and a payload 108), the request header comprising a network address of the destination SFA (Shpiner, [0044]; the packet processing module typically checks certain fields in the packet headers such as source and destination addresses, port numbers, and the underlying network protocol used), and the request payload comprising a virtual memory address and a memory access request comprising a memory read request or a memory write request (Shpiner, [0056], Fig.2; the packets received in switch 24A include packets such as packet 100 comprising a header 104 and a payload 108. Header 104 comprises various fields such as flow identifier 112 and an attribute 116 to be used in the destination network node as a cache key. In the description that follows, the terms 'cache key' and 'attribute' (which is used by the destination network node as the cache key) are used interchangeably, and both terms refer to the header field numbered 116 in the figure. [0057]; the cache key specified by attribute 116 comprises a key that is used at the destination network adapter for accessing, in cache memory 64, the context item required for processing packet 100 by protocol processor 60. In an embodiment, the attribute in the packet's header that is used by the destination network node as the cache key comprises a destination virtual address to be translated (using a respective cached context item) into a physical address in system memory 44 for storing payload 108 of the received packet. In other embodiments, the attribute that is used by the destination network node as the cache key comprises a destination QP number or a RDMA message number. [Examiner notes: the destination virtual address interprets to be the network address of a destination SFA associated with the remote memory. The request is a memory write request because it writes/stores payload 108 of the received packet into the system memory.]).

Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to have modified the system/method of Greenfield to specify a request message comprising a request header and a request payload, the request header comprising a network address of the destination SFA, and the request payload comprising a virtual memory address and a memory access request comprising a memory read request or a memory write request, as taught by Shpiner. The motivation/suggestion would have been that there is a need to reduce the cache-miss rate at a destination network node (Shpiner, [0001]).

Regarding claim 11, a server fabric adapter (SFA) communication system comprising: Greenfield teaches a source SFA coupled to a server (Greenfield, col.5, lines 45-65; col.6, lines 1-16; FIGs.1-4; both the machines 110a-n and the client 120 may each have a network interface controller 116 for network communications. The InfiniBand network 130 conveys direct memory access (DMA) requests 122 from a client 120 to a machine 110 (storage server). At the machine 110, a DMA-capable InfiniBand network interface controller (NIC) 116 performs reads and writes of the storage resource 114 (e.g., DRAM). DMA uses zero-copy, OS-bypass to provide high throughput, low latency access to data (e.g., 4 GB/s of bandwidth and 5 microsecond latency). [Examiner notes: the DMA-capable InfiniBand network interface controller (NIC) 116 is equivalent to the destination server fabric adapter.]); a destination SFA communicatively coupled to a local memory (Greenfield, FIGs.3-4 show a NIC 116 communicatively coupled to the local memory 114 and a shared memory region 180);
a request message received by the destination SFA from the source SFA (Greenfield, col.5, lines 45-65; col.6, lines 1-16; FIGs.1-4; both the machines 110a-n and the client 120 may each have a network interface controller 116 for network communications. The InfiniBand network 130 conveys direct memory access (DMA) requests 122 from a client 120 to a machine 110 (storage server). At the machine 110, a DMA-capable InfiniBand network interface controller (NIC) 116 performs reads and writes of the storage resource 114 (e.g., DRAM). DMA uses zero-copy, OS-bypass to provide high throughput, low latency access to data (e.g., 4 GB/s of bandwidth and 5 microsecond latency). [Examiner notes: the DMA-capable InfiniBand network interface controller (NIC) 116 is equivalent to the destination server fabric adapter.]); and a physical memory address of the local memory translated by the destination SFA from the virtual memory address (Greenfield, col.5, lines 30-40; FIG.4; upon receiving a client request 122 for access to data, the server process 118 issues a DMA command 190 to the storage resources 114 to load the data 10 to a memory location 182 in the DMA memory region 180. [Examiner notes: a memory location is equivalent to a memory address.]).

Greenfield fails to teach a request header in the request message having a network address of the destination SFA, and a request payload in the request message having a virtual memory address and a memory access request comprising a memory read request or a memory write request. Shpiner, however, in the same field of computer networking, teaches a request header in the request message having a network address of the destination SFA (Shpiner, [0044]; the packet processing module typically checks certain fields in the packet headers such as source and destination addresses, port numbers, and the underlying network protocol used), and a request payload in the request message having a virtual memory address and a memory access request comprising a memory read request or a memory write request (Shpiner, [0056], Fig.2; the packets received in switch 24A include packets such as packet 100 comprising a header 104 and a payload 108. Header 104 comprises various fields such as flow identifier 112 and an attribute 116 to be used in the destination network node as a cache key. In the description that follows, the terms 'cache key' and 'attribute' (which is used by the destination network node as the cache key) are used interchangeably, and both terms refer to the header field numbered 116 in the figure. [0057]; the cache key specified by attribute 116 comprises a key that is used at the destination network adapter for accessing, in cache memory 64, the context item required for processing packet 100 by protocol processor 60. In an embodiment, the attribute in the packet's header that is used by the destination network node as the cache key comprises a destination virtual address to be translated (using a respective cached context item) into a physical address in system memory 44 for storing payload 108 of the received packet. In other embodiments, the attribute that is used by the destination network node as the cache key comprises a destination QP number or a RDMA message number. [Examiner notes: the destination virtual address interprets to be the network address of a destination SFA associated with the remote memory. The request is a memory write request because it writes/stores payload 108 of the received packet into the system memory.]).

Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to have modified the system/method of Greenfield to specify a request header in the request message having a network address of the destination SFA, and a request payload in the request message having a virtual memory address and a memory access request comprising a memory read request or a memory write request, as taught by Shpiner. The motivation/suggestion would have been that there is a need to reduce the cache-miss rate at a destination network node (Shpiner, [0001]).

With respect to dependent claims:

Regarding claim 2, the method of claim 1, further comprising: Greenfield-Shpiner teach synthesizing, by the destination SFA, a response message comprising a response payload (Greenfield, col.5, lines 30-45; FIGs.3-4; the response 124 includes an information portion 126 storing information 12 (e.g., metadata) and a data portion 128 for the actual data 10. The command 192 to the NIC 116 may be a vector 194 of at least two data buffers 196a, 196b, where one of the data buffers 196a, 196b is memory location 182. At no point does the server process 118 actually perform instructions to read the contents of memory location 182. Col.7, lines 10-45; FIGs.3-4; when the client 120 executes a read request 122 to read data 10 from the server 110, the server 110 reads and sends the requested data 10 and its associated integrity value 11, 11b to the client 120 in a response 124); and transmitting, by the destination SFA, the response message to the source SFA (Greenfield, col.5, lines 30-45; FIGs.3-4; upon receiving a client request 122 for access to data, the server process 118 issues a DMA command 190 to the storage resources 114 to load the data 10 to a memory location 182 in the DMA memory region 180. Upon completion, the server process 118 issues a command 192 to the network interface controller (NIC) 116, instructing the NIC 116 to send a response 124 to the client 120).

Regarding claim 3, the method of claim 2: Greenfield-Shpiner teach wherein the memory access request comprises the memory read request (Greenfield, col.4, lines 65-67; col.5, lines 1-11; the storage server 110 may receive a request 122 from a client 120, recognize the type of request 122, initiate a memory read (e.g., a disk read of the device) and return data 10 for execution of the request 122 to the kernel storage space 160 of the storage resources 114), and the response payload comprises information read from the memory at the physical memory address (Greenfield, col.7, lines 10-45; FIGs.3-4; when the client 120 executes a read request 122 to read data 10 from the server 110, the server 110 reads and sends the requested data 10 and its associated integrity value 11, 11b to the client 120 in a response 124).

Regarding claim 7, the method of claim 1: Greenfield-Shpiner teach wherein the request message is received at the destination SFA using a network protocol, and the network protocol comprises a datagram-based protocol or a byte-stream-based protocol (Shpiner, [0070]; the packets received in switch 24A and that are destined to the destination network node are communicated end-to-end using various communication protocols. For example, some of these packets are communicated using the RDMA over Converged Ethernet (RoCE) protocol, whereas other packets are communicated using the Transmission Control Protocol (TCP), a byte-stream-based protocol). The same motivation to combine as for parent claim 1 applies here.

Regarding claim 10, the method of claim 1: Greenfield-Shpiner teach wherein the memory at the physical memory address comprises one or more of random access memory (RAM), read-only memory (ROM), flash memory, or dynamic RAM (DRAM) (Greenfield, FIG.1; dynamic random access memory (DRAM)).

Claims 12, 13, 17 and 20 are substantially similar to claims 2, 3, 7 and 10, respectively, and are thus rejected under substantially the same rationale.

2. Claims 4, 6, 14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Greenfield in view of Shpiner, further in view of Lee (US 20200264985 A1).

Regarding claim 4, the method of claim 2: Greenfield-Shpiner teach wherein the memory access request comprises the memory write request, and the method further comprises storing, by the destination SFA, information from the memory write request in the memory at the physical memory address (Greenfield, col.6, lines 50-65; FIGs.1-4; when the client 120 executes a write request 122 to write data 10 to the server 110, the client 120 computes a first integrity value 11a (e.g., hash) of the data 10 and sends the first integrity value 11a to the server 110 with the data 10. The server 110 proceeds to write the data 10 to its storage resources 114 without use of the computational resources 112 (e.g., via direct memory access through the network interface controller 116) and computes a second integrity value 11b. If the first and second integrity values 11a, 11b do not match, the server 110 may raise an error. The server 110 may store the second integrity value 11b in the metadata 12 associated with the data 10).

Greenfield-Shpiner do not teach that the response payload comprises an acknowledgement. Lee, however, in the same field of computer networking, teaches that the response payload comprises an acknowledgement (Lee, [0145]; when the write data WT_DAT is a type of the new write data, the memory system 110 selects a second physical address PA_2 which is in an unassigned state where a logical address is not assigned. The memory system 110 performs the write operation of the write data WT_DAT on the second physical address PA_2. The memory system 110 may generate the map data by mapping the first logical address LA_1 to the second physical address PA_2 on which the write operation has been performed. The memory system 110 may transmit a third acknowledgement ACK3 including a message indicating that the write operation has been completely performed, to the host 102. [0148]; in step S195, the memory system 110 performs the write operation of the write data WT_DAT on the second physical address PA_2. In step S225, the memory system 110 may search a physical address corresponding to the first logical address LA_1 in the map data (L2P controller map data L2P_MAP_C stored in the memory 144)).

Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to have modified the system/method of Greenfield to specify that the response payload comprises an acknowledgement, as taught by Lee.
The motivation/suggestion would have been that there is a need to improve an internal operation related to a write operation performed within the memory system, whereby the efficiency of invalid data management is also improved (Lee, [0005]).

Regarding claim 6, the method of claim 1: Greenfield-Shpiner-Lee teach further comprising: receiving, at the destination SFA from the source SFA, a subsequent request message; and transmitting, by the destination SFA and in response to the subsequent request message, a no-acknowledgement (NACK) response to the source SFA when the memory at the physical memory address is unavailable (Lee, [0140]; the first acknowledgement ACK1 may further include a message indicating that the first physical address PA_1 received from the host 102 has been invalidated. [Examiner notes: the first request interprets to be a subsequent request message.]). The same motivation to combine as for dependent claim 4 applies here.

Claims 14 and 16 are substantially similar to claims 4 and 6, respectively, and are thus rejected under substantially the same rationale.

3. Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Greenfield in view of Shpiner, further in view of Bhabbur (US 20190141041 A1).

Regarding claim 5, the method of claim 1: Greenfield-Shpiner do not teach wherein the request message comprises a cryptographic authentication token, and the method further comprises using the cryptographic authentication token to authenticate the source SFA or the server. Bhabbur, however, in the same field of computer networking, teaches this limitation (Bhabbur, [0022]; authorized sessions may be issued an authentication token. Some embodiments are expected to enhance RDMA communication security by having a receiver agent configured to validate an authentication token prior to serving the request for RDMA read or write operation. [0132]; the authentication token may include these values in an encrypted or cryptographically signed ciphertext that serves as the authentication token). Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to have modified the system/method of Greenfield to specify wherein the request message comprises a cryptographic authentication token, and the method further comprises using the cryptographic authentication token to authenticate the source SFA or the server, as taught by Bhabbur. The motivation/suggestion would have been that there is a need to increase the security of the data exchange between the participating computing devices (also referred to as nodes or peers) (Bhabbur, [0015]).

Claim 15 is substantially similar to claim 5, and is thus rejected under substantially the same rationale.

4. Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Greenfield in view of Shpiner, further in view of Chinya (US 20110072234 A1).

Regarding claim 8, the method of claim 1: Greenfield-Shpiner do not teach wherein the memory access request is processed through a peripheral component interconnect express (PCIe) interface. Chinya, however, in the same field of computer networking, teaches this limitation (Chinya, [0044]; for accelerators coupled to a PCIe™ bus, as the bus is non-coherent, the underlying run-time software may implement the software based coherence mechanism.)
Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to have modified the system/method of Greenfield to specify wherein the memory access request is processed through a peripheral component interconnect express (PCIe) interface, as taught by Chinya. The motivation/suggestion would have been that there is a need to create a shared memory model as seen by the programmer and depend on memory protection mechanisms to fault and move the pages back and forth between different memories (Chinya, [0002]).

Claim 18 is substantially similar to claim 8, and is thus rejected under substantially the same rationale.

5. Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Greenfield in view of Shpiner, further in view of Dusanapudi (US 20190188146 A1).

Regarding claim 9, the method of claim 1: Greenfield-Shpiner do not teach wherein the memory access request is processed through a compute express link (CXL) interface. Dusanapudi, however, in the same field of computer networking, teaches this limitation (Dusanapudi, [0002]; when a process sends a request to a processing core to read data from, or write data to, a particular virtual address, the MMU queries the page table (or a translation lookaside buffer) to identify the corresponding physical address. The processing core then uses the physical address to perform the read or write requested by the process. [0025]; the non-core hardware 120 includes a compression engine 125, crypto engine 130, coherent accelerator processor interface (CAPI) 135, and/or graphics processing unit (GPU) accelerator 140, which are located in the chip 105 external to the processing core 110. CAPI 135 permits requesting components external to the processor chip 105 to use the non-core MMU 145 to perform address translations. [Examiner notes: coherent accelerator processor interface (CAPI) is equivalent to the compute express link (CXL) interface.]).

Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to have modified the system/method of Greenfield to specify wherein the memory access request is processed through a compute express link (CXL) interface, as taught by Dusanapudi. The motivation/suggestion would have been that there is a need to test every variant or type of translation request that may be submitted to the non-core MMU during runtime in order to catch any bugs or problems with its functionality (Dusanapudi, [0043]).

Claim 19 is substantially similar to claim 9, and is thus rejected under substantially the same rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Any inquiry concerning this communication or earlier communications from the examiner should be directed to WUJI CHEN, whose telephone number is (571) 270-0365. The examiner can normally be reached 9am-6pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, VIVEK SRIVASTAVA, can be reached at (571) 272-7304. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WUJI CHEN/
Examiner, Art Unit 2449

/NICHOLAS P CELANI/
Primary Examiner, Art Unit 2449
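The limitation at the center of this §103 dispute is where the virtual address travels: the amended claims put it in the request *payload*, while Shpiner's attribute 116 sits in the packet *header*. The claimed message shape and destination-side translation can be sketched as follows. This is purely illustrative and not drawn from the application or any cited reference; every name, field, and the toy page table are hypothetical:

```python
# Illustrative sketch of the claimed request-message format: a header
# carrying the destination SFA's network address, and a payload carrying
# the virtual memory address plus a read-or-write memory access request.
# All identifiers here are hypothetical, not from the application.
from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class RequestHeader:
    dest_sfa_addr: str            # network address of the destination SFA

@dataclass
class RequestPayload:
    virtual_addr: int             # virtual memory address, carried in the
                                  # payload (unlike Shpiner's header field 116)
    op: Literal["read", "write"]  # the memory access request
    data: Optional[bytes] = None  # write data, present only for writes

PAGE = 4096
# Toy destination-side page table: virtual page -> physical page.
page_table = {0x7f00_0000 // PAGE: 0x0001_2000 // PAGE}

def translate(va: int) -> int:
    """Destination-side translation of a virtual address to a physical one."""
    return page_table[va // PAGE] * PAGE + va % PAGE

msg = (RequestHeader("sfa-dst-01"),
       RequestPayload(virtual_addr=0x7f00_0010, op="read"))
pa = translate(msg[1].virtual_addr)
print(hex(pa))  # 0x12010
```

The sketch only mirrors the claim language's division of labor (addressing in the header, memory semantics in the payload); it says nothing about wire encoding or the actual SFA design.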

Prosecution Timeline

Apr 07, 2025
Application Filed
May 02, 2025
Non-Final Rejection — §103
Aug 12, 2025
Response Filed
Aug 28, 2025
Final Rejection — §103
Dec 04, 2025
Request for Continued Examination
Dec 18, 2025
Response after Non-Final Action
Jan 02, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603932
REMOTE DESKTOP INFRASTRUCTURE
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12598155
GEOCODING WITH GEOFENCES
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12572482
A NOVEL DATA PROCESSING ARCHITECTURE AND RELATED PROCEDURES AND HARDWARE IMPROVEMENTS
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12549924
SYSTEMS, METHODS AND APPARATUS FOR GEOFENCE NETWORKS
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12526224
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR SELECTING NETWORK FUNCTION (NF) PROFILES OF NF SET MATES TO ENABLE ALTERNATE ROUTING
Granted Jan 13, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71%
With Interview (+37.8%): 99%
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 239 resolved cases by this examiner. Grant probability derived from career allow rate.
