Prosecution Insights
Last updated: April 19, 2026
Application No. 18/791,237

DESTINATION INITIATED NETWORK TRANSMISSION FOR DATA STREAMS

Final Rejection (§103, §112)
Filed: Jul 31, 2024
Examiner: NGUYEN, LINH T
Art Unit: 2459
Tech Center: 2400 (Computer Networks)
Assignee: Nvidia Corporation
OA Round: 2 (Final)
Grant Probability: 70% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 9m
Grant Probability with Interview: 96%

Examiner Intelligence

Career Allow Rate: 70% (above average): 248 granted / 354 resolved, +12.1% vs Tech Center average
Interview Lift: +26.0% (strong), measured across resolved cases with an interview
Typical Timeline: 2y 9m average prosecution, 30 applications currently pending
Career History: 384 total applications across all art units
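As a quick sanity check, the examiner-panel figures above are mutually consistent. The short script below recomputes them from the raw counts; the aggregation formulas are assumptions for illustration, not the analytics vendor's actual methodology.

```python
# Illustrative recomputation of the examiner-panel metrics above.
# The counts (248 granted / 354 resolved) come from the panel; the
# formulas relating them are assumed, not taken from the tool itself.

granted, resolved = 248, 354
allow_rate = granted / resolved               # career allowance rate
tc_avg = allow_rate - 0.121                   # panel reports +12.1% vs TC avg

print(f"Career allow rate: {allow_rate:.1%}")   # ~70.1%, displayed as 70%
print(f"Implied TC average: {tc_avg:.1%}")      # ~58.0%

# Interview lift: difference in allowance rate between resolved cases
# with and without an examiner interview (+26.0 points per the panel).
base, with_interview = 0.70, 0.96
print(f"Interview lift: {with_interview - base:+.1%}")
```

Note that the 96% "with interview" prediction is exactly the 70% baseline plus the +26-point interview lift, which suggests the tool applies the lift additively.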

Statute-Specific Performance

§101: 8.5% (-31.5% vs TC avg)
§103: 64.2% (+24.2% vs TC avg)
§102: 9.2% (-30.8% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)

Tech Center averages are estimates; based on career data from 354 resolved cases.
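The statute-specific deltas above are internally consistent: subtracting each reported delta from the examiner's rate recovers the same Tech Center baseline for every statute. A small sketch of that arithmetic (the panel does not state its formula, so treating "delta = rate minus TC average" is an assumption):

```python
# Implied Tech Center averages from the statute-specific panel above.
# Assumption: the reported delta is (examiner rate - TC average).

rates = {"§101": (8.5, -31.5), "§103": (64.2, 24.2),
         "§102": (9.2, -30.8), "§112": (13.8, -26.2)}

for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta
    print(f"{statute}: {rate:.1f}% (implied TC avg {tc_avg:.1f}%)")
```

All four statutes imply the same 40.0% Tech Center baseline, which is consistent with the panel comparing each statute share against a single estimated average.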

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 1/14/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

Acknowledgment is made that claims 1 and 10 are amended. Claims 1-20 are pending in the instant application.

Response to Arguments

Applicant's arguments, see Remarks filed on 1/5/2026, have been fully considered.

Claim Rejections under 35 USC § 112

Claims 10 and 16-20 were rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as failing to set forth the subject matter which the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the applicant regards as the invention. Claims 10 and 16 have been amended. Therefore, the rejection is withdrawn.

Claim Rejections under 35 USC § 103

Claims 1-8 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (US 2022/0179560), hereinafter Liu, in view of Epstein et al. (US 2020/0311014), hereinafter Epstein, further in view of Voloshin (US 2025/0133134).

Claim 1 has been amended as follows: "send Remote Direct Memory Access (RDMA) initialization information, having instructions to initialize transmission for transmitting over a RDMA connection to be initialized, to a communication management device in local communication with the remote sensor" (emphasis added). On page 12 of the Remarks, Applicant argues that the prior art of record fails to teach the amended features recited in claim 1, in particular the limitation that the initialization information has "instructions to initialize transmission" over a RDMA connection "to be initialized." Applicant's argument is persuasive. Therefore, a new ground of rejection is made in light of the amendment.
Regarding independent claims 9 and 16, Applicant argues these claims are allowable at least for reasons including some of those discussed in connection with claim 1. Applicant refers to the limitations "sending commands from the host device to the one or more circuits to allow for transmission of the streaming data" (claim 9) and "wherein the communication path is established by the destination device to write at least a portion of the sensor stream to memory of at least the destination device" (claim 16). Applicant argues that the proposed combination of Liu and Epstein does not teach the subject matter as recited in claims 9 and 16.

The examiner respectfully disagrees and finds the argument unpersuasive. These claims do not include the limitation recited in amended claim 1: for example, claim 1 recites RDMA initialization information having instructions to initialize transmission over a RDMA connection to be initialized, while claim 9 recites "sending commands from the host device … to allow for transmission of the streaming data." Liu's paragraph [0227] still teaches this limitation: according to Liu, the storage device generates an RDMA message including service data and a destination address in the AI memory. The RDMA message is then sent to the AI apparatus, and its content enables the AI apparatus to store data according to the destination address. Thus, Liu teaches that the RDMA message allows for transmission of the streaming data. A similar analysis applies to claim 16. Applicant's arguments are unpersuasive. Therefore, the rejections of claims 9 and 16 are maintained.

Dependent claims 3-8: These claims depend from claim 1; a new ground of rejection is made in light of the amendment made to claim 1.

Dependent claims 10-15 and 17-20: The rejections of these claims are maintained due to their dependency from claims 9 and 16, respectively.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (US 2022/0179560), hereinafter Liu, in view of Epstein et al. (US 2020/0311014), hereinafter Epstein, further in view of Voloshin (US 2025/0133134).

As for claim 1, Liu teaches a processor (paragraph [0255] describes a processor), comprising one or more circuits to:

cause memory to be allocated for data associated with a remote sensor on a device separate from the memory (paragraphs [0189]-[0190] and [0256] describe a client sending the service data (i.e., a face image set) to the storage device; after the storage device receives the service data, the processor in the storage device first caches the data into the memory and writes the service data into a hard disk; paragraph [0221] describes an AI apparatus sending a data obtaining request to the storage device to obtain the service data);

send Remote Direct Memory Access (RDMA) initialization information to a communication management device in local communication with the remote sensor (paragraphs [0221]-[0225] describe a method for obtaining service data from another storage device for an AI apparatus: an AI processor of the AI apparatus generates a data access request and sends the data access request (i.e., an RDMA request) including the metadata of the service data to another storage device (i.e., a communication management device); paragraph [0189] describes a client (i.e., a remote sensor) sending the service data to the storage device; paragraphs [0201]-[0202] describe an AI processor sending a data obtaining request to a processor, where the data obtaining request is used to request the service data stored in the hard disk and carries an identifier of the service data), the RDMA initialization information including at least addresses in the memory (paragraph [0212] describes metadata indicating a start address and an end address of the service data; paragraph [0225] describes the data access request including a destination address in the AI memory);

transmit, using the communication management device, the data over the RDMA connection with at least a portion of the RDMA initialization information (paragraph [0227] describes the storage device establishing an RDMA path between the storage device and the AI memory in the AI apparatus based on the data access request; the storage device generates an RDMA message based on the destination address and the service data in the AI memory, and sends the RDMA message to the AI apparatus); and

store the data, associated with the remote sensor and received over the RDMA connection, to the memory according to the addresses (paragraph [0227] describes the other storage device establishing an RDMA path between that storage device and the AI memory in the AI apparatus based on the second data access request or the destination address and the service data, and writing the service data into the AI memory in the AI apparatus through the RDMA path).

Liu fails to teach wherein a client is a sensor, and wherein the Remote Direct Memory Access (RDMA) initialization information has instructions to initialize transmission over a RDMA connection to be initialized.
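The data flow the examiner maps onto claim 1 (destination allocates memory, sends initialization information carrying addresses to a device near the sensor, the device transmits, and the destination stores by address) can be sketched as a toy model. All class and field names below are invented for illustration; this models the claim language, not Liu's system or any real RDMA stack.

```python
# Toy sketch of the claimed flow (hypothetical names, no real RDMA API).

class HostMemory:
    """Destination-side memory, addressed by integer offsets."""
    def __init__(self, size):
        self.cells = [None] * size
    def allocate(self, n):
        return list(range(n))            # addresses reserved for sensor data

class CommManagementDevice:
    """Device in local communication with the remote sensor."""
    def __init__(self, sensor_samples):
        self.samples = sensor_samples
        self.init_info = None
    def receive_init(self, init_info):   # RDMA initialization information
        self.init_info = init_info
    def transmit(self):
        # pair each sample with a destination address from the init info
        return list(zip(self.init_info["addresses"], self.samples))

mem = HostMemory(size=8)
addrs = mem.allocate(3)                                   # 1) allocate memory
dev = CommManagementDevice(sensor_samples=[10, 20, 30])
dev.receive_init({"addresses": addrs, "start_tx": True})  # 2) send init info
for addr, sample in dev.transmit():                       # 3) transmit
    mem.cells[addr] = sample                              # 4) store by address

print(mem.cells[:4])   # [10, 20, 30, None]
```

The point of contention in the amendment is step 2: whether the initialization information itself carries instructions to initialize the not-yet-established connection, which is what the examiner turns to Voloshin for.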
Epstein discloses wherein a client is a sensor (paragraphs [0023] and [0036] describe a peripheral device as a sensor or representing multiple sensors). One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Epstein for collecting data from multiple sensors. The teachings of Epstein, when implemented in the Liu system, will allow one of ordinary skill in the art to generate a robust system that processes sensor data from a plurality of sensors. One of ordinary skill in the art would be motivated to utilize the teachings of Epstein in the Liu system in order to provide a target device with a large quantity of sample data to accurately evaluate the context of the sample data.

The combined system of Liu and Epstein fails to teach wherein the Remote Direct Memory Access (RDMA) initialization information has instructions to initialize transmission over a RDMA connection to be initialized.

Voloshin discloses wherein the Remote Direct Memory Access (RDMA) initialization information has instructions to initialize transmission over a RDMA connection to be initialized (paragraphs [0072]-[0074] describe an RDMA network device communicating with different protocol stacks through specific protocol drivers; the RDMA driver includes a control path module that includes instructions for processing an "INIT state create queue pair" computer system command to create an RDMA queue pair in an initialized state, and a "RTS state queue pair state transition" computer system command to provide RDMA transmit operation information and RDMA receive operation information for the RDMA queue pair from the host processing unit to the computer system (see Fig. 4) and transition the RDMA queue pair from the initialized state to a ready to send (RTS) state).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Voloshin for sending instructions to a network device. The teachings of Voloshin, when implemented in the Liu and Epstein system, will allow one of ordinary skill in the art to transform a computer device to an initialized state. One of ordinary skill in the art would be motivated to utilize the teachings of Voloshin in the Liu and Epstein system in order to provide a target device with information to be in one of the initialized state, the ready to receive state, and the ready to send state (Voloshin: paragraph [0059]).

As for claim 2, the combined system of Liu and Epstein fails to teach wherein the one or more circuits are further to: enter a Ready-to-Receive (RTR) state; and send instructions to a communication management device to enter a Ready-to-Send (RTS) state.

Voloshin discloses wherein the one or more circuits are further to enter a Ready-to-Receive (RTR) state (paragraph [0074] describes a control path module including instructions for processing different computer system commands to provide RDMA transmit operation information and RDMA receive operation information for an RDMA queue pair from a host processing unit to a computer system and transition the RDMA queue pair from the initialized state to a ready to receive (RTR) state); and send instructions to a communication management device to enter a Ready-to-Send (RTS) state (paragraph [0074] describes a control path module including instructions for processing different computer system commands to provide RDMA transmit operation information and RDMA receive operation information for an RDMA queue pair from a host processing unit to a computer system and transition the RDMA queue pair from the initialized state to a ready to send (RTS) state).
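The queue-pair progression at issue in claim 2 (INIT -> RTR -> RTS) mirrors the standard RDMA verbs lifecycle. A minimal state-machine sketch of that progression, simplified for illustration and not Voloshin's implementation:

```python
# Minimal sketch of the queue-pair state progression discussed above.
# Real RDMA verbs queue pairs have additional states (SQD, SQE, ERR);
# only the INIT -> RTR -> RTS path relevant to claim 2 is modeled.

class QueuePair:
    TRANSITIONS = {"RESET": "INIT", "INIT": "RTR", "RTR": "RTS"}

    def __init__(self):
        self.state = "RESET"

    def modify(self, target):
        # only the next state in the lifecycle is reachable
        if self.TRANSITIONS.get(self.state) != target:
            raise ValueError(f"illegal transition {self.state} -> {target}")
        self.state = target

qp = QueuePair()
qp.modify("INIT")   # create queue pair in an initialized state
qp.modify("RTR")    # ready to receive: can accept incoming RDMA writes
qp.modify("RTS")    # ready to send: can post send / RDMA-write requests
print(qp.state)     # RTS
```

The sketch makes the claim-2 split visible: the receiving circuits stop at RTR, while the sending side (the communication management device) must be instructed onward to RTS before transmission can begin.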
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Voloshin to provide instructions for processing computer system commands to create an RDMA queue pair. The teachings of Voloshin, when implemented in the Liu and Epstein system, will allow one of ordinary skill in the art to process RDMA work queue elements. One of ordinary skill in the art would be motivated to utilize the teachings of Voloshin in the Liu and Epstein system in order to process RDMA work queue elements provided by a host processing unit to a computer system via a queue pair.

As for claim 3, the combined system of Liu, Epstein and Voloshin teaches wherein the communication management device comprises a Field Programmable Gate Array (FPGA) or an Application-Specific Integrated Circuit (ASIC) (Liu: paragraph [0168] describes the storage apparatus including a processor implemented in an FPGA).

As for claim 4, the combined system of Liu, Epstein and Voloshin teaches wherein the one or more circuits are further to: receive the data transmitted over the RDMA connection to a Network Interface Card (NIC) (Epstein: paragraphs [0021] and [0025] describe an external NIC; peripheral data from a peripheral node is provided to a target node, and the data transmission comprises establishing RDMA settings for an RDMA connection with the external NIC), wherein the data is stored to the memory using the NIC (Epstein: paragraph [0022] describes the target node configuring encoded messages indicating memory locations of the target node's memory for storage of data from peripheral nodes; the data is transferred directly from the memory of each peripheral node to the memory of the target node); and cause the data to be processed from the memory (Liu: paragraphs [0227] and [0233] describe the AI apparatus receiving the service data written into the destination address in the AI memory and performing AI computing on the service data).

One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Epstein to provide instructions for transmission to each of the peripheral nodes. The teachings of Epstein, when implemented in the Liu and Voloshin system, will allow one of ordinary skill in the art to transmit data to a target node. One of ordinary skill in the art would be motivated to utilize the teachings of Epstein in the Liu and Voloshin system in order to facilitate data exchange between source nodes and a destination node.

As for claim 5, the combined system of Liu, Epstein and Voloshin teaches wherein the data is processed at least in part to train a model or perform inferences (Liu: paragraphs [0192]-[0196] describe service data being processed at a storage device to perform model training and service data inference through an AI model).

As for claim 6, the combined system of Liu, Epstein and Voloshin teaches wherein the communication management device transmits the data over the RDMA connection as payload in one or more packets (Liu: paragraph [0227] describes the storage device generating an RDMA message based on the destination address and the service data, and sending the RDMA message to the AI apparatus).

As for claim 7, the combined system of Liu and Voloshin fails to teach wherein the one or more circuits are further to: store additional data, sent over the RDMA connection and associated with at least one additional remote sensor, to the memory according to the addresses.
Epstein discloses wherein the one or more circuits are further to store additional data, sent over the RDMA connection and associated with at least one additional remote sensor, to the memory according to the addresses (paragraphs [0016]-[0019] describe a peripheral node as part of a sensor network comprising a plurality of sensors; data collected by the peripheral node, in accordance with an RDMA over Converged Ethernet (RoCE) protocol, is directly received by the memory of a target node; a message sent to the peripheral node that includes the RDMA settings also includes a port of the target node and a memory map of the memory indicating available memory locations of the target node; paragraphs [0022]-[0023] describe peripheral data from a plurality of nodes being provided to a target node).

One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Epstein for collecting data from multiple sensors. The teachings of Epstein, when implemented in the Liu and Voloshin system, will allow one of ordinary skill in the art to generate a robust system that processes sensor data from a plurality of sensors. One of ordinary skill in the art would be motivated to utilize the teachings of Epstein in the Liu and Voloshin system in order to provide a target device with a large quantity of sample data to accurately evaluate the context of the sample data.

As for claim 8, the combined system of Liu, Epstein and Voloshin teaches wherein the communication management device includes a networking switch to receive the data and the additional data (Liu: paragraph [0110] describes a network used to transmit service data between two storage devices, with a switch disposed in the network; Epstein: paragraph [0022] describes streamed data from a plurality of peripheral nodes).

Claims 9 and 10 are rejected under 35 U.S.C.
103 as being unpatentable over Liu (US 2022/0179560) in view of Epstein (US 2020/0311014).

As for claim 9, Liu teaches a computer-implemented method, comprising: sending initialization data, associated with a host device, to one or more circuits on a remote device configured to transmit data (paragraphs [0221]-[0225] describe a method for obtaining service data from a storage device for an AI apparatus: an AI processor of the AI apparatus generates a data access request and sends the data access request (i.e., an RDMA request) including the received metadata of the service data to the storage device (i.e., a remote device); paragraph [0189] describes a client sending the service data to the storage device); causing the host device to allow for transmission of the data (paragraphs [0225]-[0226] describe the data access request including the metadata of the service data, and the storage device sending the service data to the AI apparatus in response to the data access request); sending commands from the host device to the one or more circuits to allow for transmission of the data (paragraph [0227] describes the storage device generating an RDMA message that includes the service data and the destination address in the AI memory and sending the RDMA message to the AI apparatus; the network interface of the AI apparatus receives the RDMA message and parses it to obtain the service data and the destination address in the AI memory); and transmitting, based on at least a portion of the identification values, the data direct to memory between the remote device and the host device (paragraphs [0227]-[0228] describe the AI apparatus receiving the RDMA message and parsing it to obtain the service data and the destination address in the AI memory; the storage device accesses the AI memory and writes the service data into the destination address in the AI memory).

Liu fails to teach wherein the data is streaming data.
Epstein discloses wherein the data is streaming data (paragraph [0022] describes a target node configured to receive streamed data from a plurality of peripheral nodes). One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Epstein for collecting data from multiple sensors. The teachings of Epstein, when implemented in the Liu system, will allow one of ordinary skill in the art to generate a robust system that processes sensor data from a plurality of sensors. One of ordinary skill in the art would be motivated to utilize the teachings of Epstein in the Liu system in order to provide a target device with a large quantity of sample data to accurately evaluate the context of the sample data.

As for claim 10, the combined system of Liu and Epstein teaches allocating the memory in preparation of the transmission of the streaming data (Liu: paragraph [0256] describes a storage device receiving the service data from the client and writing the service data into a hard disk; an AI apparatus in the storage device sends a data obtaining request to the processor in the storage device to obtain the service data; Epstein: paragraph [0022] describes data being streamed); including, in the initialization data, addresses in the allocated memory (Liu: paragraph [0212] describes metadata indicating a start address and an end address of the service data; paragraph [0225] describes the data access request including a destination address in the AI memory); and determining one or more routes for transmitting the streaming data to the allocated memory based on the addresses (Epstein: paragraphs [0014]-[0015] describe a target node receiving peripheral data from a peripheral node, where the target node's NIC establishes RDMA settings for an RDMA connection which allows direct memory access of the memory of the peripheral device (i.e., external memory); paragraph [0022] describes the RDMA connection comprising the RDMA settings and the MAC address of the external NIC for transmission to each of the peripheral nodes, with each encoded message indicating one or more memory locations of the memory of the target node for storage of data from an associated one of the peripheral nodes; Liu: paragraph [0009] describes a DMA path established between the AI apparatus and a hard disk so that the AI apparatus and the hard disk can quickly exchange the service data through the DMA path; paragraph [0212] describes metadata indicating a start address and an end address of the service data; paragraph [0225] describes the data access request including a destination address in the AI memory).

One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Epstein for collecting data from multiple sensors. The teachings of Epstein, when implemented in the Liu system, will allow one of ordinary skill in the art to generate a robust system that processes sensor data from a plurality of sensors. One of ordinary skill in the art would be motivated to utilize the teachings of Epstein in the Liu system in order to provide a target device with a large quantity of sample data to accurately evaluate the context of the sample data.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Liu (US 2022/0179560) in view of Epstein (US 2020/0311014) further in view of Suthar et al. (US 2024/0106750), hereinafter Suthar.

As for claim 11, the combined system of Liu and Epstein fails to teach wherein individual ones of the addresses are included in packets along with portions of streaming data to be transmitted.
Suthar discloses wherein individual ones of the addresses are included in packets along with portions of streaming data to be transmitted (paragraph [0024] describes a packet including one or more headers and a payload, where a header can be used to control a flow of the packet through a network to a destination; paragraph [0126] describes a method of creating multiple sub-streams of a data stream, the multiple sub-streams created by adding header information to packets of a sub-stream that indicates a source port from which the sub-stream is to be transmitted).

One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Suthar to prepare packets with header information. The teachings of Suthar, when implemented in the Liu and Epstein system, will allow one of ordinary skill in the art to transmit data packets to a destination. One of ordinary skill in the art would be motivated to utilize the teachings of Suthar in the Liu and Epstein system in order to create multiple sub-streams of a data stream and send each of the sub-streams to its intended target.

As for claim 12, the combined system of Liu and Epstein teaches wherein the one or more circuits comprise at least a FPGA or an ASIC (Liu: paragraph [0168] describes the storage apparatus including a processor implemented in an FPGA).

As for claim 13, the combined system of Liu and Epstein teaches sending additional commands from the host device to the one or more circuits to prepare the streaming data as packets compatible with the transmission direct to the memory (Epstein: paragraph [] describes ).
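Claim 11's idea, a destination memory address traveling in each packet's header alongside a chunk of the stream, can be sketched concretely. The 8-byte-address plus 2-byte-length header layout below is invented for illustration and is not Suthar's format or the application's.

```python
# Hypothetical sketch of claim 11: each packet carries a destination
# memory address in its header plus a chunk of the streaming data.
import struct

def packetize(stream: bytes, addresses, chunk_size=16):
    """Split the stream into chunks and prepend (address, length) headers."""
    packets = []
    for addr, off in zip(addresses, range(0, len(stream), chunk_size)):
        chunk = stream[off:off + chunk_size]
        header = struct.pack(">QH", addr, len(chunk))  # 8B addr + 2B length
        packets.append(header + chunk)
    return packets

def deliver(packets, memory: bytearray):
    """Write each payload to the address named in its own header."""
    for pkt in packets:
        addr, length = struct.unpack(">QH", pkt[:10])
        memory[addr:addr + length] = pkt[10:10 + length]

mem = bytearray(64)
pkts = packetize(b"sensor-stream-data!", addresses=[0, 32], chunk_size=16)
deliver(pkts, mem)
print(bytes(mem[:16]))    # b'sensor-stream-da'
print(bytes(mem[32:35]))  # b'ta!'
```

Because every packet is self-describing, chunks can land at non-contiguous addresses (here 0 and 32) without any per-connection placement state at the receiver, which is the property the examiner reads onto the claim.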
As for claim 14, the combined system of Liu and Epstein teaches wherein the transmission of the streaming data direct to the memory bypasses an OS associated with the memory (Epstein: paragraph [0003] describes RDMA as a transferring mechanism that bypasses the operating system and kernel during the reading and writing process). One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Epstein for applying an RDMA transferring mechanism. The teachings of Epstein, when implemented in the Liu system, will allow one of ordinary skill in the art to provide data between sensor devices and a target device. One of ordinary skill in the art would be motivated to utilize the teachings of Epstein in the Liu system in order to provide a data transmission mechanism that decreases transmission latencies (Epstein: paragraph [0003]).

As for claim 15, the combined system of Liu and Epstein teaches wherein the streaming data is continuously generated by one or more sensors (Epstein: paragraph [0010] describes devices/sensors streaming data). One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Epstein for collecting data from multiple sensors. The teachings of Epstein, when implemented in the Liu system, will allow one of ordinary skill in the art to generate a robust system that processes sensor data from a plurality of sensors. One of ordinary skill in the art would be motivated to utilize the teachings of Epstein in the Liu system in order to provide a target device with a large quantity of sample data to accurately evaluate the context of the sample data.

As for claim 16, Liu teaches a system (Fig. 1, distributed storage system 1) comprising: one or more processors to establish a communication path between a remote device receiving data (Fig. 1, processors 1101; paragraph [0227] describes a storage device establishing an RDMA path between the storage device and the AI memory in the AI apparatus; paragraphs [0189]-[0190] and [0256] describe a client sending the service data (i.e., a face image set) to the storage device), and a destination device configured to communicate with the remote device (paragraph [0224] describes the AI apparatus sending a data access request to the storage device), wherein the communication path is established by the destination device to write at least a portion of the sensor stream to memory of at least the destination device (paragraph [0227] describes the storage device establishing an RDMA path between the storage device and the AI memory in the AI apparatus, and writing the service data into the AI memory in the AI apparatus through the RDMA path).

Liu fails to teach wherein the data is a sensor stream.

Epstein discloses wherein the data is a sensor stream (paragraph [0022] describes a target node configured to receive streamed data from a plurality of peripheral nodes). One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Epstein for collecting data from multiple sensors. The teachings of Epstein, when implemented in the Liu system, will allow one of ordinary skill in the art to generate a robust system that processes sensor data from a plurality of sensors. One of ordinary skill in the art would be motivated to utilize the teachings of Epstein in the Liu system in order to provide a target device with a large quantity of sample data to accurately evaluate the context of the sample data.

As for claim 17, the combined system of Liu and Epstein teaches wherein the communication path may be a physical Ethernet connection or an InfiniBand connection (Liu: paragraph [0138] describes the AI apparatus communicating with the processor of the storage device through high-speed Ethernet).
As for claim 18, the combined system of Liu and Epstein teaches wherein the one or more processors are further to cause the destination device to send instructions to the remote device in order to establish the communication path (Liu: paragraph [0225] describes the AI processor determining that the service data is stored in the storage device and sending a data access request (i.e., an RDMA request) which includes the metadata of the service data).

As for claim 19, the combined system of Liu and Epstein teaches wherein the sensor stream includes data from a plurality of sensors (Epstein: paragraph [0022] describes a target node configured to receive streamed data from a plurality of peripheral nodes). One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Epstein for collecting data from multiple sensors. The teachings of Epstein, when implemented in the Liu system, will allow one of ordinary skill in the art to generate a robust system that processes sensor data from a plurality of sensors. One of ordinary skill in the art would be motivated to utilize the teachings of Epstein in the Liu system in order to provide a target device with a large quantity of sample data to accurately evaluate the context of the sample data.
As for claim 20, the combined system of Liu and Epstein teaches wherein the system comprises at least one of: a system for performing simulation operations; a system for performing simulation operations to test or validate an autonomous machine; a system for performing digital twin operations; a system for performing light transport simulation; a system for rendering graphical output; a system for performing deep learning operations (Liu: paragraph [0373] describes an application implemented by the AI apparatus that includes deep learning); a system for performing generative AI operations using a large language model (Liu: paragraph [0003] describes service data as a sample set used for training a speech recognition model); a system implemented using an edge device; a system for generating or presenting virtual reality (VR) content; a system for generating or presenting augmented reality (AR) content; a system for generating or presenting mixed reality (MR) content; a system incorporating one or more Virtual Machines (VMs); a system implemented at least partially in a data center; a system for performing hardware testing using simulation; a system for performing generative operations using a language model (LM); a system for synthetic data generation; a collaborative content creation platform for 3D assets; or a system implemented at least partially using cloud computing resources (Liu: paragraph [0124] describes the storage system including a host, where the host is an elastic cloud server).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Friedman et al. (US 2023/0325075) teach methods for managing memory buffer usage while processing computer system operations. Nesbit et al. (US 9,164,702) teach a single-sided distributed cache system. Fineberg et al. (US 2007/0078940) teach remote configuration of persistent memory system ATT tables.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to L. T. N. whose telephone number is (571) 272-1013. The examiner can normally be reached M & Th 5:30 am - 2:30 pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TONIA DOLLINGER, can be reached at 571-272-4170. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/L. T. N/
Examiner, Art Unit 2459

/TONIA L DOLLINGER/
Supervisory Patent Examiner, Art Unit 2459
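For context on the disputed limitation (the destination sending "RDMA initialization information, having instructions to initialize transmission" over a connection that does not yet exist, to a communication management device local to the sensor), here is a minimal sketch of a destination-initiated flow. All names (RdmaInitInfo, CommManager, the fields) are invented for illustration; this is an in-process simulation using a queue, not real RDMA verbs, and not the applicant's or any cited reference's implementation.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class RdmaInitInfo:
    # Hypothetical fields; real RDMA setup exchanges queue-pair and
    # memory-registration details (e.g., buffer address and rkey).
    queue_pair_id: int
    buffer_addr: int
    rkey: int

class CommManager:
    """Stands in for the 'communication management device' in local
    communication with the remote sensor."""
    def __init__(self, sensor_samples):
        self.sensor_samples = sensor_samples

    def initialize_and_transmit(self, info: RdmaInitInfo) -> Queue:
        # The connection is created *because of* the destination's
        # instructions: it is "to be initialized" until this call runs.
        channel = Queue()
        for sample in self.sensor_samples:
            channel.put((info.buffer_addr, sample))
        return channel

# Destination-initiated: the destination sends init info first,
# then receives the sensor stream without a source-side request.
manager = CommManager(sensor_samples=[3.1, 2.7, 1.4])
chan = manager.initialize_and_transmit(
    RdmaInitInfo(queue_pair_id=7, buffer_addr=0x1000, rkey=42))
received = [chan.get() for _ in range(3)]
print(received)  # [(4096, 3.1), (4096, 2.7), (4096, 1.4)]
```

The point of contention in the rejection maps onto the sketch's ordering: the init information (with its transmission instructions) arrives before any channel exists, rather than configuring an already-established connection.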

Prosecution Timeline

Jul 31, 2024
Application Filed
Oct 31, 2025
Non-Final Rejection — §103, §112
Dec 30, 2025
Examiner Interview Summary
Dec 30, 2025
Applicant Interview (Telephonic)
Jan 05, 2026
Response Filed
Jan 29, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598105
Software-Defined Device Tracking in Network Fabrics
2y 5m to grant Granted Apr 07, 2026
Patent 12592984
MULTIMODAL VEHICLE SENSOR FUSION AND STREAMING
2y 5m to grant Granted Mar 31, 2026
Patent 12580987
USING CONTEXTUAL INFORMATION FOR VEHICLE TRIP LOSS RISK ASSESSMENT SCORING
2y 5m to grant Granted Mar 17, 2026
Patent 12574790
REDUCING LATENCY OF EXTENDED REALITY (XR) APPLICATION USING HOLOGRAPHIC COMMUNICATION NETWORK AND MOBILE EDGE COMPUTING (MEC)
2y 5m to grant Granted Mar 10, 2026
Patent 12562989
FLOW-TRIMMING BASED CONGESTION MANAGEMENT
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
70%
Grant Probability
96%
With Interview (+26.0%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 354 resolved cases by this examiner. Grant probability derived from career allow rate.
