Prosecution Insights
Last updated: April 19, 2026
Application No. 18/949,468

REAL-TIME RDMA FABRIC

Non-Final OA §103
Filed
Nov 15, 2024
Examiner
PENA-SANTANA, TANIA M
Art Unit
2443
Tech Center
2400 — Computer Networks
Assignee
Oracle International Corporation
OA Round
1 (Non-Final)
72%
Grant Probability
Favorable
1-2
OA Rounds
2y 10m
To Grant
66%
With Interview

Examiner Intelligence

Grants 72% — above average
72%
Career Allow Rate
176 granted / 245 resolved
+13.8% vs TC avg
-6.0%
Interview Lift
Minimal lift; resolved cases with vs. without interview
Typical timeline
2y 10m
Avg Prosecution
29 currently pending
Career history
274
Total Applications
across all art units

Statute-Specific Performance

§101
10.4%
-29.6% vs TC avg
§103
54.8%
+14.8% vs TC avg
§102
17.6%
-22.4% vs TC avg
§112
10.0%
-30.0% vs TC avg
Comparison baseline = Tech Center average estimate • Based on career data from 245 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims Status

Claims 1-20 are pending and have been rejected.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 02/19/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 7-13 & 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (U.S. Publication 2019/0114247), hereinafter “Chen”, in view of Shilimkar et al. (WO 2022/146470), hereinafter “Shilimkar”.
As to claims 1, 10 & 16, Chen discloses a method, a computing device and one or more computer readable non-transitory media, comprising: obtaining, by a controller, performance metric data of one or more hardware components included in a network fabric (Chen, see [0070-0075], data collected by controller can include performance information of each component, wherein performance metrics can be provided to or obtained by controller); and collecting, by the controller, flow information of one or more workloads that are executed on the network fabric (Chen, see [0092], controller determines if there is data flow between two vertices of the window-specific system metrics graph).

Chen is silent as to applying, by the controller, a configuration policy to the one or more hardware components of the network fabric based on the performance metric data and the flow information of the one or more workloads, wherein application of the configuration policy modifies at least one operational parameter of the one or more hardware components of the network fabric. However, Shilimkar discloses this limitation (Shilimkar, see [0206], if it is determined that the packet indicates congestion, then the RoCE NIC sends a response to the sender of the packet (e.g., to the RoCE NIC on the source host machine) indicative of the congestion and requesting the sender to slow down the data transfer rate).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of monitoring and managing network components of a distributed streaming system taught by Chen with the method of multi-tenancy for RDMA over converged Ethernet taught by Shilimkar. One of ordinary skill in the art would have been motivated to do so because it would allow data transfers using the RoCE protocol (Shilimkar — Paragraph 0002).

As to claims 2, 11 & 17, Chen in view of Shilimkar discloses everything disclosed in claims 1, 10 & 16. Shilimkar further discloses wherein the performance metric data of a first hardware component of the one or more hardware components comprises information indicating: (a) whether a port of the first hardware component is congested, (b) a packet drop-rate of a buffer associated with the first hardware component, (c) a packet fill-rate of the buffer associated with the first hardware component (Shilimkar, see [0196], each networking device in the switch fabric that receives and forwards a Layer-3 encapsulated RoCE packet may, upon experiencing congestion, signal the congestion by marking a field of an outer header of the Layer-3 encapsulating wrapper of the packet).
As to claims 3, 12 & 18, Chen in view of Shilimkar discloses everything disclosed in claims 1, 10 & 16. Shilimkar further discloses wherein the flow information of each of the one or more workloads is characterized by a five-tuple including: a source host machine of a workload, a destination host machine of the workload, a protocol being implemented by the workload, a number of packets associated with the workload, and a start-time/end-time of the workload (Shilimkar, see figs. 8C & 9A, a source, a destination, a protocol, a number of packets, time to live).

As to claims 4, 13 & 19, Chen in view of Shilimkar discloses everything disclosed in claims 1, 10 & 16. Shilimkar further discloses wherein the network fabric is a remote direct memory access (RDMA) fabric, and the one or more hardware components correspond to switches included in the RDMA fabric, the switches being arranged in a hierarchical CLOS network (Shilimkar, see [0049-0050] & fig. 6, techniques that support multiple RDMA tenants (also known as “public cloud customers”) such that the queue configurations in the CLOS fabric are transparent to the end customer host (the cloud customer); because the QoS queue information as well as ECN markings travel through the CLOS fabric, it can be desired to ensure that the QoS queue information is carried across multiple network domains, which can be from a Layer-2 port to a host, from a Layer-3 port to another switch, or from a VxLAN virtual Layer-2 port to another VxLAN interface on another switch).

As to claim 7, Chen in view of Shilimkar discloses everything disclosed in claim 1.
Shilimkar further discloses, further comprising: responsive to applying the configuration policy to the one or more hardware components of the network fabric, observing a state of the network fabric within a predetermined time-period; and responsive to the predetermined time-period expiring, repeating the steps of the obtaining, the collecting, and the applying (Shilimkar, paragraph [0207], the sender can use an algorithm that calculates a percentage reduction in the data transmission rate. Upon receiving a first CNP packet, the sender (e.g., the RoCE NIC on the source host machine) can cut its transmission rate by a certain percentage. Upon receiving another CNP packet, it can further cut its transmission rate by an additional percentage amount, and so on. The sender can perform adaptive rate control in response to receiving the CNP packets.).

As to claim 8, Chen in view of Shilimkar discloses everything disclosed in claim 1. Shilimkar further discloses wherein the configuration policy is applied to all hardware components of the network fabric (Shilimkar, see [0206], if it is determined that the packet indicates congestion, then the RoCE NIC sends a response to the sender of the packet (e.g., to the RoCE NIC on the source host machine) indicative of the congestion and requesting the sender to slow down the data transfer rate).

As to claim 9, Chen in view of Shilimkar discloses everything disclosed in claim 1. Shilimkar further discloses wherein the at least one operational parameter corresponding to the one or more hardware components of the network fabric includes: a size of a buffer, a queue marking policy, or a priority assignment corresponding to different customers that is assigned to the queue (Shilimkar, see [0219], the customer or tenant, via the QoS information set for a packet, can control which priority queue is to be used for routing their traffic.
On a networking device in the switch fabric having multiple queues (e.g., a plurality of queues) for transmission of packets, a percentage of the queues are set aside for RDMA traffic. If a switch in the switch fabric has eight queues, six of the queues can be set aside for RDMA traffic. These RDMA queues can be weighted-round-robin queues that each get a share of the network bandwidth but cannot starve each other (e.g., in order to provide fairness across the RDMA applications). In one such scheme, each of the RDMA queues is equally weighted, so that each of the RDMA queues is serviced once per dequeuing cycle. 95% of the capacity of the link (which is shared by the traffic allocated to the different queues) can be allocated to the six RDMA queues, with each queue getting a sixth of the 95% (e.g., via a weighted-round-robin scheme with equal weighting). It can be desired to ensure that the switch fabric is not oversubscribed, such that there is enough bandwidth to handle the traffic being communicated via the switch fabric. Traffic from different customers or tenants can be assigned to the same RDMA queue but is differentiated based upon the VLAN ID and/or VNI encoded in the packet).

Claims 5, 6, 14, 15 & 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (U.S. Publication 2019/0114247), hereinafter “Chen”, in view of Shilimkar et al. (WO 2022/146470), hereinafter “Shilimkar”, and Yang et al. (U.S. Publication 2018/0026856), hereinafter “Yang”.

As to claims 5, 14 & 20, Chen in view of Shilimkar discloses everything disclosed in claims 1, 10 & 16.
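The bandwidth arithmetic in the queue-allocation passage quoted above (six of eight queues reserved for RDMA, equally weighted, sharing 95% of link capacity) works out as follows. This is a minimal sketch; the 100 Gbps link speed is an assumed figure for illustration, not a number from Shilimkar.

```python
LINK_GBPS = 100          # assumed link speed, for illustration only
RDMA_SHARE = 0.95        # 95% of link capacity allocated to RDMA traffic
RDMA_QUEUES = 6          # six of the switch's eight queues set aside for RDMA

# Equal-weight round-robin: each RDMA queue gets an equal slice of the 95%.
per_queue_fraction = RDMA_SHARE / RDMA_QUEUES
per_queue_gbps = LINK_GBPS * per_queue_fraction

print(f"{per_queue_fraction:.4f} of link capacity per RDMA queue")  # 0.1583
print(f"{per_queue_gbps:.2f} Gbps per RDMA queue")                  # 15.83
```

Because the queues are equally weighted and serviced once per dequeuing cycle, no queue can starve another; tenants sharing a queue are distinguished only by VLAN ID and/or VNI, not by bandwidth share.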
Chen further discloses determining, by the controller, the configuration policy, from a plurality of configuration policies, that is to be applied to the one or more hardware components of the network fabric, the determining including: calculating a weighted performance vector corresponding to a current state of the network fabric (Chen, see [0074-0075], data collected by controller can include performance information of each component, wherein performance metrics can be provided to or obtained by controller).

However, Yang discloses obtaining a policy table including a plurality of predefined vectors, each predefined vector corresponding to a unique configuration policy included in the plurality of configuration policies (Yang, see [0064], the set of policies assigned to an EPG can dictate how the health score for the specific EPG is to be calculated by the controller); calculating a similarity measure between the weighted performance vector and each of the plurality of predefined vectors (Yang, see [0064], determine which policies to be used based on health and performance of the EPG); and selecting, by the controller, a first predefined vector of the plurality of predefined vectors based on the calculating (Yang, see [0064], selecting policy to be used in order to modify the EPG to achieve the desired performance of the EPG).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of monitoring and managing network components of a distributed streaming system taught by Chen with the method of multi-tenancy for RDMA over converged Ethernet taught by Shilimkar and the method of micro-service deployment based on network policy health taught by Yang.
One of ordinary skill in the art would have been motivated to do so because it would allow enforcing desired policies in order to monitor performance of an EPG (Yang — Paragraph 0064).

As to claims 6 & 15, Chen in view of Shilimkar and Yang discloses everything disclosed in claims 5 & 14. Yang further discloses wherein the controller selects the first predefined vector based on the first predefined vector having a highest similarity measure to the weighted performance vector (Yang, see [0062], a set of policies can be applied to an EPG to achieve a desired performance or intent for the micro-service containers included in the EPG).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. This includes: U.S. Publication 2005/0238035, which describes remote direct memory access over a network switch fabric; and U.S. Publication 2010/0046368, which describes distributed quality of service enforcement.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TANIA M PENA-SANTANA, whose telephone number is (571) 270-0627. The examiner can normally be reached Monday - Friday, 8am to 4pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nicholas R Taylor, can be reached at (571) 272-3889. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TANIA M PENA-SANTANA/
Examiner, Art Unit 2443

/NICHOLAS R TAYLOR/
Supervisory Patent Examiner, Art Unit 2443
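The policy-selection steps recited against claims 5, 6, 14, 15 & 20 above (compute a weighted performance vector for the fabric's current state, compare it against a table of predefined vectors, and pick the policy whose vector has the highest similarity measure) can be sketched as follows. This is an illustrative reconstruction from the claim language only; cosine similarity, the metric names, and the example vectors are assumptions, not details taken from the application or the cited art.

```python
import math

def cosine_similarity(a, b):
    # One common choice of similarity measure between the weighted
    # performance vector and a predefined vector from the policy table.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def select_policy(performance_vector, policy_table):
    # policy_table maps a policy name to its predefined vector.
    # The claim selects the predefined vector with the highest
    # similarity to the current weighted performance vector.
    return max(
        policy_table,
        key=lambda name: cosine_similarity(performance_vector, policy_table[name]),
    )

# Hypothetical pre-weighted metrics: (congestion, drop_rate, fill_rate).
policies = {
    "aggressive_ecn": (0.9, 0.8, 0.9),
    "default":        (0.2, 0.1, 0.3),
}
current = (0.85, 0.7, 0.95)
print(select_policy(current, policies))  # the closest predefined vector wins
```

A congested fabric state like `current` sits nearest the "aggressive_ecn" vector, so that policy's configuration would be applied to the switches; any distance or dot-product measure could stand in for cosine similarity without changing the structure of the claim.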

Prosecution Timeline

Nov 15, 2024
Application Filed
Feb 13, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592924
SMART HUB QUANTUM KEY DISTRIBUTION AND SECURITY MANAGEMENT IN ADVANCED NETWORKS
2y 5m to grant Granted Mar 31, 2026
Patent 12585754
TRUSTED ROOT RECOVERY
2y 5m to grant Granted Mar 24, 2026
Patent 12574343
SYSTEMS AND METHODS FOR MULTI-AGENT CONVERSATIONS
2y 5m to grant Granted Mar 10, 2026
Patent 12574260
CONSENSUS PROCESSING METHOD, APPARATUS, AND SYSTEM FOR BLOCKCHAIN NETWORK, DEVICE, AND MEDIUM
2y 5m to grant Granted Mar 10, 2026
Patent 12561477
AUTOMATED SPARSITY FEATURE SELECTION
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
72%
Grant Probability
66%
With Interview (-6.0%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 245 resolved cases by this examiner. Grant probability derived from career allow rate.
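The headline figures above follow from simple arithmetic on the examiner's record: 176 grants out of 245 resolved cases is about 72%, and applying the -6.0 point interview lift gives about 66%. A quick check:

```python
granted, resolved = 176, 245
allow_rate = granted / resolved          # career allow rate
print(f"{allow_rate:.1%}")               # 71.8%, shown as 72%

interview_lift = -0.060                  # -6.0 percentage points
with_interview = allow_rate + interview_lift
print(f"{with_interview:.1%}")           # 65.8%, shown as 66%
```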
