Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
The amendment filed 01/16/2026 has been entered. Claims 1-20 are pending. Claims 1, 9, 10, 13, and 20 have been amended. No claims have been cancelled or added.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/15/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Applicant's arguments filed 01/16/2026 have been fully considered but they are not persuasive.
In the remarks, the applicant argued in substance:
That: Kasichainula and MEDAGLIANI individually fail to disclose or render obvious “distributing, by a woken-up distribution thread on the network device, packets in a packet receiving queue bound to the distribution thread, so as to distribute packets belonging to DetNet flow (DT) in the packet receiving queue to a corresponding DetNet-flow Buffering Queue DBQ; putting, by a woken-up DetNet Forwarding Thread DFT on the network device, packets in a DBQ bound to the DFT into a corresponding DetNet flow Queue DTQ; putting, by a woken-up Cycle Forwarding Thread CFT on the network device, packets in a DTQ bound to the CFT into a corresponding Cyclic Specific Queue CSQ, and selecting, by the CFT, a packet from a Sending Queue SQ and forwarding the packet through an out interface used for forwarding packets, where the SQ is a CSQ currently pointed by a Sending Queue Pointer SQP corresponding to the CFT”
In response to the applicant’s argument, Kasichainula in [0037] teaches that express traffic with the highest priority (e.g., time-sensitive data with stringent QoS requirements) may be mapped to queue 7, and queue 7 may be mapped to traffic class 7 (TC.sub.7), which is used for real-time traffic, while best effort traffic with the lowest priority or QoS may be mapped to queue 0, and queue 0 may be mapped to traffic class 0 (TC.sub.0), which is used for traffic with the lowest priority. Further, Kasichainula in [0045] teaches that best effort traffic with the lowest priority or QoS may be mapped to queues 0-4 (Q.sub.0-Q.sub.4), queues 0-4 may be respectively mapped to traffic classes 0-4 (TC.sub.0-TC.sub.4), and the data flow on traffic classes 0-4 (TC.sub.0-TC.sub.4) may be routed over virtual channel 0 (VC.sub.0), which is used for low-priority traffic, while express traffic with the highest priority (e.g., time-sensitive data with stringent QoS requirements) may be mapped to queues 5-7 (Q.sub.5-Q.sub.7), queues 5-7 may be respectively mapped to traffic classes 5-7 (TC.sub.5-TC.sub.7), and the data flow on traffic classes 5-7 (TC.sub.5-TC.sub.7) may be routed over virtual channel 1 (VC.sub.1), which is reserved for real-time traffic (also see [0043]-[0060] and [0177]-[0219]).
Therefore, Kasichainula clearly teaches deterministic packet scheduling and DMA for time-sensitive networks. Express traffic with the highest priority (e.g., time-sensitive data with strict QoS requirements) is mapped to queues 5-7 (Q5-Q7), which may be mapped over virtual channel 1 (VC1) to data streams on traffic classes 5-7 (TC5-TC7), respectively, and best effort traffic with the lowest priority or QoS may be mapped to queues 0-4 (Q0-Q4), which may be mapped over virtual channel 0 (VC0) to data streams on traffic classes 0-4 (TC0-TC4), respectively. Express traffic and best effort traffic are handled using two separate I/O interfaces 212a-b, completion queues 216a-b, and data paths 209a-b from the DMA engine 208 back to the packet transmit queues 202, respectively. The data path to the internal packet transmit queue buffer 202 is also split across traffic classes to eliminate collisions. The DMA engines for descriptors and packet data can be separated into multiple distinct DMA engines, or can be a single DMA engine that logically supports separate and independent DMA threads for descriptor and packet data DMA.
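For illustration only, the queue-to-traffic-class-to-virtual-channel mapping summarized above can be sketched as follows. The function name and return shape are hypothetical and are not drawn from Kasichainula; only the mapping itself (queues 0-4 to TC0-TC4 on VC0, queues 5-7 to TC5-TC7 on VC1) follows the cited paragraphs.

```python
# Illustrative sketch only: a hypothetical queue -> traffic class -> virtual
# channel mapping of the kind described in Kasichainula [0037] and [0045].
# Names are chosen for illustration, not taken from the reference.

def classify(queue_id: int) -> dict:
    """Map a transmit queue to its traffic class and virtual channel."""
    if not 0 <= queue_id <= 7:
        raise ValueError("queue id must be 0-7")
    traffic_class = queue_id  # Qn maps to TCn
    # Queues 0-4 (best effort) ride VC0; queues 5-7 (express) ride VC1.
    virtual_channel = 0 if queue_id <= 4 else 1
    return {"queue": queue_id, "tc": traffic_class, "vc": virtual_channel}
```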
In addition, Kasichainula in [0038] teaches that dynamically assigning traffic to queues is particularly important for applications in dynamic environments, such as industrial applications where workloads constantly change as field devices are added or go down (e.g., fail).
However, Kasichainula does not explicitly disclose distributing, by a woken-up distribution thread on the network device, packets in a packet receiving queue bound to the distribution thread; putting, by a woken-up Cycle Forwarding Thread CFT on the network device, packets in a DTQ bound to the CFT into a corresponding Cyclic Specific Queue CSQ, and selecting, by the CFT, a packet from a Sending Queue SQ and forwarding the packet through an out interface used for forwarding packets, where the SQ is a CSQ currently pointed by a Sending Queue Pointer SQP corresponding to the CFT.
However, MEDAGLIANI in [0074] teaches that traffic is scheduled according to a configuration which predefines, for each flow, all the times when packets pertaining to the flow that are stored in a queue in a node are to be transmitted, and to which node they are to be transmitted. Different schemes can be used to map traffic to queues; for example, traffic can be prioritized and packets from each traffic class can be mapped to the corresponding queues. In [0093], an initial configuration according to the CSQF (Cycle Specific Queuing and Forwarding) standard, with three queues configured for QoS traffic and five queues configured for BE traffic, is shown in the middle; starting from the initial configuration, the number of queues for QoS traffic may be increased. In [0081], the transmitting queue sends all the packets stored during the previous reception phase, and any incoming packet can be inserted only into one of the two queues in reception mode; the specification of which queue a packet is inserted into is referred to as scheduling.
Therefore, MEDAGLIANI clearly teaches that different schemes can be used to map traffic to queues using standards such as Cycle Specific Queuing and Forwarding, which is used for queuing different types of traffic, for example with three queues configured for QoS traffic and five queues configured for BE traffic.
Based on Kasichainula in view of MEDAGLIANI, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of MEDAGLIANI to the system of Kasichainula in order to ensure reliability in packet delivery.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kasichainula (US 2021/0014177 A1), hereinafter Kasichainula, in view of MEDAGLIANI et al. (US 2022/0150159 A1), hereinafter MEDAGLIANI.
Regarding claim 1, Kasichainula teaches a method for transmitting Deterministic Network (DetNet) flow, the method being applied to a network device (i.e. performing traffic-class-based scheduling and DMA for deterministic packet transmissions on a network interface controller (NIC), [0061]), and the method comprising: so as to distribute packets belonging to DetNet flow (DT) in the packet receiving queue to a corresponding DetNet-flow Buffering Queue DBQ (i.e. express traffic with the highest priority (e.g., time-sensitive data with stringent QoS requirements) may be mapped to queue 7, and queue 7 may be mapped to traffic class 7 (TC.sub.7), which is used for real-time traffic, [0037]), and distribute packets belonging to best-effort flow to a corresponding Best-effort flow Queue BTQ (i.e. best effort traffic with the lowest priority or QoS may be mapped to queue 0, and queue 0 may be mapped to traffic class 0 (TC.sub.0), which is used for traffic with the lowest priority, [0037]; and best effort traffic with the lowest priority or QoS may be mapped to queues 0-4 (Q.sub.0-Q.sub.4), queues 0-4 may be respectively mapped to traffic classes 0-4 (TC.sub.0-TC.sub.4), and the data flow on traffic classes 0-4 (TC.sub.0-TC.sub.4) may be routed over virtual channel 0 (VC.sub.0), which is used for low-priority traffic, [0045]); where the packets in the packet receiving queue refer to packets received through a local interface of the network device from outside (i.e. NIC 200 includes multiple packet transmit queues 202, a MAC layer engine 204 for scheduling and transmitting packets over a network 230 via a transmission interface, [0043]); wherein the DT refers to service flow with DetNet service demands transmitted in a DetNet (i.e. a service-flow and is associated with a transaction. The transaction details the overall service requirement for the entity consuming the service, as well as the associated services for the resources, workloads, workflows, and business functional and business level requirements, [0169] and [0168]); putting, by a woken-up DetNet Forwarding Thread DFT on the network device, packets in a DBQ bound to the DFT into a corresponding DetNet flow Queue DTQ (i.e. express traffic with the highest priority (e.g., time-sensitive data with stringent QoS requirements) may be mapped to queues 5-7 (Q.sub.5-Q.sub.7), queues 5-7 may be respectively mapped to traffic classes 5-7 (TC.sub.5-TC.sub.7), and the data flow on traffic classes 5-7 (TC.sub.5-TC.sub.7) may be routed over virtual channel 1 (VC.sub.1), which is reserved for real-time traffic, [0045]; also see [0043]-[0060]).
However, Kasichainula does not explicitly disclose distributing, by a woken-up distribution thread on the network device, packets in a packet receiving queue bound to the distribution thread; putting, by a woken-up Cycle Forwarding Thread CFT on the network device, packets in a DTQ bound to the CFT into a corresponding Cyclic Specific Queue CSQ, and selecting, by the CFT, a packet from a Sending Queue SQ and forwarding the packet through an out interface used for forwarding packets, where the SQ is a CSQ currently pointed by a Sending Queue Pointer SQP corresponding to the CFT.
However, MEDAGLIANI teaches distributing, by a woken-up distribution thread on the network device, packets in a packet receiving queue bound to the distribution thread (i.e. traffic is scheduled according to such a configuration which predefines for each flow all the times when packets pertaining to the flow that are stored in a queue in a node are to be transmitted, and to which node they are to be transmitted. Different schemes can be used to map traffic to queues. For example, traffic can be prioritized and packets from each traffic class can be mapped to the corresponding queues, [0074]); putting, by a woken-up Cycle Forwarding Thread CFT on the network device, packets in a DTQ bound to the CFT into a corresponding Cyclic Specific Queue CSQ (i.e. an initial configuration according to CSQF (Cycle Specific Queuing and Forwarding) standard with three queues being configured for QoS traffic and five queues being configured for BE traffic is shown in the middle. Starting from the initial configuration, the number of queues for QoS traffic may be increased, [0093]), and selecting, by the CFT, a packet from a Sending Queue SQ and forwarding the packet through an out interface used for forwarding packets, where the SQ is a CSQ currently pointed by a Sending Queue Pointer SQP corresponding to the CFT (i.e. the transmitting queue sends all the packets stored during the previous reception phase. Any incoming packet can be inserted only in one of the two queues in reception mode. The specification of which queue a packet is inserted into is referred to as scheduling, [0081]).
Based on Kasichainula in view of MEDAGLIANI, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of MEDAGLIANI to the system of Kasichainula in order to ensure reliability in packet delivery.
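For illustration only, the cycle-based sending-queue pointer recited in claim 1 (a pointer rotating among cyclic specific queues so that one queue transmits in each cycle while the others receive) can be sketched as follows. This is a generic CSQF-style model under stated assumptions; the class and method names are hypothetical and are not taken from either reference or from the application.

```python
# Illustrative sketch only: a rotating sending-queue pointer (SQP) over N
# cyclic specific queues (CSQs). One queue transmits per cycle while the
# others receive, in the general manner of CSQF. Names are hypothetical.
from collections import deque

class CyclicQueues:
    def __init__(self, n_queues: int = 3):
        self.queues = [deque() for _ in range(n_queues)]
        self.sqp = 0  # sending-queue pointer: index of the current sending queue

    def receive(self, packet, cycle_offset: int = 1):
        # An incoming packet may only enter a queue that is in reception mode,
        # i.e. a queue other than the one the SQP currently points at.
        idx = (self.sqp + cycle_offset) % len(self.queues)
        self.queues[idx].append(packet)

    def advance_cycle(self):
        # Drain the sending queue, then rotate the pointer to the next queue.
        sent = list(self.queues[self.sqp])
        self.queues[self.sqp].clear()
        self.sqp = (self.sqp + 1) % len(self.queues)
        return sent
```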
Regarding claim 2, Kasichainula does not explicitly disclose wherein distributing, by the distribution thread, packets in a packet receiving queue bound to the distribution thread comprises: traversing packet receiving queues bound to the distribution thread, determining a currently traversed packet receiving queue as a current queue, and checking whether there is a packet in the current queue; if there is a packet in the current queue, traversing packets in the current queue, and determining a packet being traversed as a current packet; if the current packet belongs to DetNet flow, putting the current packet into a DBQ corresponding to the current queue; and if the current packet belongs to best-effort flow, putting the current packet into a BTQ corresponding to the current queue; after that, when there is still an untraversed packet, continuing to traverse an untraversed packet and returning to the step of determining a packet being traversed as a current packet; if there is no packet in the current queue or there is no untraversed packet in the current queue, when there is still an untraversed packet receiving queue in all the packet receiving queues bound to the distribution thread, continuing to traverse packet receiving queues bound to the distribution thread that have not been traversed, and returning to the step of determining a currently traversed packet receiving queue as a current queue.
However, MEDAGLIANI teaches wherein distributing, by the distribution thread, packets in a packet receiving queue bound to the distribution thread comprises: traversing packet receiving queues bound to the distribution thread, determining a currently traversed packet receiving queue as a current queue, and checking whether there is a packet in the current queue (i.e. traffic is scheduled according to such a configuration which predefines for each flow all the times when packets pertaining to the flow that are stored in a queue in a node are to be transmitted, [0074]); if there is a packet in the current queue, traversing packets in the current queue, and determining a packet being traversed as a current packet (i.e. given the packet size and the packet transmission frequency (which may be referred to, in a more general manner, as the packet transmission pattern), it is possible to map in which queue and on which transmission port the packet will be sent, by respecting the capacity of each port, [0083]); if the current packet belongs to DetNet flow, putting the current packet into a DBQ corresponding to the current queue (i.e. three queues are reserved for traffic with guaranteed QoS. During the period of activation, the transmitting queue sends all the packets stored during the previous reception phase. Any incoming packet can be inserted only in one of the two queues in reception mode, [0081]); and if the current packet belongs to best-effort flow, putting the current packet into a BTQ corresponding to the current queue (i.e. an initial configuration according to CSQF with three queues being configured for QoS traffic and five queues being configured for BE traffic, [0093]); after that, when there is still an untraversed packet, continuing to traverse an untraversed packet and returning to the step of determining a packet being traversed as a current packet (i.e. a set of labels, the label stack, is assigned to each packet of a flow in order to instruct the nodes on the path on the desired route of the packet. That is, the route of the flow is defined by the label stack. In particular, the label stack may indicate the series of queues to be utilized for each of the nodes A, B and C. Each intermediary node consumes its label, [0117]); if there is no packet in the current queue or there is no untraversed packet in the current queue, when there is still an untraversed packet receiving queue in all the packet receiving queues bound to the distribution thread, continuing to traverse packet receiving queues bound to the distribution thread that have not been traversed, and returning to the step of determining a currently traversed packet receiving queue as a current queue (i.e. transmitting data packets according to a current configuration indicating a resource distribution between guaranteed QoS and BE service classes, [0033]; and the transmitting queue sends all the packets stored during the previous reception phase. Any incoming packet can be inserted only in one of the two queues in reception mode, [0081]). Therefore, the limitations of claim 2 are rejected in the analysis of claim 1 above, and the claim is rejected on that basis.
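For illustration only, the traversal pattern recited in claim 2 (walk each bound receive queue, classify each packet, and route it to the corresponding DBQ or BTQ) can be sketched as follows. The classifier callback and queue names are hypothetical, not drawn from either reference.

```python
# Illustrative sketch only: the claim 2 traversal pattern, with a
# hypothetical classifier and hypothetical queue names.

def distribute(recv_queues, is_detnet, dbq, btq):
    """Walk each bound receive queue; route DetNet packets to that queue's
    DBQ and best-effort packets to its BTQ."""
    for qid, queue in recv_queues.items():   # traverse bound receive queues
        while queue:                          # traverse packets in the current queue
            pkt = queue.pop(0)                # the current packet
            if is_detnet(pkt):
                dbq[qid].append(pkt)          # DetNet flow -> DetNet-flow Buffering Queue
            else:
                btq[qid].append(pkt)          # best-effort flow -> Best-effort flow Queue
```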
Regarding claim 3, Kasichainula does not explicitly disclose the distribution thread is set with a corresponding thread polling flag, and the thread polling flag has been set to FALSE when the distribution thread is woken up; when detecting that there is a packet in the current queue, further setting the thread polling flag to TRUE to instruct to disable specified function that the distribution thread is set to; disabling the specified function is used to instruct to prevent the distribution thread from entering a sleep state; after all the packet receiving queues bound to the distribution thread have been traversed, the method further comprises: if the thread polling flag is TRUE, setting the thread polling flag to FALSE, and returning to the step of traversing the packet receiving queues bound to the distribution thread; if the thread polling flag is FALSE, instructing to enable the specified function, and enabling the specified function is used to instruct the distribution thread to wait for being woken up.
However, MEDAGLIANI teaches the distribution thread is set with a corresponding thread polling flag, and the thread polling flag has been set to FALSE when the distribution thread is woken up (i.e. calculating, for each entry node to the network among the one or more nodes, a label distribution sequence indicating one or more label distributions with associated activation timing information so as to reduce jitter or packet loss, [0026]); when detecting that there is a packet in the current queue, further setting the thread polling flag to TRUE to instruct to disable specified function that the distribution thread is set to (i.e. one queue has its gate opened for transmission and closed for reception, while the other two have their gates closed for transmission and opened for reception, [0081]); disabling the specified function is used to instruct to prevent the distribution thread from entering a sleep state; after all the packet receiving queues bound to the distribution thread have been traversed (i.e. the three queues reserved for time guaranteed traffic are activated in a round robin fashion (i.e., one queue has its gate opened for transmission and closed for reception, while the other two have their gates closed for transmission and opened for reception). During the period of activation, the transmitting queue sends all the packets stored during the previous reception phase, [0081]), the method further comprises: if the thread polling flag is TRUE, setting the thread polling flag to FALSE, and returning to the step of traversing the packet receiving queues bound to the distribution thread (i.e. for each flow traversing the node, the new labels are computed, [0168]; and the maximum upstream path duration for all flows traversing the node is computed, [0169]); if the thread polling flag is FALSE, instructing to enable the specified function, and enabling the specified function is used to instruct the distribution thread to wait for being woken up (i.e. one queue has its gate opened for transmission and closed for reception, while the other two have their gates closed for transmission and opened for reception, [0081]). Therefore, the limitations of claim 3 are rejected in the analysis of claim 1 above, and the claim is rejected on that basis.
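For illustration only, the polling-flag behavior recited in claim 3 (set the flag TRUE whenever work is seen, repeat the pass while it is TRUE, and sleep only after a pass that saw no packets) can be sketched as follows. All names are hypothetical and not taken from either reference.

```python
# Illustrative sketch only: the claim 3 polling-flag pattern. A TRUE flag
# means "a packet was seen, poll again"; a FALSE flag after a full pass
# means "wait to be woken". Names are hypothetical.

def run_distribution_pass(recv_queues, handle_packet):
    polling = False  # the flag has been set to FALSE when the thread is woken up
    while True:
        for queue in recv_queues:         # traverse all bound receive queues
            if queue:
                polling = True            # packet detected: suppress sleeping
            while queue:
                handle_packet(queue.pop(0))
        if polling:
            polling = False               # reset the flag and traverse again
            continue
        return "sleep"                    # no work seen in a full pass: wait to be woken
```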
Regarding claim 4, Kasichainula does not explicitly disclose when the distribution thread distributes the packets belonging to the DetNet flow in the packet receiving queue to the corresponding DBQ, the method further comprises: if a DFT wake-up flag of the DFT bound to the DBQ is set to FALSE, setting the DFT wake-up flag to TRUE, and waking up the DFT with the DFT wake-up flag being TRUE; and/or, after all the packet receiving queues bound to the distribution thread have been traversed, the method further comprises: for a DFT with a DFT wake-up flag being TRUE, wakening up the DFT, and setting the DFT wake-up flag of the DFT to FALSE.
However, MEDAGLIANI teaches when the distribution thread distributes the packets belonging to the DetNet flow in the packet receiving queue to the corresponding DBQ, the method further comprises: if a DFT wake-up flag of the DFT bound to the DBQ is set to FALSE, setting the DFT wake-up flag to TRUE, and waking up the DFT with the DFT wake-up flag being TRUE; or after all the packet receiving queues bound to the distribution thread have been traversed, the method further comprises: for a DFT with a DFT wake-up flag being TRUE, wakening up the DFT, and setting the DFT wake-up flag of the DFT to FALSE (i.e. for each flow traversing the node, the new labels are computed, [0168]; the maximum upstream path duration for all flows traversing the node is computed, [0169]; the control device calculates, for each entry node, a label distribution sequence indicating one or more label distributions (for instance, label stacks) with associated activation timing information so as to reduce jitter or packet loss, [0122]; and the sequence of label stacks is associated with timing information, wherein the timing information indicates an activation time (for instance, in terms of a number of cycles) of the label stacks. This facilitates ensuring that packets arriving at the node subject to reconfiguration at the time of reconfiguration are associated with a label stack taking the new resource repartition into account, [0126]). Therefore, the limitations of claim 4 are rejected in the analysis of claim 1 above, and the claim is rejected on that basis.
Regarding claim 5, Kasichainula teaches putting, by a woken-up DetNet forwarding thread DFT on the network device, packets in a DBQ bound to the DFT into a corresponding DetNet flow queue DTQ (i.e. express traffic with the highest priority (e.g., time-sensitive data with stringent QoS requirements) may be mapped to queues 5-7 (Q.sub.5-Q.sub.7), queues 5-7 may be respectively mapped to traffic classes 5-7 (TC.sub.5-TC.sub.7), and the data flow on traffic classes 5-7 (TC.sub.5-TC.sub.7) may be routed over virtual channel 1 (VC.sub.1), which is reserved for real-time traffic, [0045]; also see [0043]-[0060]) comprises: traversing DBQs bound to the DFT, determining a DBQ being traversed as a current DBQ, and checking whether the current DBQ has a packet (i.e. schedules packets for transmission based on the availability of credits for the respective queues. Once again, this method focuses on determining when to launch a packet on the line. The descriptors, however, are always prefetched regardless of the availability of credits for a queue—the scheduler will start initiating the descriptor prefetches as soon as the tail point of the queue is advanced and new descriptors become available to prefetch, [0026]); if there is a packet in the current DBQ, traversing packets in the current DBQ, determining a packet being traversed as a current DBQ packet (i.e. in order to monitor packets precisely at runtime, it is essential to tag these packets with a unique transaction ID. FIG. 8A illustrates an example of the transaction encoding scheme. This encoding scheme provides the ability to uniquely identify a packet based on queue or channel number, type of data (e.g., descriptor or data payload), and type of transaction (e.g., transmit (Tx) or receive (Rx) transaction or upstream read or write transaction), [0140]), and when invoking a preset packet duplication elimination and sorting PREOF function to identify that the current DBQ packet is not a duplicate packet, generating a packet output chain (i.e. since the descriptor prefetching is completely decoupled from the packet data transactions, the additional latency from intermixing descriptors with time-sensitive packet payload data transactions is eliminated, thus providing more deterministic packet transmissions with less jitter in a short cycle time, [0089]).
However, Kasichainula does not explicitly disclose performing packet encapsulation on each DBQ packet in the packet output chain, and putting the encapsulated DBQ packet into a DTQ corresponding to an out interface for forwarding the DBQ packet; after that, when there is still an untraversed packet in the DBQ, continuing to traverse an untraversed packet in the current DBQ and returning to the step of determining a packet being traversed as a current DBQ packet; if there is no packet in the current DBQ or there is no untraversed packet in the current DBQ, when there is still an untraversed DBQ in all DBQs bound to the DFT, continuing to traverse untraversed DBQs and returning to the step of determining the currently traversed DBQ as a current DBQ.
However, MEDAGLIANI teaches performing packet encapsulation on each DBQ packet in the packet output chain (i.e. the entry node may prepend a header to packets that contains a list of segments or labels, which represent instructions that are to be executed on subsequent nodes in the network. These instructions may be forwarding instructions, such as an instruction to forward a packet to a particular destination, interface and/or, in particular, to a particular queue of a port of a network node, [0084]), and putting the encapsulated DBQ packet into a DTQ corresponding to an out interface for forwarding the DBQ packet (i.e. a packet of demand di may be transmitted from source node 0 to node 2 in cycle 0, from node 2 to node 4 in cycle 1, from node 4 to node 7 in cycle 2 and from node 7 to the destination node 8 in cycle 3. The packet flow is illustrated as shaded areas and corresponding arrows. The described path is an exemplary transmission path of a packet of the flow according to demand di, [0100]); after that, when there is still an untraversed packet in the DBQ, continuing to traverse an untraversed packet in the current DBQ and returning to the step of determining a packet being traversed as a current DBQ packet (i.e. a set of labels, the label stack, is assigned to each packet of a flow in order to instruct the nodes on the path on the desired route of the packet. That is, the route of the flow is defined by the label stack. In particular, the label stack may indicate the series of queues to be utilized for each of the nodes A, B and C. Each intermediary node consumes its label, [0117]); if there is no packet in the current DBQ or there is no untraversed packet in the current DBQ, when there is still an untraversed DBQ in all DBQs bound to the DFT, continuing to traverse untraversed DBQs and returning to the step of determining the currently traversed DBQ as a current DBQ (i.e. transmitting data packets according to a current configuration indicating a resource distribution between guaranteed QoS and BE service classes, [0033]; and the transmitting queue sends all the packets stored during the previous reception phase. Any incoming packet can be inserted only in one of the two queues in reception mode, [0081]). Therefore, the limitations of claim 5 are rejected in the analysis of claim 1 above, and the claim is rejected on that basis.
Regarding claim 6, Kasichainula teaches the packet output chain is generated by the following steps: when identifying that the current DBQ packet is not an out-of-order packet by invoking the PREOF function, putting the current DBQ packet into the packet output chain (i.e. the order in which packets are transmitted from the respective queues 102 is determined based on a scheduling algorithm implemented by a scheduler 105, such as time based scheduling (TBS), credit based shaper (CBS) (IEEE 802.1Qav), or enhancements for scheduled traffic (EST), [0039]); when identifying that the current DBQ packet is an out-of-order packet by invoking the PREOF function, determining a service flow to which the current DBQ packet belongs (i.e. transactions can go on different virtual channels and at different times due to various priorities and depending on the launch time. Also, the completions may come in out of order based on which virtual channel (VC) the queue is mapped to, [0144]); if the current DBQ packet satisfies a condition, the condition referring to that the current DBQ packet has been successfully sequenced with at least one recorded packet which belongs to the service flow and has been processed by invoking the PREOF function within a preset time window, putting the current DBQ packet, at least one recorded packet which has been successfully sequenced with the current DBQ packet, and recorded packets which belong to the service flow and have been processed by invoking the PREOF function outside the preset time window into the packet output chain (i.e. this will force the MAC to fetch the descriptor for a packet at the time the packet is scheduled for transmission, which means the DMA operations for the descriptor and data payload of a packet will be sequential, thus eliminating the possibility of several descriptor requests getting ahead of a data payload request, [0085]; and if the packet contains express traffic, however, the flowchart then proceeds to block 310, where the DMA engine circuitry sends a descriptor request, and subsequently a data DMA request (e.g., once the descriptor is returned), for the packet over an express I/O interface, [0066]); and if the current DBQ packet does not satisfy the condition, putting the recorded packets which belong to the service flow and have been processed by invoking the PREOF function outside the preset time window into the packet output chain (i.e. NIC 400 includes multiple packet transmit queues 402, a MAC layer engine 404 for scheduling and transmitting packets over a network 430 via a transmission interface 417 (e.g., using scheduler 405, VLAN to TC mapping register 406, and time-aware arbiter 407), a mux 403 to select packets from the queues 402 to feed to the MAC layer engine 404, DMA engine(s) 408a-b to retrieve DMA descriptors and data for packet transmissions, [0092]).
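For illustration only, the duplicate-elimination half of a PREOF-style function, as the term is used in the claims (accept the first copy of each packet of a flow, discard later replicas), can be sketched with a sequence-number history as follows. The class name, field names, and window size are hypothetical assumptions, not drawn from either reference or from the application.

```python
# Illustrative sketch only: sequence-number based duplicate elimination of
# the general PREOF kind referenced in claim 6. The bounded history window
# and all names are hypothetical.
from collections import deque

class DuplicateEliminator:
    def __init__(self, history: int = 128):
        self.order = deque()   # (flow, seq) keys in arrival order
        self.seen = set()      # fast membership test for replicas
        self.history = history

    def accept(self, flow_id, seq: int) -> bool:
        """Return True for the first copy of (flow, seq); False for replicas."""
        key = (flow_id, seq)
        if key in self.seen:
            return False                        # duplicate replica: eliminate
        self.seen.add(key)
        self.order.append(key)
        if len(self.order) > self.history:      # age out the oldest record
            self.seen.discard(self.order.popleft())
        return True
```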
Regarding claim 7, Kasichainula does not explicitly disclose the DFT is set with a corresponding DFT polling flag, and the DFT polling flag has been set to FALSE when the DFT is woken up; when detecting that there is a packet in the current DBQ, the DFT polling flag is further set to TRUE, and the specified function that the DFT is set to is disabled, and disabling the specified function is used to instruct to prevent the DFT from entering a sleep state; after all the DBQs bound to the DFT have been traversed, the method further comprises: if the DFT polling flag is TRUE, setting the thread polling flag to FALSE, and returning to the step of traversing DFTs bound to the woken-up DBQ; if the DFT polling flag is FALSE, enabling the specified function, and enabling the specified function is used to instruct the DFT to wait for being woken up.
However, MEDAGLIANI teaches the DFT is set with a corresponding DFT polling flag, and the DFT polling flag has been set to FALSE when the DFT is woken up; when detecting that there is a packet in the current DBQ, the DFT polling flag is further set to TRUE, and the specified function that the DFT is set to is disabled, and disabling the specified function is used to instruct to prevent the DFT from entering a sleep state; after all the DBQs bound to the DFT have been traversed, the method further comprises: if the DFT polling flag is TRUE, setting the thread polling flag to FALSE (i.e. for each flow traversing the node, the new labels are computed, [0168], and the maximum upstream path duration for all flows traversing the node is computed, [0169]; and the control device calculates, for each entry node, a label distribution sequence indicating one or more label distributions (for instance, label stacks) with associated activation timing information so as to reduce jitter or packet loss, [0122]; and the sequence of label stacks is associated with timing information, wherein the timing information indicates an activation time (for instance, in terms of a number of cycles) of the label stacks. This facilitates ensuring that packets arriving at the node subject to reconfiguration at the time of reconfiguration are associated with a label stack taking the new resource repartition into account, [0126]), and returning to the step of traversing DFTs bound to the woken-up DBQ; if the DFT polling flag is FALSE, enabling the specified function, and enabling the specified function is used to instruct the DFT to wait for being woken up (i.e. one queue has its gate opened for transmission and closed for reception, while the other two have their gates closed for transmission and opened for reception, [0081]). Therefore, the limitations of claim 7 are rejected in the analysis of claim 1 above, and the claim is rejected on that basis.
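For clarity of the record, the polling-flag behavior recited in claim 7 can be sketched as follows. This is an illustrative reading of the claim language only; all names are hypothetical, and the code is not taken from, or attributed to, either Kasichainula or MEDAGLIANI.

```python
# Illustrative sketch of the claim 7 DFT polling-flag logic (hypothetical).
from collections import deque

def run_dft_pass(dbqs, dtq_put):
    """One wake-up pass of a DetNet Forwarding Thread (DFT).

    dbqs: list of deques (DetNet-flow Buffering Queues bound to this DFT)
    dtq_put: callback placing a packet into its DetNet flow Queue (DTQ)
    Returns True if the DFT should reset the flag and traverse again,
    False if it should re-enable its wait function and await wake-up.
    """
    dft_polling_flag = False      # flag has been set to FALSE at wake-up
    for dbq in dbqs:              # traverse all DBQs bound to the DFT
        while dbq:                # a packet is detected in the current DBQ
            dft_polling_flag = True   # set flag TRUE; sleeping is prevented
            dtq_put(dbq.popleft())    # move the packet toward its DTQ
    # after all DBQs bound to the DFT have been traversed:
    return dft_polling_flag
```

Under this assumed reading, a TRUE flag after a full traversal triggers another traversal, while a FALSE flag re-enables the specified wait function.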
Regarding claim 8, Kasichainula does not explicitly disclose after the DFT puts the DBQ packet into a DTQ corresponding to an out interface for forwarding the DBQ packet, the method further comprises: if a CFT wake-up flag set for the CFT bound to the DTQ is FALSE, setting the CFT wake-up flag to TRUE, and waking up the CFT with the CFT wake-up flag being TRUE; after all the DBQs bound to the DFT have been traversed, the method further comprises: for a CFT with a CFT wake-up flag being TRUE, waking up the CFT, and setting the CFT wake-up flag of the CFT to FALSE.
However, MEDAGLIANI teaches after the DFT puts the DBQ packet into a DTQ corresponding to an out interface for forwarding the DBQ packet, the method further comprises: if a CFT wake-up flag set for the CFT bound to the DTQ is FALSE, setting the CFT wake-up flag to TRUE, and waking up the CFT with the CFT wake-up flag being TRUE (i.e. for each flow traversing the node, the new labels are computed, [0168], and the maximum upstream path duration for all flows traversing the node is computed, [0169]; and the control device calculates, for each entry node, a label distribution sequence indicating one or more label distributions (for instance, label stacks) with associated activation timing information so as to reduce jitter or packet loss, [0122]; and the sequence of label stacks is associated with timing information, wherein the timing information indicates an activation time (for instance, in terms of a number of cycles) of the label stacks. This facilitates ensuring that packets arriving at the node subject to reconfiguration at the time of reconfiguration are associated with a label stack taking the new resource repartition into account, [0126]); after all the DBQs bound to the DFT have been traversed, the method further comprises: for a CFT with a CFT wake-up flag being TRUE, waking up the CFT, and setting the CFT wake-up flag of the CFT to FALSE (i.e. The three queues reserved for time guaranteed traffic are activated in a round robin fashion (i.e., one queue has its gate opened for transmission and closed for reception, while the other two have their gates closed for transmission and opened for reception). During the period of activation, the transmitting queue sends all the packets stored during the previous reception phase, [0081]). Therefore, the limitations of claim 8 are rejected in the analysis of claim 1 above, and the claim is rejected on that basis.
Regarding claim 9, Kasichainula does not explicitly disclose putting, by a woken-up cycle forwarding thread CFT on the network device, packets in a DTQ bound to the CFT into a corresponding cyclic specific queue CSQ comprises: traversing DTQs bound to the CFT, determining a DTQ being traversed as a current DTQ, and checking whether there is a packet in the current DTQ, if there is a packet in the current DTQ, traversing packets in the current DTQ, determining a packet being traversed as a current DTQ packet, and determining a target CSQ corresponding to the current DTQ packet according to a cycle specified queue Cycle parameter carried by the current DTQ packet, if the target CSQ is between the SQ and an RQ, putting the current DTQ packet into the RQ, if the target CSQ is between a TQ and the SQ, putting the current DTQ packet into the TQ, if the target CSQ is between the RQ and the TQ, putting the current DTQ packet into the target CSQ; wherein the RQ is a CSQ currently pointed by a receiving queue pointer RQP corresponding to the CFT; the TQ is a CSQ currently pointed by a tolerating queue pointer TQP corresponding to the CFT; after that, when there is an untraversed packet in the current DTQ, traversing an untraversed packet, and returning to the step of determining a packet being traversed as a current DTQ packet; if there is no packet in the current DTQ, or there is no untraversed packet in the current queue, when there is still an untraversed DTQ in all DTQs bound to the CFT, continuing to traverse an untraversed DTQ, and returning to determining a DTQ being traversed as a current DTQ.
However, MEDAGLIANI teaches putting, by a woken-up cycle forwarding thread CFT on the device, packets in a DTQ bound to the CFT into a corresponding cyclic specific queue CSQ comprises: traversing DTQs bound to the CFT, determining a DTQ being traversed as a current DTQ, and checking whether there is a packet in the current DTQ (i.e. traffic is scheduled according to such a configuration which predefines for each flow all the times when packets pertaining to the flow that are stored in a queue in a node are to be transmitted, [0074]), if there is a packet in the current DTQ, traversing packets in the current DTQ, determining a packet being traversed as a current DTQ packet (i.e. Given the packet size and the packet transmission frequency (which may be referred to, in a more general manner, as the packet transmission pattern), it is possible to map in which queue and on which transmission port the packet will be sent, by respecting the capacity of each port, [0083]), and determining a target CSQ corresponding to the current DTQ packet according to a cycle specified queue Cycle parameter carried by the current DTQ packet, if the target CSQ is between the SQ and an RQ, putting the current DTQ packet into the RQ, if the target CSQ is between a TQ and the SQ, putting the current DTQ packet into the TQ, if the target CSQ is between the RQ and the TQ, putting the current DTQ packet into the target CSQ (i.e. the bandwidth partitioning may relate to partitioning of a cycle period in a time-sensitive network between QoS and BE traffic, wherein assigning different amounts of transmission time to either time-critical QoS or BE traffic facilitates for accepting additional demands for QoS traffic or providing resources not used for QoS traffic to BE traffic, [0014] and resource repartition between traffic with guaranteed QoS and BE traffic according to a CSQF embodiment. FIG. 3 shows details on queue distribution and FIG. 4 illustrates bandwidth repartition between traffic with guaranteed quality of service and best-effort service, [0080]); wherein the RQ is a CSQ currently pointed by a receiving queue pointer RQP corresponding to the CFT; the TQ is a CSQ currently pointed by a tolerating queue pointer TQP corresponding to the CFT (i.e. three queues are reserved for traffic with guaranteed QoS, and five queues are reserved for BE traffic. The three queues reserved for time guaranteed traffic are activated in a round robin fashion (i.e., one queue has its gate opened for transmission and closed for reception, while the other two have their gates closed for transmission and opened for reception), [0081]); after that, when there is an untraversed packet in the current DTQ, traversing an untraversed packet (i.e. A set of labels, the label stack, is assigned to each packet of a flow in order to instruct the nodes on the path on the desired route of the packet. That is, the route of the flow is defined by the label stack. In particular, the label stack may indicate the series of queues to be utilized for each of the nodes A, B and C. Each intermediary node consumes its label, [0117]), and returning to the step of determining a packet being traversed as a current DTQ packet; if there is no packet in the current DTQ, or there is no untraversed packet in the current queue, when there is still an untraversed DTQ in all DTQs bound to the CFT, continuing to traverse an untraversed DTQ, and returning to determining a DTQ being traversed as a current DTQ (i.e. transmitting data packets according to a current configuration indicating a resource distribution between guaranteed QoS and BE service classes, [0033] and the transmitting queue sends all the packets stored during the previous reception phase. Any incoming packet can be inserted only in one of the two queues in reception mode, [0081]).
Therefore, the limitations of claim 9 are rejected in the analysis of claim 1 above, and the claim is rejected on that basis.
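For clarity of the record, the claim 9 placement rule over the ring of cyclic specific queues can be sketched as follows. The half-open "between" semantics on the ring are an assumed reading of the claim; all names are hypothetical and the code is not drawn from either cited reference.

```python
# Hypothetical sketch of the claim 9 placement rule over d CSQs on a ring.
# sq, rq, tq are the CSQ indices currently pointed to by SQP, RQP and TQP.

def between(a, b, x, d):
    """True if CSQ index x lies strictly between indices a and b,
    walking forward around a ring of d queues (assumed reading of
    'between' in the claim)."""
    span = (b - a) % d
    return 0 < (x - a) % d < span

def place_packet(cycle, sq, rq, tq, d):
    """Return the index of the CSQ the current DTQ packet is placed into."""
    target = cycle % d                 # target CSQ from the Cycle parameter
    if between(sq, rq, target, d):
        return rq                      # between SQ and RQ -> put into RQ
    if between(tq, sq, target, d):
        return tq                      # between TQ and SQ -> put into TQ
    return target                      # between RQ and TQ -> target CSQ
```

For example, with d=8, SQP at 0, RQP at 2 and TQP at 6, a target of 1 lands in the RQ, a target of 7 lands in the TQ, and a target of 4 goes into its own CSQ.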
Regarding claim 10, Kasichainula does not explicitly disclose putting, by a woken-up cycle forwarding thread CFT on the network device, packets in a DTQ bound to the CFT into a corresponding cyclic specific queue CSQ is performed when the SQ is empty.
However, MEDAGLIANI teaches putting, by a woken-up cycle forwarding thread CFT on the device, packets in a DTQ bound to the CFT into a corresponding cyclic specific queue CSQ is performed when the SQ is empty (i.e. Any incoming packet can be inserted only in one of the two queues in reception mode. The specification of which queue a packet is inserted into is referred to as scheduling, [0081]). Therefore, the limitations of claim 10 are rejected in the analysis of claim 1 above, and the claim is rejected on that basis.
Regarding claim 11, Kasichainula does not explicitly disclose selecting, by the CFT, a packet from a sending queue SQ and forwarding the packet through an out interface used for forwarding packets, comprises: when determining that a packet sending cycle has been updated, readjusting a cyclic specific queue pointer CSQP corresponding to the CFT, wherein the CSQP at least comprises a SQP; traversing packets in a SQ pointed by the adjusted SQP; determining a packet being traversed as a current SQ packet; and forwarding the current SQ packet by invoking an out interface for forwarding the current SQ packet; after that, when there is still an untraversed packet in the SQ, traversing an untraversed packet and returning to the step of determining a packet being traversed as a current SQ packet; when determining that the packet sending cycle has not been updated, traversing packets in the SQ; determining a packet being traversed as a current SQ packet; and forwarding the current SQ packet by invoking an out interface for forwarding the current SQ packet; after that, when there is still an untraversed packet in the SQ, traversing an untraversed packet and returning to the step of determining a packet being traversed as a current SQ packet.
However, MEDAGLIANI teaches selecting, by the CFT, a packet from a sending queue SQ and forwarding the packet through an out interface used for forwarding packets, comprises: when determining that a packet sending cycle has been updated, readjusting a cyclic specific queue pointer CSQP corresponding to the CFT, wherein the CSQP at least comprises a SQP; traversing packets in a SQ pointed by the adjusted SQP (i.e. Once the DetNet node receives the new configurations, it adapts its resource distribution, for instance, its queue distribution, by modifying the time-critical and BE queue repartition, and its bandwidth distribution by changing the weights of the internal schedulers used to switch between transmissions of time-critical and BE traffic, [0135]); determining a packet being traversed as a current SQ packet; and forwarding the current SQ packet by invoking an out interface for forwarding the current SQ packet (i.e. traffic is scheduled according to such a configuration which predefines for each flow all the times when packets pertaining to the flow that are stored in a queue in a node are to be transmitted, [0074]); after that, when there is still an untraversed packet in the SQ, traversing an untraversed packet (i.e. A set of labels, the label stack, is assigned to each packet of a flow in order to instruct the nodes on the path on the desired route of the packet. That is, the route of the flow is defined by the label stack. In particular, the label stack may indicate the series of queues to be utilized for each of the nodes A, B and C. Each intermediary node consumes its label, [0117]) and returning to the step of determining a packet being traversed as a current SQ packet; when determining that the packet sending cycle has not been updated, traversing packets in the SQ; determining a packet being traversed as a current SQ packet (i.e. Once the DetNet node receives the new configurations, it adapts its resource distribution, for instance, its queue distribution, by modifying the time-critical and BE queue repartition, and its bandwidth distribution by changing the weights of the internal schedulers used to switch between transmissions of time-critical and BE traffic, [0135]); and forwarding the current SQ packet by invoking an out interface for forwarding the current SQ packet (i.e. the node device performs its dedicated tasks in transmitting data packets within the network according to its current configuration, [0146]); after that, when there is still an untraversed packet in the SQ, traversing an untraversed packet and returning to the step of determining a packet being traversed as a current SQ packet (i.e. transmitting data packets according to a current configuration indicating a resource distribution between guaranteed QoS and BE service classes, [0033] and the transmitting queue sends all the packets stored during the previous reception phase. Any incoming packet can be inserted only in one of the two queues in reception mode, [0081]). Therefore, the limitations of claim 11 are rejected in the analysis of claim 1 above, and the claim is rejected on that basis.
Regarding claim 12, Kasichainula teaches wherein the CSQP further comprises a receiving queue pointer RQP and a tolerating queue pointer TQP; the RQP points to a CSQ which is a receiving queue RQ, and the TQP points to a CSQ which is a tolerating queue TQ (i.e. Since the descriptor holds the address pointer to the packet payload data that needs to be transmitted, the descriptor needs to be fetched prior to the generation of a DMA data request for the packet payload, [0040] and the MAC layer makes a burst request that is equal to the head pointer minus the tail pointer. To overcome this, in the test procedure prefetching is turned off and the tail pointer is incremented just one descriptor at a time, [0125]); wherein a sequence number of the CSQ pointed by the readjusted SQP is determined by a remainder from a modulo operation on a number d of CSQs bound to the CFT according to a current processing cycle variable CFT_Jiffies set for the CFT (i.e. the subtractor logic 617 reads timestamps t.sub.1 through t.sub.5 from the FIFO of the monitor 616 for a given channel and computes the transmit latency and transmit jitter based on the control bits and the equations noted above, [0148]); a sequence number of the CSQ pointed by the readjusted RQP is determined according to a result of performing a modulo operation on the d according to a sum of the remainder and a specified jitter cycle number Jitter; a sequence number of the CSQ pointed by the readjusted TQP is determined according to the following formula: (d-1+Rem-Jitter) mod d, where Rem is the remainder (i.e. Based on the CTL bit in the descriptor (e.g., via control register 615), the monitor logic 616 calculates the packet latency (e.g., using subtractor 617) based on the following equation: Packet Transmit Latency (PTL)=CTL*(descriptor(t.sub.2−t.sub.1)+parsing(t.sub.3−t.sub.2))+data(t.sub.4−t.sub.3)+xmt(t.sub.5−t.sub.4); the monitoring logic 616 also computes launch time jitter by subtracting the scheduled launch time from the actual launch time (t.sub.5) (e.g., using subtractor 617), as shown by the following equation: Launch Time Jitter (LTJ)=actual launch time (t.sub.5)−scheduled launch time (t.sub.5′), [0146]-[0147]).
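For clarity of the record, the pointer arithmetic recited in claim 12 can be restated as a worked sketch. This merely evaluates the recited modulo expressions with hypothetical variable names; it is not code from either cited reference.

```python
# Worked sketch of the claim 12 pointer readjustment (hypothetical names).
# d: number of CSQs bound to the CFT; jitter: specified jitter cycle number;
# cft_jiffies: current processing cycle variable CFT_Jiffies.

def readjust_pointers(cft_jiffies, d, jitter):
    rem = cft_jiffies % d             # Rem: remainder of CFT_Jiffies mod d
    sqp = rem                         # SQP sequence number = Rem
    rqp = (rem + jitter) % d          # RQP: (Rem + Jitter) mod d
    tqp = (d - 1 + rem - jitter) % d  # TQP: (d-1+Rem-Jitter) mod d
    return sqp, rqp, tqp
```

For example, with CFT_Jiffies = 10, d = 8 and Jitter = 1, Rem = 2, so the readjusted SQP, RQP and TQP point to CSQs 2, 3 and 0 respectively.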
Regarding claim 13, Kasichainula does not explicitly disclose after the woken-up cycle forwarding thread CFT on the network device has put all packets in all DTQs bound to the CFT into a corresponding cyclic specific queue CSQ, the method further comprises: detecting an event under which a packet needs to be forwarded in time in the CSQ bound to the CFT; if so, returning to the step of selecting, by the CFT, a packet from a sending queue SQ and forwarding the packet through an out interface used for forwarding packets; and if not, waiting to be woken up again.
However, MEDAGLIANI teaches after the woken-up cycle forwarding thread CFT on the device has put all packets in all DTQs bound to the CFT into a corresponding cyclic specific queue CSQ, the method further comprises: detecting an event under which a packet needs to be forwarded in time in the CSQ bound to the CFT (i.e. This decision may be performed periodically on the basis of statistics or triggered on particular events, for instance, a request coming from a node. For instance, a new queue distribution and/or a bandwidth ratio may be determined according to a current and/or future demand for QoS service, [0139]); if so, returning to the step of selecting, by the CFT, a packet from a sending queue SQ and forwarding the packet through an out interface used for forwarding packets; and if not, waiting to be woken up again (i.e. the transmitting queue sends all the packets stored during the previous reception phase. Any incoming packet can be inserted only in one of the two queues in reception mode. The specification of which queue a packet is inserted into is referred to as scheduling, [0081]). Therefore, the limitations of claim 13 are rejected in the analysis of claim 1 above, and the claim is rejected on that basis.
Regarding claim 14, Kasichainula teaches wherein the event under which a packet needs to be forwarded in time is determined according to a current processing cycle variable CFT_Jiffies set for the CFT and a current packet receiving cycle variable CFT_Rcv_Jiffies, when a difference between a value of CFT_Jiffies and a value of CFT_Rcv_Jiffies is less than or equal to a specified difference, determining that there is an event under which the packet needs to be forwarded in time (i.e. the MAC layer makes a burst request that is equal to the head pointer minus the tail pointer. To overcome this, in the test procedure prefetching is turned off and the tail pointer is incremented just one descriptor at a time, [0125] and the packet transmit time is calculated based on the time difference between the tail pointer update (e.g., based on the TSC clock) and the transmission via the physical layer/GMII interface (e.g., based on the PTP timestamp). Similarly, the jitter is calculated by subtracting the scheduled launch time from the actual launch time (e.g., based on the TSC clock and PTP clock, respectively), [0123]), otherwise, determining that there is no event under which the packet needs to be forwarded in time (i.e. detect events or changes in its environment and send the information (sensor data) about the detected events to some other device, module, or subsystem, [0211]); wherein when the CFT is woken up, or when detecting that there is an event under which a packet needs to be forwarded in time in the CSQ bound to the CFT, the value of the CFT_Jiffies is updated to a set current value of CSQF_Jiffies (i.e. then sends the packet transmit latency and launch time jitter values to the MAC layer 604, which internally updates the descriptor fields by closing the descriptor at a later time, [0149]), where CSQF_Jiffies is used to indicate implementing specified cyclic queuing and forwarding cycle counting based on a segment routing; when checking that there is a packet in the current DTQ, the value of CFT_Rcv_Jiffies is updated to the value of CFT_Jiffies (i.e. When the completion arrives at the NIC 600, the monitor logic 616 identifies the corresponding queue 0 descriptor completion with RID with the same value of 5′b00010 and takes another snapshot of the PTP timestamp at time ‘t.sub.2’ and stores it in a first-in first-out (FIFO) buffer, and when the completion corresponding to the channel 0 data payload arrives at NIC 600, the monitoring block 616 identifies this transaction with the same ID value of 5′b00011 and takes a PTP timestamp at time ‘t.sub.4’ and pushes the timestamp into a FIFO, [0142]).
Regarding claim 15, Kasichainula teaches whether the packet sending cycle is updated is determined by: checking whether the value of the current processing cycle variable CFT_Jiffies set for the CFT is equal to the value of a historical cycle variable CFT_Prev_Jiffies (i.e. The monitoring logic 616 then sends the packet transmit latency and launch time jitter values to the MAC layer 604, which internally updates the descriptor fields by closing the descriptor at a later time, [0149]), if so, determining that the packet sending cycle has not been updated, if not, determining that the packet sending cycle has been updated (i.e. the proposed solution provides more reliable data and can guarantee up to nine 9's of accuracy (e.g., 99.9999999% accuracy), which is required for very low cycle time and hard real-time industrial applications, [0133]); after readjusting the cyclic specific queue pointer CSQP corresponding to the CFT, the method further comprises: updating the value of the CFT_Prev_Jiffies to the current value of the CFT_Jiffies (i.e. calculate the packet transmit latency and jitter based on timestamps t.sub.1-t.sub.5, and then to block 1014 to update the descriptor fields with the packet latency and jitter calculations, [0158]).
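For clarity of the record, the cycle-variable checks recited in claims 14 and 15 can be sketched as follows. Variable names mirror the claim terms; this is an assumed reading of the recited comparisons, with hypothetical names, and is not code from either cited reference.

```python
# Hypothetical sketch of the claims 14-15 cycle-variable bookkeeping.

def needs_timely_forwarding(cft_jiffies, cft_rcv_jiffies, max_diff):
    """Claim 14: an event requiring timely forwarding exists when the
    difference between CFT_Jiffies and CFT_Rcv_Jiffies is less than or
    equal to the specified difference."""
    return (cft_jiffies - cft_rcv_jiffies) <= max_diff

def sending_cycle_updated(cft_jiffies, cft_prev_jiffies):
    """Claim 15: the packet sending cycle has been updated exactly when
    CFT_Jiffies no longer equals the historical CFT_Prev_Jiffies."""
    return cft_jiffies != cft_prev_jiffies
```

After the CSQP readjustment of claim 15, CFT_Prev_Jiffies would then be overwritten with the current CFT_Jiffies value so the next check compares against the new cycle.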
Regarding claims 16-20, the limitations of claims 16-20 are similar to the limitations of claims 1-5. Kasichainula further teaches an electronic device, comprising: a processor (i.e. The platform 1400 includes processor circuitry 1402. The processor circuitry 1402 includes circuitry such as, but not limited to one or more processor cores and one or more of cache memory, [0179]) and a machine-readable storage medium; wherein the machine-readable storage medium stores machine-executable instructions executable by the processor (i.e. programming instructions (or data to create the instructions) may be disposed on computer-readable transitory storage media, [0188]). Therefore, the limitations of claims 16-20 are rejected in the analysis of claims 1-5 above, and the claims are rejected on that basis.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AYELE F WOLDEMARIAM whose telephone number is (571)270-5196. The examiner can normally be reached M-F 8:30AM-5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joon H Hwang can be reached at 571-272-4036. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AW/
AYELE F. WOLDEMARIAM
Examiner
Art Unit 2447
2/26/2026
/SURAJ M JOSHI/Primary Examiner, Art Unit 2447