Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is in response to communication filed on 10/24/2025.
Claims 1, 3-8, 10-15 and 17-20 are pending.
Claims 1, 3-6, 8, 10-13, 15 and 17-20 have been amended.
Claims 2, 9 and 16 have been canceled.
Response to Arguments
Applicant’s argument(s) filed on 10/24/2025 with respect to claim(s) 1, 3-8, 10-15 and 17-20 have been considered but are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
1. Claim(s) 1, 3, 7-8, 10, 14-15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Masputra (US 20130204965 A1) in view of SINGH (US 20190199646 A1).
With respect to independent claims:
Regarding claim(s) 1, Masputra teaches a packet control system comprising at least one processor, the at least one processor carrying out: (Masputra, [0233], the data processing system 2400 includes the processing system 2420, which may include one or more microprocessors and/or a system on an integrated circuit.)
an acquisition process of acquiring packets which are respectively associated with sending timings; (Masputra, [0010], receiving a packet to be transmitted from the client device to a destination over a network socket; classifying the packet according to an implicit packet service classification provided by the networking layer or a user-specific packet service classification explicitly specified by an application, the implicit classification having a default traffic classification queue and default scheduler associated therewith and the user-specified classification having a user-specified traffic classification and user-specified scheduler associated therewith; and enqueuing and scheduling the packet for transmission according to either the default or the user-specific traffic classifications. [0050], a packet scheduler 209 for scheduling packet transmission. [examiner notes: the user-specified scheduler is interpreted as providing the sending timings.])
an enqueuing process of enqueuing the packets into a plurality of queues in accordance with the sending timings which are associated with the packets; and (Masputra, [0050], per-hop queuing and forwarding behavior in wireless networking refer to the process by which individual network nodes, such as routers or access points, manage the transmission of data packets as they traverse the network. This process involves queuing incoming packets, making forwarding decisions, and determining the order in which packets are transmitted out of the node's interfaces. At each hop or network node, packets are typically placed into different queues based on their priority, traffic class, or other criteria defined by the network's quality of service (QOS) policies. This queuing process allows the node to prioritize certain packets over others, ensuring that critical traffic, such as real-time applications or voice calls, receives preferential treatment in terms of bandwidth allocation and transmission delay. [0051], per-hop queuing mechanisms can vary depending on the specific QoS policies implemented by the network. Common queuing techniques include First-In-First-Out (FIFO), where packets are transmitted in the order they were received, and priority queuing, where higher-priority packets are dequeued and transmitted ahead of lower-priority packets. Other advanced queuing schemes, such as Weighted Fair Queuing (WFQ) or Deficit Round Robin (DRR), provide more granular control over packet scheduling and bandwidth allocation, allowing network administrators to prioritize traffic based on its importance and requirements. Switches and routers also employ active queue management (AQM) to drop certain packets before queuing buffers become full to avoid congestion and improve end-to-end latency.)
a first sending process of dequeuing the packets from the plurality of queues in accordance with dequeue timings occurring at specified intervals for the plurality of queues and sending the packets to a network; and (Masputra, [0045], regardless of how the packet is queued, it may be dequeued differently depending on whether the driver- or network stack-managed scheduling. For driver-managed scheduling, determined at 254, the driver performs a dequeue operation from a specified service class at 255. For example, if the driver is implementing 802.11n, then it may choose to perform the scheduling using the four service classes defined by WMM (see, e.g., FIG. 7 illustrating the 10:4 mapping between service classes and queue instances). Alternatively, for other network interface types (e.g., Ethernet, 3G, etc.) scheduling may be performed at the network layer (see, e.g., FIG. 6 illustrating a 1:1 mapping between service classes and queue instances). Thus, at 260, the network layer performs the dequeue operation from the selected service class. At 270, the packet is provided to the driver layer which transmits the packet at 271. [0186], in the driver managed model all of the queues may be set up as with the network stack managed model, but the scheduling is performed by the driver scheduler 160. As such, the driver-based scheduler will then request a number of packets for each dequeue operation for a particular class based on priority (i.e., using the 4 classes for WMM). [examiner notes: the scheduling is interpreted as providing the dequeue timings.])
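Masputra's cited Deficit Round Robin (DRR) dequeue behavior ([0051]) can be illustrated with a brief sketch; the quantum values, packet sizes, and single-pass structure here are hypothetical simplifications supplied by the editor, not taken from the reference:

```python
from collections import deque

def drr_dequeue(queues, quanta, deficits):
    """One Deficit Round Robin round: each backlogged queue's deficit
    grows by its quantum, then packets are sent while they fit."""
    sent = []
    for i, q in enumerate(queues):
        if not q:
            deficits[i] = 0          # idle queues keep no credit
            continue
        deficits[i] += quanta[i]
        while q and q[0] <= deficits[i]:
            size = q.popleft()
            deficits[i] -= size
            sent.append((i, size))   # (queue index, bytes sent)
    return sent

# Queues hold packet sizes in bytes; queue 0 has the larger quantum.
queues = [deque([300, 300]), deque([500])]
deficits = [0, 0]
print(drr_dequeue(queues, quanta=[600, 400], deficits=deficits))
# → [(0, 300), (0, 300)]  (the 500-byte packet waits for more credit)
```

In DRR each queue accrues transmission credit per round and may only send packets that fit its accumulated deficit, which bounds the bandwidth each class can consume.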
Masputra does not teach a second sending process of, based on a packet queued in a first queue not being sent to the network at a corresponding dequeue time, dequeuing the packet in the first queue at a dequeue timing of a second queue so that the packet in the first queue is sent before a packet in the second queue, the second queue being different than the first queue, and the dequeue timing of the second queue being next to the dequeue timing of the first queue.
SINGH however in the same field of computer networking teaches a second sending process of, based on a packet queued in a first queue not being sent to the network at a corresponding dequeue time, dequeuing the packet in the first queue at a dequeue timing of a second queue so that the packet in the first queue is sent before a packet in the second queue, the second queue being different than the first queue, and the dequeue timing of the second queue being next to the dequeue timing of the first queue. (SINGH, [0033], transmit bandwidth can be divided among all traffic classes 0 to n but a highest traffic class 0 can be allocated a highest amount of bandwidth, a next highest class (e.g., class 1) is allocated a next highest amount of bandwidth, and so forth. Within a traffic class, a highest amount of bandwidth can be allocated to highest priority user ID 0, a next highest amount of bandwidth can be allocated to highest priority user ID 1, and so forth. For example, a scheduling instance 308-0 associated with a highest priority ring 306-0 can scan a highest priority ring 306-0 with a variety of user IDs 0-10. Scheduling instance 308-0 can allocate packets associated with user IDs 0-4 but lacks bandwidth for user IDs 5-10. In that case, scheduling instance 308-0 can mark packets associated with user IDs 5-10 for a next time slot. Scheduling instance 308-0 can indicate in metadata that packet is transmission ready or a candidate for transmission in a next time slot or dropped. For users that have exceeded bandwidth needs or do not have sufficient bandwidth, scheduling instance 308-0 can set metadata state for those some packets that can be transmitted to transmit in next time slot or drop packets. [0040], FIG.4A; a bandwidth deficit can correspond to insufficient bandwidth to transmit packets available for transmission for the current traffic class level. 
Packets not afforded transmit bandwidth allocation can be transmitted in a later round or timeslot or dropped, depending on an applied congestion management policy for the traffic class or user priority level. At 406, a determination is made as to whether any traffic class levels are remaining in the current round or timeslot for which transmission scheduling has not taken place. If any priority levels are remaining in the current round or timeslot for which transmission scheduling has not taken place, then 408 can follow. If no priority levels are remaining in the current round or timeslot for which transmission scheduling has not taken place, then 412 can follow. [examiner notes: Masputra teaches scheduling packets for transmission/dequeuing from multiple queues based on First-In-First-Out (FIFO), priority queuing, Weighted Fair Queuing (WFQ) or Deficit Round Robin (DRR) at [0051]. SINGH teaches rescheduling a packet for transmission/dequeuing if the packet cannot be transmitted due to insufficient bandwidth.])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Masputra by incorporating the teachings of SINGH. The motivation/suggestion would have been that there is a need to scale the Quality of Service (QoS) traffic management mechanism across multiple cores, while potentially achieving throughput goals, latency goals, packet ordering goals, and individual subscriber SLAs (SINGH, [0019]).
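SINGH's next-timeslot rescheduling, as cited above, amounts to carrying over packets that exceed the current slot's bandwidth allocation. A minimal sketch follows; the byte budget, packet sizes, and strict-priority ordering are editor-supplied assumptions for illustration only:

```python
def schedule_slot(classes, budget):
    """Allocate a per-timeslot byte budget across traffic classes in
    strict priority order; packets that do not fit are carried over."""
    transmitted, carryover = [], []
    for cls in classes:                # classes[0] = highest priority
        for pkt in cls:
            if pkt <= budget:
                budget -= pkt
                transmitted.append(pkt)
            else:
                carryover.append(pkt)  # candidate for the next timeslot
    return transmitted, carryover

tx, held = schedule_slot([[400, 400], [500]], budget=1000)
print(tx, held)  # → [400, 400] [500]
```

The held packets correspond to those SINGH marks in metadata as candidates for transmission in the next time slot rather than being sent at their original scheduling opportunity.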
Claims 8 and 15 are substantially similar to claim 1, and are thus rejected under substantially the same rationale.
With respect to dependent claims:
Regarding claim(s) 3, the packet control system as set forth in claim 1,
Masputra-SINGH teach wherein: in the enqueuing process, the at least one processor determines, based on a number of enqueued packets enqueued in a queue corresponding to a sending timing associated with the packet, whether or not to enqueue the packet into the queue.
(Masputra, [0092], a queuing discipline or algorithm module manages a single instance of a class queue; a queue simply consists of one or more packets (mbufs). The algorithm is responsible for determining whether or not a packet should be enqueued or dropped. [0190], Thus, the number of packets currently queued for each flow is known and is used to perform flow control for that flow. In one embodiment, if a particular flow has packets queued beyond a specified threshold (as indicated by counter C>=FADV THRESHOLD in FIG. 10A), then a probability value for that flow is incremented by some interval. Once the probability reaches its limit (e.g., 1), this indicates that there are too many packets (i.e., the application is sending too fast) and further packets for this flow are dropped.)
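The enqueue-or-drop decision Masputra describes at [0092] and [0190] can be sketched as threshold-gated probabilistic dropping. The threshold value and probability increment below are assumed for illustration; Masputra's per-flow FADV THRESHOLD logic is more involved:

```python
import random

FADV_THRESHOLD = 4   # assumed per-flow backlog threshold

def maybe_enqueue(queue, pkt, drop_prob):
    """Enqueue unless the flow's backlog has reached the threshold;
    past it, the drop probability grows until packets are refused."""
    if len(queue) >= FADV_THRESHOLD:
        drop_prob = min(1.0, drop_prob + 0.25)  # assumed increment
        if random.random() < drop_prob:
            return drop_prob, False             # packet dropped
    queue.append(pkt)
    return drop_prob, True

backlog = []
prob, enqueued = maybe_enqueue(backlog, "pkt-1", 0.0)
print(enqueued)  # → True (backlog is below the threshold)
```

Once the probability reaches its cap, every further packet for the flow is refused, which models "the application is sending too fast" in the cited passage.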
Regarding claim(s) 7, the packet control system as set forth in claim 1,
Masputra-SINGH teach wherein: the at least one processor further carries out a calculation process of calculating, with a calculation technique corresponding to an application type of a packet which has been acquired in the acquisition process, a sending timing of the packet. (Masputra, [0048], FIGS. 3A-B illustrate different network driver models in accordance with different embodiments of the invention. In FIG. 3A, the application 301 sends packets to be transmitted to the network stack 302 (1) which then sends the network packets to the IO networking interface 303 of the driver (2). In one embodiment, the IO networking interface 303 classifies the packet and places the classified packet in an appropriate IO Output Queue 304 based on the packet classification (3). [0116], in addition to marking the queue/buffer (mbuf) with a MBUF_SC value, in one embodiment, the module performing packet classification 202 also associates one or more tags with the packet, in order to assist the rest of the system in identifying the type or flow of the packet. In one embodiment, these tags reside within the built-in pf_mtag sub-structure of the mbuf, and are set regardless of how the classification is performed (explicit or implicit).)
Claims 10 and 17 are substantially similar to claim 3, and are thus rejected under substantially the same rationale.
Claim 14 is substantially similar to claim 7, and is thus rejected under substantially the same rationale.
2. Claim(s) 4, 5, 11, 12, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Masputra in view of SINGH, further in view of Tuffs (US 20190235932 A1).
Regarding claim(s) 4, the packet control system as set forth in claim 3,
Masputra-SINGH teach wherein: in the enqueuing process, the at least one processor carries out dequeuing at a dequeue timing specified for each of the plurality of queues; (Masputra, [0045], regardless of how the packet is queued, it may be dequeued differently depending on whether the driver- or network stack-managed scheduling. For driver-managed scheduling, determined at 254, the driver performs a dequeue operation from a specified service class at 255. For example, if the driver is implementing 802.11n, then it may choose to perform the scheduling using the four service classes defined by WMM (see, e.g., FIG. 7 illustrating the 10:4 mapping between service classes and queue instances). Alternatively, for other network interface types (e.g., Ethernet, 3G, etc.) scheduling may be performed at the network layer (see, e.g., FIG. 6 illustrating a 1:1 mapping between service classes and queue instances). Thus, at 260, the network layer performs the dequeue operation from the selected service class. At 270, the packet is provided to the driver layer which transmits the packet at 271. [0186], in the driver managed model all of the queues may be set up as with the network stack managed model, but the scheduling is performed by the driver scheduler 160. As such, the driver-based scheduler will then request a number of packets for each dequeue operation for a particular class based on priority (i.e., using the 4 classes for WMM). [examiner notes: the scheduling is interpreted as providing the dequeue timings.])
and in the enqueuing process, the at least one processor specifies, in accordance with the number of enqueued packets in a queue corresponding to the sending timing associated with the packet, a queue into which the packet is to be enqueued. (Masputra, [0010], receiving a packet to be transmitted from the client device to a destination over a network socket; classifying the packet according to an implicit packet service classification provided by the networking layer or a user-specific packet service classification explicitly specified by an application, the implicit classification having a default traffic classification queue and default scheduler associated therewith and the user-specified classification having a user-specified traffic classification and user-specified scheduler associated therewith; and enqueuing and scheduling the packet for transmission according to either the default or the user-specific traffic classifications. [0050], a packet scheduler 209 for scheduling packet transmission.)
Masputra-SINGH do not teach in the enqueuing process, the at least one processor estimates a number of prospective packets that are sendable to the network;
Tuffs however in the same field of computer networking teaches in the enqueuing process, the at least one processor estimates a number of prospective packets that are sendable to the network (Tuffs, [0024], in monitoring the queue length, data processing management system 140 may determine trends in the length of the queue over a time period. For example, based on the time of day or time of the week, data processing management system 140 may be able to identify the queue length at that time and predict the queue length moving forward. Thus, if an organization had a similar queue length or processing requirement at a similar time each day, data processing management system 140 may be capable of predicting the queue length based on the similarity.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Masputra-SINGH by incorporating the teachings of Tuffs. The motivation/suggestion would have been that there is a need to predict queue lengths so that data processing resources can be managed accordingly (Tuffs, [0002]).
Regarding claim(s) 5, the packet control system as set forth in claim 4,
Masputra-SINGH do not teach wherein: in the enqueuing process, the at least one processor estimates, for each of the plurality of queues, a current number of prospective packets with reference to the number of prospective packets which has been previously estimated for that queue.
Tuffs however in the same field of computer networking teaches wherein: in the enqueuing process, the at least one processor estimates, for each of the plurality of queues, a current number of prospective packets with reference to the number of prospective packets which has been previously estimated for that queue. (Tuffs, [0035], as depicted in graph 600, as the predicted queue length increases, the quantity of data processing systems required also increases. If the predicted queue length falls below the first predicted value, then the data processing management system may decrease the number of data processing systems that are active in the environment. Similarly, if the predicted queue length goes above the second predicted value, then the data processing management system may increase the number of data processing systems that are active in the environment. [0040], once the predicted object processing time 720 is determined, required instances operation 752 may be used to process the predicted object processing time 720 with the desired object processing time 722 and the current quantity of instances 714 to determine required quantity of instances 730. In particular, if the predicted object processing time 720 were a threshold amount greater than desired object processing time 722, then required instance operation 752 may identify a required quantity of instances that is greater than the current quantity of instances. In contrast, if the predicted object processing time were lower than the desired object processing time, then required instance operation 752 may identify a required quantity of instances that is less than the current quantity of instances. [examiner notes: the examiner interprets the limitation as "estimate/predict, for each of the queues, a current number of prospective packets or a current queue length increasing or decreasing based on a threshold/reference value." The threshold is interpreted to be a reference value.])
The same motivation to combine as applied to dependent claim 4 applies here.
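Estimating a current number of prospective packets "with reference to the number of prospective packets which has been previously estimated" can be modeled, for illustration only, as an exponentially weighted moving average; the smoothing factor and observed counts are editor-supplied assumptions, not drawn from the claims or references:

```python
def update_estimate(prev_estimate, observed, alpha=0.5):
    """Blend the newly observed sendable-packet count with the previous
    estimate for the same queue (exponentially weighted average)."""
    return alpha * observed + (1 - alpha) * prev_estimate

est = 8.0                      # previous estimate for this queue
for observed in [10, 12, 6]:   # observed sendable counts per interval
    est = update_estimate(est, observed)
print(est)  # → 8.25
```

Each new estimate depends on the prior one, so the prediction tracks the trend in queue length rather than any single observation, analogous to Tuffs' trend-based queue-length prediction.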
Claims 11 and 18 are substantially similar to claim 4, and are thus rejected under substantially the same rationale.
Claims 12 and 19 are substantially similar to claim 5, and are thus rejected under substantially the same rationale.
3. Claim(s) 6, 13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Masputra in view of SINGH and Tuffs, further in view of Goringe (US 20040095893 A1).
Regarding claim(s) 6, the packet control system as set forth in claim 4,
Masputra-SINGH-Tuffs teach wherein: the plurality of queues are classified into a plurality of levels in accordance with delay granularity; and (Masputra, [0089], as illustrated in FIG. 6, the QFQ configuration used in one embodiment for network layer-managed scheduling provides a 1:1 mapping between packet service classes 601-610 and packet queue instances 611-621, respectively. As illustrated, the 10 service levels are roughly divided into 4 groups 630, 640, 650, 660, and prioritization is provided within each group. The groups are defined based upon the characteristics of the classified traffics, in terms of the delay tolerance (low-high), loss tolerance (low-high), elastic vs. inelastic flow, as well as other factors such as packet size and rate. As described herein, an “elastic” flow is one which requires a relatively fixed bandwidth whereas an “inelastic” flow is one for which a non-fixed bandwidth is acceptable. The illustrated 1:1 mapping allows for the networking stack to achieve full control over the behavior and differentiation of each service class: a packet is enqueued directly into one of the queues according to how it was classified; during dequeue, the scheduler determines the packet that is to be transmitted from the most eligible queue.)
Masputra-SINGH-Tuffs do not teach in a case where the number of enqueued packets in the first queue which corresponds to the sending timing associated with the packet is equal to or greater than the number of prospective packets, in the enqueuing process, the at least one processor enqueues the packet into the second queue which is at a level identical with that of the first queue.
Goringe however in the same field of computer networking teaches in a case where the number of enqueued packets in a first queue which corresponds to the sending timing associated with the packet is equal to or greater than the number of prospective packets, in the enqueuing process, the at least one processor enqueues the packet into the second queue which is at a level identical with that of the first queue. (Goringe, [0008], If the number of data packets within an identified queue exceeds a predetermined amount, the insertion of test packets into the network under test may be delayed or may be made from another queue on the router. [0029], although the example set further above in connection with FIG. 5 describes a different predetermined number (i.e., x, y or z) with respect to each of the queues, it should be appreciated that the predetermined number for some or all of the queues may be the same.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Masputra-SINGH-Tuffs by incorporating the teachings of Goringe. The motivation/suggestion would have been that there is a need to solve problems such as the measurement of end-to-end performance typically including the effects of the customer's network at one or both ends of the communication (Goringe, [0004]-[0006]).
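Goringe's same-level fallback, as cited above, can be sketched as choosing an alternate queue at the same level when the preferred queue's backlog meets a predetermined amount; the capacity value and queue contents are hypothetical:

```python
def pick_queue(level_queues, preferred, capacity):
    """Pick a queue within one delay-granularity level: fall back to a
    sibling queue when the preferred queue's backlog is at capacity."""
    if len(level_queues[preferred]) < capacity:
        return preferred
    for idx, q in enumerate(level_queues):
        if idx != preferred and len(q) < capacity:
            return idx
    return None  # every queue at this level is full

level = [["p1", "p2"], []]          # preferred queue already holds 2
print(pick_queue(level, preferred=0, capacity=2))  # → 1
```

Because both queues sit at the same level, the fallback preserves the packet's delay class while avoiding the overfull first queue, which mirrors the claimed enqueuing into "the second queue which is at a level identical with that of the first queue."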
Claims 13 and 20 are substantially similar to claim 6, and are thus rejected under substantially the same rationale.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WUJI CHEN whose telephone number is (571)270-0365. The examiner can normally be reached 9am-6pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, VIVEK SRIVASTAVA can be reached on (571) 272-7304. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WUJI CHEN/
Examiner, Art Unit 2449
/VIVEK SRIVASTAVA/Supervisory Patent Examiner, Art Unit 2449