Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to communications filed 10/23/2025.
Claims 1-20 are pending and presented for examination.
Response to Arguments
Applicant’s arguments, see Remarks pgs. 6-8, filed 10/23/2025, with respect to claims 1-20 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made below.
Note: This case has been transferred from the former examiner to Examiner Divecha.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jain et al. (hereinafter Jain, US 2009/0037601 A1) in view of Choi et al. (hereinafter Choi, US 2018/0357291 A1).
As per claim 1, Jain teaches a method for updating a hardware (HW) nexthop table in a network device [0009], the method comprising:
receiving a creation request that specifies a next hop entry to be added to the HW nexthop table, the next hop entry specifying one or more next hops [[0011], [0023]: receiving a link state advertisement to update the routing table stored in router’s memory, [0005]: link state message includes hops information from the routers, [0033]: link state includes info about state of one or more paths or links in network];
when a level of utilization of the HW nexthop table is equal to or exceeds a threshold value [[0010], [0023]: check for level of utilization or available resources is performed, [0034-0035]: Insufficient resources or no space available based on table or routers capacity],
when the level of utilization of the HW nexthop table is less than the threshold value, adding the next hop entry specified in the creation request to the HW nexthop table [[0010], [0023], [0034-0036]: when available resources at the router is sufficient, e.g. 100 entries max, 75 entries present, space or increment size is 10, in which case, 10 entries can be added to the table].
However, Jain does not teach adding the creation request to a backlog table that contains a plurality of creation requests when a level of utilization of HW nexthop table is equal to or exceeds a threshold value and periodically draining the backlog table by repeatedly setting a timer and, upon expiration of the timer, processing a number of creation requests stored in the backlog table that is based on a level of utilization of the nexthop table, wherein processing a creation request includes adding a next hop entry specified in the creation request to the HW nexthop table.
Note: Jain does not teach the backlog or a request queue to temporarily hold or store the requests.
Choi, from the same field of endeavor, teaches adding database requests to a backlog table [request queue] that contains a plurality of database requests when a level of utilization of the database [table] is equal to or exceeds a threshold value [fig. 13 step 1345: ADD to request queue when queueing threshold is exceeded, fig. 14B, [0030], [0125]] and periodically draining (i.e., dequeuing) the backlog table by repeatedly setting a timer [fig. 13 step 1350: dequeue interval + queue threshold exceeded check; if not exceeded, dequeue request at 1355/1360 and execute the request at 1365, [0030], [0120]: dequeue interval] and, upon expiration of the timer [fig. 13 step 1350: wait for next dequeue interval], processing a number of database requests stored in the backlog table that is based on a level of utilization of the database table, wherein processing a database request includes adding a new entry specified in the database request to the database [table] [fig. 13: steps 1350, 1355, 1360 and 1365: wait for dequeue interval, check queue threshold, dequeue request from request queue and execute the request, [0120], [0137], [0026]: database requests include INSERT, UPDATE or DELETE of an entry in the database].
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Jain in view of Choi in order to utilize a request queue or backlog queue and add the creation requests [database requests] to the request queue to temporarily hold or queue the requests when the level of utilization of the database or the routing table is high.
One of ordinary skill in the art would have been motivated because it would have reduced the possibility of a database system or a routing table becoming overloaded, thus avoiding routing node or database failure [Choi: [0004]].
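For reference only, the claimed mechanism mapped above (threshold-gated insertion with a timer-drained backlog) can be sketched as follows. This is a hypothetical illustration of the claim language, not code from Jain or Choi; the names, the 100-entry capacity (borrowed from Jain's example), and the 0.75 threshold are assumptions.

```python
from collections import deque

TABLE_CAPACITY = 100   # assumed maximum entries, per Jain's 100-entry example
THRESHOLD = 0.75       # assumed utilization threshold

class NexthopTable:
    """Hypothetical model of the claimed HW nexthop table with a backlog."""

    def __init__(self):
        self.entries = {}        # nexthop_id -> list of next hops
        self.backlog = deque()   # queued creation requests

    def utilization(self):
        return len(self.entries) / TABLE_CAPACITY

    def create(self, nexthop_id, nexthops):
        # When utilization is equal to or exceeds the threshold, queue the
        # creation request in the backlog; otherwise add the entry directly.
        if self.utilization() >= THRESHOLD:
            self.backlog.append((nexthop_id, nexthops))
        else:
            self.entries[nexthop_id] = nexthops

    def drain(self):
        # Called on timer expiry: process a number of backlogged requests
        # based on current utilization (more headroom, larger batch).
        headroom = int((1.0 - self.utilization()) * TABLE_CAPACITY)
        for _ in range(min(headroom, len(self.backlog))):
            nexthop_id, nexthops = self.backlog.popleft()
            self.entries[nexthop_id] = nexthops
```

In this sketch, a periodic timer would invoke `drain()` each interval, matching the claimed "repeatedly setting a timer and, upon expiration of the timer, processing a number of creation requests ... based on a level of utilization."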
As per claim 2, Jain-Choi discloses the method of claim 1, as set forth above.
Jain further teaches receiving update requests to update next hop entries in the HW nexthop table and deletion requests to delete next hop entries from the HW nexthop table [[0010]: The update request for a link failure will require an update to the routing table. Some requests will require deleting old entries while updating the table].
However, Jain does not teach wherein the update requests and deletion requests are performed without delay (i.e. they are not stored in the backlog queue).
Choi teaches prioritizing certain requests that are immediately executed/processed (i.e., not queued in the request queue) based on their type, including commit requests or rollback requests (i.e., delete and revert to a previous state), drop an index, close set, disconnect, etc. [[0026]: INSERT, UPDATE, DELETE transactions, [0144-0145]].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Jain in view of Choi in order to perform, process or execute update requests to update next hop entries in the table and deletion requests to delete next hop entries from the table without delay, i.e., immediately and without storing them in the backlog queue.
One of ordinary skill in the art would have been motivated because this technique would not have added to the current utilization of the database and would have reduced resource consumption [Choi: [0145]].
As per claim 3, Jain-Choi discloses the method of claim 1, wherein the level of utilization of the HW nexthop table is the number of used table entries in the HW nexthop table [[0011], [0035-0036]: routing table that contains 75 entries and can hold a maximum of 100 entries].
However, Jain-Choi does not teach expressing used table entries as a percentage of the total number of table entries in the HW nexthop table.
However, expressing statistics as a percentage is well known in the art and therefore obvious. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Jain to express the 75 used entries as a percentage of the total number of entries in the table.
One of ordinary skill in the art would have been motivated because percentages are commonly used to indicate various performance metrics; they simplify complex data, making it easier to compare, interpret and analyze changes relative to a base or original value.
As per claim 4, Jain-Choi discloses the method of claim 1, wherein a period of time between successive runs of draining the backlog table to add next hop entries to the HW nexthop table varies depending on the level of utilization of the HW nexthop table [Choi: [0120], [0133], fig. 13 steps 1350-1355-1370-1350-1355-1360: In the first iteration, if the queue threshold is not exceeded, a request is dequeued from the request queue when the first interval expires. In the second iteration, the second dequeue interval is checked and, if expired, the queue threshold is checked; if the queue threshold is exceeded, the process returns to step 1370 and repeats, in which case the total duration before the second dequeue is longer. Thus, the total interval or duration between successive runs of dequeuing the request queue varies based on the queue threshold. [0121], [0150], [0163]: The dequeuing interval in some cases can dynamically adjust (i.e., vary) based on factors such as the number of incoming or queued requests, the type of incoming or queued requests, the performance information, other factors, etc.]. Same rationale as in claim 1 applies.
As per claim 5, Jain-Choi discloses the method of claim 1, wherein draining the backlog table to add entries to the HW nexthop table runs for a period of time that varies depending on the level of utilization of the HW nexthop table [Choi: fig. 13, [0120]: The dequeuing process continues until all the pending requests are completed. The whole process of dequeuing depends on the dequeue interval and the queue threshold or available resources. As such, the time period for each iteration of draining the request queue depends on the number of requests, resource availability, etc. Thus, each iteration of dequeuing varies or changes depending on available resources, [0163]: The dequeue interval is dynamic based on various factors]. Same rationale as in claim 1 applies.
As per claim 6, Jain-Choi discloses the method of claim 5, wherein the first period of time is an integral number of processing time quanta of the network device [fig. 13, [0120], [0028]: Dequeuing depends on available resources such as CPU or memory resources. As such, the process of draining, i.e., dequeuing, takes time that also depends on the availability of processing capacity. For example, if the CPU is highly congested, it is not available to process the database requests until it becomes available or decongested. As such, the period of time it takes to drain the backlog table will vary based on CPU availability or CPU processing time].
As per claim 7, Jain-Choi discloses the method of claim 1, wherein next hop entries in the backlog table are added to the HW nexthop table in priority order from high priority to low priority [Choi: [0122]: reorder the queue based on timeout values [time is important, thus prioritized], [0125-0127]: priority queue that is reordered, [0130]], wherein a priority of a next hop is based on how many routes reference that next hop entry [Jain: [0037]: routing table entries are ranked based on importance. For example: how vital a link is, how many customer links it affects, desired QoS, etc.].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Jain in view of Choi in order to drain the request queue in priority order from high to low, wherein the priority is based on the number of routes that reference the next hop entry in the pending request.
One of ordinary skill in the art would have been motivated because it would have ensured that links that are vital to the network or critical updates to the routing table are made first [Jain: [0037]].
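For reference only, the priority-ordered draining mapped above can be sketched as follows. This is a hypothetical illustration of the claim 7 limitation (priority based on how many routes reference a next hop entry), not code from either reference; all names are assumptions.

```python
def drain_by_priority(backlog, route_refs):
    """Return backlogged next hop entries ordered high-to-low priority.

    backlog:    dict mapping nexthop_id -> pending entry
    route_refs: dict mapping nexthop_id -> number of routes referencing it
    """
    # A next hop referenced by more routes is more "vital" and drains first.
    order = sorted(backlog, key=lambda nh: route_refs.get(nh, 0), reverse=True)
    return [(nh, backlog[nh]) for nh in order]

# Example: nh2 is referenced by the most routes, so it is programmed first.
backlog = {"nh1": "entry1", "nh2": "entry2", "nh3": "entry3"}
route_refs = {"nh1": 2, "nh2": 10, "nh3": 5}
```

Under this sketch, `drain_by_priority(backlog, route_refs)` yields nh2, then nh3, then nh1, matching the claimed high-to-low priority order.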
As per claim 8, Jain-Choi teaches the method of claim 7, further comprising reprogramming previously programmed routes that now reference next hop entries that are added to the hardware table, wherein the previously programmed routes are reprogrammed in priority order according to the priorities of the added next hop entries [Jain: [0037-0038]: The routing table is updated [i.e., reprogrammed] based on the N highest ranked entries, which readjusts the entries based on their ranking, including previous and new entries].
As per claim 14, Jain-Choi teaches the network device of claim 13, wherein the duration is based on (1) time OR (2) a number of buffered requests [Choi: fig. 13, [0120]: The dequeuing process continues until all the pending requests are completed. The whole process of dequeuing depends on the dequeue interval and the queue threshold or available resources. As such, the time period for each iteration of draining the request queue depends on the number of requests, resource availability, etc. Thus, the duration of draining is inherently based on how many requests are in the request queue. For example, 2 requests will be completed in a shorter duration than 5 requests, which will inherently take longer to complete]. Same rationale as in claim 1 applies.
As per claims 9-13 and 15-20, they do not teach or further define over the limitations in claims 1-8 and 14. Therefore, claims 9-13 and 15-20 are rejected for the same reasons as set forth for claims 1-8 and 14.
Pertinent Prior Arts
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Hughes, Jr. US 7,277,399 B1: Hardware-based route cache using Prefix Length: See Col. 2 L9-35
RAKIC, US 2008/0168103 A1: Database Management Methodology
Xiong et al., US 2019/0050198 A1: Ring Buffer including a preload buffer
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAMAL B DIVECHA whose telephone number is (571) 272-5863. The examiner can normally be reached M-F: 8am-4:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Colleen Fauz, can be reached at 571-272-1667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
KAMAL B. DIVECHA
Primary Patent Examiner
Art Unit 2453
/KAMAL B DIVECHA/Supervisory Patent Examiner, Art Unit 2453