Prosecution Insights
Last updated: April 19, 2026
Application No. 18/898,619

System And Method For Joint Dynamic Forwarding And Caching In Content Distribution Networks

Status: Non-Final OA (§103)
Filed: Sep 26, 2024
Examiner: DOAN, HIEN VAN
Art Unit: 2449
Tech Center: 2400 — Computer Networks
Assignee: California Institute Of Technology
OA Round: 1 (Non-Final)
Grant Probability: 51% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 2m
Grant Probability With Interview: 84%

Examiner Intelligence

Career Allow Rate: 51% (grants 51% of resolved cases; 89 granted / 176 resolved; -7.4% vs TC avg)
Interview Lift: +33.3% among resolved cases with an interview (strong)
Typical Timeline: 4y 2m average prosecution; 19 applications currently pending
Career History: 195 total applications across all art units
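The headline figures above are internally consistent, and the arithmetic can be checked directly. A minimal sketch (variable names are ours, not from the page; it assumes the reported interview lift is an absolute gain in percentage points, which matches the numbers shown):

```python
# Recompute the examiner's headline stats from the raw counts above.
granted = 89
resolved = 176

# Career allow rate: share of resolved cases that were granted.
allow_rate = round(100 * granted / resolved)  # -> 51

# "With interview" grant probability, assuming the +33.3% lift is
# an absolute percentage-point gain over the base allow rate.
with_interview = round(allow_rate + 33.3)     # -> 84

print(allow_rate, with_interview)
```

This reproduces the 51% career allow rate and the 84% with-interview figure shown in the cards above.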

Statute-Specific Performance

§101: 13.9% (-26.1% vs TC avg)
§103: 49.9% (+9.9% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§112: 21.2% (-18.8% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 176 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim status: claims 1-9 are pending in this Office Action.

DETAILED ACTION

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under pre-AIA 35 U.S.C. 103(a) are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 4, 6, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Kling (US20120072526) in view of Zehavi (US20120209942).

Regarding claim 1: Kling teaches a computer-implemented method comprising: in a computer network comprising a plurality of nodes and links between the nodes (Kling Fig. 1 [0018] a CDN 100. The network comprises a number of edge nodes, also called cache nodes 101-106 … end user computer 107, also called client, is connected to only one edge node 104 … 108 is stored in two edge nodes 102, 103), and operable to transmit one or more interest packets associated with requested data objects and to transmit one or more data packets associated with the requested data objects (Kling, Fig. 5, 505. [0036] "a client sends a request for a particular content". [0038] "receiving content requests from cache nodes and clients". [0022] packets that are being sent between the cache nodes and extracts information from packets. Note: the requested multimedia/content is a data object; cached content is interest packets), computer-executable instructions for:

generating a virtual interest packet (VIP) corresponding to each requested data object (Kling, [0006] "In each of the cache nodes represented by the virtual node statistics regarding content requests is collected". [0038] "receiving content requests from cache nodes and clients"; [0038] "upon request for a particular piece of content, redirect to the appropriate cache node that has that piece of content and return the address of that cache node as a redirect reply". [0022] the cache nodes send packets to each other. Note: redirecting the content request to the appropriate cache node that has that piece of content is generating a virtual interest packet (VIP) corresponding to each requested data object);

maintaining a count of VIPs associated with a same requested data object at each node in the network, the VIP count variable over time at each node with demand for each requested data object (Kling, see fig. 6 "number of requests" of the same content on each cache node 407-409. [0008] "a counter for collecting statistics regarding content requests in each of the cache nodes that is represented by the virtual node. A processor is included for determining whether specific content is to be cached in the virtual node or not. This determination is based on statistics gathered from all the cache nodes represented by the virtual node. The entity also includes a transmitter for causing specific content, by sending a proposal to affected cache nodes, to be cached in one or more of the cache nodes represented by the virtual node". [0019] decisions regarding when a particular content is to be cached in a cache node or not are made on statistics gathered over a period of time. [0024] streaming content … packet. Fig. 7, 730. [0022] the cache nodes send packets to each other. Note: the number of requests on each node 407-409 is maintaining a count of VIPs associated with a same requested data object at each node);

determining incoming VIP transmission rates and outgoing VIP transmission rates at one or more nodes in the network ([0018] The cost for different links can vary significantly depending on e.g. the connection and the distance between the nodes. … The cost is a measure of the communication cost, and may include e.g. capacity, bandwidth constraints, jitter, delay, and average packet loss rate. [0022] The cost factor is dependent on the physical topology of the network and also on the conditions of the links which connect the various cache nodes. … when the cache nodes send packets to each other and monitor certain parameters such as bandwidth, jitter, delay, number of hops and average packet loss. See Fig. 6, cost (representing transmission rates; see above, cost including capacity, bandwidth constraints, jitter, delay, and average packet loss rate) for retrieving content on each cache node. Note: capacity, bandwidth constraints, jitter, delay, and average packet loss rate are transmission rates; the cache nodes send packets to each other (packets incoming and outgoing from one cache node, then incoming and outgoing to another cache node), wherein costs (representing transmission rates) are measured on each cache node (see fig. 6). It would obviously teach determining incoming VIP transmission rates and outgoing VIP transmission rates at one or more nodes in the network).

Kling does not teach each VIP comprising an identification of a data object, transmitting the virtual interest packets to balance distribution of VIPs across the nodes of the network, or satisfying demand for data objects by caching a portion of the data packets in transitory cache memory at one or more nodes to balance demand for the data packets across the network.

Zehavi teaches each VIP comprising an identification of a data object ([0002-0003] A content provider is one who delegates Uniform Resource Locator (URL) name space for web objects to be distributed … web clients running on users' machines use HTTP (Hyper Text Transport Protocol) to request objects from web servers. [0010] routing of network packets between client-server applications. [0016] sends a DNS request to resolve the IP address for the name of the service it wants to access (for instance www.domain.com). The request is eventually sent to a DNS (Domain Name System) server 204 (directly or through a caching DNS server provided by the ISP). [0011] When an edge forward proxy receives a request, it may cache the first retrieved copy of the content in disk storage assuming that the next request will be served from the cache storage so as to reduce upstream traffic. Note: a requested web site/object from a server that is provided/displayed to the client side is a VIP (a client viewing a website from a server would obviously be virtual));

transmitting the virtual interest packets to balance distribution of VIPs across the nodes of the network (Zehavi, figs. 6-8. [0074] determine whether a request is for a service provided by the CDN or by the edge forward proxy … enables an implementation of a system in which a front-end IP address based load-balancer directs requests for the CDN IPs to the CDN module, and all other requests to an edge forward proxy module. Fig. 6, 602. [0056-0057] whether a destination domain indicated within the received request is served by a CDN. If yes, then module 604 directs control flow to CDN module 504, which implements the process of FIG. 7, discussed below. If no, then module 606 directs control flow to edge forward proxy module 506, which implements the process of FIG. 8 … If yes, then module 704 responds to the user device request by providing the cached content to the requester. If no, then module 706 directs control flow to HTTP(S) client module 510, which forwards the request over the Internet, in accordance with determinations by the configuration module 516, to a server that can provide the content. [0058] decision module 802 determines whether a second storage region within the cache storage 410 allocated to the edge forward proxy 506 contains a cached copy of the requested content that is fresh. If yes, then module 804 responds to the user device request by providing the cached content to the user device. If no, and the request is not SSL encrypted, then module 806 directs control flow to the HTTP(S) client module 510, which forwards the request to a destination server (not shown) accessible over the public Internet indicated by the request);

and satisfying demand for data objects by caching a portion of the data packets in transitory cache memory at one or more nodes to balance demand for the data packets across the network (Zehavi [0053] determined that the request is to be served from cache, decision module 513 determines whether the requested content is already cached locally. If the requested content is cached locally in cache storage 410, then the content is retrieved from cache storage 410 and is sent to the requester of the content. [0065] cacheable content returned from an origin server (not shown) is stored in cache storage 1020.
[0049] determine whether the requested content is cached within the cached content storage … determine whether the requested content is cached within the cached content storage. [0050] The selector 502 makes the above selection based upon header information in a request received from a user device 302. The following is example header information from an HTTP request, for instance--an illustration of a portion of the request header. [0052] The selector 502 may direct the request to individual ones of those CDN proxies based upon HTTP header contents. [0074] determine whether a request is for a service provided by the CDN or by the edge forward proxy … enables an implementation of a system in which a front-end IP address based load-balancer directs requests for the CDN IPs to the CDN module, and all other requests to an edge forward proxy module. Fig. 6, 602. [0056-0057] whether a destination domain indicated within the received request is served by a CDN. If yes, then module 604 directs control flow to CDN module 504, which implements the process of FIG. 7, discussed below. If no, then module 606 directs control flow to edge forward proxy module 506, which implements the process of FIG. 8 … If yes, then module 704 responds to the user device request by providing the cached content to the requester. If no, then module 706 directs control flow to HTTP(S) client module 510, which forwards the request over the Internet, in accordance with determinations by the configuration module 516, to a server that can provide the content. [0058] decision module 802 determines whether a second storage region within the cache storage 410 allocated to the edge forward proxy 506 contains a cached copy of the requested content that is fresh. If yes, then module 804 responds to the user device request by providing the cached content to the user device. If no, and the request is not SSL encrypted, then module 806 directs control flow to the HTTP(S) client module 510, which forwards the request to a destination server (not shown) accessible over the public Internet indicated by the request. Note: content sent to the requester from cache is satisfying demand for data objects; content being cached in storage is caching a portion of the data packets in transitory cache memory).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Zehavi and apply them to the teachings of Kling to further implement each VIP comprising an identification of a data object, transmitting the virtual interest packets to balance distribution of VIPs across the nodes of the network, and satisfying demand for data objects by caching a portion of the data packets in transitory cache memory at one or more nodes to balance demand for the data packets across the network. One would be motivated to do so in order to provide an improved system and method in which Uniform Resource Locator (URL) name space is delegated for web objects, a load-balancer directs requests, and, if the requested content is determined to be cached locally in cache storage, the content is retrieved from cache storage and sent to the requester of the content (Zehavi, [0016][0053][0074]).
Regarding claim 2: Kling-Zehavi teaches the computer-implemented method of Claim 1, further comprising using a same VIP count for both transmitting the interest packets and caching the data packets (Kling [0020] For each request for a certain piece of content, the gain of storing that content is incremented in all relevant counters for all relevant cache nodes … When a caching decision is made, the gain of storing a particular file … the gain for caching a particular piece of content is equal to the number of requests for the particular piece of content).

Regarding claim 4: Kling-Zehavi teaches the computer-implemented method of Claim 1, further comprising updating the VIP count associated with each requested data object over a time slot (Kling, [0019] "statistics gathered over a period of time … allocator nodes 120 that can survey all requests made in the network and keep special gain counters for particular content for each cache node". [0024] "the time between content requests").

Regarding claim 6: Kling-Zehavi teaches the computer-implemented method of Claim 1, further comprising incrementing the VIP count by 1 for each requested data object (Kling, [0020] "For each request for a certain piece of content, the gain of storing that content is incremented". [0038] "a counter 730 for counting all content requests from all cache nodes").

Regarding claim 9: Kling-Zehavi teaches the computer-implemented method of Claim 1, wherein the computer network includes a named data network, a content-centric network (Kling [0001] a content distribution network; [0006] caching content in a content delivery network (CDN) is provided), an information centric network (Kling [0006] caching content in a content delivery network (CDN) is provided), a content distribution network (Kling [0006] a content delivery network (CDN)), a data center (Kling fig. 1; [0006] caching content in a content delivery network (CDN) is provided), a cloud computing architecture (Kling fig. 2; [0006] caching content in a content delivery network (CDN) is provided), or a peer-to-peer network (Kling, fig. 1).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Kling (US20120072526), in view of Zehavi (US20120209942), further in view of Clark (US20040258088).

Regarding claim 3: Kling-Zehavi teaches the computer-implemented method of Claim 1. Kling-Zehavi does not explicitly teach further comprising maintaining a separate VIP queue for each data object, the VIP queue having a size equivalent to the VIP count for an associated data object.

Clark teaches further comprising maintaining a separate VIP queue for each data object, the VIP queue having a size equivalent to the VIP count for an associated data object (Clark, fig. 1. [0017] The node 120 … Any injected packets received over the injected data path 123 are stored in the inject queue 124 … stores these forwarded data packets in a forward queue 122. [0018-0019] "queue count path 143 conveys indicia of the number of data packets in the forward data path. Generally, the snapshot engine 130 dynamically determines and employs a ratio between each injected data packet in the inject queue 124 and the number of forwarded packets in the forward queue 122 … the number of forwarded packets present in the forward queue is read over the forward queue count path 143". [0020] "forwarded packets and injected packets to be transmitted").

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Clark and apply them to the teachings of Kling-Zehavi to further implement a network device configured to maintain a separate VIP queue for each data object, the VIP queue having a size equivalent to the VIP count for an associated data object. One would be motivated to do so in order to provide an improved system and method in which any injected packets received over the injected data path are stored in the inject queue, and forwarded data packets are stored in a forward queue (Clark [0017]).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Kling (US20120072526), in view of Zehavi (US20120209942), further in view of Shopiro (US7680875).

Regarding claim 5: Kling-Zehavi teaches the computer-implemented method of Claim 4, adding a number of incoming exogenous requests for the requested data object received during the time slot (Kling, [0019] "statistics gathered over a period of time … allocator nodes 120 that can survey all requests made in the network and keep special gain counters for particular content for each cache node". [0024] "the time between content requests").

Kling-Zehavi does not explicitly teach subtracting an outgoing VIP transmission rate summed over neighbor nodes over the time slot from the VIP count at the beginning of the time slot, adding an incoming VIP transmission rate summed over neighbor nodes received during the time slot, and, if a data object is cached at a node at that time slot, subtracting a maximum rate in data objects per time slot at which copies of a data packet can be produced from transitory cache memory at the node.

Shopiro teaches wherein updating the VIP count includes: subtracting an outgoing VIP transmission rate summed over neighbor nodes over the time slot from the VIP count at the beginning of the time slot (Shopiro, Fig. 1 shows a series of cache nodes. Col 4 line 55 - col 5 line 2: "The proxy cache 1 has a 'hit' in the event that the requested file is stored therein, and in that event intercepts the request directed to origin server 104 and transfers the requested file to browser software 120 … in the event that browser 120 then requests the same file from origin server 104, proxy cache 1 software 122 will have a hit, and will transfer the file from its storage to browser 120". Col 2 lines 18-32: "By the time that the first filter is finished with the data of the first buffer, the second buffer may be filled … By moving from buffer to buffer … line speed … data rate". Note: cache node 1 has a hit and transfers the requested file from its storage to browser 120; it does not need to transfer the request to neighbor nodes (cache nodes 2-n) or server 104, which would obviously teach subtracting an outgoing VIP transmission rate summed over neighbor nodes over the time slot);

adding an incoming VIP transmission rate summed over neighbor nodes received during the time slot (Shopiro, Fig. 1 shows a series of cache nodes. Col 4 line 55 - col 5 line 2: "The proxy cache 1 has a 'hit' in the event that the requested file is stored therein, and in that event intercepts the request directed to origin server 104 and transfers the requested file to browser software 120 … in the event that browser 120 then requests the same file from origin server 104, proxy cache 1 software 122 will have a hit, and will transfer the file from its storage to browser 120". Col 2 lines 18-32: "By the time that the first filter is finished with the data of the first buffer, the second buffer may be filled … By moving from buffer to buffer … line speed … data rate". Note: looking up the requested content at cache node 2 in case cache node 1 does not have a hit, and transferring the requested file from cache node 2 to browser 120, would obviously teach adding an incoming VIP transmission rate summed over neighbor nodes received during the time slot);

and if a data object is cached at a node at that time slot, subtracting a maximum rate in data objects per time slot at which copies of a data packet can be produced from transitory cache memory at the node (Shopiro, Fig. 1 shows a series of cache nodes. Col 4 line 55 - col 5 line 2: "The proxy cache 1 has a 'hit' in the event that the requested file is stored therein, and in that event intercepts the request directed to origin server 104 and transfers the requested file to browser software 120 … in the event that browser 120 then requests the same file from origin server 104, proxy cache 1 software 122 will have a hit, and will transfer the file from its storage to browser 120. In this second instance, transfer of messaging through Internet cloud 102 to origin server 104 is avoided". Col 2 lines 18-32: "By the time that the first filter is finished with the data of the first buffer, the second buffer may be filled … By moving from buffer to buffer … line speed … data rate". Note: avoiding transfer of messaging through Internet cloud 102 to origin server 104 would obviously teach subtracting a maximum rate in data objects per time slot at which copies of a data packet can be produced from transitory cache memory at the node).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Shopiro and apply them to the teachings of Kling-Zehavi to further implement subtracting an outgoing VIP transmission rate summed over neighbor nodes over the time slot from the VIP count at the beginning of the time slot, adding an incoming VIP transmission rate summed over neighbor nodes received during the time slot, and, if a data object is cached at a node at that time slot, subtracting a maximum rate in data objects per time slot at which copies of a data packet can be produced from transitory cache memory at the node. One would be motivated to do so in order to provide an improved system and method in which, in the event that the browser requests the same file from the origin server, the proxy cache will have a hit and will transfer the file from its storage to the browser (Shopiro, Col 4 line 55 - col 5 line 2).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Kling (US20120072526), in view of Zehavi (US20120209942), further in view of Igawa (US6772193).

Regarding claim 7: Kling-Zehavi teaches the computer-implemented method of Claim 1. Kling-Zehavi does not explicitly disclose further comprising reducing the VIP count for a requested data object by a maximum rate at a node in an event a requested data object is stored in transitory cache memory, the maximum rate including a maximum rate in data objects per time slot at which copies of a data packet can be produced from transitory cache memory at the node.

Igawa teaches further comprising reducing the VIP count for a requested data object by a maximum rate at a node in an event a requested data object is stored in transitory cache memory (Igawa, fig. 8, 803-804; col 7 lines 35-39: "such a cache file (step 803). When there is such a cache file, the user request processing program 170 reads the video data required by the user terminal 50 from the video cache file 22 and sends it to the user terminal 50 (step 804)". Col 2 lines 36-40: "A network cache … means for holding that highest speed or maximum network throughput for each of plural servers at which data can be flown between that server and a network cache … real-time". Note: a cache node sending the requested file (without needing to go to the destination server) is reducing the VIP count for a requested data object by a maximum rate at a node in an event a requested data object is stored in transitory cache memory), the maximum rate including a maximum rate in data objects per time slot at which copies of a data packet can be produced from transitory cache memory at the node (Igawa, Col 2 lines 36-40: "A network cache … means for holding that highest speed or maximum network throughput for each of plural servers at which data can be flown between that server and a network cache … real-time").

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Igawa and apply them to the teachings of Kling-Zehavi to further implement a network device configured to reduce the VIP count for a requested data object by a maximum rate at a node in an event a requested data object is stored in transitory cache memory, the maximum rate including a maximum rate in data objects per time slot at which copies of a data packet can be produced from transitory cache memory at the node. One would be motivated to do so in order to provide an improved system and method in which, when there is such a cache file, the user request processing program reads the video data required by the user terminal from the video cache file and sends it to the user terminal (Igawa, fig. 8, 803-804).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Kling (US20120072526), in view of Zehavi (US20120209942), in view of Igawa (US6772193), further in view of Jacobson (US20090287835).

Regarding claim 8: Kling-Zehavi-Igawa teaches the computer-implemented method of Claim 7. Kling-Zehavi-Igawa does not explicitly disclose wherein the data packet includes a data name and signature data.

Jacobson teaches wherein the data packet includes a data name, data content, and signature data (Jacobson, [0069] "a piece of CCN content includes the name along with the signature on that name").

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to take the teachings of Jacobson and apply them to the teachings of Kling-Zehavi-Igawa to further implement wherein the data packet includes a data name and signature data. One would be motivated to do so in order to provide an improved system and method in which CCN content includes the name along with the signature on that name (Jacobson, [0069]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HIEN VAN DOAN, whose telephone number is (571) 272-4317. The examiner can normally be reached M-F, 8:00am-5:00pm.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, VIVEK SRIVASTAVA, can be reached at (571) 272-7304. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HIEN V DOAN/
Examiner, Art Unit 2449

/VIVEK SRIVASTAVA/
Supervisory Patent Examiner, Art Unit 2449
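The per-time-slot VIP count update recited in claims 4-6 (and characterized above via the Shopiro and Igawa mappings) can be sketched as follows. This is an illustrative reading of the claim language only, not code from the application; the function and parameter names are ours:

```python
def update_vip_count(
    vip_count: float,              # VIP count at the start of the time slot (claim 4)
    exogenous_requests: int,       # incoming exogenous requests during the slot (claim 5)
    outgoing_rates: list[float],   # outgoing VIP transmission rate per neighbor node
    incoming_rates: list[float],   # incoming VIP transmission rate per neighbor node
    cached: bool,                  # is the requested data object cached at this node?
    max_cache_rate: float,         # max copies producible from cache per slot (claim 7)
) -> float:
    # Subtract outgoing VIPs summed over neighbors; the count cannot go negative.
    count = max(vip_count - sum(outgoing_rates), 0.0)
    # Add exogenous demand (one VIP per request, per claim 6) and VIPs
    # received from neighbors during the slot.
    count += exogenous_requests + sum(incoming_rates)
    # If the object is cached here, the cache drains demand at up to
    # max_cache_rate data objects per time slot (claims 5 and 7).
    if cached:
        count = max(count - max_cache_rate, 0.0)
    return count
```

For example, a node starting a slot with 10 VIPs that forwards at rates 3 and 1 to two neighbors, receives at rate 2 from one neighbor, sees 2 new requests, and holds the object in cache with a drain rate of 5 ends the slot with 5 VIPs.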

Prosecution Timeline

Sep 26, 2024: Application Filed
Jan 10, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12542722 — AUTOMATED INITIATION OF HELP SESSION IN A VIDEO STREAMING SYSTEM (granted Feb 03, 2026; 2y 5m to grant)
Patent 12470569 — ANOMALY DETECTION RELATING TO COMMUNICATIONS USING INFORMATION EMBEDDING (granted Nov 11, 2025; 2y 5m to grant)
Patent 12443717 — METHODS & PROCESSES TO SECURELY UPDATE SECURE ELEMENTS (granted Oct 14, 2025; 2y 5m to grant)
Patent 12367296 — NATIVE MULTI-TENANT ROW TABLE ENCRYPTION (granted Jul 22, 2025; 2y 5m to grant)
Patent 12328367 — Method and Apparatus for Establishing Session, and Related Device (granted Jun 10, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 51% (84% with interview, +33.3%)
Median Time to Grant: 4y 2m
PTA Risk: Low
Based on 176 resolved cases by this examiner. Grant probability derived from career allow rate.
