Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed on 11/24/2025 in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/24/2025 has been entered. Claims 1-7 and 21-33 are pending.
Response to Arguments
2. Applicant's arguments have been fully considered but they are not persuasive. The applicant argues the following issues.
(A) Rejection under 35 U.S.C. 103(a)
Issue 1: The applicant argues with respect to the independent claims that the amended limitations overcome the current rejection.
Examiner respectfully disagrees. See Examiner’s response in the corresponding rejection section below.
Issue 2: The applicant’s arguments regarding the dependent claims are based on the arguments for the respective independent claims.
See Examiner’s response above.
Claim Rejections - 35 USC § 103
3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
6. Claims 1-4, 6-7, 21-24, and 26-33 are rejected under 35 U.S.C. 103 as being unpatentable over Moreman et al. (US 2015/0271072) in view of Prahlad et al. (US 2010/0333116).
As to claim 1, Moreman discloses a method comprising:
receiving, at a first computing device, via a second computing device, a request for content (figure 3, step 30, “receive request for content at cache node”; see also [0041]-[0042], wherein the cache node that is the next hop to the client device is a second computing device and its upstream cache node is a first computing device; see figure 1 and [0013], “The network comprises one or more cache nodes 12, 13 located on a data path between an adaptive streaming client (e.g., endpoint, request originator, content receiver, ABR video client) 10 and a content source (e.g., server, endpoint, CDN node) 14. In the example shown in FIG. 1, cache node 12 includes cache storage for storing received content (data) and a network performance module (e.g., code, logic, or other mechanism) 18 operable to measure and store network performance characteristics between the cache node and content source (e.g., upstream cache node 13, source 14),” wherein either the adaptive streaming client or any preceding cache node can be the second computing device via which the request is received),
determining, based on a processing delay associated with the request (figure 3, step 32, “measure time taken to retrieve content from content source”, wherein the time taken to retrieve content from content source is a processing delay associated with the content request),
at least one delay parameter associated with at least one rate adaptation decision and comprising information indicative of the processing delay (figure 3, step 34, “store time to retrieve and associate with object”; [0042], “In one embodiment, in addition to providing an ABR fragment to the client or next hop downstream cache, the cache node 12 may also provide detailed information as to what is limiting the cache's response time. This information may include, for example, whether the limitation is due to network capacity upstream from the cache node, limitations with the next hop upstream cache, limitations due to the source streamer, or limitations of the cache's own capacity. This information may be cascaded through multiple cache nodes recursively toward the client”, wherein the cache’s limited response time and its cause, such as network capacity upstream from the cache node, in combination can be considered a delay parameter indicative of the processing delay. See also [0021]. Also see [0038]-[0041], “the retrieval time is identified by measuring the time to retrieve HTTP chunks (ABR fragments) on cache ingest at the cache node 12.… the provisioned maximum downstream bit rate is determined and used as an upper bound…the maximum bitrate for the content currently cached on the cache node may be determined. If the client is currently fetching a lower bitrate than the maximum rate locally stored, the bitrate associated with the content may be used rather than the bitrate associated with the current rate. This allows the client to detect that an up shift is desirable”, indicating that the current cache node sends the delay parameter. See also claims 8-10); and
sending, to the second computing device, the content and the at least one delay parameter ([0042], “In one embodiment, in addition to providing an ABR fragment to the client or next hop downstream cache, the cache node 12 may also provide detailed information as to what is limiting the cache's response time. This information may include, for example, whether the limitation is due to network capacity upstream from the cache node, limitations with the next hop upstream cache, limitations due to the source streamer, or limitations of the cache's own capacity. This information may be cascaded through multiple cache nodes recursively toward the client”, wherein a downstream cache node is a second computing device. Alternatively, see [0038]-[0041], “the retrieval time is identified by measuring the time to retrieve HTTP chunks (ABR fragments) on cache ingest at the cache node 12.… the provisioned maximum downstream bit rate is determined and used as an upper bound…the maximum bitrate for the content currently cached on the cache node may be determined. If the client is currently fetching a lower bitrate than the maximum rate locally stored, the bitrate associated with the content may be used rather than the bitrate associated with the current rate. This allows the client to detect that an up shift is desirable,” indicating that a current cache node as a downstream cache node is sent with the delay parameter. See also claims 8-10), wherein the at least one delay parameter causes the second computing device to disregard an amount of time associated with the delay parameter in the at least one rate adaptation decision (see citation above, wherein determining the maximum bit rate is at least one rate adaptation decision, which disregards the delay parameters that the current cache node (downstream cache node that serves the client device) receives from the upstream cache node. 
See also [0011]-[0012], wherein it is purposeful to disregard the delay in order to set the maximum bit rate).
However, Moreman does not expressly disclose that the at least one delay parameter comprises an amount of time representing the processing delay. Prahlad discloses a concept in which a content response includes at least one delay parameter comprising an amount of time representing a processing delay associated with a request, for processing the content retrieval ([0263], “In step 1450, the system retrieves the archived content, which may utilize the data restoration methods discussed herein. Additionally or alternatively, the system may provide an estimate of the time required to retrieve the archived content and add this information to the selected search result. …the system provides the search results in response to the search query. For example, the user may receive the search results through a web page that lists the search results, or the search results may be provided to another system for additional processing through an API”).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine Moreman with Prahlad. The suggestion/motivation for the combination would have been to provide the retrieval time for content to the requester to improve user friendliness (Prahlad, [0263]).
As to claim 21, Moreman discloses a method comprising:
receiving, at a first computing device via a second computing device, a request for content (see similar rejection to claim 1. It is to be noted that the claim merely requires “associated with” without requiring that the client be the requester);
determining, based on a processing priority associated with the request (see citation in rejection to claim 1, and claim 8, “wherein the time to retrieve the content comprises a measured round trip time to retrieve the content for a cache-miss” wherein retrieving cache-missed content from another source is a processing priority associated with the request. See also figure 3, step 32, “measure time taken to retrieve content from content source”),
at least one delay parameter associated with at least one rate adaptation decision and comprising information indicative of the processing priority (see citation in rejection to claim 1); and
sending, to the second computing device, the at least one delay parameter comprising the information, wherein the at least one delay parameter causes the second computing device to disregard the amount of time in the at least one rate adaptation decision (see similar rejection to claim 1).
However, Moreman does not expressly disclose that the at least one delay parameter comprises an amount of time associated with the processing delay. Prahlad discloses a concept in which a content response includes at least one delay parameter comprising an amount of time associated with a processing delay for processing the content retrieval ([0263], “In step 1450, the system retrieves the archived content, which may utilize the data restoration methods discussed herein. Additionally or alternatively, the system may provide an estimate of the time required to retrieve the archived content and add this information to the selected search result. …the system provides the search results in response to the search query. For example, the user may receive the search results through a web page that lists the search results, or the search results may be provided to another system for additional processing through an API”).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine Moreman with Prahlad. The suggestion/motivation for the combination would have been to provide the retrieval time for content to the requester to improve user friendliness (Prahlad, [0263]).
As to claim 28, Moreman discloses a method comprising:
sending, to a computing device, a request for content, wherein the computing device determines a processing delay associated with the request (see citation in rejection to claim 1, wherein the upstream cache node that is the next hop to the cache node serving the client device is a computing device);
receiving, based on the request, at least one delay parameter associated with a service metric and comprising information indicative of the processing delay (see similar rejection to claim 1); and
based on the at least one delay parameter, determining the service metric without the delay information comprised in the delay parameter, wherein the service metric is associated with receiving the content (see similar rejection to claim 1).
However, Moreman does not expressly disclose that the at least one delay parameter comprises an amount of time associated with the processing delay. Prahlad discloses a concept in which a content response includes at least one delay parameter comprising an amount of time associated with a processing delay for processing the content retrieval ([0263], “In step 1450, the system retrieves the archived content, which may utilize the data restoration methods discussed herein. Additionally or alternatively, the system may provide an estimate of the time required to retrieve the archived content and add this information to the selected search result. …the system provides the search results in response to the search query. For example, the user may receive the search results through a web page that lists the search results, or the search results may be provided to another system for additional processing through an API”).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine Moreman with Prahlad. The suggestion/motivation for the combination would have been to provide the retrieval time for content to the requester to improve user friendliness (Prahlad, [0263]).
As to claim 2, Moreman-Prahlad discloses the method of claim 1, further comprising:
determining, at a first time, based on the content not being available at the first computing device or a cache associated with the first computing device, the processing delay (Moreman, claim 8, “wherein the time to retrieve the content comprises a measured round trip time to retrieve the content for a cache-miss” wherein “for a cache-miss” indicates based on the content not being available at the first computing device i.e., the cache node); and
retrieving, at a second time, the content from at least one other computing device (Moreman, figure 3, step 32, “measure time taken to retrieve content from content source”, wherein content source is at least one other computing device), wherein the at least one delay parameter is indicative of the content not being available at the first computing device or the cache associated with the first computing device (Moreman, claim 8, “wherein the time to retrieve the content comprises a measured round trip time to retrieve the content for a cache-miss”), and wherein the amount of time is based on the first time and the second time (Moreman, claim 8, “wherein the time to retrieve the content comprises a measured round trip time to retrieve the content for a cache-miss”; Prahlad, [0263]).
As to claim 22, Moreman-Prahlad discloses the method of claim 21, further comprising:
determining, at a first time, based on the processing priority, that the request is to be processed (Moreman, figure 3, step 32, “measure time taken to retrieve content from content source”; claim 8, “wherein the time to retrieve the content comprises a measured round trip time to retrieve the content for a cache-miss”, wherein determining to retrieve the content is determining to process the content request); and
sending, at a second time, the content to the second computing device (Moreman, figure 3, step 36, “Stream content at rate limited based on time to retrieve content”; [0038]-[0042]).
As to claim 23, Moreman-Prahlad discloses the method of claim 22, wherein the amount of time is based on the first time and the second time (Moreman, [0042]; claim 8, “wherein the time to retrieve the content comprises a measured round trip time to retrieve the content for a cache-miss”; Prahlad, [0263]).
As to claim 24, Moreman-Prahlad discloses the method of claim 22, further comprising:
determining, prior to the request being processed, that the content is not available at the first computing device or a cache associated with the first computing device (Moreman, claim 8, “cache-miss”); and
retrieving, prior to the second time, the content from at least one other computing device (Moreman, claim 8; figure 3).
As to claim 30, Moreman-Prahlad discloses the method of claim 28, wherein the at least one delay parameter is indicative of the content not being available at the computing device or a cache associated with the computing device (Moreman, claim 8, “cache-miss”).
As to claim 3, Moreman-Prahlad discloses the method of claim 1, further comprising:
determining, based on a prioritization scheme, a processing priority for the request, wherein the processing delay is based on the processing priority (Moreman, claim 8, “wherein the time to retrieve the content comprises a measured round trip time to retrieve the content for a cache-miss” wherein retrieving cache-missed content from another source while using locally cached content for a cache-hit is a prioritization scheme, and wherein retrieving the cache-missed content is a processing priority for the request); and
determining, at a first time, based on the processing priority, that the request is to be processed, wherein the content and the at least one delay parameter are sent at a second time, and wherein the at least one delay parameter is indicative of the processing priority (Moreman, figure 3; claim 8; [0038]-[0042], especially [0042], “provide detailed information as to what is limiting the cache's response time” wherein the cache’s response time is indicative of the processing priority. See also Prahlad, [0263]).
As to claim 31, Moreman-Prahlad discloses the method of claim 28, further comprising determining, by the computing device, a processing priority for the request (Moreman, claim 8, “wherein the time to retrieve the content comprises a measured round trip time to retrieve the content for a cache-miss” wherein retrieving cache-missed content from another source while using locally cached content for a cache-hit is a prioritization scheme, and wherein retrieving the cache-missed content is a processing priority for the request), wherein the processing delay is based on the processing priority, and wherein the at least one delay parameter is indicative of the processing priority (Moreman, figure 3; claim 8; [0038]-[0042]).
As to claim 4, Moreman-Prahlad discloses the method of claim 3, further comprising:
determining, prior to the request being processed, that the content is not available at the first computing device or a cache associated with the first computing device (Moreman, claim 8, “cache-miss”); and
retrieving, prior to the second time, the content from at least one other computing device (Moreman, claim 8).
As to claim 6, Moreman-Prahlad discloses the method of claim 1, wherein sending the content and the at least one delay parameter comprises:
determining that the amount of time representing the processing delay meets or exceeds a threshold amount (Moreman, [0041]-[0042] and [0043], “if the measured arrival is very slow, rather than rate limiting a cached chunk to play out at a rate significantly below the native rate of the chunk, a lower bound is applied. In one example, the lower bound is the maximum of the measured cache arrival rate or the native rate of the chunk”, wherein determining “very slow” implies meets or exceeds a threshold of delay time); and
sending, based on the amount of time meeting or exceeding the threshold amount, the at least one delay parameter to the second computing device (Moreman, [0041]-[0042] and [0043], “if the measured arrival is very slow, rather than rate limiting a cached chunk to play out at a rate significantly below the native rate of the chunk, a lower bound is applied. In one example, the lower bound is the maximum of the measured cache arrival rate or the native rate of the chunk”).
As to claim 26, Moreman-Prahlad discloses the method of claim 21, wherein sending the at least one delay parameter comprises determining that the amount of time meets or exceeds a threshold amount (Moreman, [0041]-[0042] and [0043], “if the measured arrival is very slow, rather than rate limiting a cached chunk to play out at a rate significantly below the native rate of the chunk, a lower bound is applied. In one example, the lower bound is the maximum of the measured cache arrival rate or the native rate of the chunk”, wherein determining “very slow” implies meets or exceeds a threshold of delay time).
As to claim 27, Moreman-Prahlad discloses the method of claim 26, wherein sending the at least one delay parameter comprises sending, based on the amount of time meeting or exceeding the threshold amount, the at least one delay parameter to the second computing device (Moreman, [0041]-[0042] and [0043], “if the measured arrival is very slow, rather than rate limiting a cached chunk to play out at a rate significantly below the native rate of the chunk, a lower bound is applied. In one example, the lower bound is the maximum of the measured cache arrival rate or the native rate of the chunk”, wherein determining “very slow” implies meets or exceeds a threshold of delay time).
As to claim 33, Moreman-Prahlad discloses the method of claim 28, further comprising:
determining, by the computing device, that the amount of time representing the processing delay meets or exceeds a threshold amount; and based on the amount of time meeting or exceeding the threshold amount, sending, by the computing device, the at least one delay parameter (Moreman, [0041]-[0042] and [0043], “if the measured arrival is very slow, rather than rate limiting a cached chunk to play out at a rate significantly below the native rate of the chunk, a lower bound is applied. In one example, the lower bound is the maximum of the measured cache arrival rate or the native rate of the chunk”, wherein determining “very slow” implies meets or exceeds a threshold of delay time).
As to claim 7, Moreman-Prahlad discloses the method of claim 1, further comprising:
determining, by the second computing device, based on the at least one delay parameter, a service metric associated with receiving the content (Moreman, [0010], “An adaptive streaming client (e.g., ABR video client) can access chunks stored on servers using a Web paradigm (e.g., HTTP (Hypertext Transfer Protocol) operating over a TCP (Transmission Control Protocol)/IP (Internet Protocol) transport) and make a decision about which specific representation (e.g., video encoding rate) of any given content it will request from the server. The decision may be based on various parameters or observations, including, for example, current bandwidth (e.g., based on monitored delivery rate). Throughout the duration of a given viewing experience, the ABR video client may upshift to a higher encoding rate to obtain better quality when available bandwidth increases, or downshift to a lower encoding rate when available bandwidth decreases”; [0021], “the performance characteristics may comprise the time to retrieve content (e.g., chunk fetch time, maximum bit rate) from the content source (e.g., cache node 13, content source 14). The performance characteristics are associated with a specific video asset (content) requested by the endpoint 10”; [0041], “In one embodiment, the maximum bitrate for the content currently cached on the cache node may be determined. If the client is currently fetching a lower bitrate than the maximum rate locally stored, the bitrate associated with the content may be used rather than the bitrate associated with the current rate. This allows the client to detect that an upshift is desirable.”); and
determining, by the second computing device, based on the service metric, that one or more of:
the content is to be requested from a content source that differs from the first computing device, a different representation of the content is to be requested, or the content is to be requested at a different bitrate (Moreman, [0041]).
As to claim 29, Moreman-Prahlad discloses the method of claim 28, further comprising determining, based on the service metric, that one or more of:
the content is to be requested from a content source that differs from the computing
device, a different representation of the content is to be requested, or
the content is to be requested at a different bitrate (Moreman, [0041]).
8. Claims 5, 25, and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Moreman-Prahlad, as applied to claim 3 above, and further in view of Stockhammer et al. (US 2018/0316740).
As to claim 5, Moreman-Prahlad discloses the claimed invention substantially as discussed in claim 3, but does not expressly disclose that the second computing device comprises a playback buffer, wherein the request comprises an indication of a status of the playback buffer, and wherein determining the processing priority for the request comprises: determining, based on the status of the playback buffer, a buffer starvation time; and determining, based on the buffer starvation time and the prioritization scheme, the processing priority. Stockhammer discloses a second computing device comprising a playback buffer, wherein a request comprises an indication of a status of the playback buffer, and wherein determining a processing priority for the request comprises: determining, based on the status of the playback buffer, a buffer starvation time; and determining, based on the buffer starvation time and the prioritization scheme, a processing priority (Stockhammer, [0171]-[0172]).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine Moreman-Prahlad with Stockhammer. The suggestion/motivation for the combination would have been to avoid buffer overrun (Stockhammer, [0010]).
As to claim 25, Moreman-Prahlad-Stockhammer discloses the method of claim 21, wherein the second computing device comprises a playback buffer, wherein the request comprises an indication of a status of the playback buffer, and wherein the method further comprises: determining, based on the status of the playback buffer, a buffer starvation time; and determining, based on the buffer starvation time, the processing priority (see similar rejection to claim 5).
As to claim 32, Moreman-Prahlad-Stockhammer discloses the method of claim 28, wherein the request comprises an indication of a status of a playback buffer, and wherein the method further comprises:
determining, by the computing device, based on the status of the playback buffer, a buffer starvation time; and determining, based on the buffer starvation time, a processing priority (see similar rejection to claim 5, Stockhammer, [0171]-[0172]), wherein the at least one delay parameter is indicative of the processing priority (Moreman, [0038]-[0042]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUA FAN whose telephone number is (571)270-5311. The examiner can normally be reached on 9-6.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Umar Cheema can be reached at 571-270-3037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HUA FAN/Primary Examiner, Art Unit 2458