DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office correspondence is in response to the application filed on June 27, 2024. Claims 21-80 are canceled per the preliminary amendment filed 09/11/2024.
Claims 1-20 are pending.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 09/11/2024 (two statements), 10/28/2025, and 01/07/2026 were filed after the filing date of the instant application on 06/27/2024. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Mukesh Taneja (US Publication 2025/0088464), hereafter Taneja, in view of Phillips et al. (US Publication 2025/0385944), hereafter Phillips.
As per claim 1, Taneja discloses a computer-implemented method, comprising: receiving, from a client device over a network, a first urgency parameter in relation to a first portion of a network resource, wherein networking equipment associated with the network provides a first queue for preferential network traffic and a second queue for non-preferential network traffic (paragraphs 9-17, 168: classifies traffic as L4S or non-L4S and performs resource allocation per policy); identifying, by a server, a second urgency parameter in relation to the first portion of the network resource (paragraphs 0023-26: identifies QoS flow traffic with another corresponding resource); based on at least one of the first urgency parameter or the second urgency parameter, determining whether to transmit the first portion to the client device over the network using the first queue or the second queue (paragraphs 0046, 0168: determines queue capability for packets in either a Classic queue (for non-L4S packets) or an L4S queue); and based on the determining, causing the first portion of the network resource to be transmitted preferentially to the client device over the network using the first queue instead of the second queue (paragraphs 46, 173, 250: determines and sends the packet with a priority value according to the QoS characteristic). Although Taneja discloses resource allocation for the coexistence of low latency, low loss and scalable throughput (L4S) traffic and non-L4S traffic in a system, Taneja fails to expressly disclose a first urgency parameter in relation to a first portion of a network resource, and a second urgency parameter in relation to the first portion of the network resource.
However, in the same field of endeavor, Phillips discloses the claimed limitation of a first urgency parameter in relation to a first portion of a network resource, and a second urgency parameter in relation to the first portion of the network resource (paragraphs 0064-65, 0084: streams delivered via HTTP with streams buffer and bandwidth measurement).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Phillips' teaching with Taneja. One would be motivated to identify and make use of urgency parameters (HTTP) for transmitting content to increase quality and decrease cost over the network.
As per claim 2, Taneja discloses the computer-implemented method further comprising: receiving, from the client device over the network, a third urgency parameter in relation to a second portion of the network resource (paragraphs 0168, 221); identifying, by the server, a fourth urgency parameter in relation to the second portion of the network resource; based on at least one of the third urgency parameter or the fourth urgency parameter, determining whether to transmit the second portion to the client device over the network using the first queue or the second queue (paragraphs 168, 221-222); and based on the determining, causing the second portion of the network resource to be transmitted non-preferentially to the client device over the network using the second queue instead of the first queue (paragraphs 9-17, 168).
As per claim 3, Taneja discloses the computer-implemented method wherein the first urgency parameter corresponds to the second urgency parameter (paragraphs 168, 203, 266-267).
As per claim 4, Taneja discloses the computer-implemented method wherein the determining whether to transmit the first portion preferentially or non-preferentially to the client device over the network is based on the second urgency parameter and is not based on the first urgency parameter (paragraphs 9-17, 23-26, 168).
As per claim 5, Taneja discloses the computer-implemented method wherein the determining whether to transmit the first portion preferentially or non-preferentially to the client device over the network is based on a combination of the first urgency parameter and the second urgency parameter (paragraphs 9-17, 23-26, 168).
As per claim 6, Taneja discloses the computer-implemented method further comprising: selecting, as a particular urgency parameter for the first portion of the network resource, the first urgency parameter, the second urgency parameter, or an urgency parameter obtained based on the first urgency parameter and the second urgency parameter; wherein the determining whether to transmit the first portion preferentially or non-preferentially to the client device over a network is based on comparing a value of the particular urgency parameter of the first portion to a threshold (paragraphs 009-17, 0027-38).
As per claim 7, Taneja discloses the computer-implemented method further comprising: receiving, from the client device and in relation to the first portion of the network resource, a first incremental parameter; and identifying, by the server and in relation to the first portion of the network resource, a second incremental parameter (paragraphs 168, 203, 266-267); wherein the determining whether to transmit the first portion preferentially or non-preferentially to the client device over the network is further based on at least one of the first incremental parameter or the second incremental parameter, wherein a first priority parameter comprises the first urgency parameter and the first incremental parameter, wherein a second priority parameter comprises the second urgency parameter and the second incremental parameter (paragraphs 9-17, 168-169). Although Taneja discloses resource allocation for the coexistence of low latency, low loss and scalable throughput (L4S) traffic and non-L4S traffic in a system, Taneja fails to expressly disclose wherein each of the first and second incremental parameters is a Boolean value.
However, in the same field of endeavor, Phillips discloses the claimed limitation of wherein each of the first and second incremental parameters is a Boolean value (paragraphs 4-7, 61, 137).
The same motivation that was utilized in the combination of claim 1 applies equally as well to claim 7.
As per claim 8, Taneja discloses the computer-implemented method further comprising: selecting, as a particular urgency parameter for the first portion of the network resource, the first urgency parameter, the second urgency parameter, or an urgency parameter obtained based on the first urgency parameter and the second urgency parameter (paragraphs 9-17, 168); selecting, as a particular urgency parameter for the second portion of the network resource, the third urgency parameter, the fourth urgency parameter, or an urgency parameter obtained based on the third urgency parameter and the fourth urgency parameter (paragraphs 168, 221-222); and based on determining that the particular urgency parameter for the first portion of the network resource matches the particular urgency parameter for the second portion of the network resource, causing the determination of whether to transmit the first portion preferentially or non-preferentially to the client device over the network to be further based on at least one of a first incremental parameter or a second incremental parameter for the first portion (paragraphs 168, 203, 266-267).
As per claim 9, although Taneja discloses resource allocation for the coexistence of low latency, low loss and scalable throughput (L4S) traffic and non-L4S traffic in a system, Taneja fails to expressly disclose wherein the first urgency parameter is an HTTP urgency parameter for retrieving an HTML document, an XML document, a JavaScript resource, or any combination thereof, corresponding to the network resource.
However, in the same field of endeavor, Phillips discloses the claimed limitation of wherein the first urgency parameter is an HTTP urgency parameter for retrieving an HTML document, an XML document, a JavaScript resource, or any combination thereof, corresponding to the network resource (paragraphs 0064-65, 0084).
The same motivation that was utilized in the combination of claim 1 applies equally as well to claim 9.
As per claim 10, Taneja discloses the computer-implemented method wherein causing the first portion of the network resource to be transmitted preferentially to the client device over the network using the first queue instead of the second queue is based at least in part on a file size or expected tonnage associated with the first portion of the network resource (paragraph 201; Phillips: 65-66, 69-71).
The same motivation that was utilized in the combination of claim 1 applies equally as well to claim 10.
As per claim 11, Taneja discloses the computer-implemented method wherein the network resource comprises a plurality of portions including a plurality of short-form videos, the method further comprising: selecting, as a particular urgency parameter for the first portion of the network resource, the first urgency parameter, the second urgency parameter, or an urgency parameter obtained based on the first urgency parameter and the second urgency parameter (paragraphs 0171, 0201); while a particular short-form video of the plurality of short-form videos corresponding to the first portion is being played at the client device, setting the particular urgency parameter for the particular short-form video higher than a plurality of urgency parameters for the other of the plurality of short-form videos (paragraphs 0203, 0171); prebuffering the plurality of short-form videos based at least in part on the plurality of urgency parameters; and based on current network conditions, adjusting one or more of the plurality of urgency parameters (paragraphs 212-216).
As per claim 12, Taneja discloses the computer-implemented method further comprising: selecting, as a particular urgency parameter for the first portion of the network resource, the first urgency parameter, the second urgency parameter, or an urgency parameter obtained based on the first urgency parameter and the second urgency parameter (paragraphs 0168, 221); and setting the particular urgency parameter for the first portion of the network resource based at least in part on a location of the client device in relation to a plurality of locations proximate to the location of the client device (paragraphs 168, 221-222).
As per claim 13, Taneja discloses the computer-implemented method further comprising: causing the first portion of the network resource to be transmitted preferentially to the client device over the network using the first queue instead of the second queue based on a type of the first portion of the network resource (paragraphs 168, 203, 266-267).
As per claim 14, Taneja discloses the computer-implemented method further comprising: based on determining that an attempt to transmit the first portion of the network resource was unsuccessful, increasing a value of at least one of the first urgency parameter or the second urgency parameter for the first portion for a subsequent attempt to transmit the first portion to the client device (paragraphs 0032-36, 168: scheduling metric to serve L4S or non-L4S attempting to serve usage of data using the resources, the prioritized resources, and the shared resources).
As per claim 15, Taneja discloses the computer-implemented method wherein the first portion comprises an advertisement, and a value of at least one of the first urgency parameter or the second urgency parameter for the first portion for the advertisement is increased based on an indication received from a provider of the advertisement (paragraphs 9-17, 168-169, 270).
As per claim 16, Taneja discloses the computer-implemented method wherein the first portion comprises an application available for download, and a value of at least one of the first urgency parameter or the second urgency parameter for the first portion for the application available for download is increased based on an indication received from a provider of the application (paragraphs 168-169, 222, 270).
Claim 17 is an independent claim with similar limitations but a different preamble, and it is hence rejected based on the rejection provided for claim 1.
Claims 18-20 recite all the same elements as claims 2-4, respectively. Therefore, the supporting rationales of the rejection of claims 2-4 apply equally as well to claims 18-20, respectively.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Joseph Y. Chen (US Publication 2013/0179541 A1) discloses that a content delivery server may provide content to a requesting client device using a streamlined HTTP enhancement proxy delivery technique. For example, an HTTP proxy server may receive a request for video content or a fragment of video content from a client device. The request may be associated with a timeout scheduled to occur if no content has been received after a specified amount of time. The server may then transmit a request for the content to a remote server, such as an upstream cache server in the proxy server's CDN. When the proxy server receives a portion of the requested content from the remote server, the proxy server begins transmitting the portion to the client device before the requested content has been completely received and buffered. The client device may then begin receiving data from the proxy server before the timeout has occurred.
Gupta et al. (US Publication 2025/0212201 A1) describes mapping Low Latency, Low Loss, and Scalable throughput (L4S) traffic. A device may determine a mapping policy and configure the mapping policy on a wireless device. The mapping policy can facilitate mapping L4S data flows with User Priority (UP) values and/or Traffic Identifier (TID) values. The wireless device can generate and share a restricted Target Wake Time (rTWT) schedule with the device. The rTWT schedule may be indicative of a service interval and a service period. The wireless device may also transmit one or more rTWT TID values for L4S communication during the service period. The wireless device and the device may map the L4S data flows with the one or more rTWT TID values and can transmit and/or receive the L4S data flows during the service period. The device may prioritize the L4S data flows over non-L4S data flows.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARZANA B HUQ whose telephone number is (571)270-3223. The examiner can normally be reached Monday - Friday: 8:30-5:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emmanuel L Moise can be reached at 571-272-3865. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FARZANA B HUQ/Primary Examiner, Art Unit 2455