Prosecution Insights
Last updated: April 19, 2026
Application No. 18/564,541

DEVICE, METHOD, SYSTEM AND PROGRAM FOR VIDEO TRANSMISSION ACCORDING TO APPLICATION STATUS

Status: Non-Final OA (§103)
Filed: Nov 27, 2023
Examiner: PARK, JUNG H
Art Unit: 2411
Tech Center: 2400 — Computer Networks
Assignee: Nippon Telegraph and Telephone Corporation
OA Round: 1 (Non-Final)

Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 88% (854 granted / 969 resolved; +30.1% vs TC avg), above average
Interview Lift: +4.5% across resolved cases with interview (a minimal lift)
Avg Prosecution: 2y 11m (typical timeline)
Career History: 1,014 total applications across all art units; 45 currently pending
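The headline figures above are internally consistent and can be reproduced from the raw counts. A minimal sanity check, using only the numbers shown in the panel (nothing is fetched from USPTO data):

```python
# Sanity-check the examiner statistics reported above.
# All inputs are the figures shown in the panel.
granted = 854
resolved = 969
pending = 45
total_applications = 1014

allow_rate = granted / resolved              # career allow rate
assert round(allow_rate * 100) == 88         # matches the 88% headline

# Resolved plus pending cases should account for every application on record.
assert resolved + pending == total_applications

# The panel reports the allow rate as +30.1 points vs the TC average,
# which implies a TC-average allow rate near 58%.
implied_tc_avg = allow_rate - 0.301
print(f"allow rate: {allow_rate:.1%}, implied TC average: {implied_tc_avg:.1%}")
```

Running this prints an 88.1% allow rate and an implied Tech Center average of 58.0%, matching the "+30.1% vs TC avg" annotation.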

Statute-Specific Performance

§101: 6.2% (-33.8% vs TC avg)
§103: 54.7% (+14.7% vs TC avg)
§102: 19.1% (-20.9% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)

Tech Center averages are estimates based on career data from 969 resolved cases.
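The per-statute deltas are mutually consistent: subtracting each delta from the examiner's rate backs out the same Tech Center baseline. A quick check, with the figures copied from the chart above:

```python
# Back out the implied Tech Center baseline from each per-statute figure.
# Each entry: statute -> (examiner rate %, delta vs TC average %).
stats = {
    "§101": (6.2, -33.8),
    "§103": (54.7, 14.7),
    "§102": (19.1, -20.9),
    "§112": (8.8, -31.2),
}

implied_baseline = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}

# Every statute backs out to the same 40.0% TC-average estimate.
assert set(implied_baseline.values()) == {40.0}
print(implied_baseline)
```

That all four statutes recover an identical 40.0% baseline suggests the chart plots each statute against a single flat TC-average line rather than per-statute averages.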

Office Action

§103
DETAILED ACTION

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, and 4-8 are rejected under 35 U.S.C. 103 as being unpatentable over Sullivan (US 2017/0311000, “Sullivan”) in view of Sridhar et al. (US 2021/0014725, “Sridhar”).

Regarding claim 1, Sullivan discloses a device which acquires a status of an application using video transmission (See Fig.2a-b and ¶.6-7, low latency video transmission; See ¶.20, latency sensitive real-time communication application such as for remote desktop conferencing, video telephony, video surveillance, web camera video);

- selects an encoding scheme and a decoding scheme of a video signal generated by the application according to the status of the application (See ¶.7, the detailed description presents techniques and tools for reducing latency in video encoding and decoding; See ¶.40, capable of operating in any of multiple encoding modes such as a low-latency encoding mode for real-time communication; See claim 23, selecting, from multiple available encoding modes, a low-latency encoding mode for real-time communication, wherein the constraint on delay is set as part of the low-latency encoding mode; See ¶.48, it can be a special-purpose decoding tool adapted for one such decoding mode. The decoder system can be implemented as an operating system module. …The coded data can include one or more syntax elements that indicate a constraint on latency to facilitate reduced-latency decoding; See ¶.51, the decoder includes multiple decoding modules that perform decoding and the exact operations performed by the decoder can vary depending on compression format); and

- secures resources of a data transmission network that transmits the video signal according to the selection result (See ¶.5, an encoder can increase encoding time and increase resources used during encoding to find the most efficient way to compress video; See ¶.59-60, determining memory capacity needed in a decoder; Examiner’s Note: Sridhar discloses the limitation “secures resources of a data transmission network”).

Sullivan discloses that an encoder can increase encoding time and increase resources used during encoding to find the most efficient way to compress video, and the method of determining memory capacity needed in a decoder, as cited in the paragraph above. However, Sullivan does not explicitly disclose what Sridhar discloses: “secures resources of a data transmission network” (Sridhar, See ¶.6, predict a likely encoding rate that will be used by the application that generates the conversational video traffic, and allocate resources to the conversational video traffic based on the likely encoding rates and other network conditions; See ¶.30, the resource allocation controller may allocate resources for the uplink and the downlink with the user equipment based on a predicted future encoding rate of the uplink data flow and/or the downlink data flow).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to apply the method of “securing resources of a data transmission network” as taught by Sridhar into the system of Sullivan, so that it provides a way of optimally allocating resources to video traffic in order to improve quality of service for that traffic (Sridhar, See ¶.3).
Regarding claim 2, Sullivan does not explicitly disclose what Sridhar discloses: “resources of the data transmission network allocated to low-delay applications are larger than resources of the data transmission network allocated to non-low-delay applications” (Sridhar, See ¶.61, allocate the additional resource for latency-sensitive traffic). Therefore, this claim is rejected with the similar reasons and motivation set forth in the rejection of claim 1.

Regarding claim 4, Sullivan discloses “the status of the application is acquired by analyzing a video signal generated by the application” (See ¶.6, consider use scenarios such as remote desktop conferencing, surveillance video, video telephony and other real-time communication scenarios. Such applications are time-sensitive. Low latency between recording of input pictures and playback of output pictures is a key factor in performance; See ¶.20, delay can be reduced for remote desktop conferencing, video telephony, video surveillance, web camera video and other real-time communication applications; See ¶.34, each real-time communication (“RTC”) tool includes both an encoder and a decoder for bidirectional communication. A given encoder can produce output compliant with H.264 or AVC, HEVC standard, another standard, or a proprietary format, with a corresponding decoder accepting encoded data from the encoder).

Regarding claim 5, it is a non-transitory computer readable medium claim corresponding to device claim 1 and is therefore rejected for the similar reasons set forth in the rejection of that claim.

Regarding claim 6, it is a method claim corresponding to device claims 1 and 4 and is therefore rejected for the similar reasons set forth in the rejection of those claims.
Regarding claim 7, it is a system claim corresponding to the method claim 6, except the limitation “a video signal source that executes an application that uses video transmission (Sullivan, See 310 Fig.3, video source); a data transmission network that transmits a video signal generated by the application (Sullivan, See Fig.3-4, sending encoded data over a channel to receiving side having decoder); and a controller that controls resources of the data transmission network (Sridhar, See 330 Fig.3 and ¶.27-29, a resource allocation controller to allocating resource based on encoding rate; See ¶.51, the resource allocation controller may test a data flow to determine whether it contains latency-sensitive bidirectional data flows. As illustrated, the resource allocation controller may determine whether the traffic is associated with conversational video specifically. The resource allocation controller may determine the application type (e.g., whether the application type is conversational video or some other latency-sensitive data flow) based on statistics vectors of bearer metrics and locating points on a label map corresponding to the statistics vectors)” and is therefore rejected for the similar reasons set forth in the rejection of the claim. 
Regarding claim 8, Sullivan and Sridhar disclose “a transmission-side media converter (MC) that converts the video signal from the video signal source into a format that can be transmitted over the data transmission network (Sullivan, See 310, 330, & 340 Fig.3, video source, selector, and encoder within the sender/source side); and a receiver-side MC that converts the video signal transmitted over the data transmission network into a video signal from the video signal source (Sullivan, See 450 Fig.4, decoder in receiving side), wherein the controller is configured to select, according to the status of the application (See the rejection of the selecting step in claim 1), an encoding scheme for conversion into a format that can be transmitted over the data transmission network in the transmission-side MC (Sullivan, See 380 Fig.3, channel coder to send out coded data over channel; Examiner’s Note: Sridhar discloses the limitation “a controller” as rejected in claim 7), and a decoding scheme for converting into the video signal from the video signal source in the receiver-side MC (as shown in Fig.4), the transmission-side MC encodes the video signal from the video signal source using the encoding scheme selected by the controller (See claim 1 for selecting encoding scheme), and the receiver-side MC decodes the video signal transmitted over the data transmission network using the decoding scheme selected by the controller” (Sullivan, See Fig.4 for decoding; See ¶.48, it can be a special-purpose decoding tool adapted for one such decoding mode. The decoder system can be implemented as an operating system module. …The coded data can include one or more syntax elements that indicate a constraint on latency to facilitate reduced-latency decoding; See ¶.51, the decoder includes multiple decoding modules that perform decoding and the exact operations performed by the decoder can vary depending on compression format).
Therefore, this claim is rejected with the similar reasons and motivation set forth in the rejection of claim 1.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Sullivan in view of Sridhar and further in view of Awais (US 2005/02014414, “Awais”).

Regarding claim 3, Sullivan and Sridhar do not explicitly disclose what Awais discloses: “in a case of low-delay applications, uncompressed or low-compression encoding and decoding schemes are selected, and in a case of non-low-delay applications, high-compression encoding and decoding schemes are selected” (Awais, See ¶.18, if the RTBM determines that sufficient bandwidth is available, to select a low compression, low latency CODEC). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to apply “in a case of low-delay applications, uncompressed or low-compression encoding and decoding schemes are selected, and in a case of non-low-delay applications, high-compression encoding and decoding schemes are selected” as taught by Awais into the system of Sullivan and Sridhar, so that it provides the best QoS achievable over the current end-to-end available bandwidth (Awais, See ¶.18).

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jung H Park, whose telephone number is 571-272-8565. The examiner can normally be reached M-F, 7:00 AM-3:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Derrick Ferris, can be reached at 571-272-3123. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JUNG H PARK/
Primary Examiner, Art Unit 2411

Prosecution Timeline

Nov 27, 2023: Application Filed
Dec 23, 2025: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598616: SRS RESOURCE SET AND BEAM ORDER ASSOCIATION FOR MULTI-BEAM PUSCH
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12587891: FRONTHAUL TIMING IMPROVEMENTS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12580709: UPLINK PHASE TRACKING REFERENCE SIGNALS FOR MULTIPLE TRANSMITTERS ON UPLINK SHARED CHANNELS
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12556490: DEVICE AND METHOD FOR CONTROLLING TRAFFIC TRANSMISSION/RECEPTION IN NETWORK END TERMINAL
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12549926: DATA PROCESSING METHOD AND APPARATUS OF PACKET DATA CONVERGENCE PROTOCOL (PDCP) LAYER SUPPORTING MULTICAST AND BROADCAST SERVICE (MBS) IN NEXT-GENERATION MOBILE COMMUNICATION SYSTEM
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88% (93% with interview, +4.5%)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 969 resolved cases by this examiner. Grant probability is derived from the career allow rate.
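As a cross-check, the with-interview figure follows directly from the base grant probability plus the interview lift reported in the examiner panel (a sketch using only the numbers above):

```python
# Cross-check the projection: base grant probability plus the reported
# interview lift should reproduce the with-interview figure.
base_probability = 88.0    # %, derived from the career allow rate
interview_lift = 4.5       # percentage points, from the examiner panel

with_interview = base_probability + interview_lift
assert with_interview == 92.5   # the dashboard rounds this up to the 93% shown
print(f"with interview: {with_interview}%")
```

Note the exact sum is 92.5%, so the displayed 93% is a round-half-up presentation of the underlying value.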
