Prosecution Insights
Last updated: April 19, 2026
Application No. 18/347,173

SYSTEMS AND METHODS FOR PROVIDING A FRAMEWORK FOR MULTIPLE WIRELESS CAMERAS ASSOCIATED WITH A USER EQUIPMENT DEVICE

Final Rejection §103
Filed
Jul 05, 2023
Examiner
CHOI, WON JUN
Art Unit
2411
Tech Center
2400 — Computer Networks
Assignee
Verizon Patent and Licensing Inc.
OA Round
2 (Final)
73%
Grant Probability
Favorable
3-4
OA Rounds
3y 8m
To Grant
80%
With Interview

Examiner Intelligence

Grants 73% — above average
73%
Career Allow Rate
24 granted / 33 resolved
+14.7% vs TC avg
Moderate +7% lift
+6.9%
Interview Lift
resolved cases with interview
Typical timeline
3y 8m
Avg Prosecution
43 currently pending
Career history
76
Total Applications
across all art units
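
The headline numbers above can be reproduced from the counts shown on this page. Below is a minimal sketch of that arithmetic, assuming the allow rate is simply grants divided by resolved cases and the interview figure is an additive percentage-point lift; the vendor's exact methodology is not stated here.

```python
# Reconstructing the headline examiner statistics from the counts shown above.
# The formulas are assumptions inferred from the displayed values, not the
# tool's documented methodology.

granted = 24          # career grants by this examiner
resolved = 33         # resolved (granted or abandoned) cases
interview_lift = 6.9  # percentage-point lift reported for interviews

allow_rate = 100 * granted / resolved          # 72.7% -> displayed as 73%
with_interview = allow_rate + interview_lift   # 79.6% -> displayed as ~80%

print(f"Career allow rate: {allow_rate:.1f}%")
print(f"Estimated grant probability with interview: {with_interview:.1f}%")
```

Both results round to the 73% and 80% figures shown above and in the Prosecution Projections section.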

Statute-Specific Performance

§101
1.7%
-38.3% vs TC avg
§103
54.5%
+14.5% vs TC avg
§102
22.7%
-17.3% vs TC avg
§112
19.1%
-20.9% vs TC avg
Black line = Tech Center average estimate • Based on career data from 33 resolved cases
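
The per-statute deltas imply the Tech Center baseline referenced by the chart note above. A small sketch recovering that baseline from the displayed pairs (what the percentages measure, e.g. the share of this examiner's rejections citing each statute, is not stated on the page):

```python
# Recover the implied Tech Center baseline from each displayed
# (examiner rate, delta vs TC average) pair. The baseline is simply
# rate - delta; the underlying metric is not specified on this page.

stats = {
    "101": (1.7, -38.3),
    "103": (54.5, +14.5),
    "102": (22.7, -17.3),
    "112": (19.1, -20.9),
}

for statute, (examiner_rate, delta) in stats.items():
    tc_average = examiner_rate - delta
    print(f"Sec. {statute}: examiner {examiner_rate:.1f}%, implied TC average {tc_average:.1f}%")
```

All four pairs resolve to the same 40.0% baseline, consistent with the single "black line" Tech Center estimate the chart note describes.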

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This communication is considered fully responsive to the amendment filed on 12/03/2025. Claims 1, 3-8, 12, 14-16, and 18-20 have been amended. The rejection of the claims under 35 U.S.C. § 112 is withdrawn.

Response to Arguments

Applicant’s arguments with respect to claims 1-20 filed on 12/03/2025 have been considered but are moot because they relate solely to newly added limitations, which are addressed in the instant Office Action with newly identified prior art (U.S. Patent Application Publication No. 20210112439, hereinafter “Sodagar”) and with previously identified prior art whose relevant teachings are mapped onto the added features for clarification.

With regard to Claim 1, amended claim 1 recites the limitation of “receiving, by one or more network devices, a request from a user equipment (UE) device to register a remote camera device with a core network, wherein the remote camera device is independent of the UE device”. First, Sodagar teaches the “remote camera device is independent of the UE device”. In para [0111], Sodagar discloses that the FLUS source (the framework for live uplink streaming (FLUS) source is interpreted as “a remote camera device”) (410) receives media content from one or more capture devices (460). The capture devices (460) can be parts of the UE (401) or connected to the UE (401). In para [0108], Sodagar further teaches a FLUS source entity (interpreted as “a remote camera device”), which may be embedded in a single UE or distributed among a UE and separate audio-visual capture devices. Thus, Sodagar teaches the “wherein the remote camera device is independent of the UE device” as recited in amended claim 1. Second, Sodagar teaches the receiving, by one or more network devices, a request from a user equipment (UE) device to register a remote camera device with a core network. In para [0161], Sodagar discloses that a FLUS session can be established by the FLUS source with the FLUS sink (interpreted as “one or more network devices”). A 5GMS aware application at a UE can request for instantiation of an NBMP workflow for the network-based media processing through FLUS source. Sodagar, therefore, teaches the newly added limitation “receiving, by one or more network devices, a request from a user equipment (UE) device to register a remote camera device with a core network, wherein the remote camera device is independent of the UE device” as recited in amended claim 1.

With regard to Claim 7, Applicant asserted that Yuan generally relates to a system in which a gaze tracker monitors an operator's eyes to adjust the bit rate of a video stream based on where the operator is looking (see Abstract). In applying Yuan to claim 7, the Office Action implies that the "instruction" in Yuan is the data generated by a local sensor (eye tracker) connected to the monitoring station itself. Applicant disagrees. Claim 7 explicitly recites receiving, by the video management system, an instruction from the UE device to perform a control action for a video stream associated with the data flow from the remote camera device. In contrast, the alleged "instruction" in Yuan comes from a local eye tracking sensor at the monitoring station, not from a separate UE device associated with the subscription to which the remote camera is assigned.
Furthermore, Yuan controls the bit rate based on local consumption (gaze) to save bandwidth. Nothing in Yuan discloses or even remotely suggests receiving an instruction from a UE device associated with a subscription assigned to a remote camera to perform such a control action. Claim 7 is therefore patentable over the combination of Leung, Kamdar, and Yuan for at least this additional reason. (The asserted portion in pages 16-17 of the Argument.)

The Examiner respectfully disagrees. First, Yuan teaches the monitoring station (interpreted as “User Equipment device”) associated with the subscription to which the one or more cameras are assigned. In particular, Yuan discloses, in para [0100], that “an operator may log into a computer device associated with monitoring station 125 and/or display 130 and may log into VMS 150 to configure one or more cameras 110. VMS 150 may configure camera 110 to provide a video stream of monitored area 106 to display 130 and display 130 may continue to receive video stream data from camera 110 and display the video stream data.” The step of logging in and configuring the camera in Yuan is interpreted as “assigning, by the one or more network devices, the remote camera device to a subscription of the UE device”. Thus, the monitoring station discussed in Yuan is associated with the subscription to which the one or more cameras are assigned.

Second, Yuan teaches the “receiving, by the video management system, an instruction from the UE device to perform a control action for a video stream associated with the data flow from the remote camera device” as recited in claim 7. Yuan discloses that FIG. 4 illustrates an exemplary environment 400 of an operator 402 viewing display 130 having eye tracker 140 in an embodiment. Display 130 may include any type of display for showing information to operator 402. Operator 402 views display 130 and can interact with VMS 150 via an application running on monitoring station 125 (see FIG. 4 and para [0075] of Yuan). Yuan further discloses that, based on the designated gaze area, different actions may be triggered, so that the information (interpreted as “an instruction from the UE device”) generated by eye tracker 140 may be interpreted as a user input to the video management system. For example, if eye tracker 140-1 determines that operator 402 is viewing frame 520-1 showing the video stream from camera 110-1, then station 125-1, VMS 150, and/or camera 110-1 may reduce a bit rate for areas of the video stream associated with areas of frame 520 that are outside the operator's gaze area. See para [0080] of Yuan. FIG. 6 of Yuan is a diagram of functional components of camera 110, display 130, and VMS 150 (see FIG. 6 and para [0083] of Yuan). Yuan discloses that Resource manager 680 (of VMS 150) may collect gaze area information from eye trackers 140 via eye tracker interface 670, may determine one or more bit rate reduction factors based on information stored in camera DB 685, and may instruct one or more cameras 110 to apply the one or more bit rate reduction factors to one or more regions of a video stream (interpreted as “performing, by the video management system, the control action”). See para [0087] of Yuan. Yuan, therefore, teaches the “receiving, by the video management system, an instruction from the UE device to perform a control action for a video stream associated with the data flow from the remote camera device” and “performing, by the video management system, the control action” as recited in claim 7.
Thus, the Applicant’s argument is unpersuasive.

With regard to Claim 8, Applicant asserted that: The Office Action cites Yuan to teach combining streams, noting that Yuan's display can show "multiple video streams". While Yuan may be reasonably construed as disclosing a display that may receive and display multiple video streams, nothing in Yuan discloses or suggests combining a plurality of uplink video streams associated with a plurality of remote camera devices as recited in claim 8. Rather, the system of Yuan focuses on reducing the bit rate of streams that are not being actively viewed. Furthermore, nothing in Yuan discloses or suggests a control action of combining the alleged multiple video streams, merely that a video stream may include multiple video streams. Nothing in Yuan discloses receiving an instruction to combine the streams at all, let alone receiving such an instruction from the UE device to control a plurality of video streams from a plurality of remote camera devices. (The asserted portion in page 17 of the Argument.)

The Examiner respectfully disagrees. Yuan discloses that the video stream may include multiple video streams (interpreted as “a plurality of uplink video streams associated with a plurality of remote camera devices into a combined video stream”) (para [0033] of Yuan). Yuan further discloses that display 130 in FIG. 5B shows numerous frames 520-1 through 520-N (individually “frame 520”; or plural “frames 520”). Each frame 520-1 through 520-N may present a different video stream so operator 402 can monitor more than one area. The different streams may be produced by different cameras 110-1 through 110-M. See Fig. 5B and para [0081] of Yuan. (FIG. 5B of Yuan is reproduced in the original Office Action; the image is omitted here.) Yuan discloses that, in this example, frame 520-1 may show a video stream from camera 110-1 of area 106-1; video frame 520-2 may show a video stream from camera 110-2 of area 106-2; etc. (para [0082] of Yuan). Yuan further discloses that an operator may log into a computer device associated with monitoring station 125 and/or display 130 and may log into VMS 150 to configure one or more cameras 110 (para [0100] of Yuan). The login step disclosed in para [0100] of Yuan constitutes an instruction transmitted from a monitoring station (“UE”) to the VMS to configure one or more cameras 110. In response to said instruction (login), the VMS performs a subsequent step of “VMS 150 may configure camera 110 to provide a video stream of monitored area 106 to display 130 and display 130 may continue to receive video stream data from camera 110 and display the video stream data” (see para [0100] of Yuan). Specifically, in a case where the monitoring station is configured to monitor a plurality of cameras 110-1 through 110-M, as illustrated in FIG. 1 and as discussed in para [0100] of Yuan, uplink video streams associated with the plurality of cameras are combined into a single video stream for display as illustrated in FIG. 5B of Yuan. Yuan, therefore, teaches the “wherein performing the control action includes: combining a plurality of uplink video streams associated with a plurality of remote camera devices into a combined video stream” as recited in claim 8. Thus, the Applicant’s argument is unpersuasive.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1, 2, 6, 12, 13, 16, and 20 rejected under 35 U.S.C. 103 as being unpatentable over Sodagar (U.S. Patent Application Publication No. 20220182435, hereinafter “Sodagar”) in view of Kamdar et al. (U.S. Patent Application Publication No. 20120099428, hereinafter “Kamdar”). Examiner’s note: in what follows, references are drawn to Sodagar unless otherwise mentioned. With respect to independent claims: Regarding claim 1, A method comprising: receiving, by one or more network devices, a request from a user equipment (UE) device to register a remote camera device with a core network(para [0111]: In an example, the FLUS source (410) receives media content from one or more capture devices (460). The capture devices (460) can be parts of the UE (401) or connected to the UE (401).) (Fig. 7 and para [0161]: At (S740), a FLUS session can be established by the FLUS source with the FLUS sink. In an example, a 5GMS aware application at a UE can request for instantiation of an NBMP workflow for the network-based media processing through FLUS source (the FLUS source is interpreted as “a remote camera”).), wherein the remote camera device is independent of the UE device (para [0108]: A FLUS source entity and a FLUS sink entity can support the point-to-point transmission of speech/audio, video, and text by media handling (e.g., signaling, transport, packet-loss handling, and adaptation). FLUS can provide a reliable and interoperable service with a predictable media quality while allowing for flexibility in the service offerings. A FLUS source entity, (interpreted as “a remote camera”) which may be embedded in a single UE, or distributed among a UE and separate audio-visual capture devices (interpreted as “the remote camera device is independent of the UE device”), may support all or a subset of the features specified in this document.); registering, by one or more network devices, the remote camera device with the core network (para [0160]: the 5GMS AF (interpreted as “one or more network devices”) provides an address of the FLUS source and an NBMP WDD corresponding to the network-based media processing to the FLUS sink. However, no instantiation of an NBMP workflow for the network-based media processing is performed at the FLUS sink.) (para [0162]: At (S740), a FLUS session can be established by the FLUS source with the FLUS sink. In an example, a 5GMS aware application at a UE can request for instantiation of an NBMP workflow for the network-based media processing through FLUS source. 
Accordingly, the FLUS source at the UE can transmit a request for the instantiation of the NBMP workflow for the network-based media processing to the FLUS sink with an NBMP WDD corresponding to the NBMP workflow included in the request. In response to receiving the request from the FLUS source, the FLUS sink can instantiate the NBMP workflow based on the NBMP WDD(interpreted as “registering, by one or more network devices, the remote camera device with the core network”).); assigning, by the one or more network devices, the remote camera device to a subscription of the UE device registered with the core network such that the remote camera device share the subscription of the UE device (para [0162]: Accordingly, the FLUS source at the UE can transmit a request for the instantiation of the NBMP workflow for the network-based media processing to the FLUS sink with an NBMP WDD corresponding to the NBMP workflow included in the request. In response to receiving the request from the FLUS source, the FLUS sink can instantiate the NBMP workflow based on the NBMP WDD.) (para [0163]: In another example, the FLUS sink is configured with an address of the FLUS source and an NBMP WDD corresponding to the network-based media processing by the 5GMS AF (The ‘NBMP WDD’ is interpreted as “a subscription of the UE device registered with the core network”). Under such a configuration, in response to a request for establishing the FLUS session from the FLUS source including the address of the FLUS source, the FLUS sink can instantiate an NBMP workflow based on the NBMP WDD (interpreted as “assigning, by the one or more network devices, the remote camera device to a subscription of the UE device registered with the core network such that the remote camera device share the subscription of the UE device”, the NBMP WDD is interpreted as “a subscription of the UE device registered with the core network”, see para [0128]).) (para [0128]: An MPEG NBMP Workflow Description Document (WDD) may be used to describe the media processing tasks at the FLUS sink (430) to be performed on received media components from the FLUS source (410).) Examiner’s note: FIG. 3 of Sodagar shows an example of a 5GMS system (300). The system (300) includes a user equipment (UE) (301) and data network (DN) (302) (see Fig. 3 and para [0063]). In Fig. 3 and paragraphs [0065-0079], Sodagar discloses that “M5 (Media Session Handling API): APIs exposed by the 5GMS AF (340) to the media session handler (321) for media session handling, control and assistance that also include appropriate security mechanisms (e.g. authorization and authentication (interpreted as “subscription of the UE”), and QoE metrics reporting). Thus, the security mechanisms and NBMP workflow description document (WDD) corresponding to the network-based media processing is interpreted as “assigning, by the one or more network devices, the remote camera device to a subscription of the UE device registered with the core network” and Sodagar teaches the “assigning, by the one or more network devices, the remote camera device to a subscription of the UE device registered with the core network such that the remote camera device share the subscription of the UE device”, at least in view of Fig. 
3 and paragraphs [0065-0079, 0128, and 0162-0163].; assigning, by the one or more network devices, the remote camera device to an uplink streaming network slice (para [0151]: In a second scenario corresponding to the second case at (S620), the FLUS sink (604) is configured, and resources for media processing has been assigned. However, no NBMP workflow is instantiated. In this scenario, while establishing the session with the FLUS sink (604), the FLUS source (602) may request for instantiation of an NBMP workflow. An NBMP WDD can be included in the request. In response, the FLUS sink (604) may instantiate the NBMP workflow based on the received NBMP WDD (interpreted as “assigning, by the one or more network devices, the remote camera device to an uplink streaming network slice”, the instantiation of an NBMP workflow is interpreted as “an uplink streaming network slice”) in response to receiving a request from the FLUS source.).); assigning, by the one or more network devices, an uplink streaming video Quality of Service (QOS) class to a data flow sent from the remote camera device to the core network (para [0101]: The 5GMS AF (340) (interpreted as “one or more network devices”) can coordinate the media processing and ensures that the appropriate QoS and traffic handling for the session are provided (interpreted as “an uplink streaming video Quality of Service (QOS) class to a data flow sent from the remote camera device to the core network”).), wherein the uplink streaming video QoS class is configured to provide a guarantee for at least one video streaming QoS parameter (Examiner’s note: the missing/crossed out limitations will be discussed in view of Kamdar); and establishing, by the one or more network devices, the data flow from the remote camera device to a destination device via the uplink streaming network slice based on the uplink streaming video QoS class (para [0171]: At (S920), a FLUS session can be established between a FLUS source at a UE and the FLUS sink in response to receiving a request from the FLUS source.)(para [0063]: For uplink streaming, the UE (301) is an origin of the media, and the DN (302) acts as the consumption entity (interpreted as “a destination device”) (para [0101]: The 5GMS AF (340) can coordinate the media processing and ensures that the appropriate QoS and traffic handling for the session are provided (interpreted as “based on the uplink streaming video QoS class”).). Sodagar does not explicitly teach the “wherein the uplink streaming video QoS class is configured to provide a guarantee for at least one video streaming QoS parameter”. In analogous art, Kamdar discloses that, in para [0023] of Kamdar: LTE network 130 may include a core network architecture of the Third Generation Partnership Project (3GPP) LTE wireless communication standard (e.g., an evolved packet core (EPC) network). In para [0024] of Kamdar: LTE network 130 may prioritize traffic based on LTE QoS class identifiers (QCIs). … For example, in one implementation, LTE network 130 may use 9 different QoS classes, such as a voice QoS class, a video telephony QoS class, a video streaming QoS class, a real-time gaming QoS class, an application signaling QoS class, a first third party hosted application QoS class, a second third party hosted application QoS class, a premium access Internet traffic QoS class, and a best effort Internet traffic QoS class. Each QoS class may be associated with a different priority.).
Kamdar disclose the above recited missing feature “wherein the uplink streaming video QoS class guarantees at least one video streaming QoS parameter” (para [0024] of Kamdar: For example, QoS classes associated with real-time data delivery, such as voice, video telephony, and/or video streaming may be given a higher priority (interpreted as “at least one video streaming QoS parameter”) than QoS classes associated with data transfer, such as a premium access Internet traffic QoS class or a best effort Internet traffic QoS class (interpreted as “wherein the uplink streaming video QoS class guarantees at least one video streaming QoS parameter”).)(para [0064] of Kamdar: DSCP QoS class queues 550 may store packets that are to be sent to a particular device in customer network 110. Packets may be forwarded from a particular DSCP QoS class queue 550 based on a priority associated with the particular DSCP QoS class with which the packets are associated.). Sodagar and Kamdar are both considered to be analogous to the claimed invention because they are in the same field of a video streaming communication. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sodagar to incorporate the teachings of Kamdar and provide video streaming based on priorities associated with QoS class. Regarding claim 12, it is a system claim corresponding to the method claim 1, except limitations “one or more devices” (Figures. 3-5) and is therefore rejected for the similar reasons set forth in the rejection of claim 1. Regarding claim 20, it is a non-transitory computer-readable memory device claim corresponding to the method claim 1, except limitations “A non-transitory computer-readable memory device storing instructions executable by a processor” (para [0012]: In an aspect, a non-transitory computer-readable medium storing computer-executable instructions includes computer-executable instructions …) and is therefore rejected for the similar reasons set forth in the rejection of claim 1. With respect to dependent claims: Regarding claim 2, Sodagar and Kamdar teach The method of claim 1, further comprising: Kamdar further teaches establishing, in the uplink streaming network slice, another data flow from the destination device to a downlink UE device, wherein the other data flow is assigned to a downlink streaming QoS class (para [0020] of Kamdar: Customer premises network 110 (interpreted as “destination device”) may include a combined gateway 120 and one or more devices connected to each other at a particular location serviced by combined gateway 120. Devices in customer premise network 110 may include, for example, set-top boxes (STBs), televisions, computers, voice-over-Internet-protocol (VoIP) devices, home networking equipment (e.g., routers, cables, splitters, local gateways, etc.) (interpreted as “a downlink UE device”), etc.) (para [0022] of Kamdar: Customer premises network 110 may prioritize traffic based on particular QoS classes associated with particular packets based on a type of traffic (interpreted as “assigned to a downlink streaming QoS class”)...). 
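
For orientation, the Kamdar passages cited above amount to per-class priority handling in which real-time classes such as video streaming outrank best-effort traffic. A minimal sketch of that idea follows; the class names and priority values are illustrative assumptions, not Kamdar's actual QCI or DSCP tables.

```python
import heapq
from typing import NamedTuple

# Illustrative priority ordering; lower number means higher priority.
# The specific values are assumptions, not taken from Kamdar.
QOS_PRIORITY = {
    "voice": 1,
    "video_telephony": 2,
    "video_streaming": 3,       # uplink camera flows would map here
    "premium_internet": 8,
    "best_effort_internet": 9,
}

class Packet(NamedTuple):
    qos_class: str
    payload: bytes

def enqueue(queue: list, seq: int, packet: Packet) -> None:
    """Queue a packet keyed by its QoS class priority (FIFO within a class)."""
    heapq.heappush(queue, (QOS_PRIORITY[packet.qos_class], seq, packet))

def forward_next(queue: list) -> Packet:
    """Forward the highest-priority queued packet."""
    return heapq.heappop(queue)[2]

queue: list = []
for seq, pkt in enumerate([
    Packet("best_effort_internet", b"web"),
    Packet("video_streaming", b"camera frame"),
]):
    enqueue(queue, seq, pkt)

print(forward_next(queue).qos_class)  # video_streaming is served first
```

In this toy model the uplink camera flow is served before best-effort web traffic, mirroring the priority ordering the rejection cites from para [0024] of Kamdar.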
Regarding claim 6, Sodagar and Kamdar teach The method of claim 1, further comprising: Kamdar teaches storing upstream video data received via the data flow from the remote camera device in a video folder associated with the subscription of the UE device (para [0017] of Kamdar: An implementation described herein may relate to configuring a broadband home router (BHR), which interfaces a fixed wireless customer premises network with an LTE network, based on a subscriber profile downloaded over the LTE network.)(para [0022] of Kamdar: customer premises network 110 may use 9 different DSCP QoS classes, such as …, a video streaming QoS class, …)(para [0055] of Kamdar: As shown in FIG. 5, BHR 330 may include a router module 510, a Subscriber Identity Module (SIM) 520, a QoS manager 530, a QoS mapping table 540, one or more DSCP QoS class queues 550 (referred to collectively as “DSCP QoS class queues 550” and individually as “DSCP class queue 550” (interpreted as “a video folder associated with the subscription of the UE device”)), and one or more LTE QoS class queues 560 (referred to collectively as “LTE QoS class queues 560” and individually as “LTE class queue 560”)).

Regarding claim 13, Claim 13 has similar limitation as of Claim(s) 2, therefore it is rejected under the same reasons as Claim(s) 2.

Regarding claim 16, Claim 16 has similar limitation as of Claim(s) 6, therefore it is rejected under the same reasons as Claim(s) 6.

Claim(s) 3-4, 7-8, 11, 14-15, 17, and 18 rejected under 35 U.S.C. 103 as being unpatentable over Sodagar in view of Kamdar, and further in view of Yuan (U.S. Patent Application Publication No. 20180190091, hereinafter “Yuan”).

Regarding claim 3, Sodagar and Kamdar teach The method of claim 1, further comprising: (Examiner’s note: the missing/crossed out limitations will be discussed in view of Yuan) Kamdar teaches: establishing another data flow from the other remote camera device to the destination device via the uplink streaming network slice based on the uplink streaming video QoS class establishing, in the uplink streaming network slice, another data flow from the destination device to a downlink UE device, wherein the other data flow is assigned to a downlink streaming QoS class (para [0020] of Kamdar: Customer premises network 110 (interpreted as “destination device”) may include a combined gateway 120 and one or more devices connected to each other at a particular location serviced by combined gateway 120. Devices in customer premise network 110 may include, for example, set-top boxes (STBs), televisions, computers, voice-over-Internet-protocol (VoIP) devices, home networking equipment (e.g., routers, cables, splitters, local gateways, etc. (interpreted as “a downlink UE device”)) (para [0022] of Kamdar: Customer premises network 110 (interpreted as “destination device”) may prioritize traffic based on particular QoS classes associated with particular packets based on a type of traffic. In one implementation, customer premises network 110 may use a QoS mechanism based on DSCP. DSCP may include a networking architecture that provides QoS guarantees in an IP network.); and generating a downlink data flow from the destination device to a downlink UE device (para [0022] of Kamdar: BHR 330 may include one or more devices that buffer and forward data packets toward destinations.
For example, BHR 330 may receive data packets from eNodeB 140 (e.g., via LTE module 320) and forward the data packets toward user devices 270 (interpreted as “a downlink data flow from the destination device to a downlink UE device”), wherein the downlink data flow combines video data from the data flow and the other data flow, and wherein the downlink data flow is assigned to a downlink streaming QoS class (para [0055] of Kamdar: As shown in FIG. 5, BHR 330 may include a router module 510, a Subscriber Identity Module (SIM) 520, a QoS manager 530, a QoS mapping table 540, one or more DSCP QoS class queues 550 (referred to collectively as “DSCP QoS class queues 550” and individually as “DSCP class queue 550” (interpreted as “combines video data from the data flow and the other data flow, and wherein the downlink data flow is assigned to a downlink streaming QoS class”)), and one or more LTE QoS class queues 560 (referred to collectively as “LTE QoS class queues 560” and individually as “LTE class queue 560”)). As noted above, a combination of Sodagar and Kamdar fails to teach: “registering another remote camera device with the core network;” and “assigning the other remote camera device to the subscription associated with the UE device;”. In analogous art, Yuan discloses: registering another remote camera device with the core network (FIGs. 1, 6, and para [0084] of Yuan: VMS 150 (interpreted as “core network”) may include an eye tracker interface 670, a resource manager 680, a camera database (DB) 685, and a camera interface 690.) (para [0086] of Yuan: Resource manager 680 may manage resources associated with environment 100. For example, resource manager 680 may manage network resources associated with transmission of data from cameras 110 to monitoring stations 125 and associated displays 130 across network 120, and/or processor and memory resources associated with cameras 110, monitoring stations 125, and/or displays 130.). (FIG. 1 of Yuan is reproduced in the original Office Action; the image is omitted here.) assigning the other remote camera device to the subscription associated with the UE device (para [0100] of Yuan: For example, an operator may log into a computer device associated with monitoring station 125 (interpreted as “UE device”) and/or display 130 and may log into VMS 150 to configure one or more cameras 110. VMS 150 may configure camera 110 to provide a video stream of monitored area 106 to display 130 and display 130 may continue to receive video stream data from camera 110 and display the video stream data.)(Examiner’s note: The step of login and configuring cameras in Yuan is interpreted as “assigning, by the one or more network devices, the remote camera device to a subscription of the UE device”.). Sodagar, Kamdar and Yuan are considered to be analogous to the claimed invention because they are in the same field of a video streaming communication. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Sodagar and Kamdar to incorporate the teachings of Yuan and provide video streaming by registering a plurality of video streaming devices with the core network based on priorities associated with QoS class.
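
For orientation, the claim 1 and claim 3 limitations mapped above describe a registration-and-streaming flow: register a remote camera independent of the UE, assign it to the UE's subscription, assign an uplink streaming slice and an uplink video QoS class, and establish the data flow to a destination device. The sketch below models that flow only as the claim language is characterized in this rejection; the class and field names are hypothetical, and it is not an implementation of Sodagar's FLUS/NBMP signaling.

```python
from dataclasses import dataclass, field

# Hypothetical data model for the claimed flow as characterized in the
# rejection (register camera -> assign to UE subscription -> assign slice
# and uplink video QoS class -> establish data flow). Not Sodagar's FLUS API.

@dataclass
class Subscription:
    ue_id: str
    camera_ids: list = field(default_factory=list)

@dataclass
class DataFlow:
    camera_id: str
    destination: str
    slice_id: str
    qos_class: str

class CoreNetwork:
    def __init__(self) -> None:
        self.subscriptions: dict = {}
        self.flows: list = []

    def register_camera(self, ue_id: str, camera_id: str) -> None:
        # "receiving ... a request from a UE device to register a remote
        # camera device" and "assigning ... to a subscription of the UE device"
        sub = self.subscriptions.setdefault(ue_id, Subscription(ue_id))
        sub.camera_ids.append(camera_id)

    def establish_flow(self, camera_id: str, destination: str) -> DataFlow:
        # "assigning ... to an uplink streaming network slice" and assigning
        # "an uplink streaming video QoS class" before establishing the flow
        flow = DataFlow(camera_id, destination,
                        slice_id="uplink-streaming-slice",
                        qos_class="uplink-streaming-video")
        self.flows.append(flow)
        return flow

core = CoreNetwork()
core.register_camera("ue-1", "camera-1")
core.register_camera("ue-1", "camera-2")   # claim 3: another camera, same subscription
flow = core.establish_flow("camera-1", destination="video-management-system")
print(flow)
```

Claim 3's "another remote camera device" simply lands in the same Subscription record here, which parallels how the rejection reads Yuan's login-and-configure step as assigning additional cameras to the UE's subscription.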
Regarding claim 4, Sodagar and Kamdar teach The method of claim 1. Sodagar and Kamdar fail to disclose wherein the uplink streaming network slice enables a plurality of uplink video streams associated with a plurality of remote camera devices to be combined into a video stream controlled by the UE device. In analogous art, Yuan discloses the above limitation. (Fig. 1 and para [0047] of Yuan: VMS 150 may receive and store image data from cameras 110. VMS 150 may also provide a user interface for operators of monitoring stations 125 (interpreted as “the user equipment (UE) device”) to view image data stored in VMS 150 or image data streamed from cameras 110) (para [0033] of Yuan: A video management system may manage a client device that includes a display. The display may receive a video stream from a camera and display the video stream on the display. … In some implementations, the video stream may include multiple video streams (interpreted as “a plurality of uplink video streams associated with a plurality of camera devices to be combined into a video stream”)…)(para [0091] of Yuan: Client interface 640 may store, manage, and/or apply one or more image transmission parameters (interpreted as “uplink streaming network slice”). For example, client interface 640 may store a Quality of Service (QoS) parameter. Client interface 640 may receive an instruction from VMS 150 to adjust one or more of the stored encoding parameters in order to adjust a bit rate in a region of a video stream based on gaze area information determined by VMS 150.)(para [0075] of Yuan: FIG. 4 illustrates an exemplary environment 400 of an operator 402 viewing display 130 having eye tracker 140 in an embodiment. Display 130 may include any type of display for showing information to operator 402. Operator 402 views display 130 and can interact with VMS 150 via an application running on monitoring station 125 (interpreted as “a video stream controlled by the UE device”). Sodagar, Kamdar and Yuan are considered to be analogous to the claimed invention because they are in the same field of a video streaming communication. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Sodagar and Kamdar to incorporate the teachings of Yuan and provide the uplink streaming network slice enabling a plurality of uplink video streams associated with a plurality of camera devices to be combined into a video stream controlled by the UE device.

Regarding claim 7, Sodagar and Kamdar teach The method of claim 1. Sodagar and Kamdar fail to disclose: wherein the destination device includes a video management system, the method further comprising: receiving, by the video management system, an instruction from the UE device to perform a control action for a video stream associated with the data flow from the camera device; and performing, by the video management system, the control action. Yuan discloses wherein the destination device includes a video management system (Fig. 1 and para [0038] of Yuan: a video management system (VMS) 150.), the method further comprising: receiving, by the video management system, an instruction from the UE device to perform a control action for a video stream associated with the data flow from the camera device (Yuan discloses that FIG. 4 illustrates an exemplary environment 400 of an operator 402 viewing display 130 having eye tracker 140 in an embodiment.
Display 130 may include any type of display for showing information to operator 402. Operator 402 views display 130 and can interact with VMS 150 via an application running on monitoring station 125 (see FIG. 4 and para [0075] of Yuan). Yuan further discloses that, based on the designated gaze area, different actions may be triggered, so that the information (interpreted as “an instruction from the UE device”) generated by eye tracker 140 may be interpreted as a user input to the video management system. For example, if eye tracker 140-1 determines that operator 402 is viewing frame 520-1 showing the video stream from camera 110-1, then station 125-1, VMS 150, and/or camera 110-1 may reduce a bit rate for areas of the video stream associated with areas of frame 520 that are outside the operator's gaze area. See para [0080] of Yuan.); and performing, by the video management system, the control action (FIG. 6 of Yuan is a diagram of functional components of camera 110, display 130, and VMS 150 (see FIG. 6 and para [0083] of Yuan). Yuan discloses that Resource manager 680 (of VMS 150) may collect gaze area information from eye trackers 140 via eye tracker interface 670, may determine one or more bit rate reduction factors based on information stored in camera DB 685, and may instruct one or more cameras 110 to apply the one or more bit rate reduction factors to one or more regions of a video stream. See para [0087] of Yuan.). Sodagar, Kamdar and Yuan are considered to be analogous to the claimed invention because they are in the same field of a video streaming communication. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Sodagar and Kamdar to incorporate the teachings of Yuan and provide an instruction to perform a control action for a video stream associated with the data flow from the camera devices.

Regarding claim 8, Sodagar, Kamdar and Yuan teach The method of claim 7, wherein performing the control action includes: Yuan further discloses combining a plurality of uplink video streams associated with a plurality of camera devices into a combined video stream (Yuan discloses that the video stream may include multiple video streams (interpreted as “a plurality of uplink video streams associated with a plurality of remote camera devices into a combined video stream”) (para [0033] of Yuan). Yuan further discloses that display 130 in FIG. 5B shows numerous frames 520-1 through 520-N (individually “frame 520”; or plural “frames 520”). Each frame 520-1 through 520-N may present a different video stream so operator 402 can monitor more than one area. The different streams may be produced by different cameras 110-1 through 110-M. See Fig. 5B and para [0081] of Yuan. (FIG. 5B of Yuan is reproduced in the original Office Action; the image is omitted here.) Yuan discloses that, in this example, frame 520-1 may show a video stream from camera 110-1 of area 106-1; video frame 520-2 may show a video stream from camera 110-2 of area 106-2; etc. (para [0082] of Yuan). Yuan further discloses that an operator may log into a computer device associated with monitoring station 125 and/or display 130 and may log into VMS 150 to configure one or more cameras 110 (para [0100] of Yuan). The login step disclosed in para [0100] of Yuan constitutes an instruction transmitted from a monitoring station (“UE”) to the VMS to configure one or more cameras 110.
In response to said instruction (login), the VMS performs a subsequent step of “VMS 150 may configure camera 110 to provide a video stream of monitored area 106 to display 130 and display 130 may continue to receive video stream data from camera 110 and display the video stream data” (see para [0100] of Yuan). Specifically, in a case where the monitoring station is configured to monitor a plurality of cameras 110-1 through 110-M, as illustrated in FIG. 1 and as discussed in para [0100] of Yuan, uplink video streams associated with the plurality of cameras are combined into a single video stream for display as illustrated in FIG. 5B of Yuan. Thus, Yuan teaches the “wherein performing the control action includes: combining a plurality of uplink video streams associated with a plurality of remote camera devices into a combined video stream” as recited in claim 8. Regarding claim 11, Sodagar, Kamdar and Yuan teach The method of claim 7, wherein performing the control action includes: Yuan further teaches: generating a highlight reel for the video stream (para [0080] of Yuan: Based on the designated gaze area, different actions may be triggered, so that the information generated by eye tracker 140 may be interpreted as a user input to the video management system. For example, if eye tracker 140-1 determines that operator 402 is viewing frame 520-1 showing the video stream from camera 110-1, then station 125-1, VMS 150, and/or camera 110-1 may reduce a bit rate for areas of the video stream associated with areas of frame 520 that are outside the operator's gaze area (interpreted as “generating a highlight reel for the video stream”)). Regarding claim 14, Claim 14 has similar limitation as of Claim(s) 3, therefore it is rejected under the same reasons as Claim(s) 3. Regarding claim 15, Claim 15 has similar limitation as of Claim(s) 4, therefore it is rejected under the same reasons as Claim(s) 4. Regarding claim 17, Claim 17 has similar limitation as of Claim(s) 7, therefore it is rejected under the same reasons as Claim(s) 7. Regarding claim 18, Claim 18 has similar limitation as of Claim(s) 8, therefore it is rejected under the same reasons as Claim(s) 8. Claims 9-10 and 19 rejected under 35 U.S.C. 103 as being unpatentable over Sodagar in view of Kamdar, in view of Yuan, and further in view of Scheepens et al. (U.S. Patent Application Publication No. 20160240170, hereinafter “Scheepens”). Regarding claim 9, Sodagar, Kamdar, and Yuan teach The method of claim 8, wherein performing the control action further includes: Yuan further teaches: selecting a primary stream from the plurality of uplink video streams (para [0080] of Yuan: Based on the designated gaze area, different actions may be triggered, so that the information generated by eye tracker 140 may be interpreted as a user input to the video management system. For example, if eye tracker 140-1 determines that operator 402 is viewing frame 520-1 showing the video stream from camera 110-1). Sodagar, Kamdar, and Yuan fail to disclose the increasing a size of a window for the primary stream in the combined video stream, in response selecting the primary stream. In analogous art, Scheepens teaches the above feature as follows: increasing a size of a window for the primary stream in the combined video stream, in response selecting the primary stream (Figs 5a-5e and para [0067] of Scheepens: FIG. 
5b-5d shows an animated sequence resulting from the user selecting the visual indicator 300, the animated sequence showing the selected viewport 2E being resized to display the video data at its native resolution and the other viewports 2A-2D, 2F being resized and rearranged to free up a portion of the display area.). Therefore, it would have been obvious to one of ordinary skill in the art at the time of instant application to modify a combination of Sodagar, Kamdar, and Yuan by using the features of Scheepens in order to have a more effective method such that the user can increase a size of a window for the primary stream in the combined video stream by selecting the primary stream.

Regarding claim 10, Sodagar, Kamdar, Yuan, and Scheepens teach The method of claim 9, wherein selecting the primary stream from the plurality of uplink video streams includes: Yuan further teaches: monitoring user behavior based on the plurality of uplink video streams (para [0075] of Yuan: Eye tracker 140 includes a sensor (e.g., a camera) that enables monitoring station 125 to determine where the eyes of operator 402 are focused.); identifying a user trigger action in a particular stream of the plurality of uplink video streams (para [0080] of Yuan: Based on the designated gaze area, different actions may be triggered, so that the information generated by eye tracker 140 may be interpreted as a user input to the video management system. For example, if eye tracker 140-1 determines that operator 402 is viewing frame 520-1 showing the video stream from camera 110-1 (interpreted as “identifying a user trigger action in a particular stream of the plurality of uplink video streams”)) and selecting the particular stream as the primary stream, in response to identifying the user trigger action (para [0080] of Yuan: Based on the designated gaze area, different actions may be triggered, so that the information generated by eye tracker 140 may be interpreted as a user input to the video management system. For example, if eye tracker 140-1 determines that operator 402 is viewing frame 520-1 showing the video stream from camera 110-1 (interpreted as “selecting the particular stream as the primary stream, in response to identifying the user trigger action”), then station 125-1, VMS 150, and/or camera 110-1 may reduce a bit rate for areas of the video stream associated with areas of frame 520 that are outside the operator's gaze area.).

Regarding claim 19, Claim 19 has similar limitation as of Claim(s) 10, therefore it is rejected under the same reasons as Claim(s) 10.

Claim(s) 5 rejected under 35 U.S.C. 103 as being unpatentable over Sodagar in view of Kamdar, and further in view of CN113411777A.
Regarding claim 5, Sodagar and Kamdar teach The method of claim 1, further comprising: Sodagar and Kamdar fail to teach: performing an over-the-air (OTA) update for the camera device, wherein the OTA update is triggered by the UE device. It, however, had been known in the art before the effective date of the instant application as shown by CN113411777A as follows: performing an over-the-air (OTA) update for the camera device, wherein the OTA update is triggered by the UE device (para [0039-0040] of CN113411777A, translated by Google Translation: a vehicle-mounted high-definition camera OTA upgrading method and a process sequentially comprise the following steps: 1.1 the user triggers a button for updating the software of the camera)(para [0061] of CN113411777A: … a corresponding interface of the vehicle machine (interpreted as “the UE device”) can prompt a user that the new software can be updated, the user can start upgrading the camera software only by clicking an update button.) Therefore, it would have been obvious to one of ordinary skill in the art at the time of instant application to modify a combination of Sodagar and Kamdar by using the features of CN113411777A in order to have a more effective method such that the user can start upgrading the camera software only by clicking an update button.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WON JUN CHOI whose telephone number is (703)756-1695. The examiner can normally be reached MON-FRI 08:00 - 17:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Derrick W Ferris, can be reached at 571-272-3123. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /WON JUN CHOI/Examiner, Art Unit 2411 /DERRICK W FERRIS/Supervisory Patent Examiner, Art Unit 2411
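
The claim 7 and 8 mappings in this action turn on Yuan's gaze-driven control loop: gaze-area information from the monitoring station is treated as the instruction, the VMS reduces the bit rate of streams (or regions) outside the operator's gaze, and the display lays several camera streams out as one combined view. A minimal sketch of that loop as characterized above; the function and field names are hypothetical, not Yuan's API.

```python
from dataclasses import dataclass

# Hypothetical sketch of Yuan's gaze-driven control loop as characterized in
# the rejection: the VMS interprets gaze-area information from the monitoring
# station as an instruction and reduces the bit rate of streams the operator
# is not looking at, while the display combines per-camera streams into frames.

@dataclass
class Stream:
    camera_id: str
    bit_rate_kbps: int

def apply_gaze_control(streams: list, gazed_camera_id: str,
                       reduction_factor: float = 0.5) -> list:
    """Reduce bit rate for every stream the operator is not looking at."""
    adjusted = []
    for s in streams:
        if s.camera_id == gazed_camera_id:
            adjusted.append(s)
        else:
            adjusted.append(Stream(s.camera_id,
                                   int(s.bit_rate_kbps * reduction_factor)))
    return adjusted

def combine_streams(streams: list) -> dict:
    """Lay the per-camera streams out as one combined multi-frame view (claim 8)."""
    return {f"frame-{i + 1}": s.camera_id for i, s in enumerate(streams)}

streams = [Stream("camera-110-1", 4000), Stream("camera-110-2", 4000)]
streams = apply_gaze_control(streams, gazed_camera_id="camera-110-1")
print(streams)                   # camera-110-2 drops to 2000 kbps
print(combine_streams(streams))  # frame-1 -> camera-110-1, frame-2 -> camera-110-2
```

Whether this local gaze signal qualifies as "an instruction from the UE device" is exactly the dispute between the applicant's argument and the examiner's response reproduced above.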

Prosecution Timeline

Jul 05, 2023
Application Filed
Sep 02, 2025
Non-Final Rejection — §103
Dec 03, 2025
Response Filed
Feb 19, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592798
Communication Method and Communications Apparatus
2y 5m to grant Granted Mar 31, 2026
Patent 12574333
Multi Radio Media Access Control for Ultra-Low and Bounded Delay
2y 5m to grant Granted Mar 10, 2026
Patent 12568537
WIRELESS UPLINK COMMUNICATION SYSTEM
2y 5m to grant Granted Mar 03, 2026
Patent 12550166
Scrambling of Physical Broadcast Channel (PBCH)
2y 5m to grant Granted Feb 10, 2026
Patent 12526857
ELECTRONIC DEVICE FOR PROVIDING USER INTERFACE RELATED TO PLURALITY OF EXTERNAL ELECTRONIC DEVICES AND OPERATING METHOD THEREOF
2y 5m to grant Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

3-4
Expected OA Rounds
73%
Grant Probability
80%
With Interview (+6.9%)
3y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 33 resolved cases by this examiner. Grant probability derived from career allow rate.
