DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is responsive to the communication received on 11/20/2025. Claims 1-20 are pending, of which claims 1, 10, and 18 are amended.
The Examiner recommends filing a written authorization for Internet communication in response to the present action. Doing so permits the USPTO to communicate with Applicant using Internet email to schedule interviews or discuss other aspects of the application. Without a written authorization in place, the USPTO cannot respond to Internet correspondence received from Applicant. The preferred method of providing authorization is by filing form PTO/SB/439, available at: https://www.uspto.gov/patent/forms/forms. See MPEP § 502.03 for other methods of providing written authorization.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 5, 10, 11, 14, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (US 2014/0321378) in view of Ji (US 2013/0273954), and further in view of Zhu (US 2018/0041913).
Regarding claim 1, Zhang teaches a communication apparatus, comprising: a memory configured to store a computer program; and a processor configured to execute the computer program stored in the memory, to perform operations including (a receiver such as a UE/mobile device in a video transmission system over a mobile network; Zhang ¶46):
[0046] Embodiments of the present application provide a method and a device for video transmission, the base station receives a second video data packet transmitted by a server and first feedback information about a first video data packet transmitted by a user equipment, and performs a scheduling process according to the first feedback information, and transmits the second video data packet to the user equipment according to the result of the scheduling process after the scheduling process, therefore, the base station does not need to transmit the feedback information to the server again, thereby reducing the feedback time, that is, the base station can perform the scheduling process by utilizing the feedback information in time, therefore, resources can be fully used or data loss can be reduced, the real-time performance of video transmission is improved, and air-interface resources are fully used, thus system performance is improved. In addition, the base station receives third feedback information transmitted by a user equipment, and performing a scheduling process to video data packets in transmit buffers of the user equipment according to the third feedback information; then receives a scheduling processed video data packet, and transmits the scheduling processed video data packet to a server, therefore, the base station does not need to transmit the feedback information to the server again, thereby reducing the feedback time, that is, the base station can perform the scheduling process by utilizing the feedback information in time, therefore, resources can be fully used or data loss can be reduced, the real-time performance of video transmission is improved, and air-interface resources are fully used, thus system performance is improved. 
In addition, the base station detects wireless network state and transmits network state information to a user equipment, so that the user equipment feeds back the network state information to a server, and the server adjusts an encoding level according to the received network state information and encoded the video data according to the adjusted coding level, encapsulates the coded data into a video data packet, transmits the processed video data packet to a base station, so that the base station forwards the processed video data packet to the user equipment, that is, the network state information is detected by the base station, which is more prompt than the feedback information detected by the user equipment, thereby reducing the feedback time, that is, the server can perform the scheduling process by utilizing the network state information in time, therefore, resources can be fully used or data loss can be reduced, the real-time performance of video transmission is improved, and air-interface resources are fully used, thus system performance is improved.
receiving at least one video frame from the wireless network device (a receiver, such as a mobile phone/UE, receives video frames from a server over a network via a base station, which relays the video packets to the UE; Zhang ¶63)
[0063] The first video data packet is a previous set of video data packets of the second video data packet, and the second video data packet is a current set of video data packets transmitted by the server; the previous set of video data packets or the current set of video data packets is received from the server and forwarded to the user equipment by the base station. The set of video data packets herein refers to a series of video data packets between two feedback information transmitted by the user equipment and received by the base station, or a series of video data packets transmitted by the server to the base station during a previous time period; the current set of video data packets is a series of video data packets which are being currently transmitted by the server or are waiting for the scheduling for transmission of the base station. The first video data packet is a series of video data packets between the first feedback information and the previous feedback information when feedback information is periodically transmitted, or a series of video data packets transmitted by the server to the base station during a previous time period when feedback information is event-based transmitted.
determining a video frame parameter based on the at least one video frame, wherein the determining the video frame parameter includes determining a receiving status of at least one downlink video frame (the UE receives video packets and generates a feedback report on data transmission metrics such as lost packets; Zhang ¶¶59, 62, 63)
and sending, to the wireless network device according to the received reporting periodicity, the video frame parameter including the receiving status of the at least one downlink video frame (a feedback report is sent from the UE to the base station/eNodeB; the report includes at least a packet loss ratio indicating the status of packets received and lost, and is sent periodically, i.e., according to a certain periodicity; Zhang ¶¶63, 65)
[0059] The second video data packet may be: a set of video data packets, which is obtained by encapsulating an encoded stream of each region into a video data packet respectively by the server, where the encoded streams of regions with different importance are obtained by encoding data in the regions with different importance in a video image according to different encoding modes. The second video data packet may include a video data packet of a one-frame or a multi-frame video image, encoded streams of regions with different importance are obtained by encoding data in regions with different importance in each frame of video image according to different encoding modes, and then encoded streams of each region are encapsulated into a video data packet respectively, thus each of the regions with different importance in the second video data packet has one or more video data packets.
[0062] 102, receiving, by the base station, first feedback information about a first video data packet transmitted by a user equipment.
[0063] The first video data packet is a previous set of video data packets of the second video data packet, and the second video data packet is a current set of video data packets transmitted by the server; the previous set of video data packets or the current set of video data packets is received from the server and forwarded to the user equipment by the base station. The set of video data packets herein refers to a series of video data packets between two feedback information transmitted by the user equipment and received by the base station, or a series of video data packets transmitted by the server to the base station during a previous time period; the current set of video data packets is a series of video data packets which are being currently transmitted by the server or are waiting for the scheduling for transmission of the base station. The first video data packet is a series of video data packets between the first feedback information and the previous feedback information when feedback information is periodically transmitted, or a series of video data packets transmitted by the server to the base station during a previous time period when feedback information is event-based transmitted.
[0065] The first feedback information may include: at least one of parameters including relative time delay, packet loss ratio, average throughput, and data amount of a receive buffer or a play-out buffer of the user equipment. The relative time delay refers to a relative time delay of a video data packet arriving later with reference to a video data packet of a certain frame arriving previously. The packet loss ratio may be a video data packet discarded by the user equipment due to the video data packet arrives at the receive buffer of the user equipment too late and thus cannot be decoded; may also be a total packet loss ratio detected by the user equipment, which includes a packet loss during a transmission process, and a packet discarded by the user equipment due to arriving too late. The average throughput is an average throughput of video data packets measured by the user equipment during a measuring time. The data amount of the receive buffer or the play-out buffer of the user equipment is the quantity of video data packets in the receive buffer or the play-out buffer of the user equipment.
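For illustration of the feedback parameters quoted above from Zhang ¶65, the packet loss ratio and average throughput could be computed as in the following sketch. Python is used only for illustration; all identifiers are hypothetical and are not drawn from Zhang:

```python
# Illustrative sketch of the feedback parameters described in Zhang ¶65.
# All names are hypothetical; Zhang defines the metrics, not this code.

def packet_loss_ratio(packets_sent: int, packets_decoded: int) -> float:
    """Total loss ratio: transmission losses plus packets discarded
    for arriving too late to be decoded (Zhang ¶65)."""
    if packets_sent == 0:
        return 0.0
    return (packets_sent - packets_decoded) / packets_sent

def average_throughput(bytes_received: int, measuring_time_s: float) -> float:
    """Average throughput of video data packets over a measuring time."""
    return bytes_received / measuring_time_s

# Example feedback report assembled by the UE:
report = {
    "packet_loss_ratio": packet_loss_ratio(1000, 950),
    "average_throughput_Bps": average_throughput(1_000_000, 2.0),
    "receive_buffer_packets": 42,  # data amount of the receive buffer
}
```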
Zhang teaches sending a feedback report with a certain periodicity, but does not teach that the reporting periodicity is received from the wireless network device. Thus, Zhang does not teach receiving a reporting periodicity from a wireless network device. Ji, in the same field of endeavor as the invention, teaches a system for feedback and downlink data transmission based on channel feedback information. Ji teaches receiving a reporting periodicity from a wireless network device.
[0067] Thus, the CSF report scheduling information received by the UE from the BS may include call setup information indicating a periodicity of periodic CSF report scheduling, indication of a grant received from the BS for transmitting an aperiodic CSF report, information indicating types of CSF reports scheduled, and/or any of various other types of information indicative of CSF report scheduling behaviors of the base station.
It would have been obvious to a person of ordinary skill in the art at the time of the effective filing of the instant application to modify Zhang with a channel feedback report having a configurable periodicity as taught by Ji. The reason for this modification would be to provide a feedback report that is dynamic and can be changed to a shorter periodicity so that data transmission adjusts more quickly to changing channel conditions (see Ji ¶68).
The combination of Zhang/Ji teaches periodic reporting of channel quality but does not specifically teach how such periodic reporting is configured, and thus does not teach receiving a configuration information element from a wireless network device, wherein the configuration information element is usable to indicate a reporting periodicity;
sending, to the wireless network device according to the received reporting periodicity indicated by the configuration information element, the video frame parameter.
Zhu, in the same field of endeavor as the invention, teaches a system for managing the transmission of data, including real-time traffic, in telecommunication networks. Zhu teaches:
receiving a configuration information element from a wireless network device, wherein the configuration information element is usable to indicate a reporting periodicity (the eNB sends a configuration to the UE via RRC-layer signaling that includes a measurement/report interval; Zhu ¶64)
[0064] The configurations may include, for example, the activation and deactivation of the MDT measurements and reporting. The configurations may further include the measurement interval and/or the report interval for the MDT metrics. The eNB 912 may provide the configurations to the UE 906 via the RRC signaling 922. The UE 906 may perform the MDT measurement 919 in accordance with the received configurations. The UE 906 may report 924 the measured MDT metrics in logs and report the logs to the eNB 912. The eNB 912 may provide the logs (at 926) including the MDT metrics to the trace collection entity 928 (which may be, for example, a file server) of the network.
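The RRC-based configuration flow quoted above from Zhu ¶64 can be pictured with a minimal sketch. Field names such as `report_interval_ms` are hypothetical; Zhu describes the parameters conceptually, not this structure:

```python
# Illustrative sketch of the MDT measurement/reporting configuration
# described in Zhu ¶64. Field names are hypothetical.

rrc_configuration = {
    "mdt_measurements_active": True,   # activation/deactivation of MDT
    "measurement_interval_ms": 200,    # how often the UE measures
    "report_interval_ms": 1000,        # how often the UE reports logs
}

def should_report(elapsed_ms: int, config: dict) -> bool:
    """UE reports logged MDT metrics each time the report interval elapses."""
    return (config["mdt_measurements_active"]
            and elapsed_ms % config["report_interval_ms"] == 0)
```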
sending, to the wireless network device according to the received reporting periodicity indicated by the configuration information element, the video frame parameter (based on the configured interval, the UE reports QoE metrics, e.g., packets lost, to indicate the transmission quality of the channel; Zhu ¶¶54, 57).
[0054] The UE 806 may measure the QoE metrics 819 in accordance with the configuration provided via the link 818. In some examples, the UE 806 may perform the measurements at the Real-time Transport Protocol (RTP) layer and/or higher protocol layers. For example, the UE 806 may perform the quality measurements in accordance with the measurement definitions of the configuration, aggregate the measurements into client QoE metrics, and report the metrics to the QoE server 826 via a Hypertext Transfer Protocol (HTTP) link 818. The UE 806 may report the QoE metrics to the receiving QoE server 826 during the multimedia session and/or at the end of the session.
[0057] In some examples, the QoE metrics may include successive loss of RTP packets, which may indicate the number of RTP packets lost in succession per media channel. The QoE metrics may include frame rate, which may indicate the playback frame rate. The QoE metrics may include jitter duration. Jitter may happen when the absolute difference between the actual playback time and the expected playback time is larger than a defined JitterThreshold. The expected time of a frame may be equal to the actual playback time of the last played frame plus the difference between the NPT of the frame and the NPT of the last played frame.
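The jitter determination quoted above from Zhu ¶57 amounts to a simple comparison, sketched below. Identifiers are hypothetical; `jitter_threshold` corresponds to the JitterThreshold named in Zhu:

```python
# Illustrative sketch of the jitter determination described in Zhu ¶57.
# Names (npt_frame, jitter_threshold, etc.) are hypothetical.

def expected_playback_time(last_actual: float, npt_frame: float,
                           npt_last: float) -> float:
    """Expected time of a frame = actual playback time of the last played
    frame plus the NPT difference between the two frames (Zhu ¶57)."""
    return last_actual + (npt_frame - npt_last)

def is_jitter(actual: float, expected: float, jitter_threshold: float) -> bool:
    """Jitter occurs when |actual - expected| exceeds JitterThreshold."""
    return abs(actual - expected) > jitter_threshold

# A frame expected at 10.5 s but actually played at 10.9 s exceeds a
# 0.2 s threshold, so jitter is flagged.
expected = expected_playback_time(last_actual=10.0, npt_frame=1.5, npt_last=1.0)
```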
It would have been obvious to a person of ordinary skill in the art at the time of the effective filing of the instant application to modify Zhang/Ji to configure a reporting measurement interval using an RRC-layer configuration signal as taught by Zhu. The reason for this modification would be to utilize a known control mechanism (the RRC layer) in mobile telecommunications for configuring the reporting of data transmission metrics.
Regarding claim 10, Zhang teaches a communication apparatus, comprising: a memory configured to store a computer program; and a processor configured to execute the computer program stored in the memory, to perform operations including (a base station/eNodeB in a video transmission system over a mobile network; Zhang ¶46):
[0046] Embodiments of the present application provide a method and a device for video transmission, the base station receives a second video data packet transmitted by a server and first feedback information about a first video data packet transmitted by a user equipment, and performs a scheduling process according to the first feedback information, and transmits the second video data packet to the user equipment according to the result of the scheduling process after the scheduling process, therefore, the base station does not need to transmit the feedback information to the server again, thereby reducing the feedback time, that is, the base station can perform the scheduling process by utilizing the feedback information in time, therefore, resources can be fully used or data loss can be reduced, the real-time performance of video transmission is improved, and air-interface resources are fully used, thus system performance is improved. In addition, the base station receives third feedback information transmitted by a user equipment, and performing a scheduling process to video data packets in transmit buffers of the user equipment according to the third feedback information; then receives a scheduling processed video data packet, and transmits the scheduling processed video data packet to a server, therefore, the base station does not need to transmit the feedback information to the server again, thereby reducing the feedback time, that is, the base station can perform the scheduling process by utilizing the feedback information in time, therefore, resources can be fully used or data loss can be reduced, the real-time performance of video transmission is improved, and air-interface resources are fully used, thus system performance is improved. 
In addition, the base station detects wireless network state and transmits network state information to a user equipment, so that the user equipment feeds back the network state information to a server, and the server adjusts an encoding level according to the received network state information and encoded the video data according to the adjusted coding level, encapsulates the coded data into a video data packet, transmits the processed video data packet to a base station, so that the base station forwards the processed video data packet to the user equipment, that is, the network state information is detected by the base station, which is more prompt than the feedback information detected by the user equipment, thereby reducing the feedback time, that is, the server can perform the scheduling process by utilizing the network state information in time, therefore, resources can be fully used or data loss can be reduced, the real-time performance of video transmission is improved, and air-interface resources are fully used, thus system performance is improved.
wirelessly sending at least one video frame to a terminal device (the base station receives video frames from a server over a network and relays the video packets to the terminal device/UE; Zhang ¶63)
[0063] The first video data packet is a previous set of video data packets of the second video data packet, and the second video data packet is a current set of video data packets transmitted by the server; the previous set of video data packets or the current set of video data packets is received from the server and forwarded to the user equipment by the base station. The set of video data packets herein refers to a series of video data packets between two feedback information transmitted by the user equipment and received by the base station, or a series of video data packets transmitted by the server to the base station during a previous time period; the current set of video data packets is a series of video data packets which are being currently transmitted by the server or are waiting for the scheduling for transmission of the base station. The first video data packet is a series of video data packets between the first feedback information and the previous feedback information when feedback information is periodically transmitted, or a series of video data packets transmitted by the server to the base station during a previous time period when feedback information is event-based transmitted.
and receiving, according to the received reporting periodicity, a video frame parameter wirelessly from the terminal device, wherein the video frame parameter is determined based on the at least one video frame and includes a receiving status of at least one downlink video frame (a feedback report is sent from the UE to the base station/eNodeB; the report includes at least a packet loss ratio indicating the status of packets received and lost, and is sent periodically, i.e., according to a certain periodicity; Zhang ¶¶59, 62, 63, 65)
[0059] The second video data packet may be: a set of video data packets, which is obtained by encapsulating an encoded stream of each region into a video data packet respectively by the server, where the encoded streams of regions with different importance are obtained by encoding data in the regions with different importance in a video image according to different encoding modes. The second video data packet may include a video data packet of a one-frame or a multi-frame video image, encoded streams of regions with different importance are obtained by encoding data in regions with different importance in each frame of video image according to different encoding modes, and then encoded streams of each region are encapsulated into a video data packet respectively, thus each of the regions with different importance in the second video data packet has one or more video data packets.
[0062] 102, receiving, by the base station, first feedback information about a first video data packet transmitted by a user equipment.
[0063] The first video data packet is a previous set of video data packets of the second video data packet, and the second video data packet is a current set of video data packets transmitted by the server; the previous set of video data packets or the current set of video data packets is received from the server and forwarded to the user equipment by the base station. The set of video data packets herein refers to a series of video data packets between two feedback information transmitted by the user equipment and received by the base station, or a series of video data packets transmitted by the server to the base station during a previous time period; the current set of video data packets is a series of video data packets which are being currently transmitted by the server or are waiting for the scheduling for transmission of the base station. The first video data packet is a series of video data packets between the first feedback information and the previous feedback information when feedback information is periodically transmitted, or a series of video data packets transmitted by the server to the base station during a previous time period when feedback information is event-based transmitted.
[0065] The first feedback information may include: at least one of parameters including relative time delay, packet loss ratio, average throughput, and data amount of a receive buffer or a play-out buffer of the user equipment. The relative time delay refers to a relative time delay of a video data packet arriving later with reference to a video data packet of a certain frame arriving previously. The packet loss ratio may be a video data packet discarded by the user equipment due to the video data packet arrives at the receive buffer of the user equipment too late and thus cannot be decoded; may also be a total packet loss ratio detected by the user equipment, which includes a packet loss during a transmission process, and a packet discarded by the user equipment due to arriving too late. The average throughput is an average throughput of video data packets measured by the user equipment during a measuring time. The data amount of the receive buffer or the play-out buffer of the user equipment is the quantity of video data packets in the receive buffer or the play-out buffer of the user equipment.
Zhang teaches sending a feedback report with a certain periodicity, but does not teach that the reporting periodicity is sent by the wireless network device. Thus, Zhang does not teach wirelessly sending a reporting periodicity to a terminal device. Ji, in the same field of endeavor as the invention, teaches a system for feedback and downlink data transmission based on channel feedback information. Ji teaches wirelessly sending a reporting periodicity to a terminal device.
[0067] Thus, the CSF report scheduling information received by the UE from the BS may include call setup information indicating a periodicity of periodic CSF report scheduling, indication of a grant received from the BS for transmitting an aperiodic CSF report, information indicating types of CSF reports scheduled, and/or any of various other types of information indicative of CSF report scheduling behaviors of the base station.
It would have been obvious to a person of ordinary skill in the art at the time of the effective filing of the instant application to modify Zhang with a channel feedback report having a configurable periodicity as taught by Ji. The reason for this modification would be to provide a feedback report that is dynamic and can be changed to a shorter periodicity so that data transmission adjusts more quickly to changing channel conditions (see Ji ¶68).
The combination of Zhang/Ji teaches periodic reporting of channel quality but does not specifically teach how such periodic reporting is configured, and thus does not teach receiving a configuration information element from a wireless network device, wherein the configuration information element is usable to indicate a reporting periodicity;
sending, to the wireless network device according to the received reporting periodicity indicated by the configuration information element, the video frame parameter.
Zhu, in the same field of endeavor as the invention, teaches a system for managing the transmission of data, including real-time traffic, in telecommunication networks. Zhu teaches:
receiving a configuration information element from a wireless network device, wherein the configuration information element is usable to indicate a reporting periodicity (the eNB sends a configuration to the UE via RRC-layer signaling that includes a measurement/report interval; Zhu ¶64)
[0064] The configurations may include, for example, the activation and deactivation of the MDT measurements and reporting. The configurations may further include the measurement interval and/or the report interval for the MDT metrics. The eNB 912 may provide the configurations to the UE 906 via the RRC signaling 922. The UE 906 may perform the MDT measurement 919 in accordance with the received configurations. The UE 906 may report 924 the measured MDT metrics in logs and report the logs to the eNB 912. The eNB 912 may provide the logs (at 926) including the MDT metrics to the trace collection entity 928 (which may be, for example, a file server) of the network.
sending, to the wireless network device according to the received reporting periodicity indicated by the configuration information element, the video frame parameter (based on the configured interval, the UE reports QoE metrics, e.g., packets lost, to indicate the transmission quality of the channel; Zhu ¶¶54, 57).
[0054] The UE 806 may measure the QoE metrics 819 in accordance with the configuration provided via the link 818. In some examples, the UE 806 may perform the measurements at the Real-time Transport Protocol (RTP) layer and/or higher protocol layers. For example, the UE 806 may perform the quality measurements in accordance with the measurement definitions of the configuration, aggregate the measurements into client QoE metrics, and report the metrics to the QoE server 826 via a Hypertext Transfer Protocol (HTTP) link 818. The UE 806 may report the QoE metrics to the receiving QoE server 826 during the multimedia session and/or at the end of the session.
[0057] In some examples, the QoE metrics may include successive loss of RTP packets, which may indicate the number of RTP packets lost in succession per media channel. The QoE metrics may include frame rate, which may indicate the playback frame rate. The QoE metrics may include jitter duration. Jitter may happen when the absolute difference between the actual playback time and the expected playback time is larger than a defined JitterThreshold. The expected time of a frame may be equal to the actual playback time of the last played frame plus the difference between the NPT of the frame and the NPT of the last played frame.
It would have been obvious to a person of ordinary skill in the art at the time of the effective filing of the instant application to modify Zhang/Ji to configure a reporting measurement interval using an RRC-layer configuration signal as taught by Zhu. The reason for this modification would be to utilize a known control mechanism (the RRC layer) in mobile telecommunications for configuring the reporting of data transmission metrics.
Regarding claim 18, Zhang teaches a communication system, wherein the communication system includes a terminal device and a network device; wherein the wireless network device is configured to send at least one video frame to the terminal device (the base station receives video frames from a server over a network and relays the video packets to the terminal device/UE; Zhang ¶63)
[0063] The first video data packet is a previous set of video data packets of the second video data packet, and the second video data packet is a current set of video data packets transmitted by the server; the previous set of video data packets or the current set of video data packets is received from the server and forwarded to the user equipment by the base station. The set of video data packets herein refers to a series of video data packets between two feedback information transmitted by the user equipment and received by the base station, or a series of video data packets transmitted by the server to the base station during a previous time period; the current set of video data packets is a series of video data packets which are being currently transmitted by the server or are waiting for the scheduling for transmission of the base station. The first video data packet is a series of video data packets between the first feedback information and the previous feedback information when feedback information is periodically transmitted, or a series of video data packets transmitted by the server to the base station during a previous time period when feedback information is event-based transmitted.
and wherein the terminal device is configured to: receive the at least one video frame from the wireless network device (a receiver, such as a mobile phone/UE, receives video frames from a server over a network via a base station, which relays the video packets to the UE; Zhang ¶63)
[0063] The first video data packet is a previous set of video data packets of the second video data packet, and the second video data packet is a current set of video data packets transmitted by the server; the previous set of video data packets or the current set of video data packets is received from the server and forwarded to the user equipment by the base station. The set of video data packets herein refers to a series of video data packets between two feedback information transmitted by the user equipment and received by the base station, or a series of video data packets transmitted by the server to the base station during a previous time period; the current set of video data packets is a series of video data packets which are being currently transmitted by the server or are waiting for the scheduling for transmission of the base station. The first video data packet is a series of video data packets between the first feedback information and the previous feedback information when feedback information is periodically transmitted, or a series of video data packets transmitted by the server to the base station during a previous time period when feedback information is event-based transmitted.
determine a video frame parameter based on the at least one video frame, wherein the determining the video frame parameter includes determining a receiving status of at least one downlink video frame (feedback report sent from the UE to the base station/eNodeB; such a feedback report includes at least a packet loss ratio indicating the status of packets received and lost, and the report is sent periodically, i.e. according to a certain periodicity, ¶s 59, 62, 63, 65)
and send, to the wireless network device according to the received periodicity, the video frame parameter including the receiving status of the at least one downlink video frame (feedback report sent from the UE to the base station/eNodeB; such a feedback report includes at least a packet loss ratio indicating the status of packets received and lost, and the report is sent periodically, i.e. according to a certain periodicity, ¶s 59, 62, 63, 65)
wherein the wireless network device is further configured to receive the video frame parameter from the terminal device (feedback report sent from the UE to the base station/eNodeB; such a feedback report includes at least a packet loss ratio indicating the status of packets received and lost, and the report is sent periodically, i.e. according to a certain periodicity, ¶s 59, 62, 63, 65)
[0059] The second video data packet may be: a set of video data packets, which is obtained by encapsulating an encoded stream of each region into a video data packet respectively by the server, where the encoded streams of regions with different importance are obtained by encoding data in the regions with different importance in a video image according to different encoding modes. The second video data packet may include a video data packet of a one-frame or a multi-frame video image, encoded streams of regions with different importance are obtained by encoding data in regions with different importance in each frame of video image according to different encoding modes, and then encoded streams of each region are encapsulated into a video data packet respectively, thus each of the regions with different importance in the second video data packet has one or more video data packets.
[0062] 102, receiving, by the base station, first feedback information about a first video data packet transmitted by a user equipment.
[0063] The first video data packet is a previous set of video data packets of the second video data packet, and the second video data packet is a current set of video data packets transmitted by the server; the previous set of video data packets or the current set of video data packets is received from the server and forwarded to the user equipment by the base station. The set of video data packets herein refers to a series of video data packets between two feedback information transmitted by the user equipment and received by the base station, or a series of video data packets transmitted by the server to the base station during a previous time period; the current set of video data packets is a series of video data packets which are being currently transmitted by the server or are waiting for the scheduling for transmission of the base station. The first video data packet is a series of video data packets between the first feedback information and the previous feedback information when feedback information is periodically transmitted, or a series of video data packets transmitted by the server to the base station during a previous time period when feedback information is event-based transmitted.
[0065] The first feedback information may include: at least one of parameters including relative time delay, packet loss ratio, average throughput, and data amount of a receive buffer or a play-out buffer of the user equipment. The relative time delay refers to a relative time delay of a video data packet arriving later with reference to a video data packet of a certain frame arriving previously. The packet loss ratio may be a video data packet discarded by the user equipment due to the video data packet arrives at the receive buffer of the user equipment too late and thus cannot be decoded; may also be a total packet loss ratio detected by the user equipment, which includes a packet loss during a transmission process, and a packet discarded by the user equipment due to arriving too late. The average throughput is an average throughput of video data packets measured by the user equipment during a measuring time. The data amount of the receive buffer or the play-out buffer of the user equipment is the quantity of video data packets in the receive buffer or the play-out buffer of the user equipment
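Purely as an illustrative sketch of how the ¶65 feedback parameters might be collected at a receiver (the field names and the loss-ratio helper are hypothetical, not drawn from Zhang):

```python
from dataclasses import dataclass

@dataclass
class FeedbackReport:
    """Hypothetical container for the first-feedback-information fields of ¶65."""
    relative_delay_ms: float   # relative delay vs. an earlier reference frame
    packet_loss_ratio: float   # late-discard loss, or total detected loss
    avg_throughput_bps: float  # average throughput over the measuring time
    buffer_packets: int        # packets in the receive or play-out buffer

def loss_ratio(received: int, lost: int) -> float:
    """Total packet loss ratio: lost packets over all packets transmitted."""
    total = received + lost
    return lost / total if total else 0.0
```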
Zhang teaches sending a feedback report with a certain periodicity, but does not teach that it is the wireless network device that configures the periodicity of the report. Thus, Zhang does not teach receiving a reporting periodicity from a wireless network device. Ji, in the same field of endeavor as the invention, teaches a system for feedback and downlink data transmission based on channel feedback information. Ji teaches receiving a reporting periodicity from a wireless network device.
[0067] Thus, the CSF report scheduling information received by the UE from the BS may include call setup information indicating a periodicity of periodic CSF report scheduling, indication of a grant received from the BS for transmitting an aperiodic CSF report, information indicating types of CSF reports scheduled, and/or any of various other types of information indicative of CSF report scheduling behaviors of the base station.
It would have been obvious to a person of ordinary skill in the art at the time of the effective filing of the instant application to modify Zhang with a channel feedback report having a configurable periodicity, as taught by Ji. The reason for this modification would be to provide a feedback report that is dynamic and can be changed to a short periodicity to adjust transmission of data more quickly to changing channel conditions (see Ji ¶68).
The combination of Zhang/Ji teaches periodic reporting of channel quality but does not specifically teach how such periodic reporting is configured, and thus does not teach receiving a configuration information element from a wireless network device, wherein the configuration information element is usable to indicate a reporting periodicity;
sending, to the wireless network device according to the received reporting periodicity indicated by the configuration information element, the video frame parameter.
Zhu, in the same field of endeavor as the invention, teaches a system for managing data transmission in telecommunication networks, including real-time data traffic. Zhu teaches
receiving a configuration information element from a wireless network device, wherein the configuration information element is usable to indicate a reporting periodicity (eNB sends a configuration to the UE via RRC layer signaling that includes a measurement/report interval, ¶64)
[0064] The configurations may include, for example, the activation and deactivation of the MDT measurements and reporting. The configurations may further include the measurement interval and/or the report interval for the MDT metrics. The eNB 912 may provide the configurations to the UE 906 via the RRC signaling 922. The UE 906 may perform the MDT measurement 919 in accordance with the received configurations. The UE 906 may report 924 the measured MDT metrics in logs and report the logs to the eNB 912. The eNB 912 may provide the logs (at 926) including the MDT metrics to the trace collection entity 928 (which may be, for example, a file server) of the network.
sending, to the wireless network device according to the received reporting periodicity indicated by the configuration information element, the video frame parameter (the UE, based on the configured interval, reports QoE metrics, packets lost, etc., to indicate the transmission quality of the channel, ¶s 54, 57).
[0054] The UE 806 may measure the QoE metrics 819 in accordance with the configuration provided via the link 818. In some examples, the UE 806 may perform the measurements at the Real-time Transport Protocol (RTP) layer and/or higher protocol layers. For example, the UE 806 may perform the quality measurements in accordance with the measurement definitions of the configuration, aggregate the measurements into client QoE metrics, and report the metrics to the QoE server 826 via a Hypertext Transfer Protocol (HTTP) link 818. The UE 806 may report the QoE metrics to the receiving QoE server 826 during the multimedia session and/or at the end of the session.
[0057] In some examples, the QoE metrics may include successive loss of RTP packets, which may indicate the number of RTP packets lost in succession per media channel. The QoE metrics may include frame rate, which may indicate the playback frame rate. The QoE metrics may include jitter duration. Jitter may happen when the absolute difference between the actual playback time and the expected playback time is larger than a defined JitterThreshold. The expected time of a frame may be equal to the actual playback time of the last played frame plus the difference between the NPT of the frame and the NPT of the last played frame.
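The jitter test quoted from Zhu ¶57 can be sketched as follows; the function name and list-based interface are assumptions for illustration only:

```python
def jitter_events(actual_ms, npt_ms, threshold_ms):
    """Count jitter events per the ¶57 rule: a frame jitters when the absolute
    difference between its actual and expected playback time exceeds the
    JitterThreshold; the expected time is the last played frame's actual
    playback time plus the NPT difference between the two frames."""
    events = 0
    for i in range(1, len(actual_ms)):
        expected = actual_ms[i - 1] + (npt_ms[i] - npt_ms[i - 1])
        if abs(actual_ms[i] - expected) > threshold_ms:
            events += 1
    return events
```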
It would have been obvious to a person of ordinary skill in the art at the time of the effective filing of the instant application to modify Zhang/Ji with the configuration of a reporting measurement interval using an RRC layer configuration signal, as taught by Zhu. The reason for this modification would be to utilize a known control mechanism (RRC layer signaling) of mobile telecommunications to control the reporting of data transmission metrics.
Regarding claims 2, 11 and 19, Zhang teaches wherein the video frame parameter includes at least one of the following: a spread delay parameter, a frame gap parameter, a packet loss parameter (packet loss ratio, ¶65), a late parameter, or base layer and enhancement layer parameters.
Regarding claims 5 and 14, Zhang teaches wherein the packet loss parameter includes at least one of the following: a quantity of video frames in which packet loss occurs (packet loss ratio, ¶65), or a proportion of video frames in which packet loss occurs, wherein packet loss occurs in K consecutive video frames, and K is a positive integer greater than or equal to 1.
[0065] The first feedback information may include: at least one of parameters including relative time delay, packet loss ratio, average throughput, and data amount of a receive buffer or a play-out buffer of the user equipment. The relative time delay refers to a relative time delay of a video data packet arriving later with reference to a video data packet of a certain frame arriving previously. The packet loss ratio may be a video data packet discarded by the user equipment due to the video data packet arrives at the receive buffer of the user equipment too late and thus cannot be decoded; may also be a total packet loss ratio detected by the user equipment, which includes a packet loss during a transmission process, and a packet discarded by the user equipment due to arriving too late. The average throughput is an average throughput of video data packets measured by the user equipment during a measuring time. The data amount of the receive buffer or the play-out buffer of the user equipment is the quantity of video data packets in the receive buffer or the play-out buffer of the user equipment.
Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang/Ji/Zhu as applied to claims 2 and 11 above, and further in view of Chen US 2014/0298366.
Regarding claims 3 and 12, Zhang does not teach a spread delay; thus Zhang/Ji/Zhu do not teach wherein the spread delay parameter indicates at least one of the following: a spread delay of a first video frame, an average spread delay of a plurality of video frames, a maximum spread delay of a plurality of video frames, a minimum spread delay of a plurality of video frames, or a variance of spread delays of a plurality of video frames, wherein the spread delay is a time length from a time when the terminal device successfully receives a first data packet of a video frame to a time when the terminal device successfully receives a last data packet of the video frame, and the first video frame or the plurality of video frames are one or more video frames of the at least one video frame received by the terminal device. Chen, in the same field of endeavor as the invention, teaches a system for media delivery quality determination. Chen teaches wherein the spread delay parameter indicates at least one of the following: a spread delay of a first video frame, an average spread delay of a plurality of video frames, a maximum spread delay of a plurality of video frames, a minimum spread delay of a plurality of video frames, or a variance of spread delays of a plurality of video frames, wherein the spread delay is a time length from a time when the terminal device successfully receives a first data packet of a video frame to a time when the terminal device successfully receives a last data packet of the video frame, and the first video frame or the plurality of video frames are one or more video frames of the at least one video frame received by the terminal device (the average arrival duration per frame is calculated from the arrival time of the first packet to the arrival time of the last packet over a frame reception period, ¶46).
[0046] Alternatively, in the embodiment of the present invention, the measuring a data amount of the media data received within the period of time includes: measuring an average arrival duration per frame of the media data received within the period of time, that is, AvgFTime.sub.i. A specific process is as follows: if the evaluation device receives n frames of data packets of the media data within a period of time, that is, the i.sup.th period of time, during which media data is received, and in this period of time, arrival time of a first data packet of a first frame is t1, and arrival time of a last data packet of a last frame is t2, the acquired average arrival duration per frame is AvgFTime.sub.i=(t2-t1)/n. The acquiring a play rate of the media data includes: acquiring a frame rate of the media data. After the parsing the media data and acquiring a play rate of the media data, the method further includes: acquiring a theoretical duration per frame of the media data according to the frame rate of the media data, that is, acquiring the theoretical duration per frame of the media data according to the formula: duration=1000/FR, where duration is the theoretical duration per frame and FR is the frame rate of the media data; acquiring, according to the average arrival duration per frame of the media data, a short pause jitter factor of the media data received within the period of time, that is, acquiring the short pause jitter factor of the media data received within the period of time according to the formula factor.sub.i=.SIGMA.(AvgFTime.sub.i,j-duration).times.u.sub.j, where factor, is the short pause jitter factor of the media data received within the period of time, AvgFTime is the average arrival duration per frame of the media data received within the period of time, u.sub.j is a coefficient of the short pause jitter factor, and i is the period of time (that is, the i.sup.th period of time); acquiring, according to the subjective experience score of raw 
data of the media data received within the period of time before transmission and the short pause jitter factor of the media data received within the period of time, the subjective experience score of the media data received within the period of time, that is, acquiring the subjective experience score of the media data received within the period of time by using formula MOSi=MOSVi.times.(1-b1.times.factor.sub.i), where MOSi is the subjective experience score of the media data received within the period of time, MOSVi is the subjective experience store of the raw data of the media data received within the period of time before transmission, b1 is a coefficient, and factor, is the short pause jitter factor of the media data received within the period of time. Then, a size of the virtual decoding buffer is acquired according to the sustainable play time of the media data received within the sampling period. Further, according to the size of the virtual decoding buffer, it is determined whether a pause occurs in the process of playing the media data, or a subjective experience score on the media data playing is further acquired. A specific step is the same as that described above.
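The formulas quoted from Chen ¶46 can be restated compactly; this is a sketch of the quoted arithmetic only, and the function names are hypothetical:

```python
def avg_frame_time(t1_ms, t2_ms, n_frames):
    """AvgFTime_i = (t2 - t1) / n, the average arrival duration per frame."""
    return (t2_ms - t1_ms) / n_frames

def theoretical_frame_time(frame_rate):
    """duration = 1000 / FR, the theoretical duration per frame in ms."""
    return 1000.0 / frame_rate

def mos(mos_v, factor, b1):
    """MOSi = MOSVi * (1 - b1 * factor_i), the subjective experience score."""
    return mos_v * (1.0 - b1 * factor)
```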
It would have been obvious to a person of ordinary skill in the art at the time of the effective filing of the instant application to modify Zhang/Ji/Zhu with the determination of a spread delay as a method of determining transmission network conditions. The reason for this modification would be to provide feedback for media transmission adjustment in response to transmission conditions.
Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang/Ji/Zhu as applied to claims 2 and 11 above, and further in view of Gwock US 2018/0048860.
Regarding claims 4 and 13, Zhang does not teach wherein the frame gap parameter indicates at least one of the following: a frame gap between adjacent video frames, an average value of a plurality of frame gaps, a maximum value of a plurality of frame gaps, a minimum value of a plurality of frame gaps, or a variance of a plurality of frame gaps, wherein the terminal device receives a plurality of video frames from the network device. Gwock, in the same field of endeavor as the invention, teaches a system for video quality measurement. Gwock teaches wherein the frame gap parameter indicates at least one of the following: a frame gap between adjacent video frames, an average value of a plurality of frame gaps, a maximum value of a plurality of frame gaps, a minimum value of a plurality of frame gaps, or a variance of a plurality of frame gaps, wherein the terminal device receives a plurality of video frames from the network device (video call quality determined by measuring the average/minimum/maximum frame intervals (i.e., gaps) between video frames, ¶s 70, 71, 73, 78, 83)
[0070] FIG. 8 illustrates an example of a frame interval according to at least one example embodiment. FIG. 8 illustrates an example of transmitting frames of a source video at 50 FPS from a transmission side. To transmit 50 frames per second, the transmission side may transmit data so that frames may be transmitted at intervals of 20 milliseconds. In reality, it may be difficult to transmit the frames accurately at intervals of 20 milliseconds. Also, due to a network delay, a transmission error, etc., a reception side may not receive the frames at the desired and/or preset intervals. Here, the parameter calculator 530 may extract a plurality of first time values from the plurality of captured display screens, and may calculate a difference value between the plurality of first time values, as a frame interval between consecutive frames, for example, a frame n and a frame n−1. Here, the parameter calculator 530 may calculate a smallest difference value, for example, an inverse number of a smallest frame interval as a number of frames per second. For example, if the smallest frame interval is 100 milliseconds, 10(=1000/100) FPS may be calculated as a number of frames per second through an inverse number of 100 milliseconds. That is, although a number of frames per second suggested by a service provider is 50 FPS, a number of frames per second actually measured from perspective of a user may be 10 FPS. In the example of FIG. 8, if 25 milliseconds is a minimum frame interval and/or a desired frame interval, 40(=1000/25) FPS may be a number of frames measured per second.
[0071] Referring again to FIGS. 5 and 6, the parameter calculator 530 may calculate at least one of a moving average and a cumulative distribution with respect to a frame interval between frames of the source video by using the calculated difference value as the frame interval. Here, the frame interval may be used to determine a level of a video that is viewed naturally from perspective of the user. The frame interval may be defined as video frame delta (vfdt). Here, vfdt(n) may indicate a frame interval between consecutive frames n and n−1. Also, the moving average of vfdt(n) may be defined as MA(n) and the cumulative distribution from vfdt(1) to vfdt(n) may be defined as CDF(n).
[0073] In Equation 1, “a” denotes a weight for a proportion of a previous value and “b” denotes a weight for a proportion of a current value. According to an increase in the proportion of the current value, an inconsistent level between frame intervals may be relatively greatly applied to the moving average MA(n). For example, a may have a desired and/or preset of 0.9 and b may have a desired and/or preset value of 0.1. A sum of a and b may have a value of ‘1’. If the calculated moving average is represented as a graph that uses a time as an X axis, it is possible to observe a change in quality of experience (QoE).
[0078] Various example embodiments may provide various objective parameters, such as an end-to-end delay between the first electronic device and the second electronic device, an FPS, a frame interval, and/or a PSNR, etc.
[0084] In operation 930, the processor of the second electronic device may control the second electronic device to calculate at least one parameter indicating a quality of video call using the extracted first time value and the extracted second time value. As described above, examples of the parameter may include parameters for an end-to-end delay, an FPS, a frame interval, a moving average and a cumulative distribution of frame intervals, and the like.
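A minimal sketch of the frame-interval metrics Gwock describes (¶s 70, 71, 73), using the ¶73 example weights a=0.9 and b=0.1; the names and list-based interface are hypothetical:

```python
def frame_intervals(arrival_ms):
    """vfdt(n): interval between consecutive frames n and n-1 (¶71)."""
    return [b - a for a, b in zip(arrival_ms, arrival_ms[1:])]

def measured_fps(intervals_ms):
    """Frames per second as the inverse of the smallest frame interval (¶70)."""
    return 1000.0 / min(intervals_ms)

def moving_average(intervals_ms, a=0.9, b=0.1):
    """MA(n) = a*MA(n-1) + b*vfdt(n), with a + b == 1 (¶73).
    Seeded with the first interval as an illustrative assumption."""
    ma = intervals_ms[0]
    out = [ma]
    for v in intervals_ms[1:]:
        ma = a * ma + b * v
        out.append(ma)
    return out
```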
It would have been obvious to a person of ordinary skill in the art at the time of the effective filing of the instant application to modify Zhang/Ji/Zhu with the determination of frame gaps as taught by Gwock. The reason for this modification would be to provide feedback for media transmission adjustment in response to video transmission quality.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang/Ji/Zhu as applied to claim 2 above, and further in view of Ma US 2016/0219088.
Regarding claim 6, Zhang teaches determination of a packet loss ratio parameter and, further, the use of a PDCP sublayer. Zhang/Ji/Zhu do not teach the use of sequence numbers and thus do not teach wherein the processor is further configured to perform operations further comprising: determining whether packet data convergence protocol (PDCP) sequence numbers (SNs) of data packets comprised in a received video frame are consecutive; and determining that packet loss occurs in the video frame when PDCP SNs of the data packets comprised in the video frame received by the terminal device are non-consecutive.
Ma, in the same field of endeavor, teaches a system for real-time video transmission management. Ma teaches the use of PDCP sequence numbers and thus teaches wherein the processor is further configured to perform operations further comprising: determining whether packet data convergence protocol (PDCP) sequence numbers (SNs) of data packets comprised in a received video frame are consecutive; and determining that packet loss occurs in the video frame when PDCP SNs of the data packets comprised in the video frame received by the terminal device are non-consecutive (PDCP sequence numbers used to determine the number of missing, i.e. non-consecutive, packets, ¶s 142, 143).
[0142] The eNB 802 may infer an occurrence of uplink congestion via the use of PDCP PDU sequence numbers. The eNB 802 may determine uplink congestion by inspecting the PDCP PDU sequence number. For example, the eNB 802 may determine uplink congestion if the number of missing PDCP PDUs is a percentage of the total PDCP PDUs and/or if the number of missing PDCP PDUs exceeds a threshold. For example, the threshold may be slightly larger than the percentage corresponding to the target MCS. The eNB 802 may use information regarding uplink congestion for future uplink scheduling, for example, by giving a smaller share of the uplink resources to a user that has experienced congestion.
[0143] At the WTRU 801, a 12-bit long PDCP sequence number (e.g., Next_PDCP_TX_SN) may be assigned to a PDCP SDU. The 12-bit long PDCP sequence number (e.g., Next_PDCP_TX_SN) may be increased by 1 for the next PDCP SDU, for example, that may come from an upper layer. A discardTimer may be associated with a received PDCP SDU. The WTRU 801 may be configured to associate a discardTimer with a received PDCP SDU. If the discardTimer expires, the associated PDCP SDU and/or PDCP PDU may be discarded. The WTRU 801 may send a discard signal to a lower layer, for example, if the PDCP PDU has been submitted to the lower layer. The discardTimer may expire due to bad channel quality (e.g., a particular bad realization of the random channel) and/or congestion.
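The gap-detection idea described in Ma ¶s 142-143 can be sketched as follows, assuming the 12-bit SN space of ¶143; treating every SN gap as loss is a simplifying assumption (reordering is ignored):

```python
PDCP_SN_MOD = 1 << 12  # 12-bit PDCP sequence number space (¶143)

def missing_pdcp_pdus(sns):
    """Count PDUs missing between consecutively received 12-bit PDCP SNs.
    A modular gap of 1 between neighbours means the SNs are consecutive;
    any larger gap is treated as that many lost PDUs (wraparound handled)."""
    lost = 0
    for prev, cur in zip(sns, sns[1:]):
        gap = (cur - prev) % PDCP_SN_MOD
        lost += gap - 1  # gap == 1 -> consecutive, nothing lost
    return lost
```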
It would have been obvious to a person of ordinary skill in the art at the time of the effective filing of the instant application to modify Zhang/Ji/Zhu with the use of PDCP sequence numbers to detect gaps in packet sequences to determine packet loss rate. The reason for this modification would be to apply the well known technique of using packet sequence numbers to detect packet loss and perform retransmission or congestion mitigation.
Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang/Ji/Zhu as applied to claims 2 and 11 above, and further in view of Paul US 6,148,005.
Regarding claims 7 and 15 , Zhang does not teach wherein the late parameter indicates at least one of the following: a quantity of late video frames, a proportion of late video frames, a time difference between an actual receiving moment and a correct receiving moment of a late video frame, an average value of time differences between actual receiving moments and correct receiving moments of a plurality of late video frames, a maximum value of time differences between actual receiving moments and correct receiving moments of a plurality of late video frames, a minimum value of time differences between actual receiving moments and correct receiving moments of a plurality of late video frames, or a variance of time differences between actual receiving moments and correct receiving moments of a plurality of late video frames.
Paul, in the same field of endeavor as the invention, teaches a system for retransmission-based error recovery of transmitted video streams. Paul teaches wherein the late parameter indicates at least one of the following: a quantity of late video frames, a proportion of late video frames, a time difference between an actual receiving moment and a correct receiving moment of a late video frame, an average value of time differences between actual receiving moments and correct receiving moments of a plurality of late video frames, a maximum value of time differences between actual receiving moments and correct receiving moments of a plurality of late video frames, a minimum value of time differences between actual receiving moments and correct receiving moments of a plurality of late video frames, or a variance of time differences between actual receiving moments and correct receiving moments of a plurality of late video frames (the packet loss ratio, the number of late frames over a group of frames, and the amount/degree of lateness are determined and reported).
[(23) To decide whether a receiver is in the CONGESTED state 325, UNLOADED state 350, or LOADED state 375, conditions C, U and L are checked and one of them must be true. In one implementation of the LVMR system 100, statistics are obtained after receiving a GOP (Group of Pictures) of frames, showing the packet loss ratio during that period, how many frames are late, and how late they are. Such results are used in the above state condition checking., Col 8 Lines 47-59]
[(29) Time statistics are also taken to show how many frames are late and how late they are during a certain time span. It's important to differentiate "slightly late" frames from those "very late" ones. If a GOP contains several "slightly late" frames, then the delay jitter can usually be absorbed later, however, "very late" frames tend to hint possible congestion in the network or overload on the receiver's machine. The boundary value to decide whether a frame is slightly or very late is a function of inter-frame time. It varies in different systems and applications, and is adapted dynamically. For instance, in one implementation, frames are categorized as NOT-LATE, SLIGHTLY-LATE and VERY-LATE (a lost frame is considered as a VERY LATE frame) with the respective variables f.sub.n, f.sub.s and f.sub.v denoting the numbers of frames that fall into each of these three categories within a GOP, respectively., Col 9 Lines 1-16]
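Paul's NOT-LATE/SLIGHTLY-LATE/VERY-LATE bucketing (Col 9) can be sketched as below; Paul states only that the boundary is a function of inter-frame time and is adapted dynamically, so the multipliers here are hypothetical:

```python
def categorize_lateness(delays_ms, inter_frame_ms, k_slight=1.0, k_very=3.0):
    """Split the frames of a GOP into NOT-LATE / SLIGHTLY-LATE / VERY-LATE
    counts (f_n, f_s, f_v in Paul's notation) from their lateness in ms.
    The boundaries are taken as multiples of the inter-frame time; the
    multiplier values are illustrative assumptions."""
    f_n = f_s = f_v = 0
    for d in delays_ms:
        if d <= k_slight * inter_frame_ms:
            f_n += 1
        elif d <= k_very * inter_frame_ms:
            f_s += 1
        else:
            f_v += 1
    return f_n, f_s, f_v
```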
It would have been obvious to a person of ordinary skill in the art at the time of the effective filing of the instant application to modify Zhang/Ji/Zhu with the determination and reporting of late-frame metrics as taught by Paul. The reason for this modification would be to provide an additional metric reporting transmission conditions to the sender, such that mitigation, such as retransmission or adjusting transmission quality, can be performed.
Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang/Ji/Zhu as applied to claims 2 and 11 above, and further in view of Rose US 2003/0099298.
Regarding claim 8, Zhang/Ji/Zhu do not teach wherein each video frame includes base layer data and enhancement layer data; and the base layer and enhancement layer parameters include at least one of the following: a time difference between a time when the terminal device receives base layer data and a time when the terminal device receives enhancement layer data in a second video frame, wherein the second video frame is a video frame in the at least one video frame; at least one of a quantity of third video frames, a proportion of third video frames, or a quantity of lost packets at an enhancement layer in a third video frame, wherein the third video frame is a video frame in which packet loss does not occur in base layer data but occurs in enhancement layer data, and the third video frame is a video frame in the at least one video frame; and at least one of a quantity of fourth video frames, a proportion of fourth video frames, or a quantity of lost packets at a base layer in a fourth video frame, wherein the fourth video frame is a video frame in which packet loss does not occur in enhancement layer data but occurs in base layer data, and the fourth video frame is a video frame in the at least one video frame.
Rose, in the same field of endeavor as the invention, teaches a system for adaptive streaming of video transmission. Rose teaches wherein each video frame includes base layer data and enhancement layer data; and the base layer and enhancement layer parameters include at least one of the following: a time difference between a time when the terminal device receives base layer data and a time when the terminal device receives enhancement layer data in a second video frame, wherein the second video frame is a video frame in the at least one video frame; at least one of a quantity of third video frames, a proportion of third video frames, or a quantity of lost packets at an enhancement layer in a third video frame, wherein the third video frame is a video frame in which packet loss does not occur in base layer data but occurs in enhancement layer data, and the third video frame is a video frame in the at least one video frame; and at least one of a quantity of fourth video frames, a proportion of fourth video frames, or a quantity of lost packets at a base layer in a fourth video frame, wherein the fourth video frame is a video frame in which packet loss does not occur in enhancement layer data but occurs in base layer data, and the fourth video frame is a video frame in the at least one video frame (a packet loss rate of the base layer or of one or more enhancement layers is determined).
[0025] The present invention also describes apparatus and methods that may be broadly practiced within adaptive transport tools including adaptive error correction, such as the selection of forward error correction, retransmission decisions with or without feedback information, subscription and de-subscription to service layers in a receiver-driven system, support for selectable QoS levels, and so forth along with combination approaches thereof.
[0151] FIG. 5 through FIG. 7 illustrate estimation accuracy (dB) for different estimation methods over a range of packet loss conditions. FIG. 5 depicts the results for QCIF sequence "carphone" in a single layer system at 32 kbps for 10 fps. FIG. 6 depicts the results from the same sequence as in FIG. 5, but for a three-layer bit-stream at 32 kbps, 64 kbps, and 96 kbps at 10 fps. The results for a CIF sequence "LTS" for a three-layer bit-stream at 100 kbps, 200 kbps, and 400 kbps for 10 fps are depicted in FIG. 7. Referring to FIG. 6 and FIG. 7, the packet loss rates for the base layer, first enhancement layer, and second enhancement layer under different cases in (b) and (c) are: Case 1 (0%, 5%, 10%), case 2 (1%, 3%, 5%), case 3 (3%, 8%, 15%), case 4 (5%, 10%, 95%), and case 5 (5%, 95%, 95%).
It would have been obvious to a person of ordinary skill in the art at the time of the effective filing of the instant application to modify Zhang/Ji/Zhu with Rose's determination of a packet loss rate parameter for base layer and enhancement layer video packets. The reason for this modification would be to provide an indication of transmission conditions as feedback to the sender for adaptation of the video transmission.
Claims 9, 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang/Ji/Zhu as applied to claim 1 above, and further in view of Lee US 2009/0213938.
Regarding claim 9, Zhang/Ji/Zhu do not teach wherein the video frame includes one or more data slices after network coding is performed, and the video frame parameter further includes at least one of the following: a quantity of data slices that are to be received to successfully decode a video frame and/or a data volume of the data slices;
a quantity of data slices that are to be received to successfully decode a base layer of a video frame and/or a data volume of the data slices; or a quantity of data slices that are to be received to successfully decode an enhancement layer of a video frame and/or a data volume of the data slices.
Lee in the same field of endeavor as the invention teaches a system for performing error handling for received video frames. Lee teaches wherein the video frame includes one or more data slices after network coding is performed, and the video frame parameter further includes at least one of the following: a quantity of data slices that are to be received to successfully decode a video frame and/or a data volume of the data slices;
a quantity of data slices that are to be received to successfully decode a base layer of a video frame and/or a data volume of the data slices; or a quantity of data slices that are to be received to successfully decode an enhancement layer of a video frame and/or a data volume of the data slices (the decoder determines the number of MBs (macroblocks) it needs to decode for the current slice, such that errors can be concealed and reconstruction can be performed without losing an entire frame, ¶31).
[0031] When a frame is divided into several slices, a decoder usually decodes one slice at a time, instead of one frame at a time. Ideally, a decoder may apply a look-ahead operation to seek the next slice header to determine the number of MBs that it needs to decode for the current slice. The decoder needs to know the first MB and last MB of the corrupted data segment so that, when an error is found during decoding, an error handler can provide these numbers for error concealment. In this manner, the decoder only conceals the MBs within the corrupted data segment and resumes decoding from the next slice header. With the help of a slice structure, the decoder can reconstruct more MBs instead of losing an entire frame.
It would have been obvious to a person of ordinary skill in the art at the time of the effective filing of the instant application to modify Zhang/Ji/Zhu with Lee's splitting of a frame into slices and providing slice information in the header to indicate a number of blocks. The reason for this modification would be to provide error concealment of partially corrupted video frames.
Applicant's Remarks
Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tom Y. Chang whose telephone number is 571-270-5938. The examiner can normally be reached on Monday-Friday from 9am to 5pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emmanuel Moise, can be reached on (571)272-3865. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/TOM Y CHANG/
Primary Examiner, Art Unit 2455