DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Allowable Subject Matter
Claims 2, 3 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/forms/. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Instant Application
Patent No. 11,716,520
(Claim 1)
1. A computer-implemented method comprising: determining, for a plurality of different media items, a current time scale at which each of the media items is encoded, wherein at least two of the plurality of media items are encoded at different frame rates; calculating, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items; changing a media item selected from the plurality of media items from the current time scale to the unified time scale that provides a constant frame interval for the selected media item; and streaming the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale.
(Claim 2)
2. The computer-implemented method of claim 1, wherein the calculating includes determining a minimum number of time units per second at which the unified time scale is uniform for the plurality of media items.
(Claim 3)
3. The computer-implemented method of claim 1, wherein the calculating includes determining a least common multiplier of the encoded frame rates in the plurality of media items, the least common multiplier comprising a value for which the frame interval for each encoded frame rate is a whole number.
(Claim 4)
4. The computer-implemented method of claim 1, wherein the selected media item comprises video content that was captured using a variable refresh rate.
(Claim 5)
5. The computer-implemented method of claim 1, wherein a maximum range of frame rates for the media items within the plurality of media items includes a specified upper bound and a specified lower bound.
(Claim 6)
6. The computer-implemented method of claim 5, wherein the lower bound comprises 24 frames per second, and wherein the upper bound comprises 60 frames per second.
(Claim 7)
7. The computer-implemented method of claim 1, wherein the plurality of media items comprises video media items.
(Claim 8)
8. The computer-implemented method of claim 7, wherein the video media items are encoded at 23.97, 24, 25, 29.97, 30, 59.94, 60, 120, 240 or 300 frames per second.
(Claim 9)
9. The computer-implemented method of claim 1, wherein the plurality of media items comprises audio media items.
(Claim 10)
10. The computer-implemented method of claim 9, wherein the audio media items have a frame rate of 1024, 1536, or 2048 samples per frame.
(Claim 11)
11. The computer-implemented method of claim 1, wherein each of the plurality of media items in a specified group of media items has a specified video frame rate and audio frame rate, and wherein the unified time scale is calculated to optimize the specified video frame rate and the specified audio frame rate of the media items in the group.
(Claim 12)
12. The computer-implemented method of claim 11, wherein the unified time scale is implemented to generate one or more presentation time stamps (PTSs) for the group of media items.
(Claim 13)
13. The computer-implemented method of claim 12, wherein the one or more PTSs are monotonically increasing, and wherein units used in the unified time scale are selected to maximize wrap-around time for the group of media items.
(Claim 14)
14. The computer-implemented method of claim 13, wherein the units selected to maximize wrap-around time for the group of media items are selected based on video frame rate.
(Claim 15)
15. A system comprising: at least one physical processor; and physical memory comprising computer-executable instructions that, when executed by the physical processor, cause the physical processor to: determine, for a plurality of different media items, a current time scale at which each of the media items is encoded, wherein at least two of the plurality of media items are encoded at different frame rates; calculate, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items; change a media item selected from the plurality of media items from the current time scale to the unified time scale that provides a constant frame interval for the selected media item; and stream the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale.
(Claim 16)
16. The system of claim 15, wherein the unified time scale includes a presentation time stamp (PTS) interval, and wherein the PTS interval comprises a minimum frame interval or a multiple of a minimum frame interval.
(Claim 17)
17. The system of claim 15, wherein calculating the unified time scale includes converting one or more input presentation time stamps from the plurality of media items having different time scales into PTSs based on the unified time scale.
(Claim 18)
18. The system of claim 17, wherein implementing the converted input PTSs avoids PTS counter wrap-around.
(Claim 19)
19. The system of claim 15, wherein changing at least one of the plurality of media items from the current time scale to the unified time scale allows a single fixed V-Synch interrupt to be implemented during playback of the plurality of media items.
(Claim 20)
20. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, cause the computing device to: determine, for a plurality of different media items, a current time scale at which each of the media items is encoded, wherein at least two of the plurality of media items are encoded at different frame rates; calculate, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items; change a media item selected from the plurality of media items from the current time scale to the unified time scale that provides a constant frame interval for the selected media item; and stream the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale.
(Claim 1)
1. A computer-implemented method comprising: determining, for each of a plurality of different media items, a current time scale at which the media items are encoded for distribution, wherein at least two of the plurality of media items are encoded at different frame rates; identifying, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items, the identifying including determining a minimum number of time units per second at which the unified time scale is uniform for the plurality of media items; and changing at least one of the plurality of media items from the current time scale to the identified unified time scale to provide a constant frame interval for the at least one changed media item.
(Claim 2 of the Instant Application is indicated allowable)
(Claim 3 of the Instant Application is indicated allowable)
(Claim 1 above includes the claimed limitations of Claim 4 of the Instant Application)
(Claim 1 above includes the claimed limitations of Claim 5 of the Instant Application)
(Claim 1 above includes the claimed limitations of Claim 6 of the Instant Application)
(Claim 2)
2. The computer-implemented method of claim 1, wherein the media items comprise video media items.
(Claim 3)
3. The computer-implemented method of claim 2, wherein the video media items are encoded at 23.97, 24, 25, 29.97, 30, 59.94, 60, 120, 240 or 300 frames per second.
(Claim 4)
4. The computer-implemented method of claim 1, wherein the media items comprise audio media items.
(Claim 5)
5. The computer-implemented method of claim 4, wherein the audio media items have a frame rate of 1024, 1536, or 2048 samples per frame.
(Claim 6)
6. The computer-implemented method of claim 1, wherein each of the plurality of media items in a specified group of media items has a specified video frame rate and audio frame rate, and wherein the unified time scale is calculated to optimize the specified video frame rate and the specified audio frame rate of the media items in the group.
(Claim 7)
7. The computer-implemented method of claim 1, wherein the unified time scale is implemented to generate one or more presentation time stamps (PTSs) for the plurality of media items.
(Claim 8)
8. The computer-implemented method of claim 6, wherein the one or more PTSs are monotonically increasing, and wherein units used in the unified time scale are selected to maximize wrap-around time for the plurality of media items.
(Claim 9)
9. The computer-implemented method of claim 8, wherein the units selected to maximize wrap-around time for the plurality of media items are selected based on video frame rate.
(Claim 10)
10. The computer-implemented method of claim 1, wherein the identified unified time scale includes a presentation time stamp (PTS) interval, and wherein the PTS interval comprises a minimum frame interval or a multiple of a minimum frame interval.
(Claim 11)
11. The computer-implemented method of claim 10, further comprising restoring the PTS interval to a specified resolution.
(Claim 12)
12. A system comprising: at least one physical processor; and physical memory comprising computer-executable instructions that, when executed by the physical processor, cause the physical processor to: determine, for each of a plurality of different media items, a current time scale at which the media items are encoded for distribution, wherein at least two of the plurality of media items are encoded at different frame rates; identify, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items, the identifying including determining a minimum number of time units per second at which the unified time scale is uniform for the plurality of media items; and change at least one of the plurality of media items from the current time scale to the identified unified time scale to provide a constant frame interval for the at least one changed media item.
(Claim 16 of the Instant Application is indicated allowable)
(Claim 16 below includes the claimed limitations of Claim 17 of the Instant Application)
(Claim 17 below includes the claimed limitations of Claim 18 of the Instant Application)
(Claim 18 below includes the claimed limitations of Claim 19 of the Instant Application)
(Claim 13)
13. The system of claim 12, wherein the identified unified time scale allows the plurality of media items to be streamed at a variable frame rate while maintaining the constant frame interval.
(Claim 14)
14. The system of claim 13, wherein media items with different frame rates are streamed at a variable frame rate while maintaining the constant frame interval for each frame rate using the unified time scale.
(Claim 15)
15. The system of claim 12, wherein media items having video content that was captured using a variable refresh rate are streamed at a variable frame rate while maintaining the constant frame interval using the unified time scale.
(Claim 16)
16. The system of claim 12, wherein identifying the unified time scale includes converting one or more input presentation time stamps from the plurality of different media items having different time scales into PTSs based on the unified time scale.
(Claim 17)
17. The system of claim 16, wherein implementing the converted input PTSs avoids PTS counter wrap-around.
(Claim 18)
18. The system of claim 12, wherein changing at least one of the plurality of media items from the current time scale to the identified unified time scale allows a single fixed V-Synch interrupt to be implemented during playback of the plurality of media items.
(Claim 19)
19. The system of claim 12, further comprising optimizing PTSs for the plurality of media items, such that scaled presentation time stamps match native PTSs without a resulting rounding error.
(Claim 20)
20. A non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, cause the computing device to: determine, for each of a plurality of different media items, a current time scale at which the media items are encoded for distribution, wherein at least two of the plurality of media items are encoded at different frame rates; identify, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items, the identifying including determining a minimum number of time units per second at which the unified time scale is uniform for the plurality of media items; and change at least one of the plurality of media items from the current time scale to the identified unified time scale to provide a constant frame interval for the at least one changed media item.
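For illustration only, the following minimal Python sketch shows one way a “unified time scale” of the kind recited in claims 1-3 of the Instant Application could be derived, namely the smallest whole number of time units per second for which every frame rate listed in claim 8 yields a whole-number frame interval. The treatment of 23.97, 29.97 and 59.94 as the NTSC rationals 24000/1001, 30000/1001 and 60000/1001, and all names in the sketch, are assumptions made solely for the example; the sketch is not a characterization of the claimed methods or of any cited reference.

from fractions import Fraction
from math import lcm

# Frame rates drawn from claim 8 of the Instant Application; the fractional NTSC
# rates are modeled exactly as rationals (assumption: 23.97 -> 24000/1001,
# 29.97 -> 30000/1001, 59.94 -> 60000/1001).
FRAME_RATES = [Fraction(24000, 1001), Fraction(24), Fraction(25),
               Fraction(30000, 1001), Fraction(30), Fraction(60000, 1001),
               Fraction(60), Fraction(120), Fraction(240), Fraction(300)]

def unified_time_scale(rates):
    # For a rate p/q in lowest terms, the integer multiples of p/q are exactly the
    # multiples of p, so the smallest time scale making every frame interval a
    # whole number is the least common multiple of the numerators.
    return lcm(*(Fraction(r).numerator for r in rates))

scale = unified_time_scale(FRAME_RATES)
print(scale)  # 120000 time units per second for the rates above
for rate in FRAME_RATES:
    interval = Fraction(scale) / rate
    assert interval.denominator == 1  # constant, whole-number frame interval
    print(f"{float(rate):8.3f} fps -> {int(interval)} units per frame")

On this illustrative reading, the “minimum number of time units per second” of claim 2 and the “least common multiplier” of claim 3 coincide for the assumed rates.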
Claims 1, 4, 7-15 and 17-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,716,520, and further in view of Kim et al. US Pub. No. 2004/0125124, and further in view of Choi US Pub. No. 2007/0168188.
Re claim 1, the conflicting claims are not patentably distinct from each other because every limitation of claim 1 of the Instant Application is found in claim 1 of the Patent No. 11,716,520, except the following limitation: “calculating, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items; streaming the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale.”
However, the reference of Kim explicitly teaches “streaming the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale” (see ¶ 35 for streaming the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale (i.e. the time scale of the visual rhythm needs to be adjusted to be uniform, one simple way of adjustments is to make the number of vertical lines of the visual rhythm per a unit time interval, for example one second, be equal to the maximum frame rate of encoded video by adding extra vertical lines into a sparse unit time interval as described in fig. 8 paragraph 142))
Therefore, taking the combined teachings of Patent No. 11,716,520 and Kim as a whole, it would have been obvious before the effective filing date of the claimed invention to incorporate this feature (time scale) into the system of Patent No. 11,716,520 as taught by Kim.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Patent No. 11,716,520 as taught by Kim for the benefit of providing a step-based browser interface that can be used as a rough visual time scale while accounting for the considerable temporal distortion that may appear in the visual time scale when the original video source is encoded in a variable frame rate encoding scheme such as Microsoft’s ASF (Advanced Streaming Format), wherein variable frame rate encoding schemes dynamically adjust the frame rate while encoding a video source in order to improve efficiency when producing a video stream with a constant bit rate (see ¶ 35).
On the other hand, the reference of Choi explicitly teaches “calculating, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items” (see ¶ 76 for calculating, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items (i.e. in case of an MPEG signal, the real time-scale (i.e. the target time-scale) of the time-scaled video signal may be calculated from the time stamp, the video signal time-scale processor 170 can read the time value from the time stamp of the current time-scaled video frame, thus, if the time stamp TS1 of the time-scaled video frame at a certain point in the past T1 and the time stamp TS2 of the time-scaled video frame at the current time T2 are known, the real time-scale of time-scaled video signal αv can be calculated from the equation (4), that is, the real time-scale of the video signal is the ratio of the real elapsed time T2-T1 from a certain point T1 in the past to the current time T2 to the difference between the time stamp TS1 of the time-scaled video frame at T1 and the time stamp TS2 of the time-scaled video frame at T2, the calculated value is applied as a new target time-scale α′ in the time-scaled reproduction of the audio signal: αv = α′ = (TS2-TS1)/(T2-T1) (4))).
Therefore, taking the combined teachings of Patent No. 11,716,520 and Choi as a whole, it would have been obvious before the effective filing date of the claimed invention to incorporate this feature (calculating) into the system of Patent No. 11,716,520 as taught by Choi.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Patent No. 11,716,520 as taught by Choi for the benefit of calculating the real time-scale (i.e. the target time-scale) of the time-scaled video signal from the time stamp, wherein the video signal time-scale processor 170 can read the time value from the time stamp of the current time-scaled video frame, thus, if the time stamp TS1 of the time-scaled video frame at a certain point in the past T1 and the time stamp TS2 of the time-scaled video frame at the current time T2 are known, the real time-scale of time-scaled video signal αv can be calculated from the equation (4), that is, the real time-scale of the video signal is the ratio of the real elapsed time T2-T1 from a certain point T1 in the past to the current time T2 to the difference between the time stamp TS1 of the time-scaled video frame at T1 and the time stamp TS2 of the time-scaled video frame at T2, wherein the calculated value is applied as a new target time-scale α′ in the time-scaled reproduction of the audio signal (αv = α′ = (TS2-TS1)/(T2-T1) (4)), in order to improve efficiency when achieving synchronization of the AV signals while time-scaling, i.e., the audio reproduction speed can be made to coincide with the video reproduction speed regardless of the real reproduction speed of the video signal; as a result, the synchronization between the time-scaled audio and video signals can be well maintained (see ¶¶ 75-76).
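As a purely numerical illustration of equation (4) quoted above (the input values are arbitrary assumptions and the helper function is not drawn from Choi), the reproduction time-scale is the ratio of the time-stamp difference to the real elapsed time:

def real_time_scale(ts1, t1, ts2, t2):
    # Equation (4) as quoted above: alpha_v = alpha' = (TS2 - TS1) / (T2 - T1)
    return (ts2 - ts1) / (t2 - t1)

# Assumed numbers: 6 seconds of video time stamps consumed over 4 seconds of real
# elapsed time implies 1.5x time-scaled reproduction, which would then be applied
# as the new target time-scale for the audio signal.
print(real_time_scale(ts1=10.0, t1=100.0, ts2=16.0, t2=104.0))  # 1.5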
Re claim 4, the conflicting claims are not patentably distinct from each other because every limitation of claim 4 of the Instant Application is found in claim 1 of the Patent No. 11,716,520, except the following limitation: “wherein the selected media item comprises video content that was captured using a variable refresh rate.”
However, the reference of Kim explicitly teaches “wherein the selected media item comprises video content that was captured using a variable refresh rate” (see ¶ 35 for the selected media item comprises video content that was captured using a variable refresh rate (i.e. the time scale of the visual rhythm needs to be adjusted to be uniform, one simple way of adjustments is to make the number of vertical lines of the visual rhythm per a unit time interval, for example one second, be equal to the maximum frame rate of encoded video by adding extra vertical lines into a sparse unit time interval as described in fig. 8 paragraph 142))
Therefore, taking the combined teachings of Patent No. 11,716,520, Kim and Choi as a whole, it would have been obvious before the effective filing date of the claimed invention to incorporate this feature (time scale) into the system of Patent No. 11,716,520 as taught by Kim.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Patent No. 11,716,520 as taught by Kim for the benefit of providing a step-based browser interface that can be used as a rough visual time scale while accounting for the considerable temporal distortion that may appear in the visual time scale when the original video source is encoded in a variable frame rate encoding scheme such as Microsoft’s ASF (Advanced Streaming Format), wherein variable frame rate encoding schemes dynamically adjust the frame rate while encoding a video source in order to improve efficiency when producing a video stream with a constant bit rate (see ¶ 35).
Re claim 7, the conflicting claims are not patentably distinct from each other because every limitation of claim 7 of the Instant Application is recited in claim 2 of the Patent No. 11,716,520.
Re claim 8, the conflicting claims are not patentably distinct from each other because every limitation of claim 8 of the Instant Application is recited in claim 3 of the Patent No. 11,716,520.
Re claim 9, the conflicting claims are not patentably distinct from each other because every limitation of claim 9 of the Instant Application is recited in claim 4 of the Patent No. 11,716,520.
Re claim 10, the conflicting claims are not patentably distinct from each other because every limitation of claim 10 of the Instant Application is recited in claim 5 of the Patent No. 11,716,520.
Re claim 11, the conflicting claims are not patentably distinct from each other because every limitation of claim 11 of the Instant Application is recited in claim 6 of the Patent No. 11,716,520.
Re claim 12, the conflicting claims are not patentably distinct from each other because every limitation of claim 12 of the Instant Application is recited in claim 7 of the Patent No. 11,716,520.
Re claim 13, the conflicting claims are not patentably distinct from each other because every limitation of claim 13 of the Instant Application is recited in claim 8 of the Patent No. 11,716,520.
Re claim 14, the conflicting claims are not patentably distinct from each other because every limitation of claim 14 of the Instant Application is recited in claim 9 of the Patent No. 11,716,520.
Re claim 15, the conflicting claims are not patentably distinct from each other because every limitation of claim 15 of the Instant Application is found in claim 12 of the Patent No. 11,716,520, except the following limitation: “calculate, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items; stream the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale.”
However, the reference of Kim explicitly teaches “stream the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale” (see ¶ 35 for stream the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale (i.e. the time scale of the visual rhythm needs to be adjusted to be uniform, one simple way of adjustments is to make the number of vertical lines of the visual rhythm per a unit time interval, for example one second, be equal to the maximum frame rate of encoded video by adding extra vertical lines into a sparse unit time interval as described in fig. 8 paragraph 142))
Therefore, taking the combined teachings of Patent No. 11,716,520 and Kim as a whole, it would have been obvious before the effective filing date of the claimed invention to incorporate this feature (time scale) into the system of Patent No. 11,716,520 as taught by Kim.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Patent No. 11,716,520 as taught by Kim for the benefit of providing a step-based browser interface that can be used as a rough visual time scale while accounting for the considerable temporal distortion that may appear in the visual time scale when the original video source is encoded in a variable frame rate encoding scheme such as Microsoft’s ASF (Advanced Streaming Format), wherein variable frame rate encoding schemes dynamically adjust the frame rate while encoding a video source in order to improve efficiency when producing a video stream with a constant bit rate (see ¶ 35).
On the other hand, the reference of Choi explicitly teaches “calculate, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items” (see ¶ 76 for calculate, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items (i.e. in case of an MPEG signal, the real time-scale (i.e. the target time-scale) of the time-scaled video signal may be calculated from the time stamp, the video signal time-scale processor 170 can read the time value from the time stamp of the current time-scaled video frame, thus, if the time stamp TS1 of the time-scaled video frame at a certain point in the past T1 and the time stamp TS2 of the time-scaled video frame at the current time T2 are known, the real time-scale of time-scaled video signal αv can be calculated from the equation (4), that is, the real time-scale of the video signal is the ratio of the real elapsed time T2-T1 from a certain point T1 in the past to the current time T2 to the difference between the time stamp TS1 of the time-scaled video frame at T1 and the time stamp TS2 of the time-scaled video frame at T2, the calculated value is applied as a new target time-scale α′ in the time-scaled reproduction of the audio signal: αv = α′ = (TS2-TS1)/(T2-T1) (4))).
Therefore, taking the combined teachings of Patent No. 11,716,520 and Choi as a whole, it would have been obvious before the effective filing date of the claimed invention to incorporate this feature (calculate) into the system of Patent No. 11,716,520 as taught by Choi.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Patent No. 11,716,520 as taught by Choi for the benefit of calculating the real time-scale (i.e. the target time-scale) of the time-scaled video signal from the time stamp, wherein the video signal time-scale processor 170 can read the time value from the time stamp of the current time-scaled video frame, thus, if the time stamp TS1 of the time-scaled video frame at a certain point in the past T1 and the time stamp TS2 of the time-scaled video frame at the current time T2 are known, the real time-scale of time-scaled video signal αv can be calculated from the equation (4), that is, the real time-scale of the video signal is the ratio of the real elapsed time T2-T1 from a certain point T1 in the past to the current time T2 to the difference between the time stamp TS1 of the time-scaled video frame at T1 and the time stamp TS2 of the time-scaled video frame at T2, wherein the calculated value is applied as a new target time-scale α′ in the time-scaled reproduction of the audio signal (αv = α′ = (TS2-TS1)/(T2-T1) (4)), in order to improve efficiency when achieving synchronization of the AV signals while time-scaling, i.e., the audio reproduction speed can be made to coincide with the video reproduction speed regardless of the real reproduction speed of the video signal; as a result, the synchronization between the time-scaled audio and video signals can be well maintained (see ¶¶ 75-76).
Re claim 17, the conflicting claims are not patentably distinct from each other because every limitation of claim 17 of the Instant Application is found in claim 16 of the Patent No. 11,716,520, except the following limitation: “calculating the unified time scale.”
However, the reference of Choi explicitly teaches “calculating the unified time scale” (see ¶ 76 for calculating the unified time scale (i.e. the real time-scale (i.e. the target time-scale) of the time-scaled video signal may be calculated from the time stamp, the video signal time-scale processor 170 can read the time value from the time stamp of the current time-scaled video frame, thus, if the time stamp TS1 of the time-scaled video frame at a certain point in the past T1 and the time stamp TS2 of the time-scaled video frame at the current time T2 are known, the real time-scale of time-scaled video signal αv can be calculated from the equation (4), that is, the real time-scale of the video signal is the ratio of the real elapsed time T2-T1 from a certain point T1 in the past to the current time T2 to the difference between the time stamp TS1 of the time-scaled video frame at T1 and the time stamp TS2 of the time-scaled video frame at T2, the calculated value is applied as a new target time-scale α′ in the time-scaled reproduction of the audio signal: αv = α′ = (TS2-TS1)/(T2-T1) (4))).
Therefore, taking the combined teachings of Patent No. 11,716,520, Kim and Choi as a whole, it would have been obvious before the effective filing date of the claimed invention to incorporate this feature (calculating) into the system of Patent No. 11,716,520 as taught by Choi.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Patent No. 11,716,520 as taught by Choi for the benefit of calculating the real time-scale (i.e. the target time-scale) of the time-scaled video signal from the time stamp, wherein the video signal time-scale processor 170 can read the time value from the time stamp of the current time-scaled video frame, thus, if the time stamp TS1 of the time-scaled video frame at a certain point in the past T1 and the time stamp TS2 of the time-scaled video frame at the current time T2 are known, the real time-scale of time-scaled video signal αv can be calculated from the equation (4), that is, the real time-scale of the video signal is the ratio of the real elapsed time T2-T1 from a certain point T1 in the past to the current time T2 to the difference between the time stamp TS1 of the time-scaled video frame at T1 and the time stamp TS2 of the time-scaled video frame at T2, wherein the calculated value is applied as a new target time-scale α′ in the time-scaled reproduction of the audio signal (αv = α′ = (TS2-TS1)/(T2-T1) (4)), in order to improve efficiency when achieving synchronization of the AV signals while time-scaling, i.e., the audio reproduction speed can be made to coincide with the video reproduction speed regardless of the real reproduction speed of the video signal; as a result, the synchronization between the time-scaled audio and video signals can be well maintained (see ¶¶ 75-76).
Re claim 18, the conflicting claims are not patentably distinct from each other because every limitation of claim 18 of the Instant Application is recited in claim 17 of the Patent No. 11,716,520.
Re claim 19, the conflicting claims are not patentably distinct from each other because every limitation of claim 19 of the Instant Application is recited in claim 18 of the Patent No. 11,716,520.
Re claim 20, the conflicting claims are not patentably distinct from each other because every limitation of claim 20 of the Instant Application is found in claim 20 of the Patent No. 11,716,520, except the following limitation: “calculate, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items; stream the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale.”
However, the reference of Kim explicitly teaches “stream the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale” (see ¶ 35 for stream the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale (i.e. the time scale of the visual rhythm needs to be adjusted to be uniform, one simple way of adjustments is to make the number of vertical lines of the visual rhythm per a unit time interval, for example one second, be equal to the maximum frame rate of encoded video by adding extra vertical lines into a sparse unit time interval as described in fig. 8 paragraph 142))
Therefore, taking the combined teachings of Patent No. 11,716,520 and Kim as a whole, it would have been obvious before the effective filing date of the claimed invention to incorporate this feature (time scale) into the system of Patent No. 11,716,520 as taught by Kim.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Patent No. 11,716,520 as taught by Kim for the benefit of providing a step-based browser interface that can be used as a rough visual time scale while accounting for the considerable temporal distortion that may appear in the visual time scale when the original video source is encoded in a variable frame rate encoding scheme such as Microsoft’s ASF (Advanced Streaming Format), wherein variable frame rate encoding schemes dynamically adjust the frame rate while encoding a video source in order to improve efficiency when producing a video stream with a constant bit rate (see ¶ 35).
On the other hand, the reference of Choi explicitly teaches “calculate, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items” (see ¶ 76 for calculate, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items (i.e. in case of an MPEG signal, the real time-scale (i.e. the target time-scale) of the time-scaled video signal may be calculated from the time stamp, the video signal time-scale processor 170 can read the time value from the time stamp of the current time-scaled video frame, thus, if the time stamp TS1 of the time-scaled video frame at a certain point in the past T1 and the time stamp TS2 of the time-scaled video frame at the current time T2 are known, the real time-scale of time-scaled video signal αv can be calculated from the equation (4), that is, the real time-scale of the video signal is the ratio of the real elapsed time T2-T1 from a certain point T1 in the past to the current time T2 to the difference between the time stamp TS1 of the time-scaled video frame at T1 and the time stamp TS2 of the time-scaled video frame at T2, the calculated value is applied as a new target time-scale α′ in the time-scaled reproduction of the audio signal: αv = α′ = (TS2-TS1)/(T2-T1) (4))).
Therefore, taking the combined teachings of Patent No. 11,716,520 and Choi as a whole, it would have been obvious before the effective filing date of the claimed invention to incorporate this feature (calculate) into the system of Patent No. 11,716,520 as taught by Choi.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Patent No. 11,716,520 as taught by Choi for the benefit of calculating the real time-scale (i.e. the target time-scale) of the time-scaled video signal from the time stamp, wherein the video signal time-scale processor 170 can read the time value from the time stamp of the current time-scaled video frame, thus, if the time stamp TS1 of the time-scaled video frame at a certain point in the past T1 and the time stamp TS2 of the time-scaled video frame at the current time T2 are known, the real time-scale of time-scaled video signal αv can be calculated from the equation (4), that is, the real time-scale of the video signal is the ratio of the real elapsed time T2-T1 from a certain point T1 in the past to the current time T2 to the difference between the time stamp TS1 of the time-scaled video frame at T1 and the time stamp TS2 of the time-scaled video frame at T2, wherein the calculated value is applied as a new target time-scale α′ in the time-scaled reproduction of the audio signal (αv = α′ = (TS2-TS1)/(T2-T1) (4)), in order to improve efficiency when achieving synchronization of the AV signals while time-scaling, i.e., the audio reproduction speed can be made to coincide with the video reproduction speed regardless of the real reproduction speed of the video signal; as a result, the synchronization between the time-scaled audio and video signals can be well maintained (see ¶¶ 75-76).
Claims 5 and 6 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,716,520, and further in view of Kim et al. US Pub. No. 2004/0125124, and further in view of Choi US Pub. No. 2007/0168188, and further in view of Nago et al. U.S. Patent No. 6,567,117.
Re claim 5, the conflicting claims are not patentably distinct from each other because every limitation of claim 5 of the Instant Application is found in claim 1 of the Patent No. 11,716,520, except the following limitation: “wherein a maximum range of frame rates for the media items within the plurality of media items includes a specified upper bound and a specified lower bound.”
However, the reference of Nago explicitly teaches “wherein a maximum range of frame rates for the media items within the plurality of media items includes a specified upper bound and a specified lower bound” (see col. 7 lines 45-63 for a maximum range of frame rates for the media items within the plurality of media items includes a specified upper bound and a specified lower bound (i.e. a maximum range of operable frame rates for each coding bit rate is determined in a manner such that 1 to 7 frames/sec are available for the bit rate of 32 kbps and 1 to 13 frames/sec are available for the bit rate of 64 kbps as shown in the conversion table as described in col. 7 lines 50-54))
Therefore, taking the combined teachings of Patent No. 11,716,520, Kim, Choi and Nago as a whole, it would have been obvious before the effective filing date of the claimed invention to incorporate this feature (maximum) into the system of Patent No. 11,716,520 as taught by Nago.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Patent No. 11,716,520 as taught by Nago for the benefit of determining a maximum range of operable frame rates for each coding bit rate in a manner such that 1 to 7 frames/sec are available for the bit rate of 32 kbps and 1 to 13 frames/sec are available for the bit rate of 64 kbps, as shown in the conversion table, in order to improve efficiency when determining a maximum range of operable frame rates (see col. 7 lines 50-54).
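For illustration of the bounded frame-rate ranges Nago describes at col. 7, a conversion-table lookup might resemble the following sketch; the dictionary, the clamping helper, and their names are assumptions for the example only and are not drawn from Nago:

# Modeled on the two table entries quoted above; any further entries would be
# assumptions beyond the cited passage.
FRAME_RATE_RANGE_BY_BIT_RATE = {
    32_000: (1, 7),   # 32 kbps -> 1 to 7 frames/sec
    64_000: (1, 13),  # 64 kbps -> 1 to 13 frames/sec
}

def clamp_frame_rate(bit_rate, requested_fps):
    # Keep the requested frame rate within the specified lower and upper bounds
    # for the given coding bit rate.
    lower, upper = FRAME_RATE_RANGE_BY_BIT_RATE[bit_rate]
    return max(lower, min(upper, requested_fps))

print(clamp_frame_rate(32_000, 15))  # 7
print(clamp_frame_rate(64_000, 5))   # 5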
Re claim 6, the conflicting claims are not patentably distinct from each other because every limitation of claim 6 of the Instant Application is found in claim 1 of the Patent No. 11,716,520, except the following limitation: “wherein the lower bound comprises 24 frames per second, and wherein the upper bound comprises 60 frames per second.”
However, the reference of Kim explicitly teaches “wherein the lower bound comprises 24 frames per second, and wherein the upper bound comprises 60 frames per second” (see ¶ 14 for the lower bound comprises 24 frames per second, and wherein the upper bound comprises 60 frames per second (i.e. the ATSC digital TV standard, Revision B (ATSC Standard A/53B) defines a standard for digital video based on MPEG-2 encoding, and allows video frames as large as 1920×1080 pixels/pels (2,073,600 pixels) at 20 Mbps, for example)).
Therefore, taking the combined teachings of Patent No. 11,716,520, Kim, Choi and Nago as a whole, it would have been obvious before the effective filing date of the claimed invention to incorporate this feature (24 frames per second, 60 frames per second) into the system of Patent No. 11,716,520 as taught by Kim.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Patent No. 11,716,520 as taught by Kim for the benefit of having the ATSC digital TV standard, Revision B (ATSC Standard A/53B) defining a standard for digital video based on MPEG-2 encoding, and allowing video frames as large as 1920×1080 pixels/pels (2,073,600 pixels) at 20 Mbps, for example, in order to improve efficiency when decoding video frames (see ¶ 14).
Response to Arguments
Applicant’s arguments filed on 12/30/2025 with respect to claims 1, 4-15 and 17-20 have been fully considered but they are not persuasive.
In re pages 2-3, Applicant states that “In the Action, the Examiner rejected claims 1, 4, 7-9, 11, 15, and 20 under 35 U.S.C. § 103 as allegedly unpatentable over the cited references. Applicant respectfully traverses these rejections for at least the following reasons. Independent claim 1 recites inter alia “streaming the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale.” In contrast, the cited art does not disclose, teach, or suggest at least this claim feature. For example, in the Action, the Examiner appears to map the claimed constant frame interval to Kim’s time scale of a visual rhythm. However, Kim’s time scale of a visual rhythm differs from the claimed constant time interval. For example, Kim explicitly teaches adjusting only the display-side visual rhythm image so that its horizontal axis appears uniform for GUI browsing. See, e.g., Kim, par. [0142]. Kim teaches that, when a video source uses a variable frame rate, “the time scale of the visual rhythm needs to be adjusted to be uniform” by “adding extra vertical lines into a sparse unit time interval” or “dropping selected lines from a densely populated time interval.” Id. That operation amounts to visualizing or normalizing an image used for browsing in the GUI, not maintaining a constant frame interval using a unified time scale for media presentation or streaming. See id. Indeed, Kim does not disclose maintaining per-frame presentation intervals during streaming. See id. Instead, Kim’s time scale is merely an internal property of the visual rhythm display achieved by inserting or dropping visual rhythm lines per unit time to make the GUI timeline appear uniform. See id. In other words, neither Kim’s time scale nor Kim’s visual rhythm amounts to a frame-interval guarantee for streamed media. In sum, Kim’s visual rhythm is a GUI display normalization. In contrast, the claimed constant frame interval is a media timing property enforced via a unified time scale during streaming. Accordingly, Kim’s visual rhythm and the claimed constant frame interval operate in different domains and for different purposes. For at least these reasons, Kim does not disclose, teach, or suggest “streaming the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale,” as recited in claim 1. None of the other cited art cures these deficiencies of Kim. Nor does the Examiner suggest that any of the other cited art does so. Accordingly, because the cited art does not disclose each and every feature of claim 1, the cited art is not sufficient to establish a prima facie obviousness rejection. Although not identical to claim 1, independent claims 15 and 20 recite similar subject matter and are patentable for at least the same reasons as claim 1. In addition, Applicant submits that the corresponding dependent claims are allowable for at least the same reasons given above with respect to the independent claims. Applicant, therefore, respectfully requests withdrawal of these rejections and the allowance of all pending claims.”
(1) In response, the Examiner respectfully disagrees. First, the Applicant’s arguments are mainly directed to Kim and the Examiner will respond accordingly. Second, Choi is mostly used to disclose the limitation that Kim does not teach. Third, Applicant cannot show non-obviousness by attacking references individually where, as here, the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981).
For instance, Kim discloses the following: First, the "browser interface" provided by the step-based approach can be used as a rough visual time scale, but there may be considerable temporal distortion in the visual time scale when the original video source is encoded in a variable frame rate encoding scheme such as Microsoft's ASF (Advanced Streaming Format) as described in paragraph 35. Second, variable frame rate encoding schemes dynamically adjust the frame rate while encoding a video source in order to produce a video stream with a constant bit rate as described in paragraph 35. Third, as a result, within a single ASF-encoded video stream (or other variable frame rate encoded stream), the frame rate might be different from segment-to-segment or from shot-to-shot; this produces considerable distortion in the time scale of the "browser interface" as described in paragraph 35. Fourth, fig. 1 shows two "browser interfaces", a first browser interface 102 and a second browser interface 104, both produced from different versions of a single video source, encoded at high and low bit rates, respectively as described in fig. 1 paragraph 36. Fifth, the first and second browser interfaces 102 and 104 are intentionally juxtaposed to facilitate direct visual comparison as described in fig. 1 paragraph 36. Sixth, the first browser interface 102 is produced from the video source encoded at a relatively high bit rate (e.g., 300 Kbps in ASF) format, while the second browser interface 104 is produced from exactly the same video source encoded at a relatively lower bit rate (e.g., 36 Kbps) as described in fig. 1 paragraph 36. Seventh, the widths of the browser interfaces 102 and 104 have been adjusted to be the same as described in fig. 1 paragraph 36. Eighth, two video "shots" 106 and 110 are identified in the first browser interface 102, and two shots 108 and 112 are also identified in the second browser interface, as described in fig. 1 paragraph 36. Ninth, the shots 106 and 108 correspond to the same video content at a first point in the video stream, and the shots 110 and 112 correspond to the same video content at a second point in the video stream as described in fig. 1 paragraph 36. Tenth, in fig. 1, the widths of the shots 106 and 108 (produced from the same source video information) are different as described in paragraph 37. Eleventh, the different widths of the shots 106 and 108 mean that the frame rates of their corresponding shots in the high and low bit rate encoded video streams are different, because each vertical line of the "browser interface" corresponds to one frame of encoded video source as described in fig. 1 paragraph 37. Twelfth, similarly, the differing horizontal position and widths of shots 110 and 112 indicate differences in frame rate between the high and low bit-rate encoded video streams as described in fig. 1 paragraph 37. Thirteenth, as fig. 1 illustrates, although the browser interface can be used as a time scale for the video it represents, it is only a coarse representation of absolute time because variable frame rates affect the widths and positions of visual features of the browser interface as described in paragraph 37.
From the above passages, Kim indeed discloses the following claimed limitation of independent claim 1, which recites “determining, for a plurality of different media items, a current time scale at which each of the media items is encoded, wherein at least two of the plurality of media items are encoded at different frame rates.” See the actual claim rejections further below.
Furthermore, Kim discloses the following: First, figs. 8A and 8B are screen images showing two examples of a GUI for viewing a visual rhythm, according to an embodiment of the present invention as described in paragraph 139. Second, in the GUI screen image 810 of FIG. 8A (corresponding to View of Visual Rhythm 530, FIG. 5), a small portion of a visual rhythm 820 is displayed as described in paragraph 139. Third, the shot boundaries are detected, using any suitable technique as described in paragraph 139. Fourth, the detected shot boundaries are shown graphically on the visual rhythm by placing a special symbol called "shot marker" 822 (e.g., a triangle marker as shown) at each shot boundary as described in fig. 8 paragraph 139. Fifth, the shot markers are adjacent the visual rhythm image as described in fig. 8 paragraph 139. Sixth, for a given shot (between two shot boundaries), rather than displaying a "true" visual rhythm image (e.g., 710), a "virtual" visual rhythm image is displayed as a simple, recognizable, distinguishable background pattern, such as horizontal lines, vertical lines, diagonal lines, crossed lines, plaids, herringbone, etc., rather than a true visual rhythm image, within its detected shot boundaries as described in fig. 8 paragraph 139. Seventh, in FIG. 8A, six shot markers 822 are shown, and seven distinct background patterns for detected shots are shown as described in paragraph 139. Eighth, the background patterns are selected from a suite of background patterns, and it should be understood that there is no need that the pattern bear any relationship to the type of shot which has been detected (e.g., dissolve, wipe, etc.) as described in fig. 8 paragraph 139. Ninth, there should, of course, be at least two different background patterns so that adjacent shots can be visually distinguished from one another. Tenth, in order to synchronize the audio waveform with the visual rhythm, the time scales of both visual objects should be uniform as described in fig. 8 paragraph 142. Eleventh, since audio is usually encoded at constant sampling rate, there is no need for any other adjustments; however, the time scale of a visual rhythm might not be uniform if the video source (stream/file) is encoded using a variable frame rate encoding technique such as ASF as described in fig. 8 paragraph 142. Twelfth, in this case, the time scale of the visual rhythm needs to be adjusted to be uniform as described in fig. 8 paragraph 142. Thirteenth, one simple way of adjustments is to make the number of vertical lines of the visual rhythm per a unit time interval, for example one second, be equal to the maximum frame rate of encoded video by adding extra vertical lines into a sparse unit time interval as described in fig. 8 paragraph 142. Fourteenth, these extra visual rhythm lines can be inserted by padding or duplicating the last vertical line in the current unit time interval as described in fig. 8 paragraph 142. Fifteenth, another way of "linearizing" the visual rhythm is to maintain some fixed number of frames per unit time interval by either adding extra vertical lines into a sparse time interval or dropping selected lines from a densely populated time interval as described in fig. 8 paragraph 142. Also, see paragraph 35 above. As a result, the Applicant’s statements are unsupported by Kim.
Thus, from the above passages, Kim indeed discloses the following claimed limitations of independent claim 1, which recites “changing a media item selected from the plurality of media items from the current time scale to the unified time scale that provides a constant frame interval for the selected media item; and streaming the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale.” See the actual claim rejections further below.
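For illustration of the adjustment Kim describes in paragraph 142, quoted above, the following sketch pads a sparse unit time interval or drops lines from a dense one so that every interval carries the same number of vertical lines; the function, its name, and the placeholder data are assumptions for the example only and are not Kim's implementation:

def linearize_visual_rhythm(lines_per_interval, target_per_interval):
    # Make every unit time interval carry target_per_interval vertical lines, by
    # duplicating the last line of a sparse interval (padding) or by dropping
    # evenly spaced lines from a densely populated interval.
    adjusted = []
    for lines in lines_per_interval:
        if len(lines) < target_per_interval:
            lines = lines + [lines[-1]] * (target_per_interval - len(lines))
        elif len(lines) > target_per_interval:
            step = len(lines) / target_per_interval
            lines = [lines[int(i * step)] for i in range(target_per_interval)]
        adjusted.append(lines)
    return adjusted

# Each inner list stands for the vertical lines sampled in one second of video;
# the letters are placeholders for pixel columns of the visual rhythm.
print(linearize_visual_rhythm([["a", "b"], ["c", "d", "e", "f", "g"], ["h", "i", "j"]], 3))
# -> [['a', 'b', 'b'], ['c', 'd', 'f'], ['h', 'i', 'j']]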
On the other hand, Choi discloses the following claimed limitation of independent claim 1, which recites “calculating, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items” (see ¶ 76 for calculating, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items (i.e. in case of an MPEG signal, the real time-scale (i.e. the target time-scale) of the time-scaled video signal may be calculated from the time stamp, the video signal time-scale processor 170 can read the time value from the time stamp of the current time-scaled video frame, thus, if the time stamp TS1 of the time-scaled video frame at a certain point in the past T1 and the time stamp TS2 of the time-scaled video frame at the current time T2 are known, the real time-scale of time-scaled video signal αv can be calculated from the equation (4), that is, the real time-scale of the video signal is the ratio of the real elapsed time T2-T1 from a certain point T1 in the past to the current time T2 to the difference between the time stamp TS1 of the time-scaled video frame at T1 and the time stamp TS2 of the time-scaled video frame at T2, the calculated value is applied as a new target time-scale α′ in the time-scaled reproduction of the audio signal: αv = α′ = (TS2-TS1)/(T2-T1) (4))).
Therefore, the combined teachings of the primary reference and the secondary reference do not destroy the primary reference; in fact, they enhance the operation of the primary reference, since Choi discloses in paragraphs 75-76 calculating the real time-scale (i.e. the target time-scale) of the time-scaled video signal from the time stamp, wherein the video signal time-scale processor 170 can read the time value from the time stamp of the current time-scaled video frame, thus, if the time stamp TS1 of the time-scaled video frame at a certain point in the past T1 and the time stamp TS2 of the time-scaled video frame at the current time T2 are known, the real time-scale of time-scaled video signal αv can be calculated from the equation (4), that is, the real time-scale of the video signal is the ratio of the real elapsed time T2-T1 from a certain point T1 in the past to the current time T2 to the difference between the time stamp TS1 of the time-scaled video frame at T1 and the time stamp TS2 of the time-scaled video frame at T2, wherein the calculated value is applied as a new target time-scale α′ in the time-scaled reproduction of the audio signal (αv = α′ = (TS2-TS1)/(T2-T1) (4)), in order to improve efficiency when achieving synchronization of the AV signals while time-scaling, i.e., the audio reproduction speed can be made to coincide with the video reproduction speed regardless of the real reproduction speed of the video signal; as a result, the synchronization between the time-scaled audio and video signals can be well maintained. As a result, the combination of both references will not change the principle of operation of the primary reference being modified, and the teachings of the references are therefore sufficient to render the claims prima facie obvious.
Regarding pages 3-4, Applicant states that “Dependent claim 4 recites that “the selected media item comprises video content that was captured using a variable refresh rate.” In contrast, the cited art does not disclose, teach, or suggest at least this claim feature. For example, Kim’s discussion of variability merely concerns variable frame-rate encoding used “to produce a video stream with a constant bit rate,” which causes distortion in a browsing interface time scale. See, e.g., Kim, pars. [0035]-[0037]. Kim thus addresses encoder behavior and a coarse visual time representation, not a media item “captured using a variable refresh rate.” See id. For at least these reasons, Kim does not disclose, teach, or suggest a selected media item comprising video content that was captured using a variable refresh rate, as required by claim 4. Applicant, therefore, respectfully requests withdrawal of this rejection and the allowance of claim 4.”
(2) In response, the Examiner respectfully disagrees. As described above, in paragraphs 35 and 142 Kim discloses the claimed limitation of dependent claim 4, which recites “the selected media item comprises video content that was captured using a variable refresh rate.” As a result, the Applicant’s statements are not supported by Kim’s disclosure. See the actual claim rejection further below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 7-9, 11, 15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US 2004/0125124 A1)(hereinafter Kim), and further in view of Choi (US 2007/0168188 A1)(hereinafter Choi).
Re claim 1, Kim discloses a computer-implemented method comprising: determining, for a plurality of different media items, a current time scale at which each of the media items is encoded, wherein at least two of the plurality of media items are encoded at different frame rates (see ¶ 36 for determining, for a plurality of different media items, a current time scale at which each of the media items is encoded, wherein at least two of the plurality of media items are encoded at different frame rates (i.e. the different widths of the shots 106 and 108 mean that the frame rates of their corresponding shots in the high and low bit rate encoded video streams are different, because each vertical line of the "browser interface" corresponds to one frame of encoded video source, similarly, the differing horizontal position and widths of shots 110 and 112 indicate differences in frame rate between the high and low bit-rate encoded video streams, as FIG. 1 illustrates, although the browser interface can be used as a time scale for the video it represents, it is only a coarse representation of absolute time because variable frame rates affect the widths and positions of visual features of the browser interface as described in fig. 1 paragraph 37)); changing a media item selected from the plurality of media items from the current time scale to the unified time scale that provides a constant frame interval for the selected media item (see ¶ 139 for changing a media item selected from the plurality of media items from the current time scale to the unified time scale that provides a constant frame interval for the selected media item (i.e. when an audio segment 862 does not match up cleanly with a video shot 852, it may be better to move the start position of the video shot 852 to match that of the audio segment 862, because humans can be more sensitive to audio than video, (to move the start position of a shot, either ahead or behind, the user can click on the shot marker and move it to the left or right) as described in fig. 8 paragraph 141, furthermore, the time scale of the visual rhythm needs to be adjusted to be uniform, one simple way of adjustments is to make the number of vertical lines of the visual rhythm per a unit time interval, for example one second, be equal to the maximum frame rate of encoded video by adding extra vertical lines into a sparse unit time interval as described in fig. 8 paragraph 142)); and streaming the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale (see ¶ 35 for streaming the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale (i.e. the time scale of the visual rhythm needs to be adjusted to be uniform, one simple way of adjustments is to make the number of vertical lines of the visual rhythm per a unit time interval, for example one second, be equal to the maximum frame rate of encoded video by adding extra vertical lines into a sparse unit time interval as described in fig. 8 paragraph 142))
Kim fails to explicitly teach calculating, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items. However, the reference of Choi explicitly teaches calculating, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items (see ¶ 76 for calculating, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items; i.e., in the case of an MPEG signal, the real time-scale (i.e. the target time-scale) of the time-scaled video signal may be calculated from the time stamp. The video signal time-scale processor 170 can read the time value from the time stamp of the current time-scaled video frame. Thus, if the time stamp TS1 of the time-scaled video frame at a certain point T1 in the past and the time stamp TS2 of the time-scaled video frame at the current time T2 are known, the real time-scale α_v of the time-scaled video signal can be calculated from equation (4); that is, the real time-scale of the video signal is the ratio of the difference between the time stamps TS1 and TS2 of the time-scaled video frames at T1 and T2 to the real elapsed time T2 − T1 from the point T1 in the past to the current time T2. The calculated value is applied as a new target time-scale α′ in the time-scaled reproduction of the audio signal: α_v = α′ = (TS2 − TS1)/(T2 − T1) (equation (4))).
Therefore, taking the combined teachings of Kim and Choi as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this feature (calculating) into the system of Kim as taught by Choi.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Kim as taught by Choi for the benefit of calculating the real time-scale (i.e. the target time-scale) of the time-scaled video signal from the time stamp, wherein the video signal time-scale processor 170 can read the time value from the time stamp of the current time-scaled video frame. Thus, if the time stamp TS1 of the time-scaled video frame at a certain point T1 in the past and the time stamp TS2 of the time-scaled video frame at the current time T2 are known, the real time-scale α_v of the time-scaled video signal can be calculated from equation (4), that is, as the ratio of the time-stamp difference TS2 − TS1 to the real elapsed time T2 − T1, wherein the calculated value is applied as a new target time-scale α′ in the time-scaled reproduction of the audio signal: α_v = α′ = (TS2 − TS1)/(T2 − T1) (equation (4)). This improves efficiency in achieving synchronization of the AV signals while time-scaling, i.e., the audio reproduction speed can be made to coincide with the video reproduction speed regardless of the real reproduction speed of the video signal; as a result, the synchronization between the time-scaled audio and video signals can be well maintained (see ¶s 75-76).
Re claim 4, the combination of Kim and Choi as discussed in claim 1 above discloses all the claim limitations with additional claimed feature taught by Kim wherein the selected media item comprises video content that was captured using a variable refresh rate (see ¶ 35 for the selected media item comprises video content that was captured using a variable refresh rate (i.e. the time scale of the visual rhythm needs to be adjusted to be uniform, one simple way of adjustments is to make the number of vertical lines of the visual rhythm per a unit time interval, for example one second, be equal to the maximum frame rate of encoded video by adding extra vertical lines into a sparse unit time interval as described in fig. 8 paragraph 142))
Re claim 7, the combination of Kim and Choi as discussed in claim 1 above discloses all the claim limitations with additional claimed feature taught by Kim wherein the plurality of media items comprises video media items (see ¶s 100-101 for the plurality of media items comprises video media items as shown in fig. 2)
Re claim 8, the combination of Kim and Choi as discussed in claim 7 above discloses all the claim limitations with additional claimed feature taught by Kim wherein the video media items are encoded at 23.97, 24, 25, 29.97, 30, 59.94, 60, 120, 240 or 300 frames per second (see ¶ 14 for the video media items are encoded at 23.97, 24, 25, 29.97, 30, 59.94, 60, 120, 240 or 300 frames per second (i.e. the ATSC digital TV standard, Revision B (ATSC Standard A/53B) defines a standard for digital video based on MPEG-2 encoding, and allows video frames as large as 1920.times.1080 pixels/pels (2,073,600 pixels) at 20 Mbps, for example))
Re claim 9, the combination of Kim and Choi as discussed in claim 1 above discloses all the claim limitations with additional claimed feature taught by Kim wherein the plurality of media items comprises audio media items (see ¶ 141 for the plurality of media items comprises audio media items, as shown in fig. 8).
Re claim 11, the combination of Kim and Choi as discussed in claim 1 above discloses all the claim limitations with additional claimed feature taught by Kim wherein each of the plurality of media items in a specified group of media items has a specified video frame rate and audio frame rate, and wherein the unified time scale is calculated to optimize the specified video frame rate and the specified audio frame rate of the media items in the group (see ¶ 141 for each of the plurality of media items in a specified group of media items has a specified video frame rate and audio frame rate, and wherein the unified time scale is calculated to optimize the specified video frame rate and the specified audio frame rate of the media items in the group (i.e. the time scale of the visual rhythm needs to be adjusted to be uniform, one simple way of adjustments is to make the number of vertical lines of the visual rhythm per a unit time interval, for example one second, be equal to the maximum frame rate of encoded video by adding extra vertical lines into a sparse unit time interval as described in fig. 8 paragraph 142))
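Neither Kim nor Choi spells out an algorithm for this calculation. Purely as a hypothetical illustration of the claim language (not drawn from the cited references or from the instant specification), a unified time scale that gives an integer, constant frame interval for every item in a group can be obtained by taking a common multiple of the items' frame rates, for example:

    from fractions import Fraction
    from math import lcm

    def unified_time_scale(frame_rates):
        # frame_rates: frames (or audio frames) per second, as exact Fractions.
        # A tick rate T yields an integer frame interval T / r for a rate
        # r = n/d (in lowest terms) exactly when n divides T, so the smallest
        # such T is the least common multiple of the numerators.
        return lcm(*(r.numerator for r in frame_rates))

    rates = [Fraction(24), Fraction(30000, 1001), Fraction(48000, 1024)]  # hypothetical group
    ts = unified_time_scale(rates)                      # 30000 ticks per second
    intervals = [int(Fraction(ts) / r) for r in rates]  # [1250, 1001, 640] ticks per frame

Under such a time scale each item keeps a constant per-frame tick count even though the items' frame rates differ, which is the property the claim attributes to the unified time scale.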
Re claim 15, Kim discloses a system comprising: at least one physical processor (see ¶ 14 for processor); and physical memory comprising computer-executable instructions that, when executed by the physical processor, cause the physical processor to (see ¶ 14 for physical memory comprising computer-executable instructions that, when executed by the physical processor, cause the physical processor; it should be noted that a computer as described in paragraph 25 has to include a processor and memory comprising computer-executable instructions executed by the processor): determine, for a plurality of different media items, a current time scale at which each of the media items is encoded, wherein at least two of the plurality of media items are encoded at different frame rates (see ¶ 36 for determine, for a plurality of different media items, a current time scale at which each of the media items is encoded, wherein at least two of the plurality of media items are encoded at different frame rates (i.e. the different widths of the shots 106 and 108 mean that the frame rates of their corresponding shots in the high and low bit rate encoded video streams are different, because each vertical line of the "browser interface" corresponds to one frame of encoded video source, similarly, the differing horizontal position and widths of shots 110 and 112 indicate differences in frame rate between the high and low bit-rate encoded video streams, as FIG. 1 illustrates, although the browser interface can be used as a time scale for the video it represents, it is only a coarse representation of absolute time because variable frame rates affect the widths and positions of visual features of the browser interface as described in fig. 1 paragraph 37)); change a media item selected from the plurality of media items from the current time scale to the unified time scale that provides a constant frame interval for the selected media item (see ¶ 139 for change a media item selected from the plurality of media items from the current time scale to the unified time scale that provides a constant frame interval for the selected media item (i.e. when an audio segment 862 does not match up cleanly with a video shot 852, it may be better to move the start position of the video shot 852 to match that of the audio segment 862, because humans can be more sensitive to audio than video, (to move the start position of a shot, either ahead or behind, the user can click on the shot marker and move it to the left or right) as described in fig. 8 paragraph 141, furthermore, the time scale of the visual rhythm needs to be adjusted to be uniform, one simple way of adjustments is to make the number of vertical lines of the visual rhythm per a unit time interval, for example one second, be equal to the maximum frame rate of encoded video by adding extra vertical lines into a sparse unit time interval as described in fig. 8 paragraph 142)); and stream the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale (see ¶ 35 for stream the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale (i.e. the time scale of the visual rhythm needs to be adjusted to be uniform, one simple way of adjustments is to make the number of vertical lines of the visual rhythm per a unit time interval, for example one second, be equal to the maximum frame rate of encoded video by adding extra vertical lines into a sparse unit time interval as described in fig. 8 paragraph 142)).
Kim fails to explicitly teach calculating, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items. However, the reference of Choi explicitly teaches calculating, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items (see ¶ 76 for calculating, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items; i.e., in the case of an MPEG signal, the real time-scale (i.e. the target time-scale) of the time-scaled video signal may be calculated from the time stamp. The video signal time-scale processor 170 can read the time value from the time stamp of the current time-scaled video frame. Thus, if the time stamp TS1 of the time-scaled video frame at a certain point T1 in the past and the time stamp TS2 of the time-scaled video frame at the current time T2 are known, the real time-scale α_v of the time-scaled video signal can be calculated from equation (4); that is, the real time-scale of the video signal is the ratio of the difference between the time stamps TS1 and TS2 of the time-scaled video frames at T1 and T2 to the real elapsed time T2 − T1 from the point T1 in the past to the current time T2. The calculated value is applied as a new target time-scale α′ in the time-scaled reproduction of the audio signal: α_v = α′ = (TS2 − TS1)/(T2 − T1) (equation (4))).
Therefore, taking the combined teachings of Kim and Choi as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this feature (calculating) into the system of Kim as taught by Choi.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Kim as taught by Choi for the benefit of calculating the real time-scale (i.e. the target time-scale) of the time-scaled video signal from the time stamp, wherein the video signal time-scale processor 170 can read the time value from the time stamp of the current time-scaled video frame. Thus, if the time stamp TS1 of the time-scaled video frame at a certain point T1 in the past and the time stamp TS2 of the time-scaled video frame at the current time T2 are known, the real time-scale α_v of the time-scaled video signal can be calculated from equation (4), that is, as the ratio of the time-stamp difference TS2 − TS1 to the real elapsed time T2 − T1, wherein the calculated value is applied as a new target time-scale α′ in the time-scaled reproduction of the audio signal: α_v = α′ = (TS2 − TS1)/(T2 − T1) (equation (4)). This improves efficiency in achieving synchronization of the AV signals while time-scaling, i.e., the audio reproduction speed can be made to coincide with the video reproduction speed regardless of the real reproduction speed of the video signal; as a result, the synchronization between the time-scaled audio and video signals can be well maintained (see ¶s 75-76).
Re claim 20, Kim discloses a non-transitory computer-readable medium comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device, cause the computing device to: determine, for a plurality of different media items, a current time scale at which each of the media items is encoded, wherein at least two of the plurality of media items are encoded at different frame rates (see ¶ 36 for determine, for a plurality of different media items, a current time scale at which each of the media items is encoded, wherein at least two of the plurality of media items are encoded at different frame rates (i.e. the different widths of the shots 106 and 108 mean that the frame rates of their corresponding shots in the high and low bit rate encoded video streams are different, because each vertical line of the "browser interface" corresponds to one frame of encoded video source, similarly, the differing horizontal position and widths of shots 110 and 112 indicate differences in frame rate between the high and low bit-rate encoded video streams, as FIG. 1 illustrates, although the browser interface can be used as a time scale for the video it represents, it is only a coarse representation of absolute time because variable frame rates affect the widths and positions of visual features of the browser interface as described in fig. 1 paragraph 37)); change a media item selected from the plurality of media items from the current time scale to the unified time scale that provides a constant frame interval for the selected media item (see ¶ 139 for change a media item selected from the plurality of media items from the current time scale to the unified time scale that provides a constant frame interval for the selected media item (i.e. when an audio segment 862 does not match up cleanly with a video shot 852, it may be better to move the start position of the video shot 852 to match that of the audio segment 862, because humans can be more sensitive to audio than video, (to move the start position of a shot, either ahead or behind, the user can click on the shot marker and move it to the left or right) as described in fig. 8 paragraph 141, furthermore, the time scale of the visual rhythm needs to be adjusted to be uniform, one simple way of adjustments is to make the number of vertical lines of the visual rhythm per a unit time interval, for example one second, be equal to the maximum frame rate of encoded video by adding extra vertical lines into a sparse unit time interval as described in fig. 8 paragraph 142)); and stream the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale (see ¶ 35 for stream the selected media item at a variable frame rate, while maintaining the constant frame interval using the unified time scale (i.e. the time scale of the visual rhythm needs to be adjusted to be uniform, one simple way of adjustments is to make the number of vertical lines of the visual rhythm per a unit time interval, for example one second, be equal to the maximum frame rate of encoded video by adding extra vertical lines into a sparse unit time interval as described in fig. 8 paragraph 142))
Kim fails to explicitly teach calculating, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items. However, the reference of Choi explicitly teaches calculating, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items (see ¶ 76 for calculating, for the plurality of media items, a unified time scale that provides a constant frame interval for each of the plurality of media items; i.e., in the case of an MPEG signal, the real time-scale (i.e. the target time-scale) of the time-scaled video signal may be calculated from the time stamp. The video signal time-scale processor 170 can read the time value from the time stamp of the current time-scaled video frame. Thus, if the time stamp TS1 of the time-scaled video frame at a certain point T1 in the past and the time stamp TS2 of the time-scaled video frame at the current time T2 are known, the real time-scale α_v of the time-scaled video signal can be calculated from equation (4); that is, the real time-scale of the video signal is the ratio of the difference between the time stamps TS1 and TS2 of the time-scaled video frames at T1 and T2 to the real elapsed time T2 − T1 from the point T1 in the past to the current time T2. The calculated value is applied as a new target time-scale α′ in the time-scaled reproduction of the audio signal: α_v = α′ = (TS2 − TS1)/(T2 − T1) (equation (4))).
Therefore, taking the combined teachings of Kim and Choi as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this feature (calculating) into the system of Kim as taught by Choi.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Kim as taught by Choi for the benefit of calculating the real time-scale (i.e. the target time-scale) of the time-scaled video signal from the time stamp, wherein the video signal time-scale processor 170 can read the time value from the time stamp of the current time-scaled video frame. Thus, if the time stamp TS1 of the time-scaled video frame at a certain point T1 in the past and the time stamp TS2 of the time-scaled video frame at the current time T2 are known, the real time-scale α_v of the time-scaled video signal can be calculated from equation (4), that is, as the ratio of the time-stamp difference TS2 − TS1 to the real elapsed time T2 − T1, wherein the calculated value is applied as a new target time-scale α′ in the time-scaled reproduction of the audio signal: α_v = α′ = (TS2 − TS1)/(T2 − T1) (equation (4)). This improves efficiency in achieving synchronization of the AV signals while time-scaling, i.e., the audio reproduction speed can be made to coincide with the video reproduction speed regardless of the real reproduction speed of the video signal; as a result, the synchronization between the time-scaled audio and video signals can be well maintained (see ¶s 75-76).
Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US 2004/0125124 A1)(hereinafter Kim) as applied to claims 1, 4, 7-9, 11, 15 and 20 above, and further in view of Choi (US 2007/0168188 A1)(hereinafter Choi), and further in view of Nago et al. (US 6,567,117 B1)(hereinafter Nago).
Re claim 5, the combination of Kim and Choi as discussed in claim 1 above discloses all the claimed limitations but fails to explicitly teach wherein a maximum range of frame rates for the media items within the plurality of media items includes a specified upper bound and a specified lower bound. However, the reference of Nago explicitly teaches wherein a maximum range of frame rates for the media items within the plurality of media items includes a specified upper bound and a specified lower bound (see col. 7 lines 45-63 for a maximum range of frame rates for the media items within the plurality of media items includes a specified upper bound and a specified lower bound (i.e. a maximum range of operable frame rates for each coding bit rate is determined in a manner such that 1 to 7 frames/sec are available for the bit rate of 32 kbps and 1 to 13 frames/sec are available for the bit rate of 64 kbps as shown in the conversion table as described in col. 7 lines 50-54))
Therefore, taking the combined teachings of Kim, Choi and Nago as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this feature (the maximum range of frame rates) into the system of Kim as taught by Nago.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Kim as taught by Nago for the benefit of determining a maximum range of operable frame rates for each coding bit rate, e.g., such that 1 to 7 frames/sec are available for the bit rate of 32 kbps and 1 to 13 frames/sec are available for the bit rate of 64 kbps, as shown in the conversion table, thereby improving efficiency in determining the range of operable frame rates (see col. 7 lines 50-54).
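For illustration only, the conversion-table behavior cited from Nago (1 to 7 frames/sec at 32 kbps, 1 to 13 frames/sec at 64 kbps) can be pictured as a simple bounded lookup; the Python sketch below uses the values from the cited passage, but the dictionary form and the clamp helper are assumptions and are not Nago's implementation:

    # Frame-rate bounds per coding bit rate, per the passage of Nago cited above.
    FRAME_RATE_RANGE = {32_000: (1, 7), 64_000: (1, 13)}  # bit rate (bit/s) -> (min, max) frames/s

    def clamp_frame_rate(bit_rate, requested_fps):
        # Keep the requested frame rate within the operable range for the bit rate.
        lower, upper = FRAME_RATE_RANGE[bit_rate]
        return max(lower, min(upper, requested_fps))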
Re claim 6, the combination of Kim, Choi and Nago as discussed in claim 5 above discloses all the claim limitations with additional claimed feature taught by Kim wherein the lower bound comprises 24 frames per second, and wherein the upper bound comprises 60 frames per second (see ¶ 14 for the lower bound comprises 24 frames per second, and wherein the upper bound comprises 60 frames per second (i.e. the ATSC digital TV standard, Revision B (ATSC Standard A/53B) defines a standard for digital video based on MPEG-2 encoding, and allows video frames as large as 1920.times.1080 pixels/pels (2,073,600 pixels) at 20 Mbps, for example))
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US 2004/0125124 A1)(hereinafter Kim) as applied to claims 1, 4, 7-9, 11, 15 and 20 above, and further in view of Choi (US 2007/0168188 A1)(hereinafter Choi), and further in view of Baumgarte (US 2014/0297291 A1)(hereinafter Baumgarte).
Re claim 10, the combination of Kim and Choi as discussed in claim 9 above discloses all the claimed limitations but fails to explicitly teach wherein the audio media items have a frame rate of 1024, 1536, or 2048 samples per frame. However, the reference of Baumgarte explicitly teaches wherein the audio media items have a frame rate of 1024, 1536, or 2048 samples per frame (see ¶ 21 for the audio media items have a frame rate of 1024, 1536, or 2048 samples per frame (i.e. a typical frame size of MPEG-AAC is 1024 samples, wherein for each new frame, the decoder reconstructs 2048 samples, the first 1024 of which are added to the last 1024 samples of the previous block as described in paragraph 152))
Therefore, taking the combined teachings of Kim, Choi and Baumgarte as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this feature (samples per frame) into the system of Kim as taught by Baumgarte.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Kim as taught by Baumgarte for the benefit of having a typical audio decoder that reconstructs the audio signal using an overlap-add method with 50% overlap of subsequent blocks, wherein each of the blocks is weighted by a window that tapers off at either end; for instance, a typical frame size of MPEG-AAC is 1024 samples, wherein for each new frame the decoder reconstructs 2048 samples, the first 1024 of which are added to the last 1024 samples of the previous block and the result is the decoder output, wherein the info blocks that come with frame k are scheduled uniformly during the second half of the reconstructed block, and wherein the gain values within each info block are distributed uniformly across the info block's duration, in order to improve efficiency by ensuring that all necessary DRC gain values are available when decoding starts and ends, as well as for interpolation (see ¶ 152).
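As a rough sketch of the 50%-overlap reconstruction described in the cited passage (1024-sample MPEG-AAC frames, each decoded block contributing 2048 windowed samples), the following Python illustrates the general overlap-add scheme; it assumes NumPy and is not Baumgarte's implementation:

    import numpy as np

    FRAME = 1024  # typical MPEG-AAC frame size, per the cited passage

    def overlap_add(blocks):
        # blocks: decoded, windowed blocks of 2*FRAME samples each. The first
        # FRAME samples of each block are added to the last FRAME samples of the
        # previous block (50% overlap), so the output advances FRAME samples per
        # new frame.
        out = np.zeros(FRAME * (len(blocks) + 1))
        for k, block in enumerate(blocks):
            out[k * FRAME:(k + 2) * FRAME] += block
        return out[:FRAME * len(blocks)]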
Claims 12 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US 2004/0125124 A1)(hereinafter Kim) as applied to claims 1, 4, 7-9, 11, 15 and 20 above, and further in view of Choi (US 2007/0168188 A1)(hereinafter Choi), and further in view of Chen (WO 2012068898 A1)(hereinafter Chen).
Re claim 12, the combination of Kim and Choi as discussed in claim 11 above discloses all the claimed limitations but fails to explicitly teach wherein the unified time scale is implemented to generate one or more presentation time stamps (PTSs) for the group of media items. However, the reference of Chen explicitly teaches wherein the unified time scale is implemented to generate one or more presentation time stamps (PTSs) for the group of media items (see page 6 lines 12-32 for the unified time scale is implemented to generate one or more presentation time stamps (PTSs) for the group of media items (i.e. the front-end transmitting device converts a PTS (Presentation Time Stamp) timestamp of the layer code streams of the media data into a mobile multimedia broadcast timestamp under a unified time reference as described in page 6 lines 10-12))
Therefore, taking the combined teachings of Kim, Choi and Chen as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this feature (presentation time stamps (PTSs)) into the system of Kim as taught by Chen.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Kim as taught by Chen for the benefit of having a front-end transmitting device that converts a PTS (Presentation Time Stamp) timestamp of the layer code streams of the media data into a mobile multimedia broadcast timestamp under a unified time reference and controls the layer code streams, wherein the front-end transmitting device calculates, according to the PTS timestamp in the PES (Packetized Elementary Stream) packet and the PCR time information of the PES packet, the PCR time of each layer of the code stream, wherein the front-end transmitting device divides the PCR time of the layer code stream by the clock scale of the PCR and multiplies by the mobile multimedia broadcast time scale to obtain a mobile multimedia broadcast time stamp of the layer code stream, and wherein the front-end transmitting device broadcasts the media data that needs to be sent together with the mobile multimedia broadcast timestamp of the layer code streams of the media data, thereby improving efficiency since the uniformity of time stamps in mobile multimedia broadcasts ensures synchronization between streams of different layered services (see page 6 lines 10-32).
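As an illustrative sketch of the timestamp conversion Chen describes (divide the layer stream's PCR time by the PCR clock scale and multiply by the mobile multimedia broadcast time scale), the Python below assumes the standard 27 MHz MPEG-2 PCR clock and integer arithmetic; the constant, the function name, and those choices are assumptions, not statements from the cited passage:

    MPEG_PCR_CLOCK = 27_000_000  # assumed PCR clock scale, ticks per second

    def to_broadcast_timestamp(pcr_time_ticks, broadcast_time_scale):
        # Chen, p. 6: PCR time divided by the PCR clock scale, multiplied by the
        # mobile multimedia broadcast time scale, gives the broadcast timestamp
        # under the unified time reference.
        return pcr_time_ticks * broadcast_time_scale // MPEG_PCR_CLOCK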
Re claim 17, the combination of Kim and Choi as discussed in claim 15 above discloses all the claimed limitations but fails to explicitly teach the recited calculating of the unified time scale. However, the reference of Choi explicitly teaches calculating the unified time scale (see ¶ 76 for calculating the unified time scale; i.e., the real time-scale (i.e. the target time-scale) of the time-scaled video signal may be calculated from the time stamp. The video signal time-scale processor 170 can read the time value from the time stamp of the current time-scaled video frame. Thus, if the time stamp TS1 of the time-scaled video frame at a certain point T1 in the past and the time stamp TS2 of the time-scaled video frame at the current time T2 are known, the real time-scale α_v of the time-scaled video signal can be calculated from equation (4); that is, the real time-scale of the video signal is the ratio of the difference between the time stamps TS1 and TS2 of the time-scaled video frames at T1 and T2 to the real elapsed time T2 − T1 from the point T1 in the past to the current time T2. The calculated value is applied as a new target time-scale α′ in the time-scaled reproduction of the audio signal: α_v = α′ = (TS2 − TS1)/(T2 − T1) (equation (4))).
Therefore, taking the combined teachings of Kim and Choi as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this feature (calculating) into the system of Kim as taught by Choi.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Kim as taught by Choi for the benefit of calculating the real time-scale (i.e. the target time-scale) of the time-scaled video signal from the time stamp, wherein the video signal time-scale processor 170 can read the time value from the time stamp of the current time-scaled video frame. Thus, if the time stamp TS1 of the time-scaled video frame at a certain point T1 in the past and the time stamp TS2 of the time-scaled video frame at the current time T2 are known, the real time-scale α_v of the time-scaled video signal can be calculated from equation (4), that is, as the ratio of the time-stamp difference TS2 − TS1 to the real elapsed time T2 − T1, wherein the calculated value is applied as a new target time-scale α′ in the time-scaled reproduction of the audio signal: α_v = α′ = (TS2 − TS1)/(T2 − T1) (equation (4)). This improves efficiency in achieving synchronization of the AV signals while time-scaling, i.e., the audio reproduction speed can be made to coincide with the video reproduction speed regardless of the real reproduction speed of the video signal; as a result, the synchronization between the time-scaled audio and video signals can be well maintained (see ¶s 75-76).
Furthermore, Kim fails to explicitly teach that calculating the unified time scale includes converting one or more input presentation time stamps from the plurality of media items having different time scales into PTSs based on the unified time scale. However, the reference of Chen explicitly teaches converting one or more input presentation time stamps from the plurality of media items having different time scales into PTSs based on the unified time scale (see page 6 lines 12-32 for converting one or more input presentation time stamps from the plurality of different media items having different time scales into PTSs based on the unified time scale; i.e., the front-end transmitting device converts a PTS (Presentation Time Stamp) timestamp of the layer code streams of the media data into a mobile multimedia broadcast timestamp under a unified time reference, as described in page 6 lines 10-12).
Therefore, taking the combined teachings of Kim, Choi and Chen as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this feature (presentation time stamps (PTSs)) into the system of Kim as taught by Chen.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Kim as taught by Chen for the benefit of having a front-end transmitting device that converts a PTS (Presentation Time Stamp) timestamp of the layer code streams of the media data into a mobile multimedia broadcast timestamp under a unified time reference and controls the layer code streams, wherein the front-end transmitting device calculates, according to the PTS timestamp in the PES (Packetized Elementary Stream) packet and the PCR time information of the PES packet, the PCR time of each layer of the code stream, wherein the front-end transmitting device divides the PCR time of the layer code stream by the clock scale of the PCR and multiplies by the mobile multimedia broadcast time scale to obtain a mobile multimedia broadcast time stamp of the layer code stream, and wherein the front-end transmitting device broadcasts the media data that needs to be sent together with the mobile multimedia broadcast timestamp of the layer code streams of the media data, thereby improving efficiency since the uniformity of time stamps in mobile multimedia broadcasts ensures synchronization between streams of different layered services (see page 6 lines 10-32).
Claims 13, 14 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (US 2004/0125124 A1)(hereinafter Kim) as applied to claims 1, 4, 7-9, 11, 15 and 20 above, and further in view of Choi (US 2007/0168188 A1)(hereinafter Choi), and further in view of Chen (WO 2012068898 A1)(hereinafter Chen), and further in view of Shaffer et al. (US 2014/0140417 A1)(hereinafter Shaffer).
Re claim 13, the combination of Kim, Choi and Chen as discussed in claim 12 above discloses all the claimed limitations but fails to explicitly teach wherein the one or more PTSs are monotonically increasing, and wherein units used in the unified time scale are selected to maximize wrap-around time for the group of media items. However, the reference of Shaffer explicitly teaches wherein the one or more PTSs are monotonically increasing, and wherein units used in the unified time scale are selected to maximize wrap-around time for the group of media items (see ¶ 76 for the one or more PTSs are monotonically increasing, and wherein units used in the unified time scale are selected to maximize wrap-around time for the group of media items (i.e. at the wrap of the first PTS cycle, the next fragment boundary timestamp doesn't start at PTS=0 but rather at the last fragment boundary of the first PTS cycle + Fragment Length (modulo 2^33), in this way, the fragments and segments have the same length at the PTS wrap and no PTS discontinuities occur for the frame rate reduced profiles as described in fig. 6 paragraph 77)).
Therefore, taking the combined teachings of Kim, Choi, Chen and Shaffer as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this feature (wrap-around) into the system of Kim as taught by Shaffer.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Kim as taught by Shaffer for the benefit of having a video synchronization procedure that considers multiple successive PTS cycles, wherein, depending upon the current cycle as determined by the source PTS values, the position of the theoretical fragment/segment boundaries will change, wherein at the wrap of the first PTS cycle the next fragment boundary timestamp doesn't start at PTS=0 but rather at the last fragment boundary of the first PTS cycle + Fragment Length (modulo 2^33), wherein in this way the fragments and segments have the same length at the PTS wrap and no PTS discontinuities occur for the frame rate reduced profiles, and wherein, given the video frame rate, the number of frames per fragment and the number of fragments per segment, a lookup table 212 (FIG. 2) is built that contains all fragment and segment boundaries for all PTS cycles, in order to improve efficiency when, upon reception of an input PTS value, the current PTS cycle is determined and a lookup is performed in lookup table 212 to find the next fragment/segment boundary (see ¶ 77).
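As an illustrative sketch only (the constant and the function below are assumptions, not Shaffer's code), the boundary-continuation rule cited above can be expressed as:

    PTS_MODULUS = 2 ** 33  # the PTS field wraps modulo 2^33

    def next_fragment_boundary(last_boundary_pts, fragment_length_pts):
        # Continue fragment boundaries across the PTS wrap instead of restarting
        # at PTS = 0, so fragments and segments keep the same length at the wrap.
        return (last_boundary_pts + fragment_length_pts) % PTS_MODULUS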
Re claim 14, the combination of Kim, Choi, Chen and Shaffer as discussed in claim 13 above discloses all the claimed limitations but fails to explicitly teach wherein the units selected to maximize wrap-around time for the group of media items are selected based on video frame rate. However, the reference of Shaffer explicitly teaches wherein the units selected to maximize wrap-around time for the group of media items are selected based on video frame rate (see ¶ 76 for the units selected to maximize wrap-around time for the group of media items are selected based on video frame rate (i.e. at the wrap of the first PTS cycle, the next fragment boundary timestamp doesn't start at PTS=0 but rather at the last fragment boundary of the first PTS cycle + Fragment Length (modulo 2^33), in this way, the fragments and segments have the same length at the PTS wrap and no PTS discontinuities occur for the frame rate reduced profiles as described in fig. 6 paragraph 77)).
Therefore, taking the combined teachings of Kim, Choi, Chen and Shaffer as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this feature (wrap-around) into the system of Kim as taught by Shaffer.
Per claim 14, Kim, Choi, Chen and Shaffer are combined for the same motivation as set forth in claim 13 above.
Re claim 18, the combination of Kim, Choi and Chen as discussed in claim 17 above discloses all the claimed limitations but fails to explicitly teach wherein implementing the converted input PTSs avoids PTS counter wrap-around. However, the reference of Shaffer explicitly teaches wherein implementing the converted input PTSs avoids PTS counter wrap-around (see ¶ 53 for implementing the converted input PTSs avoids PTS counter wrap-around (i.e. this means that the last fragment before the wrap of the PTS counter will be longer than the other fragments and the last fragment ends at the PTS wrap))
Therefore, taking the combined teachings of Kim, Choi, Chen and Shaffer as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this feature (wrap-around) into the system of Kim as taught by Shaffer.
One of ordinary skill in the art would have been motivated to incorporate the above feature into the system of Kim as taught by Shaffer for the benefit of extending the last fragment in the PTS cycle to the end of the PTS cycle, meaning that the last fragment before the wrap of the PTS counter will be longer than the other fragments and the last fragment ends at the PTS wrap, in order to address the issue that arises from using a PTS value as a time reference for video synchronization, namely that the PTS value wraps around back to zero after approximately 26.5 hours (see ¶ 53).
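For reference, the approximately 26.5-hour wrap period cited from Shaffer is consistent with the standard MPEG-2 presentation time stamp, a 33-bit field counted at 90 kHz: 2^33 ticks / 90,000 ticks per second ≈ 95,444 seconds ≈ 26.5 hours.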
Conclusion
THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSE M MESA whose telephone number is (571)270-1706. The examiner can normally be reached Monday-Friday 8:30AM-6:00PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thai Tran can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
3/12/2026
/JOSE M. MESA/
Examiner
Art Unit 2484
/THAI Q TRAN/Supervisory Patent Examiner, Art Unit 2484