Prosecution Insights
Last updated: April 19, 2026
Application No. 18/791,436

IMAGE OUTPUT CONTROL DEVICE AND METHOD

Non-Final OA (§103, §112)
Filed: Aug 01, 2024
Examiner: LIU, GORDON G
Art Unit: 2618
Tech Center: 2600 (Communications)
Assignee: Realtek Semiconductor Corporation
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
OA Rounds: 1-2
Time to Grant: 2y 4m
With Interview: 98%

Examiner Intelligence

Career Allowance Rate: 83% (556 granted / 673 resolved), +20.6% vs. Tech Center average (above average)
Interview Lift: strong, +15.1% allowance among resolved cases with an interview
Typical Timeline: 2y 4m average prosecution; 29 applications currently pending
Career History: 702 total applications across all art units
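For readers who want to sanity-check the ratios above, a small Python sketch reproduces the arithmetic. The granted/resolved counts come from this page; the with/without-interview split below is hypothetical (only the lift itself is reported), so the printed lift will not exactly match the page's +15.1%.

granted, resolved = 556, 673
allow_rate = granted / resolved
print(f"Career allowance rate: {allow_rate:.1%}")  # ~82.6%, shown as 83%

# Hypothetical split for illustration only: lift = rate(with) - rate(without).
with_interview = (142, 150)                     # (granted, resolved) among interviewed cases
without_interview = (granted - 142, resolved - 150)
lift = (with_interview[0] / with_interview[1]
        - without_interview[0] / without_interview[1])
print(f"Interview lift: {lift:+.1%}")           # ~+15.5% with this split; page reports +15.1%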

Statute-Specific Performance

§101: 6.7% (-33.3% vs TC avg)
§103: 73.3% (+33.3% vs TC avg)
§102: 3.0% (-37.0% vs TC avg)
§112: 5.7% (-34.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 673 resolved cases
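Each delta above is simply the examiner's rate minus the Tech Center baseline, and all four deltas are consistent with a single flat 40.0% baseline estimate (our inference from the numbers, not something the page states). A short sketch that regenerates the table under that assumption:

examiner_rates = {"§101": 6.7, "§103": 73.3, "§102": 3.0, "§112": 5.7}  # percent, from the page
TC_BASELINE = 40.0  # assumed flat Tech Center average estimate

for statute, rate in examiner_rates.items():
    print(f"{statute}: {rate}% ({rate - TC_BASELINE:+.1f}% vs TC avg)")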

Office Action

Rejections: §103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-19 are pending in this Office action.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

A broad range or limitation together with a narrow range or limitation that falls within the broad range or limitation (in the same claim) may be considered indefinite if the resulting claim does not clearly set forth the metes and bounds of the patent protection desired. See MPEP § 2173.05(c). In the present instance, claims 9 and 19 recite the broad recitation "wherein when the reading position of the storage device does not exceed the start writing position, the controller is configured to read the first frame and the second frame from the plurality of first input frames stored in the storage device corresponding to the first frame rate, and when the reading position of the storage device reaches or exceeds the start writing position, the controller is configured to read the first frame and the second frame from the plurality of second input frames stored in the storage device corresponding to the second frame rate", and the claim also recites "the first frame and the second frame are respectively the first input frame and the second input frame", which is the narrower statement of the range/limitation. The claims are considered indefinite because there is a question or doubt as to whether the feature introduced by such narrower language is (a) merely exemplary of the remainder of the claim, and therefore not required, or (b) a required feature of the claims.

Claim 9 recites the limitation "the first input frame and the second input frame" in "wherein when the first frame and the second frame are respectively the first input frame and the second input frame". There is insufficient antecedent basis for this limitation in the claim. Note that independent claim 1 defines "the first input frames and the second input frames".

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Oshima et al. (US 20030053797 A1) in view of Braness et al. (US 20120173751 A1).

Regarding claim 1, Oshima teaches an image output control device (See Oshima: Fig. 5, and [0074], "The reproducing operation is described by referring to the block diagram of 3D reproducing device of the invention in FIG. 5, and the timing chart in FIG. 6. When a signal is reproduced from the optical disk 1 by an optical head 15 and an optical reproducing circuit 24, and a stereoscopic video identifier is detected by a stereoscopic video arrangement information reproducing unit 26, or when video data designated to be stereoscopic video in a stereoscopic video arrangement table 14 as shown in FIG. 4 is reproduced, if a stereoscopic video output is instructed from an input unit 19 or the like, the stereoscopic video is processed, and, at the same time, a SW unit 27 is controlled, and R signal and L signal are issued from an R output unit 29 and an L output unit 30, and R and L are issued alternately in each field from an RL mixed output unit 28"), comprising:

a storage device configured to sequentially store a plurality of first input frames (See Oshima: Figs. 1-2, and [0066], "FIG. 1 is a block diagram of an optical disk recording device 2 of the invention. A signal for the right eye of a stereoscopic image is called an R-TV signal, and a signal for the left eye is called an L-TV signal, and the R-TV signal and L-TV signal are compressed into MPEG signals by MPEG encoders 3a, 3b, and an R-MPEG signal and an L-MPEG signal as shown in FIG. 2(2) are obtained. These signals are interleaved in an interleave circuit 4, as shown in FIG. 2(3), so that an R frame group 6 by combining R frames 5 of R-MPEG signals by the number of frames of one GOP or more into a frame group, and an L frame group 8 by combining L frames 7 of L-MPEG signals by the number of frames of one GOP or more may be disposed alternately. This recording unit is called an interleaved block, or called a frame group in the specification. In order that the right-eye signal and left-eye signal may be synchronized when reproducing, the number of frames in the R frame group 6 and L frame group 8 is same as the number of frames in the same duration. This is also called the video data unit, and in one unit, data for the duration of 0.4 sec to 1 sec is recorded. In the case of DVD, on the other hand, the innermost circumference is 1440 rpm, that is, 24 Hz. Accordingly, as shown in FIG. 2(4), the interleaved block is recorded for more than one revolution to more than ten revolutions of the disk". Note that the R frame groups and L frame groups are recorded on the disk sequentially, and the R frame group is mapped to the plurality of first input frames, but the first input frames may be the R-frames, the L-frames, a first video recorded on the disk, etc.) corresponding to a first frame rate and a plurality of second input frames (See Oshima: Figs. 1-2, and [0066], quoted above. Note that the L frame group is mapped to the plurality of second input frames, but the second input frames may be the L-frames, the R-frames, a second video recorded on the disk, etc.) corresponding to a second frame rate in image data;

a controller configured to record a start writing position of the plurality of second input frames in the storage device (See Oshima: Figs. 4-5, and [0078], "Next is described the procedure of rotating at single speed and taking out only R signal. The standard rotation of the DVD reproducing device is called the single speed, and double rotation of the standard is called the double speed. Since it is not necessary to rotate the motor 34 at double speed, a single speed command is sent from a control unit 21 to a rotating speed change circuit 35, and the rotating speed is lowered. The procedure of taking out only R signal at single speed from the optical disk in which R signal and L signal are recorded is described by referring to the time chart in FIG. 8. As explained in FIGS. 6(1), (2), R frame groups 6 and L frame groups 8 are alternately recorded in the optical disk of the invention. This state is shown in FIGS. 8(1), (2)"; [0066], "As shown in FIG. 4, the channel numbers arranging R and L stereoscopic videos, start address and end address are presented. On the basis of such arrangement information and identification information, in the reproducing device, stereoscopic videos are correctly issued as R and L outputs"; and [0089], "As shown in the recorded data on the optical disk in the time chart (1) in FIG. 14, A1 data and the beginning address a5 of the first interleaved block 56a to be accessed next are recorded in the first interleaved block 56. That is, since the next pointer 60 is recorded, as shown in FIG. 14(2), when reproduction of the first interleaved block 56 is over, only by accessing the address of the pointer 60a, by jumping tracks, a next first interleaved block 56a is accessed in 100 msec, so that A2 data can be reproduced. Similarly, A3 data is reproduced. Thus, contents A3 can be reproduced continuously". Note that the start writing position of any second video different from the first video frames, such as the address a2 or pointer a6 for video B1, a3 for video C1, or a4 for video D1, is mapped to the writing position of the second input frames), and provide a control signal to the storage device to set a reading position of the storage device (See Oshima: Figs. 5 and 35, and [0141], "In this case, since only the first video signal as basic story is reproduced usually, after the first stream 111a, a next first stream 11b is reproduced and issued consecutively. However, at the moment of t=tc, when the user commands to change over to the second video signal from the command input unit 19 in FIG. 5, at t=tc, the track at other radius position is accessed by using the tracking control circuit 22 in FIG. 5 from the first stream 111a to the second stream 112b, and the output signal is changed over to the second stream 112b of the second video signal". Note that the control signal that moves the optical head to a different reading position/track is mapped to the reading position; in video processing, a time or timestamp also maps to the position of a video frame if the starting time and frame rate are known), so as to read a first frame and a second frame from the storage device (See Oshima: Figs. 5 and 35, and [0141], quoted above; and [0142], "Thus, when the first video signal is at the time of t=tc in FIG. 35(2), the picture, sound and sub-picture of the second video signal are changed over smoothly without interruption". Note that several frames of the second video are read out after tc when the reading control signal is executed to read the second video, and the first two frames of the second video are mapped to the first frame and the second frame),

wherein when the reading position of the storage device does not exceed the start writing position, the controller is configured to read the first frame and the second frame from the plurality of first input frames stored in the storage device corresponding to the first frame rate (See Oshima: Figs. 5, 13-14 and 35, and [0141] and [0142], quoted above, and [0083], "FIG. 13 is a time chart of stereoscopic video identifier and output signal. If the time after FIG. 13(3) is defined as one interleaved block time unit, there is a delay time of It, but it is not shown in the chart. The stereoscopic video identifier in FIG. 13(1) is changed from 1 to 0 at t=t7. As recorded signals in FIG. 13(2), from t1 to t7, R frame groups 6, 6a, 6b and L frame groups 8, 8a, 8b of stereoscopic videos are recorded. In t7 to t11, on the other hand, completely different contents A and B are recorded as first frame groups 44, 44a, and second frame groups 45, 45a. In the standard of DVD, etc., there is no definition of stereoscopic video, and hence stereoscopic video identifier is not included in the data or directory information. Therefore, upon start of the optical disk, it is required to read out the stereoscopic video arrangement information file of the invention. In R output and L output in FIG. 13(3), (4), from t1 to t7, the data in first time domains 46, 46a, 46b may be directly issued to R output, and the data in second time domains 47, 47a, 47b, directly to L output. After t=t7, there is no stereoscopic video identifier, and therefore the same data as in first time domains 46c, 46d are issued to the R output and L output. In other output system, that is, in a mixed output in FIGS. 13(5), (6), from t1 to t7 in which the stereoscopic video identifier is 1, at the field frequency of 60 Hz or 120 Hz, even field signals 48, 48a and odd field signals 49, 49a are issued alternately from one output. The data of the first time domains 46, 46a are issued to the even field signals, and the data of the second time domains 47, 47a, to the odd field signals". Note that in Fig. 13, before t7 (between t1 and t7), the stereoscopic video is read out, and the first frame and the second frame of the first input video (the stereoscopic video) are mapped to the first and second frames; the current position of the optical head, or the current time (between t1 and t7), is mapped to the reading position, and it is less than t7, the writing position/time of the second video), and when the reading position of the storage device reaches or exceeds the start writing position, the controller is configured to read the first frame and the second frame from the plurality of second input frames stored in the storage device corresponding to the second frame rate (See Oshima: Figs. 5, 13-14 and 35, and [0141], [0142], and [0083], quoted above. Note that after t7 the second video is read out from the disk; t7 is the writing position of the second video A, and the current time being greater than t7 maps to the reading position reaching or exceeding the writing position).

However, Oshima fails to explicitly disclose a plurality of first input frames corresponding to a first frame rate and a plurality of second input frames corresponding to a second frame rate in image data. Braness teaches a plurality of first input frames corresponding to a first frame rate and a plurality of second input frames corresponding to a second frame rate in image data (See Braness: Fig. 1, and [0060], "An adaptive streaming system in accordance with an embodiment of the invention is illustrated in FIG. 1. The adaptive streaming system 10 includes a source encoder 12 configured to encode source media as a number of alternative streams. In the illustrated embodiment, the source encoder is a server. In other embodiments, the source encoder can be any processing device including a processor and sufficient resources to perform the transcoding of source media (including but not limited to video, audio, and/or subtitles). As is discussed further below, the source encoding server 12 generates a top level index to a plurality of container files containing the streams, at least a plurality of which are alternative streams. Alternative streams are streams that encode the same media content in different ways. In many instances, alternative streams encode media content (such as but not limited to video) at different bitrates. In a number of embodiments, the alternative streams are encoded with different resolutions and/or at different frame rates. The top level index file and the container files are uploaded to an HTTP server 14. A variety of playback devices can then use HTTP or another appropriate stateless protocol to request portions of the top level index file and the container files via a network 16 such as the Internet". Note that a server with large memory can store the videos in different resolutions and frame rates, i.e., the video sources include first frame rate video, second frame rate video, and more).
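To make the gating limitation at the heart of claim 1 concrete, here is a minimal Python sketch (ours, not from the application or the cited references) of reading a frame pair against a recorded start-writing position; the buffer layout and all names are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FrameBuffer:
    frames: list = field(default_factory=list)   # sequentially stored input frames
    start_write_pos: Optional[int] = None        # start writing position of the second input frames

    def write(self, frame, second_stream=False):
        if second_stream and self.start_write_pos is None:
            self.start_write_pos = len(self.frames)  # controller records the transition point
        self.frames.append(frame)

def read_pair(buf: FrameBuffer, read_pos: int):
    """Return (first frame, second frame, rate tag) per the claimed gating."""
    if buf.start_write_pos is None or read_pos < buf.start_write_pos:
        tag = "first_rate"    # reading position does not exceed the start writing position
    else:
        tag = "second_rate"   # reading position reaches or exceeds it
    return buf.frames[read_pos], buf.frames[read_pos + 1], tag

buf = FrameBuffer()
for f in ["A0", "A1", "A2"]:
    buf.write(f)                        # first input frames, first frame rate
for f in ["B0", "B1", "B2"]:
    buf.write(f, second_stream=True)    # second input frames, second frame rate
print(read_pair(buf, 1))  # ('A1', 'A2', 'first_rate')
print(read_pair(buf, 3))  # ('B0', 'B1', 'second_rate')

Only the position comparison comes from the claim language; everything else is scaffolding to make it runnable.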
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Oshima to have a plurality of first input frames corresponding to a first frame rate and a plurality of second input frames corresponding to a second frame rate in image data, as taught by Braness, in order to reduce interruption to playback by increasing the speed with which the device can switch between streams and reducing the amount of overhead data downloaded to obtain the switch (See Braness: Figs. 9A-B, and [0112], "The process illustrated in FIG. 9b is ideally performed when adapting bitrate downwards, because a reduction in available resources can be exacerbated by a need to download index information in addition to media. The likelihood of interruption to playback is reduced by increasing the speed with which the playback device can switch between streams and reducing the amount of overhead data downloaded to achieve the switch"). Oshima teaches a method and system that may record high resolution stereoscopic videos on optical disks and play back the videos on a display device, where stereo or non-stereo videos may be recorded in tracks on the disks with video identifier information, while Braness teaches a system and method that may stream multimedia content to users at adaptive bitrates by storing multimedia content in various resolutions and frame rates in server files, with indexes and headers to locate the content files. Therefore, it is obvious to one of ordinary skill in the art to modify Oshima by Braness to have videos of various frame rates stored on the large-capacity optical disk. The motivation to modify Oshima by Braness is "Use of known technique to improve similar devices (methods, or products) in the same way".

Regarding claim 10, Oshima and Braness teach all the features with respect to claim 1 as outlined above. Further, Oshima and Braness teach an image output control method (See Oshima: Fig. 5, and [0074], quoted above), comprising: writing image data into a storage device, wherein the image data comprises a plurality of first input frames (See Oshima: Figs. 1-2, and [0066], quoted above. Note that the R frame groups and L frame groups are recorded on the disk sequentially, and the R frame group is mapped to the plurality of first input frames, but the first input frames may be the R-frames, the L-frames, a first video recorded on the disk, etc.) corresponding to a first frame rate (See Braness: Fig. 1, and [0060], quoted above) and a plurality of second input frames (See Oshima: Figs. 1-2, and [0066], quoted above. Note that the L frame group is mapped to the plurality of second input frames) corresponding to a second frame rate (See Braness: Fig. 1, and [0060], quoted above); recording a start writing position of the plurality of second input frames in the storage device (See Oshima: Figs. 4-5, and [0078], [0066], and [0089], quoted above. Note that the start writing position of any second video different from the first video frames, such as the address a2 or pointer a6 for video B1, a3 for video C1, or a4 for video D1, is mapped to the writing position of the second input frames) when the written image data converts from the plurality of first input frames to the plurality of second input frames (See Braness: Fig. 1, and [0060], quoted above. Note that an alternative stream is encoded at a different frame rate and stored in a different location under a different file name, which is mapped to "converts from the plurality of first input frames to the plurality of second input frames"); setting a reading position of the storage device (See Oshima: Figs. 5 and 35, and [0141], quoted above. Note that the control signal that moves the optical head to a different reading position/track is mapped to the reading position; in video processing, a time or timestamp also maps to the position of a video frame if the starting time and frame rate are known) so as to read a first frame and a second frame from the storage device (See Oshima: Figs. 5 and 35, and [0141] and [0142], quoted above. Note that several frames of the second video are read out after tc when the reading control signal is executed to read the second video, and the first two frames of the second video are mapped to the first frame and the second frame); reading the plurality of first input frames of the storage device corresponding to the first frame rate as the first frame and the second frame when the reading position of the storage device does not exceed the start writing position (See Oshima: Figs. 5, 13-14 and 35, and [0141], [0142], and [0083], quoted above. Note that in Fig. 13, before t7 (between t1 and t7), the stereoscopic video is read out, and the first frame and the second frame of the first input video are mapped to the first and second frames; the current position of the optical head, or the current time (between t1 and t7), is mapped to the reading position, and it is less than t7, the writing position/time of the second video); and reading the plurality of second input frames of the storage device corresponding to the second frame rate as the first frame and the second frame when the reading position of the storage device reaches or exceeds the start writing position (See Oshima: Figs. 5, 13-14 and 35, and [0141], [0142], and [0083], quoted above. Note that after t7 the second video is read out from the disk; t7 is the writing position of the second video A, and the current time being greater than t7 maps to the reading position reaching or exceeding the writing position).

Claims 2-6 and 11-15 are rejected under 35 U.S.C. 103 as being unpatentable over Oshima et al. (US 20030053797 A1) in view of Braness et al. (US 20120173751 A1), further in view of Halna et al. (US 20120050613 A1).

Regarding claim 2, Oshima and Braness teach all the features with respect to claim 1 as outlined above. However, Oshima, modified by Braness, fails to explicitly disclose the image output control device of claim 1, wherein the controller is configured to perform compensation on the first frame and the second frame to provide a plurality of output frames to a display, wherein the plurality of output frames correspond to a display frame rate of the display. Halna teaches the image output control device of claim 1, wherein the controller is configured to perform compensation on the first frame and the second frame to provide a plurality of output frames to a display, wherein the plurality of output frames correspond to a display frame rate of the display (See Halna: Figs. 1A-C, and [0025]-[0027], "obtaining a frame rate, referred to as target frame rate, which is common for the devices, [0026] adapting, by each device of the set of sending devices or of the set of receiving devices, a source video stream received from a source, respectively, directly or via a sending device, from the source frame rate to the target frame rate, [0027] adjusting, at each receiving device, the display frame rate to the target frame rate so as to control a display at said target frame rate"; and [0133], "In this scenario, each sending node 102 executes operations for computing source, reference or target frame rate and duplication period as described below, for the source 101 to which it is attached." Note that a target frame rate common to the sending device and the receiving device is determined according to the source frame rate and the display frame rate; the source video is adapted to the target frame rate, and the display frame rate is adjusted to the target frame rate. This mechanism is mapped to performing compensation on the input frames (adapting the video source to a target frame rate that depends on both the display frame rate and the source frame rate) to provide the output frames for a display that is likewise adjusted to the target frame rate based on its original display frame rate).
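As a rough illustration of the compensation idea in claims 2, 4, and 5 (the number of output frames tracking the ratio between the display frame rate and the source frame rate, discussed below), here is a minimal sketch under that assumption; the frame-duplication strategy and all names are ours, not from the application or Halna.

def compensate(frames, source_fps: float, display_fps: float):
    """Repeat input frames so len(output) ~= len(frames) * display_fps / source_fps."""
    ratio = display_fps / source_fps   # e.g. 60 Hz display / 30 fps source -> 2.0
    output, acc = [], 0.0
    for frame in frames:
        acc += ratio
        while acc >= 1.0:              # emit one output frame per display tick owed
            output.append(frame)
            acc -= 1.0
    return output

# 4 source frames at 30 fps on a 60 Hz display -> 8 output frames.
print(compensate(["f0", "f1", "f2", "f3"], source_fps=30, display_fps=60))

Real frame-rate converters interpolate rather than duplicate; duplication is used here only to keep the ratio arithmetic visible.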
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filling date of the claimed invention was effectively filed to modify Oshima to have the image output control device of claim 1, wherein the controller is configured to perform compensation on the first frame and the second frame to provide a plurality of output frames to a display, wherein the plurality of output frames correspond to a display frame rate of the display as taught by Halna in order to enable synchronizing the devices of video image stream distribution system in a simple and inexpensive manner (See Halna: Figs. 1A-C, and [0085], “The system has similar advantages to those of the method set out above, in particular that of enabling switching of sources (via switching of sending devices for example) or of receiving devices provided for driving a display, without re-synchronization of the pair of devices communicating together”). Oshima teaches a method and system that may record high resolution stereoscopic vides in optical disks and playback the videos in the display device, stereo or non-stereo videos may be recorded in tracks on the disks with video identifier information; while Halna teaches a system and method that may adapt the source video frame rate to suit for the display through a target frame rate common to both the source and receiving devices to simplify the synchronization when sources or display devices are changed. Therefore, it is obvious to one of ordinary skill in the art to modify Oshima by Halna to adapt the source video frame rate to the display frame rate. The motivation to modify Oshima by Halna is “Use of known technique to improve similar devices (methods, or products) in the same way”. Regarding claim 3, Oshima, Braness, and Halna teach all the features with respect to claim 2 as outlined above. Further, Halna teaches that the image output control device of claim 2, wherein the display frame rate is greater than or equal to the first frame rate or the second frame rate (See Halna: Figs. 6A-B, and [0428], “Returning to step 627, the module 506 triggers the sending of an "add line" warning message to all the other nodes of the network. More particularly, it is possible to be in this situation only if the "Vsync" signal 508 generated by the current receiving node 103 has become faster than the target synchronization signal cadencing the received video data. This means that the receiving node has a display frame rate higher than the reference frame rate (by definition the highest of the FlmLocal). The warning message thus makes it possible to update the target frame rate in each node of the network”. Note that the display frame rate is higher than the reference frame rate which is common to the source frame rate and the receiving device frame rate, this scenario is mapped to the display frame rate is greater than or equal to the first frame rate or the second frame rate (source frame rates). Regarding claim 4, Oshima, Braness, and Halna teach all the features with respect to claim 2 as outlined above. Further, Oshima teaches that the image output control device of claim 2, wherein when the reading position of the storage device does not exceed the start writing position, the amount of the plurality of output frames is determined by a ratio between the display frame rate and the first frame rate (See Oshima: Figs. 13, 39, and 45, and [0083], “FIG. 13 is a time chart of stereoscopic video identifier and output signal. If the time after FIG. 
13(3) is defined as one interleaved block time unit, there is a delay time of It, but it is not shown in the chart. The stereoscopic video identifier in FIG. 13(1) is changed from 1 to 0 at t=t7. As recorded signals in FIG. 13(2), from t1 to t7, R frame groups 6, 6a, 6b and L frame groups 8, 8a, 8b of stereoscopic videos are recorded. In t7 to till, on the other hand, completely different contents A and B are recorded as first frame groups 44, 44a, and second frame groups 45, 45a. In the standard of DVD, etc., there is no definition of stereoscopic video, and hence stereoscopic video identifier is not included in the data or directory information. Therefore, upon start of the optical disk, it is required to read out the stereoscopic video arrangement information file of the invention. In R output and L output in FIG. 13(3), (4), from t1 to t7, the data in first time domains 46, 46a, 46b may be directly issued to R output, and the data in second time domains 47, 47a, 47b, directly to L output. After t=t7, there is no stereoscopic video identifier, and therefore the same data as in first time domains 46c, 46d are issued to the R output and L output. In other output system, that is, in a mixed output in FIGS. 13(5), (6), from t1 to t7 in which the stereoscopic video identifier is 1, at the field frequency of 60 Hz or 120 Hz, even field signals 48, 48a and odd field signals 49, 49a are issued alternately from one output. The data of the first time domains 46, 46a are issued to the even field signals, and the data of the second time domains 47, 47a, to the odd field signals”; [0166], “First, the stream A is reproduced by double speed rotation, and accumulation of data in the first track buffer 23a in the track buffer 23 is started. This state is shown in FIG. 45(1), in which at t=t1 to t2, data is accumulated in the portion of one interleaved block (ILB) I1 of first video signal in the period of one interleave time T1. The data quantity in the first track buffer increases, and at t=t2, it increases to the data quantity of one ILB, and accumulation of data for the portion of one ILB of the first video signal is complete. At t=t2, after finishing accumulation of the portion of one ILB over one GOP of the first video signal, this time, the second video signal of the stream B is reproduced from a next interleaved block I2 of the optical disk, and as indicated by a solid line in FIG. 45(4), at t=t2, accumulation of data of second video signal is stated in a second track buffer 23b, and data is accumulated in the second track buffer 23b up to t=t6. At the same time, from t=t2 to t8, as shown in FIGS. 45(7), (10), the first video signal and second video signal are fed into the first video decoder 69c and second video decoder 69d from the track buffer 23a and track buffer 23b by synchronizing the video presentation time stamp, that is, the time of VPTS. These input signals are, as shown in FIGS. 45(8), (11), are issued as two sets of expanded video data from the first video decoder 69c and second video decoder 69d, from time t=t3 delayed by the video delay time twd as the MPEG expansion process time. From t=t4 to t10, the two video data of stream A and stream B are combined into a progressive signal in the progressive transforming unit 170, and the progressive signal for the portion of one interleaved block is issued”; and [0167], “Thus, from t=t2 to t8, data of one interleaved block is put into the decoder. 
Therefore, nearly at a same rate, data in the first track buffer 23a and second track buffer 23b are consumed and decreased”. Note that the time t1-tn is mapped to the reading position, and the current time (the optical head position) is mapped to the current reading position, when t<t7, all stereoscopic videos are read out, the system has the same source frame rate and the display frame rate, i.e., the ratio of the frame rate and the display frame rate is 1, and the whole R-frame GOP and L-frame GOP are read out and displayed when the reading position is less than t7, and this is mapped to the instant cited limitation in this dependent claim 4). Regarding claim 5, Oshima, Braness, and Halna teach all the features with respect to claim 2 as outlined above. Further, Oshima teaches that the image output control device of claim 2, wherein when the reading position of the storage device reaches or exceeds the start writing position, the amount of the plurality of output frames is determined by a ratio between the display frame rate and the second frame rate (See Oshima: Fig. 35, and [0141], “In this case, since only the first video signal as basic story is reproduced usually, after the first stream 111a, a next first stream 11b is reproduced and issued consecutively. However, at the moment of t=tc, when the user commands to change over to the second video signal from the command input unit 19 in FIG. 5, at t=tc, the track at other radius position is accessed by using the tracking control circuit 22 in FIG. 5 from the first stream 111a to the second stream 112b, and the output signal is changed over to the second stream 112b of the second video signal”. Note that after the optical head (reading position) was switched to read the second video, the whole section of the section video was read for display, again, the source frame rate is equal to the display frame rate, the frame rate ratio is 1, and the whole amount of the second video was read, and this is mapped to the cited limitation of this dependent claim 5. Note that the cited limitation does not specify how much video was read in for display when the frame rate ratio is at some value). Regarding claim 6, Oshima and Braness teach all the features with respect to claim 1 as outlined above. Further, Halna teaches that the image output control device of claim 1, wherein in response to each of the reading positions of the storage device, the controller is configured to provide a plurality of compensation phase values corresponding to the display frame rate of the display according to the frame rates of the first and second frames (See Halna: Figs. 5A-D, and [0327], “The detection module 505 computes the phase difference between the vertical synchronization signal of the receiving device or node 103, "Vsync" 508 (presenting the display frame rate FlmAff) and the vertical synchronization signal for the received video data, which is progressively reconstituted and represented by the "peer_vsync" signal 507. The implementation of a phase difference detection module is well known to the person skilled in the art and will therefore not be described in more detail here”. Note that the phase difference is mapped to compensation phase value), and in response to each of the compensation phase values, the controller is configured to provide the output frame to the display according to the first frame and the second frame (See Halna: Figs. 
5A-D, and [0371], “The video data corresponding to the image that has just ended are then forwarded from the second storage means to the storage means 501, during step 712. This step provides for the actual duplication of the image within the video data”; and [0372], “If the image is not to be duplicated or further to step 712, step 700 is returned to in order to continue the processing of the following video packets”. Note that the video frames are processed packets by packets based on the received frames and output the processed video frames for display, and this is mapped to “provide the output frame to the display according to the first frame and the second frame”). Regarding claim 11, Oshima and Braness teach all the features with respect to claim 10 as outlined above. Further, Halna teaches that the image output control method of claim 10, further comprising: performing compensation on the first frame and the second frame, to provide a plurality of output frames to a display, wherein the plurality of output frames correspond to a display frame rate (See Halna: Figs. 1A-C, and [0025]-[0027], “obtaining a frame rate, referred to as target frame rate, which is common for the devices, [0026] adapting, by each device of the set of sending devices or of the set of receiving devices, a source video stream received from a source, respectively, directly or via a sending device, from the source frame rate to the target frame rate, [0027] adjusting, at each receiving device, the display frame rate to the target frame rate so as to control a display at said target frame rate”; and [0133], “In this scenario, each sending node 102 executes operations for computing source, reference or target frame rate and duplication period as described below, for the source 101 to which it is attached.” Note that a target frame rate common to the sending device and the receiving device is determined according to the source frame rate and the display frame rate, and adapted the source video to the target frame rate, and adjusted the display frame rate to the target frame rate, this mechanism is mapped to compensation (adaptation the video source to the target frame rate which depends on both display frame rate and the source frame rate) the input frames to provide the output frame for the display that is also adjusted to the target frame rate based on its original display frame rate). Regarding claim 12, Oshima, Braness, and Halna teach all the features with respect to claim 11 as outlined above. Further, Halna teaches that the image output control method of claim 11, wherein the display frame rate is a refresh rate of the display, and the display frame rate is greater than or equal to the first frame rate or the second frame rate (See Halna: Figs. 6A-B, and [0428], “Returning to step 627, the module 506 triggers the sending of an "add line" warning message to all the other nodes of the network. More particularly, it is possible to be in this situation only if the "Vsync" signal 508 generated by the current receiving node 103 has become faster than the target synchronization signal cadencing the received video data. This means that the receiving node has a display frame rate higher than the reference frame rate (by definition the highest of the FlmLocal). The warning message thus makes it possible to update the target frame rate in each node of the network”. 
Regarding claim 13, Oshima and Braness teach all the features with respect to claim 11 as outlined above. Further, Oshima teaches the image output control method of claim 11, wherein when the plurality of first input frames of the storage device have not been completely read, the amount of the plurality of output frames is determined by a ratio between the display frame rate and the first frame rate (See Oshima: Figs. 13, 39, and 45, and [0083], “FIG. 13 is a time chart of stereoscopic video identifier and output signal. If the time after FIG. 13(3) is defined as one interleaved block time unit, there is a delay time of It, but it is not shown in the chart. The stereoscopic video identifier in FIG. 13(1) is changed from 1 to 0 at t=t7. As recorded signals in FIG. 13(2), from t1 to t7, R frame groups 6, 6a, 6b and L frame groups 8, 8a, 8b of stereoscopic videos are recorded. In t7 to t11, on the other hand, completely different contents A and B are recorded as first frame groups 44, 44a, and second frame groups 45, 45a. In the standard of DVD, etc., there is no definition of stereoscopic video, and hence stereoscopic video identifier is not included in the data or directory information. Therefore, upon start of the optical disk, it is required to read out the stereoscopic video arrangement information file of the invention. In R output and L output in FIG. 13(3), (4), from t1 to t7, the data in first time domains 46, 46a, 46b may be directly issued to R output, and the data in second time domains 47, 47a, 47b, directly to L output. After t=t7, there is no stereoscopic video identifier, and therefore the same data as in first time domains 46c, 46d are issued to the R output and L output. In other output system, that is, in a mixed output in FIGS. 13(5), (6), from t1 to t7 in which the stereoscopic video identifier is 1, at the field frequency of 60 Hz or 120 Hz, even field signals 48, 48a and odd field signals 49, 49a are issued alternately from one output. The data of the first time domains 46, 46a are issued to the even field signals, and the data of the second time domains 47, 47a, to the odd field signals”; [0166], “First, the stream A is reproduced by double speed rotation, and accumulation of data in the first track buffer 23a in the track buffer 23 is started. This state is shown in FIG. 45(1), in which at t=t1 to t2, data is accumulated in the portion of one interleaved block (ILB) I1 of first video signal in the period of one interleave time T1. The data quantity in the first track buffer increases, and at t=t2, it increases to the data quantity of one ILB, and accumulation of data for the portion of one ILB of the first video signal is complete. At t=t2, after finishing accumulation of the portion of one ILB over one GOP of the first video signal, this time, the second video signal of the stream B is reproduced from a next interleaved block I2 of the optical disk, and as indicated by a solid line in FIG. 45(4), at t=t2, accumulation of data of second video signal is started in a second track buffer 23b, and data is accumulated in the second track buffer 23b up to t=t6. At the same time, from t=t2 to t8, as shown in FIGS. 45(7), (10), the first video signal and second video signal are fed into the first video decoder 69c and second video decoder 69d from the track buffer 23a and track buffer 23b by synchronizing the video presentation time stamp, that is, the time of VPTS. These input signals, as shown in FIGS. 45(8), (11), are issued as two sets of expanded video data from the first video decoder 69c and second video decoder 69d, from time t=t3 delayed by the video delay time twd as the MPEG expansion process time. From t=t4 to t10, the two video data of stream A and stream B are combined into a progressive signal in the progressive transforming unit 170, and the progressive signal for the portion of one interleaved block is issued”; and [0167], “Thus, from t=t2 to t8, data of one interleaved block is put into the decoder. Therefore, nearly at a same rate, data in the first track buffer 23a and second track buffer 23b are consumed and decreased”. Note that the time axis t1-tn is mapped to the reading position, and the current time (the optical head position) is mapped to the current reading position; when t < t7, all stereoscopic videos are read out, the source frame rate equals the display frame rate (i.e., the ratio is 1), and the whole R-frame GOP and L-frame GOP are read out and displayed while the first input frames have not been completely read out. This is mapped to the cited limitation of this dependent claim 13).

Regarding claim 14, Oshima and Braness teach all the features with respect to claim 11 as outlined above. Further, Oshima teaches the image output control method of claim 11, wherein when the plurality of first input frames of the storage device have been completely read, the amount of the plurality of output frames is determined by a ratio between the display frame rate and the second frame rate (See Oshima: Fig. 35, and [0141], “In this case, since only the first video signal as basic story is reproduced usually, after the first stream 111a, a next first stream 111b is reproduced and issued consecutively. However, at the moment of t=tc, when the user commands to change over to the second video signal from the command input unit 19 in FIG. 5, at t=tc, the track at other radius position is accessed by using the tracking control circuit 22 in FIG. 5 from the first stream 111a to the second stream 112b, and the output signal is changed over to the second stream 112b of the second video signal”. Note that after the first input frames have been completely read out, the second input frames will be read; again, the source frame rate equals the display frame rate, the frame rate ratio is 1, and the whole amount of the second video was read. This is mapped to the cited limitation of this dependent claim 14. Note that the cited limitation does not specify how much video is read in for display when the frame rate ratio takes some other value).
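The claim 13/14 mappings turn on two quantities: whether the reading position has reached the position where the second input frames started being written, and the ratio of the display rate to the currently read source rate. A minimal sketch of that selection logic follows; the structure and all names are assumptions for illustration, not the claimed controller or Oshima's circuit.

```python
# Illustrative sketch only: a reader over a shared frame store that
# drains first-frame-rate frames until its read position reaches the
# start-writing position of the second-frame-rate frames, and sizes
# the output by display_rate / current source rate.

def output_frame_multiplier(read_pos: int, start_write_pos: int,
                            display_rate: float,
                            first_rate: float,
                            second_rate: float) -> float:
    """Output frames produced per input frame at this read position."""
    if read_pos < start_write_pos:
        source_rate = first_rate    # still reading the first input frames
    else:
        source_rate = second_rate   # reading the newly written frames
    return display_rate / source_rate

# Example: a 60 Hz display shows each 60 fps first-input frame once
# (ratio 1) before the switch point, and each 30 fps second-input
# frame twice (ratio 2) at or beyond it.
print(output_frame_multiplier(10, 50, 60.0, 60.0, 30.0))  # 1.0
print(output_frame_multiplier(50, 50, 60.0, 60.0, 30.0))  # 2.0
```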
Regarding claim 15, Oshima and Braness teach all the features with respect to claim 10 as outlined above. Further, Halna teaches the image output control method of claim 10, further comprising: in response to each of the reading positions of the storage device, providing a plurality of compensation phase values corresponding to the display frame rate of the display according to the frame rates of the first and second frames (See Halna: Figs. 5A-D, and [0327], “The detection module 505 computes the phase difference between the vertical synchronization signal of the receiving device or node 103, "Vsync" 508 (presenting the display frame rate FlmAff) and the vertical synchronization signal for the received video data, which is progressively reconstituted and represented by the "peer_vsync" signal 507. The implementation of a phase difference detection module is well known to the person skilled in the art and will therefore not be described in more detail here”. Note that the phase difference is mapped to the compensation phase value); and performing compensation on the first frame and the second frame according to each of the compensation phase values, to provide the output frame to the display (See Halna: Figs. 5A-D, and [0371], “The video data corresponding to the image that has just ended are then forwarded from the second storage means to the storage means 501, during step 712. This step provides for the actual duplication of the image within the video data”; and [0372], “If the image is not to be duplicated or further to step 712, step 700 is returned to in order to continue the processing of the following video packets”. Note that the video frames are processed packet by packet based on the received frames, and the processed video frames are output for display; this is mapped to “provide the output frame to the display”).

Claims 7-8 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Oshima, etc. (US 20030053797 A1) in view of Braness, etc. (US 20120173751 A1), further in view of Halna, etc. (US 20120050613 A1) and Cower (US 20210182540 A1).

Regarding claim 7, Oshima, Braness, and Halna teach all the features with respect to claim 6 as outlined above. However, Oshima, modified by Braness and Halna, fails to explicitly disclose the image output control device of claim 6, wherein when the first frame and the second frame are the plurality of first input frames, the controller is configured to provide the plurality of compensation phase values according to the first frame rate, and the amount of the plurality of compensation phase values is determined by a ratio between the display frame rate and the first frame rate. However, Cower teaches the image output control device of claim 6, wherein when the first frame and the second frame are the plurality of first input frames, the controller is configured to provide the plurality of compensation phase values according to the first frame rate, and the amount of the plurality of compensation phase values is determined by a ratio between the display frame rate and the first frame rate (See Cower: Fig. 8, and [0086], “Frame sequence 820 includes upsampled video and includes frames 811, 822, 823, 824, 815, 826, 827, 828, and 819, which are displayed at a higher framerate. From the received video stream that includes frames 811, 815, and 819, additional frames are obtained by interpolation, as described herein. The additional frames in frame sequence 820 are 822, 823, 824, 826, 827, and 828, and are obtained by interpolation and added to the sequence according to the techniques described herein. As a result of using interpolated frames, the frame sequence 820 can be displayed at a higher framerate (since intermediate frames 822-824 and 826-828 are available) with no jumpiness while the bandwidth utilized to receive the video remains the same as that for frame sequence 810”. Note that the display frame rate is four times higher than the received (source) frame rate, the frame rate ratio is 4, and three extra frames 822, 823, and 824 are inserted evenly between frames 811 and 815; the phases for 811, 822, 823, 824, and 815 are 0, 0.25, 0.5, 0.75, and 1, respectively. The multiple phases for 822, 823, and 824 are 0.25, 0.5, and 0.75, and they are mapped to “the amount of the plurality of compensation phase values is determined by a ratio between the display frame rate and the first frame rate”. Note that the amount of compensation phase values is mapped to the positions of the interpolated frames relative to the received frame 811). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Oshima to have the image output control device of claim 6, wherein when the first frame and the second frame are the plurality of first input frames, the controller is configured to provide the plurality of compensation phase values according to the first frame rate, and the amount of the plurality of compensation phase values is determined by a ratio between the display frame rate and the first frame rate, as taught by Cower, in order to use reduced computational power to display video with a perceived higher frame rate (See Cower: Fig. 1, and [0029], “The various embodiments described below have several advantages. First, the processing is performed by the user device that displays the video. As a result, the video application uses reduced computational power to display video with a perceived higher frame rate. Second, the embodiments also provide higher frame rates than received video frame rate, even when the video is received with the use of end-to-end encryption between a sender device and a receiver device that displays the video. Third, the interpolation is computationally efficient because the structure of the video frames is interpolated and not the texture”). Oshima teaches a method and system that may record high resolution stereoscopic videos on optical disks and play back the videos on a display device, where stereo or non-stereo videos may be recorded in tracks on the disks with video identifier information; Cower teaches a system and method that may adjust received video to a higher display frame rate by interpolation, with compensation phases distributed evenly between consecutive received frames. Therefore, it would have been obvious to one of ordinary skill in the art to modify Oshima by Cower to adjust the received frames with phase compensations to generate high display frame rate videos. The motivation to modify Oshima by Cower is “use of a known technique to improve similar devices (methods, or products) in the same way”.
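The arithmetic the examiner reads onto Cower's Fig. 8 is simple enough to state in code: an integer display-to-source ratio of N yields N-1 evenly spaced interpolation phases. The sketch below is an editorial illustration of that computation, not Cower's implementation; the function name and the integer-ratio assumption are ours.

```python
# Illustrative sketch only: generate the evenly spaced compensation
# phase values implied by a display-to-source frame rate ratio, as in
# the Fig. 8 mapping (ratio 4 -> phases 0.25, 0.5, 0.75 between two
# consecutive received frames).

def compensation_phases(display_rate: float, source_rate: float) -> list:
    """Phases of the interpolated frames between two received frames,
    assuming an integer display/source ratio."""
    ratio = round(display_rate / source_rate)
    return [k / ratio for k in range(1, ratio)]

print(compensation_phases(120, 30))  # [0.25, 0.5, 0.75]
```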
Regarding claim 8, Oshima, Braness, and Halna teach all the features with respect to claim 6 as outlined above. Further, Cower teaches the image output control device of claim 6, wherein when the first frame and the second frame are the plurality of second input frames, the controller is configured to provide the plurality of compensation phase values according to the second frame rate, and the amount of the plurality of compensation phase values is determined by a ratio between the display frame rate and the second frame rate (See Cower: Fig. 8, and [0086], “Frame sequence 820 includes upsampled video and includes frames 811, 822, 823, 824, 815, 826, 827, 828, and 819, which are displayed at a higher framerate. From the received video stream that includes frames 811, 815, and 819, additional frames are obtained by interpolation, as described herein. The additional frames in frame sequence 820 are 822, 823, 824, 826, 827, and 828, and are obtained by interpolation and added to the sequence according to the techniques described herein. As a result of using interpolated frames, the frame sequence 820 can be displayed at a higher framerate (since intermediate frames 822-824 and 826-828 are available) with no jumpiness while the bandwidth utilized to receive the video remains the same as that for frame sequence 810”. Note that the display frame rate is four times higher than the received (source) frame rate, the frame rate ratio is 4, and three extra frames 822, 823, and 824 are inserted evenly between frames 811 and 815; the phases for 811, 822, 823, 824, and 815 are 0, 0.25, 0.5, 0.75, and 1, respectively. The multiple phases for 822, 823, and 824 are 0.25, 0.5, and 0.75, and they are mapped to “the amount of the plurality of compensation phase values is determined by a ratio between the display frame rate and the second frame rate”. Note that the amount of compensation phase values is mapped to the positions of the interpolated frames relative to the received frame 811. Note that Cower's techniques can be applied to any input source; the same interpolation method can be used whether the input frames are from the first video source or the second video source).

Regarding claim 16, Oshima, Braness, and Halna teach all the features with respect to claim 15 as outlined above. Further, Cower teaches the image output control method of claim 15, further comprising: when the written image data converts from the plurality of first input frames to the plurality of second input frames and compensation has not been performed on the first and second frames according to each of the compensation phase values, performing compensation on the first frame and the second frame corresponding to the first frame rate according to the remaining compensation phase values to provide the output frame, wherein the amount of the plurality of compensation phase values is determined by a ratio between the display frame rate and the first frame rate (See Cower: Fig. 8, and [0086], “Frame sequence 820 includes upsampled video and includes frames 811, 822, 823, 824, 815, 826, 827, 828, and 819, which are displayed at a higher framerate. From the received video stream that includes frames 811, 815, and 819, additional frames are obtained by interpolation, as described herein. The additional frames in frame sequence 820 are 822, 823, 824, 826, 827, and 828, and are obtained by interpolation and added to the sequence according to the techniques described herein. As a result of using interpolated frames, the frame sequence 820 can be displayed at a higher framerate (since intermediate frames 822-824 and 826-828 are available) with no jumpiness while the bandwidth utilized to receive the video remains the same as that for frame sequence 810”. Note that the display frame rate is four times higher than the received (source) frame rate, the frame rate ratio is 4, and three extra frames 822, 823, and 824 are inserted evenly between frames 811 and 815; the phases for 811, 822, 823, 824, and 815 are 0, 0.25, 0.5, 0.75, and 1, respectively. The multiple phases for 822, 823, and 824 are 0.25, 0.5, and 0.75, and they are mapped to “the amount of the plurality of compensation phase values is determined by a ratio between the display frame rate and the first frame rate”. Note that the amount of compensation phase values is mapped to the positions of the interpolated frames relative to the received frame 811).
Regarding claim 17, Oshima, Braness, and Halna teach all the features with respect to claim 15 as outlined above. Further, Cower teaches the image output control method of claim 15, further comprising: performing compensation on the first frame and the second frame corresponding to the first frame rate according to each of the compensation phase values to provide the output frame when the reading position of the storage device does not exceed the start writing position, wherein the amount of the plurality of compensation phase values is determined by a ratio between the display frame rate and the first frame rate (See Cower: Fig. 8, and [0086], “Frame sequence 820 includes upsampled video and includes frames 811, 822, 823, 824, 815, 826, 827, 828, and 819, which are displayed at a higher framerate. From the received video stream that includes frames 811, 815, and 819, additional frames are obtained by interpolation, as described herein. The additional frames in frame sequence 820 are 822, 823, 824, 826, 827, and 828, and are obtained by interpolation and added to the sequence according to the techniques described herein. As a result of using interpolated frames, the frame sequence 820 can be displayed at a higher framerate (since intermediate frames 822-824 and 826-828 are available) with no jumpiness while the bandwidth utilized to receive the video remains the same as that for frame sequence 810”. Note that the display frame rate is four times higher than the received (source) frame rate, the frame rate ratio is 4, and three extra frames 822, 823, and 824 are inserted evenly between frames 811 and 815; the phases for 811, 822, 823, 824, and 815 are 0, 0.25, 0.5, 0.75, and 1, respectively. The multiple phases for 822, 823, and 824 are 0.25, 0.5, and 0.75, and they are mapped to “the amount of the plurality of compensation phase values is determined by a ratio between the display frame rate and the first frame rate”. Note that the amount of compensation phase values is mapped to the positions of the interpolated frames relative to the received frame 811, and the first frame rate input video is read out for display when the reading position does not exceed the writing position of the second input frames).

Regarding claim 18, Oshima, Braness, and Halna teach all the features with respect to claim 15 as outlined above. Further, Cower teaches the image output control method of claim 15, further comprising: performing compensation on the first frame and the second frame corresponding to the second frame rate according to each of the compensation phase values to provide the output frame when the reading position of the storage device reaches or exceeds the start writing position, wherein the amount of the plurality of compensation phase values is determined by a ratio between the display frame rate and the second frame rate (See Cower: Fig. 8, and [0086], “Frame sequence 820 includes upsampled video and includes frames 811, 822, 823, 824, 815, 826, 827, 828, and 819, which are displayed at a higher framerate. From the received video stream that includes frames 811, 815, and 819, additional frames are obtained by interpolation, as described herein. The additional frames in frame sequence 820 are 822, 823, 824, 826, 827, and 828, and are obtained by interpolation and added to the sequence according to the techniques described herein. As a result of using interpolated frames, the frame sequence 820 can be displayed at a higher framerate (since intermediate frames 822-824 and 826-828 are available) with no jumpiness while the bandwidth utilized to receive the video remains the same as that for frame sequence 810”. Note that the display frame rate is four times higher than the received (source) frame rate, the frame rate ratio is 4, and three extra frames 822, 823, and 824 are inserted evenly between frames 811 and 815; the phases for 811, 822, 823, 824, and 815 are 0, 0.25, 0.5, 0.75, and 1, respectively. The multiple phases for 822, 823, and 824 are 0.25, 0.5, and 0.75, and they are mapped to “the amount of the plurality of compensation phase values is determined by a ratio between the display frame rate and the second frame rate”. Note that the amount of compensation phase values is mapped to the positions of the interpolated frames relative to the received frame 811, and the second frame rate input video is read out for display when the reading position reaches or exceeds the writing position of the second input frames. Note further that Cower's techniques can be applied to any input source; the same interpolation method can be used whether the input frames are from the first video source or the second video source).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GORDON G LIU, whose telephone number is (571) 270-0382. The examiner can normally be reached Monday - Friday, 8:00-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Devona E Faulk, can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GORDON G LIU/
Primary Examiner, Art Unit 2618

Prosecution Timeline

Aug 01, 2024
Application Filed
Jan 30, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602846
GENERATING REALISTIC MACHINE LEARNING-BASED PRODUCT IMAGES FOR ONLINE CATALOGS
2y 5m to grant Granted Apr 14, 2026
Patent 12602840
IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12602871
MESH TOPOLOGY GENERATION USING PARALLEL PROCESSING
2y 5m to grant Granted Apr 14, 2026
Patent 12592022
INTEGRATION CACHE FOR THREE-DIMENSIONAL (3D) RECONSTRUCTION
2y 5m to grant Granted Mar 31, 2026
Patent 12586330
DISPLAYING A VIRTUAL OBJECT IN A REAL-LIFE SCENE
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
98%
With Interview (+15.1%)
2y 4m
Median Time to Grant
Low
PTA Risk
Based on 673 resolved cases by this examiner. Grant probability derived from career allow rate.
