Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The amendments to the claims, filed on 12/17/2025, have been entered and made of record.
Claims 36-38 are cancelled.
Claims 1-35 are amended and pending.
Response to Arguments
Arguments presented in the Remarks (“Remarks”) filed on 12/17/2025 have been fully considered, but are rendered moot in view of the new ground(s) of rejection necessitated by amendment(s) initiated by the applicant(s).
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim 35 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The Specification does not explicitly disclose the limitation “the plurality of forward pointing motion vectors are generated by applying an inversion transformation to the plurality of backward pointing motion vectors”.
Claim Interpretations
It is noted that Fig. 4 and para. 0071 of the Specification provide contrasting interpretations of the warping direction and of the source/destination pixels.
Fig. 4 shows: (A) the forward warping direction is from the left side to the right side in the graphical sense, while the backward warping direction is the opposite; and (B) the “source pixel” is the starting point of the forward warping direction, while the “destination pixel” is the starting point of the backward warping direction.
Para. 0071 discloses “interpolated frame generator forward warps input motion vectors from T=0 or T=1 to T=t, then at destination in T=t”. It is understood that: (A) the forward warping direction can be from either T=0 or T=1 to T=t (i.e. the interpolated frame is the destination); this disclosure of the forward warping direction, regardless of the left or right side in the graphical sense, is in contrast to the disclosure of Fig. 4 (Ref. to 1(A)); and (B) the “destination pixel” is understood to be in the interpolated frame while the “source pixel” is in the video frame at either T=0 or T=1; this disclosure is in contrast to the disclosure of Fig. 4 (Ref. to 1(B)).
Thus, the “source pixel” is interpreted to be a pixel in the existing video frame at T=0 or T=1 (i.e. the “source pixel” is the pixel used for interpolation), while the “destination pixel” is interpreted to be in the interpolated frame at T=t.
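As an illustrative aid only (not part of the claim mapping or the record), the interpreted warping direction can be sketched in a few lines: motion vectors anchored at source pixels in an existing frame (here T=1) are forward-warped onto destination pixels in the interpolated frame at T=t. All names, the toy flow field, and the rounding to the pixel grid are hypothetical.

```python
# Illustrative sketch only: forward-warp a sparse backward flow F1->0 so that
# each source pixel y in the frame at T=1 deposits an approximate flow
# Ft->1(x) = -(1 - t) * F1->0(y) at the destination pixel x in the
# interpolated frame at T=t, where x = y + (1 - t) * F1->0(y).
def project_flow_to_t(flow_1_to_0, t):
    projected = {}
    for y, v in flow_1_to_0.items():            # y: source pixel in frame T=1
        x = (round(y[0] + (1 - t) * v[0]),      # x: destination pixel at T=t
             round(y[1] + (1 - t) * v[1]))
        projected[x] = (-(1 - t) * v[0], -(1 - t) * v[1])
    return projected

flow = {(0, 0): (4, 0), (2, 2): (-2, 2)}        # toy sparse F1->0
print(project_flow_to_t(flow, t=0.5))
```

Under this reading, the source pixel is always in the existing frame and the destination pixel is always in the interpolated frame, consistent with the interpretation adopted above.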
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 8-10, 16-18, 20, 23-25, 27, 29, 31-32, 34 and 35 are rejected under 35 U.S.C. 103 as being unpatentable over Bao et al. (“Bao”) [NPL titled “Depth-Aware Video Frame Interpolation,” provided in the IDS filed on 06/15/2021] in view of Barenbrug et al. (“Barenbrug”) [US 2011/0142289 A1], and further in view of Schroers et al. (“Schroers”) [US 2020/0053388 A1].
Regarding claim 1, Bao meets the claim limitations as follows:
One or more processors, comprising: circuitry to:
generate a plurality of forward pointing motion vectors (i.e. ‘approximate Ft->1(x)’) from a plurality of backward pointing motion vectors (i.e. ‘F1->0(y)’) [Sect. 3.2.: ‘the projected flow Ft->1 can be obtained from the flow F1->0 and depth map D1’], wherein the plurality of forward pointing motion vectors point to a destination pixel location (e.g. y1 or y2) [Fig. 2: multiple flow vectors could be projected to the same position at time t; Sect. 3.2.: ‘we approximate Ft->1(x) by –(1-t)F1->0(y)’]; and
select, from the plurality of forward pointing motion vectors (i.e. ‘approximate Ft->1(x)’), a single forward pointing motion vector based, at least in part, on one or more comparisons of depth values (i.e. depth map D1) of source pixels [See Claim Interpretation: para. 2(B) and 3 for interpretation of source pixels] to which the plurality of forward pointing motion vectors correspond [Fig. 2: multiple flow vectors could be projected to the same position at time t; Sect. 3.2.: ‘we approximate Ft->1(x) by –(1-t)F1->0(y)’. Eq. (1) and Eq. (2) define the backward projected flow. ‘Similarly, the projected flow Ft->1 can be obtained from the flow F1->0 and depth map D1’].
Bao does not disclose explicitly the following claim limitations (emphasis added):
One or more processors, comprising: circuitry to: …;
select, from the plurality of forward pointing motion vectors, a single forward pointing motion vector based, at least in part, on one or more comparisons of depth values of source pixels to which the plurality of forward pointing motion vectors correspond.
However, in the same field of endeavor, Barenbrug discloses the deficient claim limitations as follows:
One or more processors, comprising: circuitry to: …;
select, from the plurality of forward pointing motion vectors, a single forward pointing motion vector (i.e. ‘selects a motion vector’) based, at least in part, on one or more comparisons of depth values (i.e. ‘to calculate a match error … using the depth information’; ‘the largest depth difference’) [para. 0030-0031: ‘calculate the match error solely using the depth information’; ‘to select a motion vector from the motion vectors using the calculated match errors’] of source pixels to which the plurality of forward pointing motion vectors correspond.
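For illustration only (hypothetical names; a sketch, not drawn from Barenbrug or the claims), the mapped depth-based selection can be viewed as a comparison of source-pixel depth values among candidate vectors that project to the same destination pixel:

```python
# Illustrative sketch only: among several forward pointing motion vectors that
# land on the same destination pixel location, keep the one whose source pixel
# has the smallest depth value (i.e. is nearest the camera), z-buffer style.
def select_single_vector(candidates):
    # candidates: list of (motion_vector, source_pixel_depth) pairs
    return min(candidates, key=lambda c: c[1])[0]

candidates = [((4, -2), 1.5), ((3, 0), 0.7), ((5, 1), 2.2)]
print(select_single_vector(candidates))  # the vector with source depth 0.7
```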
Bao and Barenbrug are combinable because they are from the same field of video compression.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bao and Barenbrug, with the motivation to calculate the motion vector match error for improving the motion estimation quality in occlusion areas [Barenbrug: para. 0030].
Barenbrug does not disclose explicitly the following claim limitations (emphasis added):
One or more processors, comprising: circuitry to: …;
However, in the same field of endeavor, Schroers discloses the deficient claim limitations as follows:
One or more processors (i.e. ‘Computing component 1000’), comprising: circuitry (e.g. ‘RAM’) to: [Fig. 10: Processor 1004, Memory 1008; para. 0089-0092].
Bao, Barenbrug and Schroers are combinable because they are from the same field of video compression.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bao, Barenbrug and Schroers, with the motivation to include a processor with memory to perform ‘machine learning based video compression’ [Schroers: para. 0005-0006].
Regarding claim 2, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The one or more processors of claim 1, wherein the source pixels are of a first video frame (e.g. y1, y2 of frame T=0) [Fig. 2]; the circuitry is further to generate a third video frame [Fig. 2: frame T=t; Sect. 3.1, 3.2: ‘synthesizing the intermediate frame’ It^] based, at least in part, on the selected single forward pointing motion vector (e.g. Ft->1) [Fig. 2; Sect. 3.1, 3.2]; and the third video frame is between the first video frame [Fig. 2: frame T=0] and a second video frame [Fig. 2: frame T=1], wherein the destination pixel location is in the second video frame [Fig. 2].
Regarding claim 3, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The one or more processors of claim 1, wherein a first video frame (i.e. frame T=0) [Fig. 2] includes one or more objects (e.g. y1, y2 of frame T=0) [Fig. 2] that correspond to the source pixels [Note: the source pixels are the pixels y1 and y2 in the frame T=0], and the selected single forward pointing motion vector (i.e. ‘approximate Ft->1(x)’) [Fig. 2 shows instead the backward pointing motion vector Ft->0(x) as an example. Sect. 3.2: ‘Similarly the projected flow Ft->1 can be obtained from the flow F1->0’] corresponds to one or more motions of the one or more objects (e.g. y1, y2) [Fig. 2 shows the location of the object pixels y1 and y2 in the frame T=0 is different from their location in the frame T=1] between the first video frame and a second video frame [Fig. 2] to a destination pixel location of the second video frame [Fig. 2].
Regarding claim 4, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The one or more processors of claim 1, wherein the circuitry is further to generate a third video frame (i.e. frame T=t) [Fig. 2: frame T=t; Sect. 3.1, 3.2: ‘synthesizing the intermediate frame’ It^] between a first video frame and a second video frame based, at least in part, on the selected single forward pointing motion vector (i.e. ‘approximate Ft->1(x)’).
Regarding claim 5, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The one or more processors of claim 1, wherein the selected single forward pointing motion vector (i.e. ‘approximate Ft->1(x)’) [Fig. 2; Sect. 3.2] corresponds to one or more motions from one or more other pixels of a first video frame (e.g. y1, y2 in T=0) to a destination pixel location in a second video frame (e.g. y1, y2 in T=1) [Fig. 2 shows the location of the object pixels y1 and y2 in the frame T=0 is different from their location in the frame T=1].
Regarding claim 6, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The one or more processors of claim 1, wherein the plurality of backward pointing motion vectors (i.e. ‘F1->0(y)’) [Sect. 3.2.: ‘the projected flow Ft->1 can be obtained from the flow F1->0 and depth map D1’] correspond to one or more pixels of a second video frame (e.g. y1, y2 in T=1) [Fig. 2] that point to pixel locations in a first video frame.
Regarding claim 8, all claim limitations are set forth as claim 1 in the form of ‘A non-transitory computer-readable storage medium’ [Schroers: para. 0023, 0042: ‘a computer readable medium’] and rejected as per discussion for claim 1.
Regarding claim 9, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The non-transitory computer-readable storage medium of claim 8, wherein each of the forward pointing motion vectors [Fig. 2: Ft->1] maps pixels from a first video frame (i.e. frame T=0 [Fig. 2]) to a second video frame (i.e. frame T=1 [Fig. 2]) [Sect. 3.1, 3.2], and the set of instructions, if performed by the one or more processors, further causes the one or more processors to at least: generate a third video frame (i.e. frame T=t [Fig. 2; Sect. 3.1, 3.2]) based, at least in part, on the selected single forward pointing motion vector, wherein the generated third video frame (i.e. frame T=t [Fig. 2; Sect. 3.1, 3.2]) corresponds to a time that is between the first video frame and the second video frame.
Regarding claim 10, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The non-transitory computer-readable storage medium of claim 9, wherein the source pixels (i.e. pixels in frame T=1) [See Claim Interpretation: para. 2(B) and 3 for interpretation of source pixels] correspond to one or more objects (e.g. y1, y2 in frame T=0 [Fig. 2]) in the first video frame.
Regarding claim 16, all claim limitations are set forth as claim 1 in the method form and rejected as per discussion for claim 1.
Regarding claim 17, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The method of claim 16, wherein the plurality of backward pointing motion vectors (i.e. ‘F1->0(y)’) [Fig. 2] are associated with one or more pixels of a second video frame [Fig. 2: frame T=1: pixels y1 and y2] and point to one or more pixel locations in a first video frame [Fig. 2: frame T=0: pixels y1 and y2].
Regarding claim 18, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The method of claim 16, wherein the source pixels (i.e. pixels y1 and y2 in frame T=1) correspond to one or more objects in a first video frame (i.e. pixels y1 and y2 in frame T=0) [Fig. 2].
Regarding claim 20, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The method of claim 16, wherein the source pixels are of a first video frame (e.g. y1, x, or y2) [Fig. 2: Frame T=0], and the plurality of forward pointing motion vectors (i.e. ‘approximate Ft->1(x)’) are from the first video frame to a same destination pixel location (i.e. y1 or y2) [Fig. 2 shows the vector F0->1 is from source pixels in Frame T=0 to pixels in Frame T=1 and Ft->1 from Frame T=t points to the same pixel location as the vector F0->1] in a second video frame [Fig. 2: Frame T=t].
Regarding claim 23, all claim limitations are set forth as claim 1 in the system form and rejected as per discussion for claim 1.
Regarding claim 24, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The system of claim 23, wherein the one or more processors are further to:
generate a third video frame [Fig. 2: Frame T=t] based at least in part on the selected single forward pointing motion vector (i.e. ‘approximate Ft->1(x)’), wherein the source pixels are of a first video frame (e.g. y1, or y2 in Frame T=0) [Fig. 2], and the plurality of forward pointing motion vectors (i.e. ‘approximate Ft->1(x)’) correspond to a same destination pixel location (e.g. y1, or y2 in Frame T=1) [Fig. 2] in a second video frame.
Regarding claim 25, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The system of claim 23, wherein the source pixels (i.e. pixels y1 and y2 in frame T=1) correspond to one or more objects in a first video frame (i.e. pixels y1 and y2 in frame T=0) [Fig. 2].
Regarding claim 27, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The system of claim 23, wherein the one or more processors are further to: identify a pixel location of a third video frame (e.g. x) [Fig. 2: Frame T=t] that has a corresponding pixel identified using an intermediate motion vector (i.e. Ft->0 or Ft->1) [Fig. 2] in one of a first video frame [Fig. 2: Frame T=0] or a second video frame [Fig. 2: Frame T=1], and sample pixel data of the one of the first video frame or the second video frame having the corresponding pixel for the identified pixel location [Fig. 2], wherein the third video frame [Fig. 2: Frame T=t] is generated using the selected single forward pointing motion vector (i.e. ‘approximate Ft->1(x)’) [Fig. 2; Sect. 3.1, 3.2: ‘synthesizing the intermediate frame’].
Regarding claim 29, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The one or more processors of claim 1, wherein: the one or more processors are to generate a third video frame (i.e. Frame T=t) [Fig. 2; Sect. 3.1, 3.2] based, at least in part, on the selected single forward pointing motion vector (i.e. ‘approximate Ft->1(x)’) [Fig. 2 shows instead the backward pointing motion vector Ft->0(x) as an example. Sect. 3.2: ‘Similarly the projected flow Ft->1 can be obtained from the flow F1->0’]; and the third video frame is between a first video frame (i.e. Frame T=0) [Fig. 2] and a second video frame (i.e. Frame T=1) [Fig. 2].
Regarding claim 31, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The non-transitory computer-readable storage medium of claim 8, wherein: the plurality of backward pointing motion vectors (i.e. ‘Ft->0(x)’) [Sect. 3.2.: ‘The projected flow Ft->0 is defined by’ Eq. 1] correspond to one or more motions of one or more objects (i.e. pixels y1 and y2) [Fig. 2 shows the location of the object pixels y1 and y2 in the frame T=0 is different from their location in the frame T=1] between a second video frame (i.e. frame T=1) and a first video frame (i.e. frame T=0) [Fig. 2].
Regarding claim 32, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The method of claim 16, wherein the plurality of forward pointing motion vectors (i.e. ‘approximate Ft->1(x)’) [Fig. 2 shows instead the backward pointing motion vector Ft->0(x) as an example. Sect. 3.2: ‘Similarly the projected flow Ft->1 can be obtained from the flow F1->0’] correspond to possible motions of one or more objects (e.g. y1, y2) [Fig. 2 shows the location of the object pixel y1 and y2 in the frame T=0 is different from the location of y1 and y2 in the frame T=1] from a first video frame (i.e. frame T=0) [Fig. 2] to a second video frame (i.e. frame T=1) [Fig. 2].
Regarding claim 34, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The one or more processors of claim 1, wherein: the plurality of forward pointing motion vectors (e.g. ‘F0->1(y)’) are generated (i.e. ‘obtained’) by applying an inversion transformation to the plurality of backward pointing motion vectors (e.g. ‘Ft->0(y)’ in view of Specification: Fig. 4) [Sect. 3.2. Eq. (1) and Eq. (2) show the mathematical relationship between forward pointing motion vectors (e.g. ‘F0->1(y)’) and backward pointing motion vectors (e.g. ‘Ft->0(y)’); ‘the projected flow Ft->1 can be obtained from the flow F1->0 and depth map D1’]; and the inversion transformation includes depth (i.e. depth map D0 and D1) based warping of the plurality of backward pointing motion vectors.
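Purely as an illustration of the quoted approximation Ft->1(x) = –(1-t)F1->0(y) (hypothetical names; a sketch, not the applicant's or Bao's implementation), the inversion transformation amounts to a sign flip combined with a temporal scaling of the backward vector:

```python
# Illustrative sketch only: a forward vector obtained as a scaled sign
# inversion of a backward vector, per the quoted approximation
# Ft->1(x) = -(1 - t) * F1->0(y).
def invert_backward_vector(b, t):
    return (-(1 - t) * b[0], -(1 - t) * b[1])

print(invert_backward_vector((6, -2), t=0.25))  # (-4.5, 1.5)
```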
Regarding claim 35, Bao meets the claim limitations as follows:
The one or more processors of claim 1, wherein the selected single forward pointing motion vector points to a destination pixel location that is determined to be not included in an occlusion mask [Fig. 2; Sect. 3.1. ‘to aggregate the flow vectors while considering the depth order to detect the occlusion’].
Bao does not disclose explicitly the following claim limitations (emphasis added):
wherein the selected single forward pointing motion vector points to a destination pixel location that is determined to be not included in an occlusion mask.
However, in the same field of endeavor, Barenbrug discloses the deficient claim limitations as follows:
wherein the selected single forward pointing motion vector points to a destination pixel location that is determined to be not included in an occlusion mask [Fig. 4; para. 0032: ‘combine forward motion vectors and backward motion vectors to form occlusion-free motion vectors’].
Bao and Barenbrug are combinable because they are from the same field of video compression.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bao and Barenbrug, with the motivation to calculate the motion vector match error for improving the motion estimation quality in occlusion areas [Barenbrug: para. 0030].
Claims 7, 15, 22, 28, 30 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Bao in view of Barenbrug, further in view of Schroers, and further in view of Sirtori et al. (“Sirtori”) [US 2005/0030316 A1].
Regarding claim 7, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The one or more processors of claim 1, wherein:
the circuitry is further to generate a third video frame [Fig. 2: frame T=t; Sect. 3.1, 3.2] based, at least in part, on the selected forward pointing motion vector (i.e. ‘approximate Ft->1(x)’) [Fig. 2: Ft->1];
the source pixels are of a first video frame [Fig. 2: frame T=0; Sect. 3.2];
the plurality of forward pointing motion vectors (i.e. Ft->1(x)’) [Fig. 2 shows instead the backward pointing motion vector Ft->0(x) as an example. Sect. 3.2: ‘Similarly the projected flow Ft->1 can be obtained from the flow F1->0’] correspond to one or more motions of one or more objects (e.g. y1, y2) [Fig. 2 shows the location of the object pixel y1 and y2 in the frame T=0 is different from the location of y1 and y2 in the frame T=1] from the first video frame to one or more destination pixel locations of the second video frame [Fig. 2]; and
the first video frame [Fig. 2: frame T=0; Sect. 3.2], the second video frame [Fig. 2: frame T=1; Sect. 3.2], and the depth values [Sect. 3.2: the depth map D1. It is obvious that the depth map is stored in a buffer] are obtained from one or more buffers.
Bao does not disclose explicitly the following claim limitations (emphasis added):
the first video frame, the second video frame, and the depth values are obtained from one or more buffers.
However, in the same field of endeavor, Schroers discloses the deficient claim limitations as follows:
the first video frame, the second video frame, and the depth values are obtained from one or more buffers (e.g. ‘Storage 1010’) [Fig. 10: Processor 1004, Memory 1008, Storage 1010; para. 0089-0092].
Bao, Barenbrug and Schroers are combinable because they are from the same field of video compression.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bao, Barenbrug and Schroers, with the motivation to include a processor with memory to perform ‘machine learning based video compression’ [Schroers: para. 0005-0006].
Schroers does not disclose explicitly the following claim limitations (emphasis added):
the first video frame, the second video frame, and the depth values are obtained from one or more buffers.
However, in the same field of endeavor, Sirtori discloses the deficient claim limitations as follows:
the first video frame, the second video frame [para. 0034, 0066: ‘each frame f … to be stored in the frame buffer’], and the depth values are obtained from one or more buffers [para. 0082: ‘a depth buffer ZB’].
Bao, Barenbrug, Schroers and Sirtori are combinable because they are from the same field of video compression.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bao, Barenbrug, Schroers and Sirtori, with the motivation to generate an interpolated frame so as to permit transmission toward a display unit at a reduced rate [Sirtori: para. 0047, 0058].
Regarding claim 15, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The non-transitory computer-readable storage medium of claim 8, wherein the set of instructions, if performed by the one or more processors, further causes the one or more processors to:
generate a third video frame [Fig. 2: frame T=t; Sect. 3.1, 3.2] based, at least in part, on receiving a first video frame (e.g. y1, y2 of frame T=0) [Fig. 2], a second video frame [Fig. 2: frame T=1], depth information (i.e. depth map D1) [Fig. 2: multiple flow vectors could be projected to the same position at time t; Sect. 3.2.: ‘we approximate Ft->1(x) by –(1-t) F1->0(y)’. Eq. (1) and Eq. (2) define the backward projected flow. ‘Similarly, the projected flow Ft->1 can be obtained from the flow F1->0 and depth map D1’], and the plurality of backward pointing motion vectors (i.e. F1->0) from the second video frame to the first video frame from one or more buffers of a video game engine.
Bao does not disclose explicitly the following claim limitations (emphasis added):
on receiving a first video frame, a second video frame, depth information, and the plurality of backward pointing motion vectors from the second video frame to the first video frame from one or more buffers of a video game engine.
However, in the same field of endeavor, Schroers discloses the deficient claim limitations as follows:
on receiving a first video frame, a second video frame, depth information, and the plurality of backward pointing motion vectors from the second video frame to the first video frame from one or more buffers of a video game engine (e.g. ‘Storage 1010’) [Fig. 10: Processor 1004, Memory 1008, Storage 1010; para. 0089-0092].
Bao, Barenbrug and Schroers are combinable because they are from the same field of video compression.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bao, Barenbrug and Schroers, with the motivation to include a processor with memory to perform ‘machine learning based video compression’ [Schroers: para. 0005-0006].
Schroers does not disclose explicitly the following claim limitations (emphasis added):
on receiving a first video frame, a second video frame, depth information, and the plurality of backward pointing motion vectors from the second video frame to the first video frame from one or more buffers of a video game engine.
However, in the same field of endeavor, Sirtori discloses the deficient claim limitations as follows:
a first video frame, a second video frame [para. 0034, 0066: ‘each frame f … to be stored in the frame buffer’], depth information, and the plurality of backward pointing motion vectors from the second video frame to the first video frame from one or more buffers [para. 0082: ‘a depth buffer ZB’] of a video game engine [para. 0039: ‘for playing 3D games or running various graphic applications’].
Bao, Barenbrug, Schroers and Sirtori are combinable because they are from the same field of video compression.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bao, Barenbrug, Schroers and Sirtori, with the motivation to generate an interpolated frame so as to permit transmission toward a display unit at a reduced rate [Sirtori: para. 0047, 0058].
Regarding claim 22, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The method of claim 16, wherein further comprising generating a third video frame [Fig. 2: frame T=t; Sect. 3.1, 3.2] based, at least in part, on receiving a first video frame (e.g. y1, y2 of frame T=0) [Fig. 2], a second video frame [Fig. 2: frame T=1], depth values (i.e. depth map D1) [Fig. 2: multiple flow vectors could be projected to the same position at time t; Sect. 3.2.: ‘we approximate Ft->1(x) by –(1-t) F1->0(y)’. Eq. (1) and Eq. (2) define the backward projected flow. ‘Similarly, the projected flow Ft->1 can be obtained from the flow F1->0 and depth map D1’] of the first video frame and the second video frame, and the plurality of backward pointing motion vectors (i.e. F1->0) from one or more buffers, wherein the third video frame is associated with a time that is between the first video frame (i.e. Frame T=0) and the second video frame (i.e. Frame T=1) [Fig. 2].
Bao does not disclose explicitly the following claim limitations (emphasis added):
wherein further comprising generating a third video frame based, at least in part, on receiving a first video frame, a second video frame, depth values of the first video frame and the second video frame, and the plurality of backward pointing motion vectors from one or more buffers, wherein the third video frame is associated with a time that is between the first video frame and the second video frame.
However, in the same field of endeavor, Sirtori discloses the deficient claim limitations as follows:
wherein further comprising generating a third video frame based, at least in part, on receiving a first video frame, a second video frame [para. 0034, 0066: ‘each frame f … to be stored in the frame buffer’], depth values [para. 0082: ‘a depth buffer ZB’] of the first video frame and the second video frame, and the plurality of backward pointing motion vectors from one or more buffers [para. 0082: ‘a depth buffer ZB’], wherein the third video frame is associated with a time that is between the first video frame and the second video frame.
Bao, Barenbrug, Schroers and Sirtori are combinable because they are from the same field of video compression.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bao, Barenbrug, Schroers and Sirtori, with the motivation to generate an interpolated frame so as to permit transmission toward a display unit at a reduced rate [Sirtori: para. 0047, 0058].
Regarding claim 28, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The system of claim 23, wherein the one or more processors are further to generate a third video frame [Fig. 2: Frame T=t; Sect. 3.1, 3.2] based, at least in part, on the selected single forward pointing motion vector (i.e. Ft->1 ) [Fig. 2], and
the depth values (i.e. depth map D1 ) [Fig. 2, Sect. 3.2: ‘Similarly, the projected flow Ft->1 can be obtained from the flow F1->0 and depth map D1’] of the one or more source pixels to which the plurality of forward pointing motion vectors correspond, wherein the depth values are received from one or more buffers.
Bao does not disclose explicitly the following claim limitations (emphasis added):
the depth values are received from one or more buffers.
However, in the same field of endeavor, Sirtori discloses the deficient claim limitations as follows:
the depth values are received from one or more buffers [para. 0082: ‘a depth buffer ZB’].
Bao, Barenbrug, Schroers and Sirtori are combinable because they are from the same field of video compression.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bao, Barenbrug, Schroers and Sirtori, with the motivation to generate an interpolated frame so as to permit transmission toward a display unit at a reduced rate [Sirtori: para. 0047, 0058].
Regarding claim 30, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The one or more processors of claim 1, wherein the circuitry is further to: generate a video frame (i.e. Frame T=t) [Fig. 2; Sect. 3.1, 3.2] based at least in part on the selected single forward pointing motion vector (i.e. Ft->1 ) [Fig. 2, Sect. 3.2: ‘Similarly, the projected flow Ft->1 can be obtained from the flow F1->0 and depth map D1’]; and transmit the generated video frame over a network.
Bao does not disclose explicitly the following claim limitations (emphasis added):
wherein the circuitry is further to generate a video frame based at least in part on the selected single forward pointing motion vector; and transmit the generated video frame over a network.
However, in the same field of endeavor, Sirtori discloses the deficient claim limitations as follows:
wherein the circuitry is further to generate a video frame [Fig. 6: IF] based at least in part on the selected single pointing motion vector; and transmit the generated video frame over a network [Fig. 3: transmitter block R4 and R3 to Remote Smart Display 230].
Bao, Barenbrug, Schroers and Sirtori are combinable because they are from the same field of video compression.
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bao, Barenbrug, Schroers and Sirtori with the motivation to generate an interpolated frame so as to permit transmission toward a display unit at a reduced rate [Sirtori: para. 0047, 0058].
Regarding claim 33, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The method of claim 32, further comprising: generating a third video frame (i.e. Frame T=t) [Fig. 2; Sect. 3.1, 3.2] that includes the one or more objects based at least in part on the selected forward pointing motion vector (i.e. Ft->1 ) [Fig. 2, Sect. 3.2: ‘Similarly, the projected flow Ft->1 can be obtained from the flow F1->0 and depth map D1’]; and displaying the generated third video frame, wherein the third video frame is temporally located between the first video frame and the second video frame.
Bao does not disclose explicitly the following claim limitations (emphasis added):
generating a third video frame that includes the one or more objects based at least in part on the selected forward pointing motion vector; and displaying the generated third video frame, wherein the third video frame is temporally located between the first video frame and the second video frame.
However, in the same field of endeavor, Sirtori discloses the deficient claim limitations as follows:
generating a third video frame (i.e. ‘a specific interpolated IF’) [Fig. 6; para. 0092-0123: describing generation of the ‘i-th interpolated frame IF’] that includes the one or more objects based at least in part on the selected forward pointing motion vector; and displaying the generated third video frame [Fig. 1, 2, 3: transmitter block R4 and R3 to Remote Smart Display 230; para. 0060, 0068, 0073, 0105, 0147: ‘Interpolated pictures which are fully rendered are then re-ordered and sent to a display in the right temporal order’], wherein the third video frame is temporally located between the first video frame and the second video frame [Fig. 6].
Bao, Barenbrug, Schroers and Sirtori are combinable because they are from the same field of video compression.
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bao, Barenbrug, Schroers and Sirtori with the motivation to generate an interpolated frame so as to permit transmission toward a display unit at a reduced rate [Sirtori: para. 0047, 0058].
Claims 11, 19 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Bao in view of Barenbrug in further view of Schroers in further view of Jeon et al. (“Jeon”) [US 2013/0044183].
Regarding claim 11, Bao in view of Barenbrug and Schroers meets the claim limitations set forth in claim 8.
Bao does not disclose explicitly the following claim limitations (emphasis added):
The non-transitory computer-readable storage medium of claim 8, wherein the set of instructions, if performed by the one or more processors, further causes the one or more processors to: determine a change in a camera viewpoint matrix between a first video frame and a second video frame; and generate a third video frame based, at least in part, on the selected single forward pointing motion vector and the determined change in the camera viewpoint matrix between the first video frame and the second video frame.
However, in the same field of endeavor, Jeon discloses the deficient claim limitations as follows:
determine a change (i.e. ‘obtaining camera information’) in a camera viewpoint matrix [para. 0303: ‘obtaining camera information including … base matrix with respect to positions between views and directions’] between a first video frame and a second video frame (i.e. Key frames); and generate a third video frame (i.e. Side information) [Fig. 2, 3, 26, 27] based, at least in part, on the selected single forward pointing motion vector and the determined change in the camera viewpoint matrix (i.e. ‘a motion of a camera’) [Fig. 2, 3, 26, 27: multi-viewpoint (i.e. a motion of camera); para. 0110: ‘generates the side information (i.e. interpolated frame) by using interpolation … a linear change between frames’; ‘a change between video frames is caused by … a motion of a camera’; para. 0303-0311: disclosing ‘detecting a shield region’ by ‘2660’ based on ‘a motion object’ detected by ‘2650’. The ‘object motion estimating 2650’ is based on camera information based matrix ‘2630’ (i.e. ‘positions between views and directions’)] between the first video frame and the second video frame.
Bao, Barenbrug, Schroers and Jeon are combinable because they are from the same field of video compression.
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bao, Barenbrug, Schroers and Jeon with the motivation to determine motion vectors in multi-views based on a change of camera viewpoint, so as to obtain more precise depth information using multi-view geometry techniques [Schroers: para. 0067-0068] and to reduce encoding complexity, in turn reducing power consumption [Jeon: para. 0002-0004; 0117: H.264/AVC].
Regarding claim 19, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The method of claim 16, further comprising: determining one or more motions of a camera viewpoint between a first video frame and a second video frame; and generating a third video frame based [Bao: Fig. 2: Frames: T=0; T=1; T=t; Schroers: Fig. 5: a third frame t, Fig. 6: a third frame t-1], at least in part, on the determined one or more motions of the camera viewpoint [Schroers: para. 0067-0068] and the single selected forward pointing motion vector (i.e. ‘approximate Ft->1(x)’) [Bao: Fig. 2; Sect. 3.2], wherein the destination pixel location is of the second video frame.
Bao does not disclose explicitly the following claim limitations (emphasis added):
determining one or more motions of a camera viewpoint (i.e. determining a change in a camera viewpoint) between a first video frame and a second video frame; and generating a third video frame based, at least in part, on the determined one or more motions of the camera viewpoint and the single selected forward pointing motion vector, wherein the destination pixel location is of the second video frame.
However, in the same field of endeavor, Jeon discloses the deficient claim limitations as follows:
determining one or more motions of a camera viewpoint (i.e. a motion of camera) [para. 0110, 0303: ‘obtaining camera information including … base matrix with respect to positions between views and directions’] between a first video frame and a second video frame (i.e. Key frames); and generating a third video frame (i.e. Side information) [Fig. 2, 3, 26, 27] based, at least in part, on the determined one or more motions of the camera viewpoint (i.e. ‘a motion of a camera’) [Fig. 2, 3, 26, 27: multi-viewpoint (i.e. a motion of camera); para. 0110: ‘generates the side information (i.e. interpolated frame) by using interpolation … a linear change between frames’; ‘a change between video frames is caused by … a motion of a camera’; para. 0303-0311: disclosing ‘detecting a shield region’ by ‘2660’ based on ‘a motion object’ detected by ‘2650’. The ‘object motion estimating 2650’ is based on camera information based matrix ‘2630’ (i.e. ‘positions between views and directions’)] and the single selected forward pointing motion vector, wherein the destination pixel location is of the second video frame.
Bao, Barenbrug, Schroers and Jeon are combinable because they are from the same field of video compression.
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bao, Barenbrug, Schroers and Jeon with the motivation to determine motion vectors in multi-views based on a change of camera viewpoint, so as to obtain more precise depth information using multi-view geometry techniques [Schroers: para. 0067-0068] and to reduce encoding complexity, in turn reducing power consumption [Jeon: para. 0002-0004; 0117: H.264/AVC].
Regarding claim 26, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The system of claim 23, wherein the one or more processors are further to: determine a camera viewpoint change between a first video frame and a second video frame; and generate a third video frame based [Bao: Fig. 2: Frames: T=0; T=1; T=t; Schroers: Fig. 5: a third frame t, Fig. 6: a third frame t-1], at least in part, on the determined camera viewpoint change [Schroers: para. 0067-0068] and the selected single forward pointing motion vector (i.e. ‘approximate Ft->1(x)’) [Bao: Fig. 2; Sect. 3.2], wherein the destination pixel location is of the second video frame.
Bao does not disclose explicitly the following claim limitations (emphasis added):
wherein the one or more processors are further to: determine a camera viewpoint change between a first video frame and a second video frame; and generate a third video frame based, at least in part, on the determined camera viewpoint change and the selected single forward pointing motion vector, wherein the destination pixel location is of the second video frame.
However, in the same field of endeavor, Jeon discloses the deficient claim limitations as follows:
determine a camera viewpoint change (i.e. ‘obtaining camera information’) [para. 0303: ‘obtaining camera information including … base matrix with respect to positions between views and directions’] between a first video frame and a second video frame (i.e. Key frames); and generate a third video frame (i.e. Side information) [Fig. 2, 3, 26, 27] based, at least in part, on the determined camera viewpoint change (i.e. ‘a motion of a camera’) [Fig. 2, 3, 26, 27: multi-viewpoint (i.e. a motion of camera); para. 0110: ‘generates the side information (i.e. interpolated frame) by using interpolation … a linear change between frames’; ‘a change between video frames is caused by … a motion of a camera’; para. 0303-0311: disclosing ‘detecting a shield region’ by ‘2660’ based on ‘a motion object’ detected by ‘2650’. The ‘object motion estimating 2650’ is based on camera information based matrix ‘2630’ (i.e. ‘positions between views and directions’)] and the selected single forward pointing motion vector, wherein the destination pixel location is of the second video frame.
Bao, Barenbrug, Schroers and Jeon are combinable because they are from the same field of video compression.
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bao, Barenbrug, Schroers and Jeon with the motivation to determine motion vectors in multi-views based on a change of camera viewpoint, so as to obtain more precise depth information using multi-view geometry techniques [Schroers: para. 0067-0068] and to reduce encoding complexity, in turn reducing power consumption [Jeon: para. 0002-0004; 0117: H.264/AVC].
Claims 12 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Bao in view of Barenbrug in further view of Schroers in further view of Biswas et al. (“Biswas”) [US 8,902,359 B1].
Regarding claim 12, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The non-transitory computer-readable storage medium of claim 8, wherein the plurality of forward pointing motion vectors (i.e. Ft->1(x)’) [Fig. 2 shows instead the backward pointing motion vector Ft->0(x) as an example. Sect. 3.2: ‘Similarly the projected flow Ft->1 can be obtained from the flow F1->0’] correspond to one or more motions (e.g. y1, y2) [Fig. 2 shows the location of the object pixel y1 and y2 in the frame T=0 is different from the location of y1 and y2 in the frame T=1] from a first video frame to a second video frame,
and the set of the instructions, if performed by the one or more processors, further causes the one or more processors to: determine a set of occluded pixel locations [Sect. 3.2: ‘reduce the contribution of occluded pixels which have larger depth values’]; determine that the destination pixel location (i.e. pixel x [Fig. 2]) is not the set of occluded pixel locations [Fig. 2: Average flow vector F avg t->0 (x)]; determine a set of dis-occluded pixel locations; and generate a third video frame based [Fig. 2: Frame T=t; Sect. 3.1, 3.2: Eq. 3 ‘To fill in the holes’], at least in part, on the selected single forward pointing motion vector (i.e. ‘approximate Ft->1(x)’) [Fig. 2 shows instead the backward pointing motion vector Ft->0(x) as an example. Sect. 3.2: ‘Similarly the projected flow Ft->1 can be obtained from the flow F1->0’], the set of occluded pixel locations and the set of dis-occluded pixel locations.
Bao does not disclose explicitly the following claim limitations (emphasis added):
wherein the plurality of forward pointing motion vectors correspond to one or more motions from a first video frame to a second video frame, and the set of the instructions, if performed by the one or more processors, further cause the one or more processors to: determine a set of occluded pixel locations; determine that the destination pixel location is not the set of occluded pixel locations; determine a set of dis-occluded pixel locations; and generate a third video frame based, at least in part, on the selected single forward pointing motion vector, the set of occluded pixel locations and the set of dis-occluded pixel locations.
However, in the same field of endeavor, Biswas discloses the deficient claim limitations as follows:
wherein the plurality of forward pointing motion vectors correspond to one or more motions from a first video frame to a second video frame, and the set of the instructions, if performed by the one or more processors, further cause the one or more processors to: determine a set of occluded pixel locations (i.e. ‘an occlusion region’) [Fig. 7-8, 10-12; col. 2, ll. 5-15: ‘an occlusion region’]; determine that the destination pixel location is not the set of occluded pixel locations (i.e. ‘an occlusion region’ or ‘a reveal region’) [Fig. 7-8, 10-12; col. 2, ll. 5-15: ‘an occlusion region’; ‘a reveal region’]; determine a set of dis-occluded pixel locations (i.e. ‘an occlusion region’ or ‘a reveal region’) [Fig. 7-8, 10-12; col. 2, ll. 5-15: ‘an occlusion region’; ‘a reveal region’]; and generate a third video frame based, at least in part, on the selected single forward pointing motion vector, the set of occluded pixel locations and the set of dis-occluded pixel locations [Fig. 7-8, 10-12; col. 2, ll. 5-15: ‘a candidate patch that is copied from the interpolated frame, the first frame, or the second frame’].
Bao, Barenbrug, Schroers and Biswas are combinable because they are from the same field of video compression.
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bao, Barenbrug, Schroers and Biswas with the motivation to determine the region of discontinuity so as to generate interpolation frames of good quality [Biswas: col. 5, ll. 40-45].
Regarding claim 21, Bao in view of Barenbrug and Schroers meets the claim limitations set forth in claim 20.
Bao does not disclose explicitly the following claim limitations:
The method of claim 20, further comprising:
generating an occlusion mask for a third video frame; generating a dis-occlusion mask for the third video frame; and generating the third video frame based, at least in part, on the occlusion mask and the dis-occlusion mask.
However, in the same field of endeavor, Biswas discloses the deficient claim limitations as follows:
generating an occlusion mask (i.e. ‘filled using pixel data from the first frame when …. an occlusion region’) for a third video frame (i.e. ‘the interpolated frame’) [Fig. 12: edge mask generation ‘1208’; col. 2, ll. 30-46: ‘A portion of the region of discontinuity in the interpolated frame is filled …’];
generating a dis-occlusion mask (i.e. ‘filled using pixel data from the second frame when … a reveal region’) for the third video frame [Fig. 12: edge mask generation ‘1214’; col. 2, ll. 30-46: ‘A portion of the region of discontinuity in the interpolated frame is filled …’]; and
generating (i.e. filling) the third video frame based, at least in part, on the occlusion mask and the dis-occlusion mask [Fig. 7-8, 10-12; col. 2, ll. 30-46: ‘A portion of the region of discontinuity in the interpolated frame is filled …’; col. 3, ll. 65-68, col. 8, 9, 10].
Bao, Barenbrug, Schroers, and Biswas are combinable because they are from the same field of video compression.
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bao, Barenbrug, Schroers, and Biswas with the motivation to determine the region of discontinuity so as to generate interpolation frames of good quality [Biswas: col. 5, ll. 40-45].
Claims 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Bao in view of Barenbrug in further view of Schroers in further view of Diggins et al. (“Diggins_225”) [US 2016/0360225 A1].
Regarding claim 13, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The non-transitory computer-readable storage medium of claim 8, wherein the selected single forward pointing motion vector (i.e. ‘approximate Ft->1(x)’) is determined based at least in part on selecting the single forward pointing motion vector (i.e. ‘approximate Ft->1(x)’) [Fig. 2; Sect. 3.1 and 3.2] of the plurality of forward pointing motion vectors [Fig. 2 shows a plurality of motion vectors: Fdeptht->0 and Favgt->0. The vector with a small depth points closer to a source pixel; Sect. 3.1, 3.2: ‘reduce the contribution of occluded pixels which have larger depth values’] with a corresponding source pixel (i.e. the vector pointing to the closer pixel) [See Claim Interpretation: para. 2(B) and 3 for interpretation of source pixels] with a smallest depth value (i.e. ‘points to the pixel with a smaller depth value’) [Fig. 2 shows a plurality of motion vectors: Fdeptht->0 and Favgt->0. The vector with a small depth points closer to a source pixel; Sect. 3.1, 3.2: ‘reduce the contribution of occluded pixels which have larger depth values’],
the plurality of forward pointing motion vectors [Fig. 2 shows a plurality of motion vectors: Fdeptht->0 and Favgt->0 as an example. The vector with a small depth points closer to a source pixel ; Sect. 3.1, 3.2: ‘reduce the contribution of occluded pixels which have larger depth values’] are estimated forward pointing motion vectors (e.g. ‘approximate Ft->1(x)’) [Fig. 2] from a first video frame to a second video frame, and the estimated forward pointing motion vectors (i.e. ‘approximate Ft->1(x)’) are generated based [Fig. 2: multiple flow vectors could be projected to the same position at time t; Sect. 3.1, 3.2.: ‘we approximate Ft->1(x) by –(1-t) F1->0(y)’. Eq. (1) and Eq. (2) define the backward projected flow. ‘Similarly, the projected flow Ft->1 can be obtained from the flow F1->0 and depth map D1’] at least in part on the plurality of backward pointing motion vectors (i.e. F1->0) [Fig. 2; Sect. 3.1, 3.2] that are between the second video frame and the first video frame, and the set of instructions, if performed by the one or more processors, further causes the one or more processors to generate a third video frame [Fig. 2: Frame T=t; Sect. 3.1, 3.2: ‘synthesizing the intermediate frame’] based, at least in part, on the one or more selected forward pointing motion vectors (i.e. ‘approximate Ft->1(x)’) [Fig. 2; Sect. 3.1, 3.2: Eq. 1, Eq. 2] and the plurality of backward pointing motion vectors [Fig. 2; Sect. 3.1, 3.2: ‘to approximate the intermediate flows, i.e., Ft->0, Ft->1, and then apply the backward warping to sample the input frames’; ‘we approximate Ft->1(x) by –(1-t) F1->0(y)’. Eq. (1) and Eq. (2) define the backward projected flow. ‘Similarly, the projected flow Ft->1 can be obtained from the flow F1->0 and depth map D1’].
Bao does not disclose explicitly the following claim limitations (emphasis added):
wherein …;
the plurality of forward pointing motion vectors are estimated forward pointing motion vectors from a first video frame to a second video frame, and the estimated forward pointing motion vectors are generated based at least in part on the plurality of backward pointing motion vectors that are between the second video frame and the first video frame.
However, in the same field of endeavor, Diggins_225 discloses the deficient claim limitations as follows:
wherein …,
the plurality of forward pointing motion vectors are estimated forward pointing motion vectors from a first video frame to a second video frame, and the estimated forward pointing motion vectors are generated (i.e. ‘607’) [Fig. 6] based at least in part on the plurality of backward pointing motion vectors (i.e. ‘606’) [Fig. 6, 7; para. 0095-0096; 0105-0108: illustrating that the complementary bidirectional vector ‘c’ is generated based on the forward and backward vectors ‘a’ and ‘b’] that are between the second video frame (i.e. Input frame (next) ‘804’) [Fig. 6] and the first video frame (i.e. Input frame (previous) ‘803’) [Fig. 6],
Bao, Barenbrug, Schroers and Diggins_225 are combinable because they are from the same field of video compression.
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bao, Barenbrug, Schroers and Diggins_225 with the motivation to generate an interpolated frame so as to permit an acceptable level of error [Diggins_225: para. 0104] and transmission toward a display unit at a reduced rate [Sirtori: para. 0047, 0058].
Regarding claim 14, Bao in view of Barenbrug and Schroers meets the claim limitations as follows:
The non-transitory computer-readable storage medium of claim 13, wherein the instructions, which if performed by the one or more processors, further cause the one or more processors to:
generate a set of intermediate forward pointing motion vectors (i.e. ‘approximate Ft->1(x)’) [Fig. 2] from the third video frame to the second video frame [Fig. 2: Frame T=t] based, at least in part, on the single selected forward pointing motion vector (i.e. ‘approximate Ft->1(x)’) [Fig. 2] and the estimated forward pointing motion vectors;
generate a set of intermediate backward pointing motion vectors (i.e. ‘approximate Ft->1(x)’) [Fig. 2] from the third video frame to the first video frame [Fig. 2: frame T=0] based, at least in part, on the plurality of backward pointing motion vectors; and
generate the third video frame based [Fig. 2: frame T=t; Sect. 3.1, 3.2: ‘synthesizing the intermediate frame’ It^], at least in part, on the generated set of intermediate forward pointing motion vectors (i.e. ‘approximate Ft->1(x)’) [Fig. 2] and the generated set of intermediate backward pointing motion vectors.
Bao does not disclose explicitly the following claim limitations (emphasis added):
generate the third video frame based, at least in part, on the generated set of intermediate forward pointing motion vectors and the generated set of intermediate backward pointing motion vectors.
However, in the same field of endeavor, Diggins_225 discloses the deficient claim limitations as follows:
generate the third video frame [Fig. 6: Output frame (interpolated) ‘605’] based, at least in part, on the generated set of intermediate forward pointing motion vectors and the generated set of intermediate backward pointing motion vectors (i.e. ‘607’) [Fig. 6, 7; para. 0095-0096; 0105-0108: illustrating that the complementary bidirectional vector ‘c’ is generated based on the forward and backward vectors ‘a’ and ‘b’].
Bao, Barenbrug, Schroers and Diggins_225 are combinable because they are from the same field of video compression.
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Bao, Barenbrug, Schroers and Diggins_225 with the motivation to generate an interpolated frame so as to permit an acceptable level of error [Diggins_225: para. 0104] and transmission toward a display unit at a reduced rate [Sirtori: para. 0047, 0058].
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER D LE whose telephone number is (571)270-5382. The examiner can normally be reached on Monday - Alternate Friday: 10AM-6:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SATH PERUNGAVOOR can be reached on 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PETER D LE/
Primary Examiner, Art Unit 2488