Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 11, 14, and 18 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 6 of U.S. Patent No. 12,166,986 (hereafter "the '986 patent") in view of the prior art Aoki, US 2015/0181240 A1.
Instant claim 1, Application No. 18/921,274:
A method, comprising:
measuring a temporal correlation of a video frame between the video frame;
assigning a coding quality boost to one or more high quality encode regions of the video frame; and

Claim 1 of U.S. Patent No. 12,166,986:
A system comprising:
a memory to store at least a portion of a plurality of temporally adjacent video frames, the temporally adjacent video frames include a first video frame and one or more second video frames that are temporally previous to the first video frame; and
one or more processors coupled to the memory, the one or more processors to:
segment each of the plurality of temporally adjacent video frames into high quality and standard quality encode regions, wherein at least a portion of each high quality encode region of each of the video frames is unique to the video frame relative to high quality encode regions of other temporally adjacent video frames;
determine a first temporal correlation value for the first video frame based on a first temporal correlation between the first video frame and the one or more second video frames;
determine a first coding quality boost based on the first temporal correlation value;
encode the first video frame by applying the first coding quality boost to one or more first high quality encode regions of the first video frame relative to one or more first standard quality encode regions of the first video frame to generate at least a portion of a bitstream; and
transmit the portion of the bitstream.
Instant claim 1, Application No. 18/921,274 (continued):
measuring a further temporal correlation of a further video frame that is temporally adjacent to the video frame, the further temporal correlation being higher than the temporal correlation; and
assigning a further coding quality boost to one or more further high quality encode regions of the further video frame, the further coding quality boost being higher than the coding quality boost and the one or more further high quality encode regions being different from the one or more high quality encode regions.

Claim 6 of U.S. Patent No. 12,166,986:
The system of claim 5, wherein the temporally adjacent video frames include a third video frame; the one or more processors are further to:
determine a second temporal correlation value for the third video frame based on a second temporal correlation between the third video frame and one or more fourth video frames that are temporally previous to the third video frame; and
determine a second coding quality boost based on the second temporal correlation value;
the second temporal correlation value is higher than the first temporal correlation value; and
the second coding quality boost is higher than the first coding quality boost.
The '986 claims do not recite "the one or more further high quality encode regions being different from the one or more high quality encode regions," as found in the instant independent claims. However, Aoki renders this limitation obvious by disclosing a gradual intra frame refresh that updates a unique frame location in each of a series of temporally adjacent frames over a refresh period, as shown in figure 15.
It would have been obvious to one having ordinary skill in the art before the time of the Applicant's effective filing date to incorporate the feature, disclosed in Aoki, of performing a gradual intra frame refresh instead of instantaneously loading a new intra frame, and concomitantly performing the long-term reference QP adjustment (quality boost) disclosed in Zhang to the refresh regions of an intra frame during a refresh cycle, based on the temporal correlation between respective high quality encode regions of these frames, in order to prevent the spikes in required bandwidth frequently caused by an instantaneous refresh (see Aoki), while still utilizing the refresh cycle to selectively enhance the quality of the video at the refreshed regions, leading to a higher quality reference picture/I-frame as a basis for subsequent inter prediction.
Claims 5-10, 12, 19, and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 6 of the '986 patent, in view of Aoki, and in further view of Zhang, US 2017/0214938 A1.
See [0030], where Zhang discloses: "Such scene change pictures may be detected using any suitable technique or techniques. In an embodiment, the temporal correlation of an individual frame may be compared to a predetermined threshold such that if the temporal correlation is greater than the threshold, the individual frame is deemed to be a scene change frame." Scene change determination techniques were well known in the art before the time of Applicant's effective filing date, as evidenced by this disclosure of Zhang.
Regarding claim 5, the combination of Zhang in view of Aoki discloses the limitations of claim 1, upon which claim 5 depends. This combination, specifically Zhang, further discloses: the method of claim 1, further comprising:
determining that a yet further video frame is a scene change or intra frame (See [0030]: “Such scene change pictures may be detected using any suitable technique or techniques.”); and
assigning a predefined coding quality boost to one or more yet further high quality encode regions of the yet further video frame (See [0033]: “If a picture of video 121 is a long term reference picture, as shown, the picture may be encoded via encode module 107 using a coding quantization parameter based on an adjustment of quantization parameter 122 (e.g., the rate control based QP for the picture) and an adjustment factor or delta QP (t.QP). Such a reduction of the coding quantization parameter may provide a smaller QP for coding such that long term reference pictures have better quality to achieve better prediction for other pictures of video 121.”).
Regarding claim 6, the combination of Zhang in view of Aoki discloses the limitations of claim 5, upon which claim 6 depends. This combination, specifically Zhang, further discloses: the method of claim 5, wherein determining that the yet further video frame is the scene change or intra frame comprises:
determining that a difference between the yet further video frame and a temporally prior frame exceeds a threshold (See [0030], where Zhang discloses: "Such scene change pictures may be detected using any suitable technique or techniques. In an embodiment, the temporal correlation of an individual frame may be compared to a predetermined threshold such that if the temporal correlation is greater than the threshold, the individual frame is deemed to be a scene change frame.").
Regarding claim 7, the combination of Zhang in view of Aoki discloses the limitations of claim 1, upon which claim 7 depends. This combination, specifically Zhang, further discloses: the method of claim 1, further comprising:
encoding the one or more high quality encode regions of the video frame based on the coding quality boost (See [0074], which discloses: "... For example, encode module 107 may code the long-term reference picture based on the adjusted quantization parameter (e.g., coding quantization parameter)."); and
encoding the one or more further high quality encode regions of the further video frame based on the further coding quality boost ([0039] discloses "the higher the temporal correlation, the larger the delta QP". In other words, a larger QP reduction is applied for frames having a higher temporal correlation with a long-term reference picture; a further quantization parameter reduction is applied to a frame that has a higher temporal correlation than another frame.).
Regarding claim 8, the combination of Zhang in view of Aoki discloses the limitations of claim 1, upon which claim 8 depends. This combination, specifically Zhang, further discloses: the method of claim 1, further comprising:
determining that the temporal correlation is greater than a threshold (See [0030]: "if the temporal correlation is greater than the threshold, the individual frame is deemed to be a scene change frame."); and
determining that the further temporal correlation is greater than a further threshold, the further threshold being greater than the threshold (See [0094], “generate a second coding quantization parameter for the second individual long term reference picture such that the coding quantization parameter is less than the second coding quantization parameter based on the individual long term reference picture having a higher temporal correlation than the second long term reference picture.”).
Regarding claim 9, the combination of Zhang in view of Aoki discloses the limitations of claim 1, upon which claim 9 depends. This combination, specifically Zhang, further discloses: the method of claim 1, wherein:
the coding quality boost corresponds to a quantization parameter reduction (See [0033]: “If a picture of video 121 is a long term reference picture, as shown, the picture may be encoded via encode module 107 using a coding quantization parameter based on an adjustment of quantization parameter 122 (e.g., the rate control based QP for the picture) and an adjustment factor or delta QP (t.QP). Such a reduction of the coding quantization parameter may provide a smaller QP for coding such that long term reference pictures have better quality to achieve better prediction for other pictures of video 121.”); and
the further coding quality boost corresponds to a further quantization parameter reduction, the further quantization parameter reduction being greater than the quantization parameter reduction ([0039] discloses "the higher the temporal correlation, the larger the delta QP". In other words, a larger QP reduction is applied for frames having a higher temporal correlation with a long-term reference picture; a further quantization parameter reduction is applied to a frame that has a higher temporal correlation than another frame.).
Regarding claim 10, the combination of Zhang in view of Aoki discloses the limitations of claim 1, upon which claim 10 depends. This combination, specifically Zhang, further discloses: the method of claim 1, wherein:
the coding quality boost corresponds to a rate distortion optimization adjustment to provide a number of bits for the one or more high quality encode regions of the video frame (See [0070]: "The rate control-based quantization parameter may be generated using any suitable technique or techniques such as standard rate control operations including rate distortion optimization or the like."); and
the further coding quality boost corresponds to a further rate distortion optimization adjustment to provide a further number of bits for the one or more further high quality encode regions of the video frame (As disclosed in [0037], there are gradations of adjustment: “A smaller prediction distortion may be indicative of higher temporal correlation (e.g., low motion) and a larger adjustment factor 126 may be generated by quantization parameter adjustment module 106.”), the further number of bits being greater than the number of bits.
Claims 2 and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 6 of the '986 patent, in further view of Lai, US 2020/0396467 A1.
Lai discloses, in an analogous art, that skip mode is one among several types of inter prediction designed to exploit temporal correlation between frames. See Lai [0012].
It would have been obvious to one having ordinary skill in the art before the time of the Applicant’s effective filing date to use a number of skip blocks encoded in a current frame to determine a temporal correlation with a previous frame, as suggested by Lai, because it was known in the art before the time of the Applicant’s effective filing date that skip mode is used for inter prediction where an especially high degree of temporal correlation exists. See Lai [0012]. Incorporating a temporal correlation measure based on determining the presence of skip mode would have been obvious, and would have had predictable results for one of ordinary skill in the art.
Claims 3 and 16 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 6 of the '986 patent, in further view of Moon, US 2011/0135286 A1.
Moon discloses in [0042] determining a number of skip macroblocks within a frame as a way to ascertain the amount of temporal correlation with other frames. A threshold number of skip macroblocks is determined; if a greater-than-threshold number of skip macroblocks is found in a frame, it may be determined that temporal correlation is lacking.
It would have been obvious to one having ordinary skill in the art before the time of the Applicant's effective filing date to incorporate the use of the median number of skip macroblocks. Zhang discloses using a weighted average temporal correlation between a last long-term reference picture (LTRP) and the current LTRP, and it would have been obvious to try the median, another well-known statistic (along with the mode and variance), as a selection from among a finite number of identified, predictable solutions, with a reasonable expectation of success; the average, median, and variance are all ways of characterizing a set of data, such as numbers of skip blocks aggregated from a number of frames. See MPEP 2143.I.E.
Claims 4 and 17 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 6 of the ‘986 patent, in further view of Kusakabe.
Kusakabe discloses in [0010] determining an inter-frame difference absolute value sum (CurrSAD), which is a pixel-wise sum of absolute differences between a current frame's pixel values and those of another frame.
It would have been obvious to one having ordinary skill in the art before the time of the Applicant's effective filing date to incorporate the use of CurrSAD, as disclosed in Kusakabe, to determine the temporal correlation in Zhang. As disclosed in [0010], this method is used for determining whether to apply skip mode, as an indicator of inter-frame difference, and one of ordinary skill in the art could have integrated this pixel-wise difference method into Zhang for determining temporal correlation, with each element (the pixel-wise differencing and the quality boosting) performing the same function in the combination that it performs separately. See MPEP 2143.I.A.
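For illustration only (not part of the record), the inter-frame sum-of-absolute-differences measure described above can be sketched in Python; the function name and frame representation here are assumptions, not taken from Kusakabe:

```python
def inter_frame_sad(curr_frame, prev_frame):
    """Pixel-wise sum of absolute differences between two equally
    sized frames, each represented as a 2-D list of luma values.
    A small SAD indicates high temporal correlation."""
    return sum(
        abs(c - p)
        for row_c, row_p in zip(curr_frame, prev_frame)
        for c, p in zip(row_c, row_p)
    )

# Identical frames yield zero difference; a uniform +1 shift over
# a 2x2 frame yields a SAD of 4.
a = [[10, 20], [30, 40]]
b = [[11, 21], [31, 41]]
```

A per-pixel comparison of this kind could then be thresholded to decide whether skip mode is appropriate, consistent with the use described in [0010].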
Claim 13 is rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 6 of the ‘986 patent, in further view of Sadhwani.
Sadhwani discloses dithering the location within a frame where refresh intra slices are introduced, in [0040].
It would have been obvious to one having ordinary skill in the art before the time of the Applicant’s effective filing date to introduce a randomization of the spatial locations of intra slices during a refresh period, as disclosed in Sadhwani, in order to reduce or eliminate the appearance of striation artifacts in a picture, producing a less obtrusive appearance to the refresh phenomenon, thereby improving perceived picture quality. See Sadhwani at [0040].
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1, 14, and 18 recite the limitation "measuring a temporal correlation of a video frame between the video frame." This limitation is unclear because a temporal correlation can only be measured between two temporally separated pieces of data, e.g., two video frames or prediction blocks thereof, not between a video frame and itself. For purposes of prior art analysis, this limitation is construed as meaning "measuring a temporal correlation between a video frame and a second video frame."
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5-12, 14, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang, US 2017/0214938 A1, in view of Aoki, US 2015/0181240 A1.
Regarding claim 1, Zhang discloses: A method, comprising:
measuring a temporal correlation of a video frame between the video frame (See step 103 in figure 1.);
measuring a further temporal correlation of a further video frame that is temporally adjacent to the video frame, the further temporal correlation being higher than the temporal correlation (See the look ahead analysis in [0040], which discloses measuring temporal correlation between a long-term reference picture and subsequent frames to determine an adjustment factor. In general, the correlation will degrade with increasing temporal distance between two given frames.);
assigning a coding quality boost to one or more high quality encode regions of the video frame (See [0039], which discloses with respect to figure 1 that the “adjustment factor 126 (e.g., the delta QP) may be determined based on estimated temporal correlation such that the higher the temporal correlation, the larger the delta QP.” In other words, the higher the temporal correlation between a reference picture and another picture, the higher the quality of encoding of that reference picture.); and
assigning a further coding quality boost to one or more further high quality encode regions of the further video frame, the further coding quality boost being higher than the coding quality boost (See [0093], which discloses applying a lower quantization parameter to a second long term reference picture based on its having a higher temporal correlation.)
Zhang does not disclose:
and the one or more further high quality encode regions being different from the one or more high quality encode regions.
However, Aoki discloses this limitation in an analogous art directed to gradual intra-frame refresh in which different portions of a frame are sequentially updated according to partial regions of a frame, as shown in figures 3 and 15. These regions are different from one another, so that at the end of a refresh period an entire intra frame has been displayed.
It would have been obvious to one having ordinary skill in the art before the time of the Applicant's effective filing date to incorporate the feature, disclosed in Aoki, of performing a gradual intra frame refresh instead of instantaneously loading a new intra frame, and concomitantly performing the long-term reference QP adjustment (quality boost) disclosed in Zhang to the refresh regions of an intra frame during a refresh cycle, based on the temporal correlation between respective high quality encode regions of these frames, in order to prevent the spikes in required bandwidth frequently caused by an instantaneous refresh (see Aoki), while still utilizing the refresh cycle to selectively enhance the quality of the video at the refreshed regions, leading to a higher quality reference picture/I-frame as a basis for subsequent inter prediction.
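As a sketch only, the rationale above ("the higher the temporal correlation, the larger the delta QP") can be expressed as a simple adjustment rule; the function name, normalization, and constants here are hypothetical illustrations, not taken from Zhang:

```python
def adjusted_qp(rate_control_qp, temporal_correlation, max_delta_qp=8):
    """Reduce the rate-control QP for a long-term reference picture.
    temporal_correlation is assumed normalized to [0, 1]; a higher
    correlation yields a larger delta QP, i.e. a smaller (higher
    quality) coding quantization parameter."""
    delta_qp = round(max_delta_qp * temporal_correlation)
    return max(0, rate_control_qp - delta_qp)
```

Under this sketch, a frame with no temporal correlation keeps its rate-control QP, while a highly correlated long-term reference picture receives the largest QP reduction (quality boost).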
Regarding claim 5, the combination of Zhang in view of Aoki discloses the limitations of claim 1, upon which claim 5 depends. This combination, specifically Zhang, further discloses: the method of claim 1, further comprising:
determining that a yet further video frame is a scene change or intra frame (See [0030]: “Such scene change pictures may be detected using any suitable technique or techniques.”); and
assigning a predefined coding quality boost to one or more yet further high quality encode regions of the yet further video frame (See [0033]: “If a picture of video 121 is a long term reference picture, as shown, the picture may be encoded via encode module 107 using a coding quantization parameter based on an adjustment of quantization parameter 122 (e.g., the rate control based QP for the picture) and an adjustment factor or delta QP (t.QP). Such a reduction of the coding quantization parameter may provide a smaller QP for coding such that long term reference pictures have better quality to achieve better prediction for other pictures of video 121.”).
Regarding claim 6, the combination of Zhang in view of Aoki discloses the limitations of claim 5, upon which claim 6 depends. This combination, specifically Zhang, further discloses: the method of claim 5, wherein determining that the yet further video frame is the scene change or intra frame comprises:
determining that a difference between the yet further video frame and a temporally prior frame exceeds a threshold (See [0030], where Zhang discloses: "Such scene change pictures may be detected using any suitable technique or techniques. In an embodiment, the temporal correlation of an individual frame may be compared to a predetermined threshold such that if the temporal correlation is greater than the threshold, the individual frame is deemed to be a scene change frame.").
Regarding claim 7, the combination of Zhang in view of Aoki discloses the limitations of claim 1, upon which claim 7 depends. This combination, specifically Zhang, further discloses: the method of claim 1, further comprising:
encoding the one or more high quality encode regions of the video frame based on the coding quality boost (See [0074], which discloses: "... For example, encode module 107 may code the long-term reference picture based on the adjusted quantization parameter (e.g., coding quantization parameter)."); and
encoding the one or more further high quality encode regions of the further video frame based on the further coding quality boost ([0039] discloses "the higher the temporal correlation, the larger the delta QP". In other words, a larger QP reduction is applied for frames having a higher temporal correlation with a long-term reference picture; a further quantization parameter reduction is applied to a frame that has a higher temporal correlation than another frame.).
Regarding claim 8, the combination of Zhang in view of Aoki discloses the limitations of claim 1, upon which claim 8 depends. This combination, specifically Zhang, further discloses: the method of claim 1, further comprising:
determining that the temporal correlation is greater than a threshold (See [0030]: "if the temporal correlation is greater than the threshold, the individual frame is deemed to be a scene change frame."); and
determining that the further temporal correlation is greater than a further threshold, the further threshold being greater than the threshold (See [0094], “generate a second coding quantization parameter for the second individual long term reference picture such that the coding quantization parameter is less than the second coding quantization parameter based on the individual long term reference picture having a higher temporal correlation than the second long term reference picture.”).
Regarding claim 9, the combination of Zhang in view of Aoki discloses the limitations of claim 1, upon which claim 9 depends. This combination, specifically Zhang, further discloses: the method of claim 1, wherein:
the coding quality boost corresponds to a quantization parameter reduction (See [0033]: “If a picture of video 121 is a long term reference picture, as shown, the picture may be encoded via encode module 107 using a coding quantization parameter based on an adjustment of quantization parameter 122 (e.g., the rate control based QP for the picture) and an adjustment factor or delta QP (t.QP). Such a reduction of the coding quantization parameter may provide a smaller QP for coding such that long term reference pictures have better quality to achieve better prediction for other pictures of video 121.”); and
the further coding quality boost corresponds to a further quantization parameter reduction, the further quantization parameter reduction being greater than the quantization parameter reduction ([0039] discloses "the higher the temporal correlation, the larger the delta QP". In other words, a larger QP reduction is applied for frames having a higher temporal correlation with a long-term reference picture; a further quantization parameter reduction is applied to a frame that has a higher temporal correlation than another frame.).
Regarding claim 10, the combination of Zhang in view of Aoki discloses the limitations of claim 1, upon which claim 10 depends. This combination, specifically Zhang, further discloses: the method of claim 1, wherein:
the coding quality boost corresponds to a rate distortion optimization adjustment to provide a number of bits for the one or more high quality encode regions of the video frame (See [0070]: "The rate control-based quantization parameter may be generated using any suitable technique or techniques such as standard rate control operations including rate distortion optimization or the like."); and
the further coding quality boost corresponds to a further rate distortion optimization adjustment to provide a further number of bits for the one or more further high quality encode regions of the video frame (As disclosed in [0037], there are gradations of adjustment: “A smaller prediction distortion may be indicative of higher temporal correlation (e.g., low motion) and a larger adjustment factor 126 may be generated by quantization parameter adjustment module 106.”), the further number of bits being greater than the number of bits.
Regarding claim 11, the combination of Zhang in view of Aoki discloses the limitations of claim 1, upon which claim 11 depends. This combination, specifically Aoki, further discloses: the method of claim 1, wherein one or more locations of the one or more further high quality encode regions are shifted from one or more further locations of the one or more high quality encode regions (See figure 15, which illustrates shifting of refreshed region, which is an intra region and a quality boosted region according to the combination of Zhang in view of Aoki).
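Purely as an illustration of the region shifting mapped above, a gradual intra refresh schedule can be sketched as follows; the round-robin partitioning is an assumption for illustration, not Aoki's exact layout from figure 15:

```python
def refresh_region(frame_index, num_regions):
    """Index of the region that is intra-coded (and, per the Zhang/Aoki
    combination, quality boosted) in a given frame of the refresh cycle.
    Each successive frame refreshes a different region, so after
    num_regions frames every region has been refreshed once."""
    return frame_index % num_regions

# Over one 4-frame cycle, each frame refreshes a unique region,
# i.e. the boosted region shifts from frame to frame.
cycle = [refresh_region(i, 4) for i in range(4)]
```

Because the refreshed region index changes every frame, the high quality encode region of each frame is different from that of its temporally adjacent frames, avoiding the bandwidth spike of an instantaneous full-frame refresh.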
Regarding claim 12, the combination of Zhang in view of Aoki discloses the limitations of claim 1, upon which claim 12 depends. This combination, specifically Zhang, further discloses: the method of claim 1, further comprising:
determining the coding quality boost and the further coding quality boost using a monotonically increasing function that maps temporal correlations to coding quality boosts (See [0037], “A smaller prediction distortion may be indicative of higher temporal correlation (e.g., low motion) and a larger adjustment factor 126 may be generated by quantization parameter adjustment module 106.” This paragraph describes that the higher temporal correlation, the higher the QP adjustment (quality boost), which is a monotonically increasing relationship between temporal correlation and quality boost.).
Apparatus claim 14 is drawn to an apparatus implementing the corresponding method claimed in claim 1. Therefore, apparatus claim 14 corresponds to method claim 1 and is rejected for the same reasons of obviousness as used above.
Non-transitory computer readable medium claims 18-20 are rejected for the same reasons of obviousness as given above for claims 1, 5, and 12, respectively.
Claims 2 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang, in view of Aoki, in further view of Lai, US 2020/0396467 A1.
Regarding claim 2, the combination of Zhang in view of Aoki discloses the limitations of claim 1, upon which depends claim 2. Zhang does not disclose: the method of claim 1, wherein measuring the temporal correlation of the video frame comprises:
determining a number of skip blocks from a temporally previous frame.
Lai discloses, in an analogous art, that skip mode is one of several types of inter prediction designed to exploit temporal correlation between frames. See Lai [0012].
It would have been obvious to one having ordinary skill in the art before the time of the Applicant’s effective filing date to use a number of skip blocks encoded in a current frame to determine a temporal correlation with a previous frame, as suggested by Lai, because it was known in the art before the time of the Applicant’s effective filing date that skip mode is used for inter prediction where an especially high degree of temporal correlation exists. See Lai [0012]. Incorporating a temporal correlation measure based on determining the presence of skip mode would have been obvious, and would have had predictable results for one of ordinary skill in the art.
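For illustration only, counting skip-coded blocks as a measure of temporal correlation with the previous frame, as suggested by Lai, may be sketched as below. The names and the use of a fraction (rather than a raw count) are hypothetical assumptions.

```python
def skip_block_fraction(block_modes: list[str]) -> float:
    """Fraction of blocks in the current frame coded in skip mode.

    Skip mode is used where inter prediction from the previous frame
    succeeds with essentially no residual, so a high fraction suggests
    high temporal correlation (per the rationale cited from Lai [0012]).
    `block_modes` is an assumed per-block list of mode labels.
    """
    if not block_modes:
        return 0.0
    return sum(1 for m in block_modes if m == "SKIP") / len(block_modes)
```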
Apparatus claim 15 is drawn to an apparatus implementing the corresponding method claimed in claim 2. Therefore, apparatus claim 15 corresponds to method claim 2, and is rejected for the same reasons of obviousness as used above.
Claims 3 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang, in view of Aoki, in further view of Moon, US 2011/0135286 A1.
Regarding claim 3, the combination of Zhang in view of Aoki discloses the limitations of claim 1, upon which claim 3 depends. This combination does not disclose: the method of claim 1, wherein measuring the temporal correlation of the video frame comprises:
determining a median of numbers of skip blocks from a plurality of temporally previous frames.
Moon discloses in [0042] determining a number of skip macroblocks within a frame as a way to ascertain the amount of temporal correlation with other frames. A threshold number of skip macroblocks is determined. Moon further discloses that if a greater than threshold number of skip macroblocks is found in a frame, it may be determined that temporal correlation is lacking.
It would have been obvious to one having ordinary skill in the art before the time of the Applicant’s effective filing date to incorporate the use of the median number of skip macroblocks. Zhang discloses using a weighted average temporal correlation between a last long-term reference picture (LTRP) and the current LTRP, and it would have been obvious to try the median, another well-known statistic (along with the mode and variance), as a selection from among a finite number of identified, predictable solutions, with a reasonable expectation of success; the average, median, and variance are all ways of characterizing a set of data, such as numbers of skip blocks aggregated over a plurality of frames. See MPEP 2143.I.E.
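For illustration only, aggregating per-frame skip-block counts with a median and comparing against a threshold may be sketched as below. All names are hypothetical; the median here stands in for the weighted average that Zhang discloses, per the rationale above.

```python
from statistics import median


def temporal_correlation_high(skip_counts: list[int],
                              threshold: int) -> bool:
    """Aggregate skip-block counts from a plurality of temporally
    previous frames using the median, then compare against a threshold
    (hypothetical decision rule illustrating the claimed aggregation).
    """
    return median(skip_counts) >= threshold
```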
Apparatus claim 16 is drawn to an apparatus implementing the corresponding method claimed in claim 3. Therefore, apparatus claim 16 corresponds to method claim 3, and is rejected for the same reasons of obviousness as used above.
Claims 4 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang, in view of Aoki, in further view of Kusakabe, US 2009/0190660 A1.
Regarding claim 4, the combination of Zhang in view of Aoki discloses the limitations of claim 1, upon which claim 4 depends. This combination does not disclose: the method of claim 1, wherein measuring the temporal correlation of the video frame comprises:
determining pixel-wise differences between the video frame and an immediately temporally previous frame.
Kusakabe discloses in [0010] determining an inter-frame difference absolute value sum (CurrSAD), which is a pixel-wise sum of absolute differences between a current frame’s pixel values and those of another frame.
It would have been obvious to one having ordinary skill in the art before the time of the Applicant’s effective filing date to incorporate the use of CurrSAD, as disclosed in Kusakabe, to determine the temporal correlation in Zhang. As disclosed in [0010], this measure is used as an indicator of inter-frame difference in determining whether to apply skip mode, and one of ordinary skill in the art could have integrated this pixel-wise difference method into Zhang for determining temporal correlation, with each element (the pixel-wise differencing and the quality boosting) performing the same function in the combined invention that it performs separately. See MPEP 2143.I.A.
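For illustration only, the inter-frame sum of absolute differences of the kind Kusakabe calls CurrSAD may be sketched as below. The representation of frames as 2-D lists of pixel values is an assumption for brevity.

```python
def frame_sad(curr: list[list[int]], prev: list[list[int]]) -> int:
    """Pixel-wise sum of absolute differences between the current frame
    and the immediately previous frame (a CurrSAD-style measure: a small
    SAD indicates the frames are similar, i.e. high temporal
    correlation). `curr` and `prev` are equally sized 2-D pixel arrays.
    """
    return sum(abs(c - p)
               for row_c, row_p in zip(curr, prev)
               for c, p in zip(row_c, row_p))
```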
Apparatus claim 17 is drawn to an apparatus implementing the corresponding method claimed in claim 4. Therefore, apparatus claim 17 corresponds to method claim 4, and is rejected for the same reasons of obviousness as used above.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang, in view of Aoki, in further view of Sadhwani, US 2017/0013274 A1.
Regarding claim 13, the combination of Zhang in view of Aoki discloses the limitations of claim 1, upon which claim 13 depends. This combination does not disclose: the method of claim 1, wherein one or more locations of the one or more further high quality encode regions and one or more further locations of the one or more high quality encode regions are selected using a random position generator.
However, Sadhwani discloses dithering the location within a frame where refresh intra slices are introduced, in [0040].
It would have been obvious to one having ordinary skill in the art before the time of the Applicant’s effective filing date to introduce a randomization of the spatial locations of intra slices during a refresh period, as disclosed in Sadhwani, in order to reduce or eliminate the appearance of striation artifacts in a picture, producing a less obtrusive appearance to the refresh phenomenon, thereby improving perceived picture quality. See Sadhwani at [0040].
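For illustration only, randomizing (dithering) the location of the intra-refresh region within a frame, as described in Sadhwani [0040], may be sketched as below. The names and the stripe-based geometry are hypothetical assumptions.

```python
import random


def next_refresh_row(num_block_rows: int, region_height: int,
                     rng: random.Random) -> int:
    """Pick a random starting block-row for the next intra-refresh
    stripe, dithering its position between frames so that refresh
    boundaries do not align into visible striation artifacts (the
    rationale cited from Sadhwani [0040])."""
    return rng.randrange(0, num_block_rows - region_height + 1)
```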
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE M LOTFI whose telephone number is (571)272-8762. The examiner can normally be reached 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KYLE M LOTFI/Examiner, Art Unit 2425