DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Application
Claims 1-3, 5-7, 12, 13, and 15-19 are currently pending in this application.
Response to Arguments
Applicant's arguments filed 01/05/2026 have been fully considered but they are not persuasive.
On page 6 of the Applicant’s Remarks, the Applicant argues that none of the references disclose the amended feature of the claim, “wherein the predetermined values are a fixed value independent of availability of a reference picture for the picture region.” Further, the Applicant argues that Tu fails to mention whether such values are set regardless of whether a reference picture for the picture region exists or not.
However, the Examiner respectfully disagrees with the Applicant’s Remarks. Li discloses that Intra BC prediction is a form of intra-picture prediction—intra BC prediction for a block in a picture does not use any sample values other than sample values in the same picture. [See Li, 0114]. Thus, Li discloses that intra-picture prediction does not use reference samples from a different (reference) picture.
Tu discloses that during intra-decoding of the inter-layer residual video, motion compensation is bypassed. [See Tu, 0073]. Thus, it is known that intra-prediction decoding does not use reference pictures (i.e., previously reconstructed video). Tu discloses that the intra skip mode is used for skipped macroblocks in an intra-coded picture of the inter-layer residual video [See Tu, 0074]. For the intra skip mode, the decoder uses defined intra skip values (e.g., zero or another selected value that results in zero values after inverse remapping) for the skipped macroblock. [See Tu, 0074].
Further, Tu discloses that the decoder selects a skip mode for the current MB if skipped/not skipped status information from a bit stream indicates a skipped MB. [See Tu, 0104 and Fig. 9]. Tu discloses that for intra-coded inter-layer residual video content, an encoder and decoder have a single skip mode, the intra-skip mode. A given macroblock in the intra-coded content can be skipped using the intra-skip mode or not skipped. [See Tu, 0137]. Thus, the use of the defined intra skip values (i.e., zero) for the current MB depends on whether the signaled skip information indicates that the current MB is skipped, which in turn indicates that the intra-skip mode is used for decoding the current MB.
Tu discloses that motion compensation is bypassed for intra-coding and that, when the decoding mode is “intra skip mode” and the picture is intra-coded, the decoder uses defined intra skip values. Thus, one of ordinary skill in the art would understand that performing the intra skip mode on an intra-coded picture uses defined intra skip values regardless of the availability of reference pictures, because an intra-coded picture does not use reference pictures in the intra decoding process [Official Notice].
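The behavior the Examiner attributes to Tu's intra skip mode can be illustrated with the following sketch. All names here are hypothetical illustrations and are not drawn from Tu or any other cited reference:

```python
# Illustrative sketch only (hypothetical names, not from any cited
# reference): a skipped macroblock in an intra-coded picture is
# reconstructed from a defined fixed value, with motion compensation
# bypassed, so no reference picture is ever consulted.

INTRA_SKIP_VALUE = 0  # a "defined intra skip value" (e.g., zero)

def reconstruct_skipped_intra_mb(mb_size=16):
    """Return the samples of a skipped macroblock in an intra-coded
    picture: a fixed value, independent of reference-picture
    availability."""
    return [[INTRA_SKIP_VALUE] * mb_size for _ in range(mb_size)]
```

Because the reconstruction consults no reference picture at all, the output is the same whether or not a reference picture exists.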
On pages 6-7 of the Applicant’s Remarks, the Applicant argues that, “the Examiner's rejection is based on a combination of multiple references, dividing the claim into discrete elements and assigning each to a different reference. This approach fails to consider the operation of the claim as a whole, as required by the Manual of Patent Examining Procedure (MPEP). According to MPEP § 2106.05, the examiner must consider the claimed invention as a whole and not dissect the claims into individual elements and then search for prior art references which disclose those elements individually. Here, there is no articulated rationale or motivation to combine the cited references in the manner proposed by the Examiner. The references do not suggest that a person of ordinary skill in the art would have been motivated to combine their teachings to arrive at the claimed invention, particularly the features of parsing a bitstream, selectively generating a decoded representation from the bitstream, and applying different decoding methods, wherein one of the decoding methods includes setting values of pixels in a picture region to predetermined values independent of availability of a reference picture for the picture region.”
However, the Examiner respectfully disagrees with the Applicant’s Remarks. Obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007).
In this case, the references are in the same field of video encoding/decoding, wherein a skip mode is applied or known. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Kazui with the teachings of Yamamoto, Li, and Tu.
“The test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art.” In re Keller, 642 F.2d 413, 425, 208 USPQ 871, 881 (CCPA 1981). See also In re Sneed, 710 F.2d 1544, 1550, 218 USPQ 385, 389 (Fed. Cir. 1983) (“[I]t is not necessary that the inventions of the references be physically combinable to render obvious the invention under review.”); and In re Nievelt, 482 F.2d 965, 179 USPQ 224, 226 (CCPA 1973) (“Combining the teachings of references does not involve an ability to combine their specific structures.”).
Kazui discloses a video image decoding (and encoding) device wherein the encoded blocks have been skipped or subjected to non-skip encoding [See Kazui, 0085-0086]. Kazui fails to explicitly disclose wherein the second decoding method includes setting values of pixels in the picture region to predetermined values upon a type of the picture region indicating intra prediction, wherein the predetermined values are a fixed value independent of availability of a reference picture for the picture region.
Yamamoto discloses encoding/decoding of the video data depending on the significance of the video data wherein the flag indicating the significance (skipping) is signaled [See Yamamoto, Abstract and Section 3.2]. Yamamoto fails to explicitly disclose wherein the second decoding method includes setting values of pixels in the picture region to predetermined values upon a type of the picture region indicating intra prediction, wherein the predetermined values are a fixed value independent of availability of a reference picture for the picture region.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Kazui with the explicit flag for indication of significance (skipping) as taught by Yamamoto in order to constrain the bitstream's conformance cropping window to include only non-skipped tiles [See Yamamoto, Abstract].
Li discloses a decoder that receives an encoded bitstream including a flag that indicates whether the current block is encoded using intra BC prediction in skip mode [See Li, 0016] and that, for a skip-mode block, the decoder uses the values of the prediction as the reconstruction [See Li, 0106]. However, Li fails to explicitly disclose wherein the predetermined values are a fixed value independent of availability of a reference picture for the picture region.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Kazui and Yamamoto with the use of the values of the prediction for the reconstruction of an intra-prediction skip-mode block as taught by Li in order to improve coding efficiency for intra-BC-predicted blocks.
Tu discloses that the decoder selects a skip mode for the current MB if skipped/not skipped status information from a bit stream indicates a skipped MB. [See Tu, 0104 and Fig. 9]. Tu discloses that for intra-coded inter-layer residual video content, an encoder and decoder have a single skip mode, the intra-skip mode. A given macroblock in the intra-coded content can be skipped using the intra-skip mode or not skipped. [See Tu, 0137]. Thus, the use of the defined intra skip values (i.e., zero) for the current MB depends on whether the signaled skip information indicates that the current MB is skipped, which in turn indicates that the intra-skip mode is used for decoding the current MB.
Tu discloses that motion compensation is bypassed for intra-coding and that, when the decoding mode is “intra skip mode” and the picture is intra-coded, the decoder uses defined intra skip values. Thus, one of ordinary skill in the art would understand that performing the intra skip mode on an intra-coded picture uses defined intra skip values regardless of the availability of reference pictures, because an intra-coded picture does not use reference pictures in the intra decoding process [Official Notice].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Kazui, Yamamoto, and Li with the known teachings of encoding and decoding for the intra skip mode using defined zero skip values, wherein the values can all be zero, as taught by Tu, in order to help improve the efficiency of encoding inter-layer residual video by allowing macroblock skip modes for intra-coded inter-layer residual video content [See Tu, 0031], and because using efficient skip modes to represent common patterns of values in the inter-layer residual video improves the enhancement layer encoder's rate-distortion performance [See Tu, 0032].
The motivational statements show that a prima facie case supporting the obviousness rejection of the claims has been established through at least rationale G listed below. The key to supporting any rejection under 35 U.S.C. 103 is the clear articulation of the reason(s) why the claimed invention would have been obvious. The Supreme Court in KSR noted that the analysis supporting a rejection under 35 U.S.C. 103 should be made explicit. The Court, quoting In re Kahn, 441 F.3d 977, 988, 78 USPQ2d 1329, 1336 (Fed. Cir. 2006), stated that “[R]ejections on obviousness cannot be sustained by mere conclusory statements; instead, there must be some articulated reasoning with some rational underpinning to support the legal conclusion of obviousness.” KSR, 550 U.S. at 418, 82 USPQ2d at 1396. Exemplary rationales that may support a conclusion of obviousness include: ... (G) Some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. See MPEP 2141, Section III.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-3, 5-7, 12, 13, 15-17, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over KAZUI (Hereafter, “Kazui”) [US 2018/0109800 A1] in view of YAMAMOTO et al., "MV-HEVC/SHVC HLS: Skipped slice and use case," Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 (Hereafter, “Yamamoto”) in further view of Li et al. (Hereafter, “Li”) [US 2017/0070748 A1] in even further view of Tu et al. (Hereafter, “Tu”) [US 2010/0061447 A1].
In regards to claims 1 and 12 (claim 12 recites the corresponding encoding-method form; encoding is the inverse of decoding), Kazui discloses a method of bitstream processing ([Abstract] A video image decoding device executes a separation process for extracting, from encoded video image data including multiple images.), comprising: parsing a bitstream [0094], wherein the picture region includes N picture blocks [Fig. 2], where N is an integer ([0082] The separator 21 extracts, from the bit stream including the encoded video image data, the encoded data of the color-difference components of the pictures of the base layer, the encoded data of the color-difference components of the simultaneous encoding pictures of the enhancement layer, and the encoded data of the luminance components of the pictures. Then, the separator 21 outputs the encoded data of the color-difference components of the base layer to the base layer decoder 22 and outputs the encoded data of the color-difference components of the enhancement layer to the enhancement layer decoder 24. In addition, the separator 21 outputs the encoded data of the luminance components to the luminance component decoder 26. 
[0085] In addition, the base layer decoder 22 identifies, from the header information, a coding mode applied to the blocks subjected to the non-skip encoding.); and selectively generating, based on a value of the picture region flag, a decoded representation of the picture region from the bitstream ([Abstract] A video image decoding device executes a separation process for extracting, from encoded video image data including multiple images, first encoded data obtained by encoding reduced images of first images included in the multiple images, and second encoded data obtained by encoding second images included in the multiple images, executes a first decoding process for decoding the reduced images of the first images from the first encoded data, executes a second decoding process for decoding the second images from the second encoded data, executes a recording process that includes recording first region included in the reduced images, the first region including motions with respect to the second images immediately preceding the first images, and executes a synthesis process for reproducing the multiple images by modifying second region in the second images immediately preceding the first images in accordance with the first region. [0081] The bit stream including the encoded video image data is input to the buffer 20. Then, encoded data of two color-difference components of each of the pictures of the layers of the encoded video image data and encoded data of a luminance component of each of the pictures are sequentially read in the order of the pictures to be displayed. 
In addition, the buffer 20 may store various types of data that has been generated during a video image decoding process and is the decoded color-difference components of the layers and the decoded luminance components.); wherein the selectively generating includes one of: in case that the value of the picture region flag is a first value, using a first decoding method to generate the decoded representation from the bitstream ([0085] In addition, the base layer decoder 22 identifies, from the header information, a coding mode applied to the blocks subjected to the non-skip encoding. Then, if a target block is already subjected to the inter-predictive coding, the base layer decoder 22 decodes a motion vector of the block and determines, as a predictive block, a region specified by the motion vector and included in a color-difference component of a decoded picture. In addition, if the target block is already subjected to the intra-predictive coding, the base layer decoder 22 calculates a predictive block from a decoded region of a color-difference component to be decoded. Then, the base layer decoder 22 reproduces each of the blocks subjected to the non-skip encoding by adding, to values of pixels of predictive blocks corresponding to the blocks, reproduced predictive error signals corresponding to the pixels.); or in case that the value of the picture region flag is a second value different from the first value, using a second decoding method different from the first decoding method to generate the decoded representation from the bitstream ([0086] In addition, regarding a block for which the encoding has been skipped, the base layer decoder 22 may copy a block, which is included in a decoded immediately preceding picture and located at the same position as the block for which the encoding has been skipped, to the block for which the encoding has been skipped. 
[0087] Then, the base layer decoder 22 synthesizes the reproduced blocks with each other in the order of the blocks to be encoded for each of the color-difference components, thereby reproducing the color-difference components. The base layer decoder 22 causes the reproduced color-difference components to be stored in the buffer 20.),
Yamamoto discloses a method of bitstream processing, comprising: parsing a bitstream to obtain a picture region flag from a data unit corresponding to a picture region in the bitstream ([Section 3.2] non_significant_tile_flag indicates whether or not the tile containing the slice is the non-significant tile), wherein the picture region includes N picture blocks, where N is an integer ([Section 3.2] num_ctb_in_slice_segment_minus1 plus 1 specifies the number of CTUs in the current slice segment); and selectively generating, based on a value of the picture region flag ([Section 3.2] non_significant_tile_flag indicates whether or not the tile containing the slice is the non-significant tile), a decoded representation of the picture region from the bitstream; wherein the selectively generating includes one of: in case that the value of the picture region flag is a first value, using a first decoding method to generate the decoded representation from the bitstream ([Abstract and Section 3.2] encoding/decoding the BL and EL with no skipping); or in case that the value of the picture region flag is a second value different from the first value, using a second decoding method different from the first decoding method to generate the decoded representation from the bitstream ([Abstract and Section 3.2] skipping part or all of the EL from being encoded/decoded [Section 3.4] All pixels in all CTU in a non-significant slice segment are not required to be decoded.),
Li discloses a method of bitstream processing [Fig. 6], comprising: parsing a bitstream to obtain a picture region flag from a data unit corresponding to a picture region in the bitstream, wherein the picture region includes N picture blocks, where N is an integer ([0016] A corresponding decoder receives from a bitstream encoded data including a flag indicating that a current block (e.g., coding unit, prediction unit) in a picture is encoded using intra BC prediction in skip mode. [0102 and Fig. 6] parser and entropy decoder 210 [Fig. 7a and 7b]); and selectively generating, based on a value of the picture region flag, a decoded representation of the picture region from the bitstream ([0016] A corresponding decoder receives from a bitstream encoded data including a flag indicating that a current block (e.g., coding unit, prediction unit) in a picture is encoded using intra BC prediction in skip mode. [0156] In some previous approaches to intra BC prediction, a flag for a current CU indicates whether the CU is coded in intra BC prediction mode.); wherein the selectively generating includes one of: in case that the value of the picture region flag is a first value, using a first decoding method to generate the decoded representation from the bitstream ([0106] For a non-skip-mode block, the decoder (600) combines the prediction (658) with reconstructed residual values to produce the reconstruction (638) of the content from the video signal.); or in case that the value of the picture region flag is a second value different from the first value, using a second decoding method different from the first decoding method to generate the decoded representation from the bitstream ([0106] For a skip-mode block, the decoder (600) uses the values of the prediction (658) as the reconstruction (638).), wherein the second decoding method includes setting values of pixels in the picture region to predetermined values upon a type of the picture region indicating intra prediction 
([0106] For a skip-mode block, the decoder (600) uses the values of the prediction (658) as the reconstruction (638). [0108] For intra-picture prediction, the values of the reconstruction (638) can be fed back to the intra-picture predictor (645).),
Tu discloses in case that the value of the picture region flag is a second value different from the first value ([0074] the decoder (340) parses skipped/not skipped status information for the macroblocks from the enhancement layer bit stream (304)), using a second decoding method different from the first decoding method to generate the decoded representation from the bitstream ([0074] For skipped macroblocks in an inter-coded picture of the inter-layer residual video, the decoder (340) switches between using an intra skip mode and a predicted-motion skip mode. For the intra skip mode, the decoder (340) uses defined intra skip values (e.g., zero or another selected value that results in zero values after inverse remapping) for the skipped macroblock.), wherein the second decoding method includes setting values of pixels in the picture region to predetermined values upon a type of the picture region indicating intra prediction, wherein the predetermined values are a fixed value ([0074] For the intra skip mode, the decoder (340) uses defined intra skip values (e.g., zero or another selected value that results in zero values after inverse remapping) for the skipped macroblock. [0075] For the zero skip mode, the decoder (340) uses defined zero skip values (e.g., zero or another selected value that results in zero values after inverse remapping) for the skipped channel. [0063] For the zero skip mode, the encoder (240) uses defined zero skip values for the skipped channel. The defined zero skip values can simply be zero.) independent of availability of a reference picture for the picture region ([0073] during intra-decoding of the inter-layer residual video, motion compensation is bypassed [0074] The intra skip mode is used for skipped macroblocks in an intra-coded picture of the inter-layer residual video. 
For the intra skip mode, the decoder uses defined intra skip values (e.g., zero or another selected value that results in zero values after inverse remapping) for the skipped macroblock. [0104 and Fig. 9] the decoder selects a skip mode for the current MB if skipped/not skipped status information from a bit stream indicates a skipped MB. [0137] For intra-coded inter-layer residual video content, an encoder and decoder have a single skip mode--the intra-skip mode. A given macroblock in the intra-coded content can be skipped using the intra-skip mode or not skipped.).
Tu discloses that motion compensation is bypassed for intra-coding and that, when the decoding mode is “intra skip mode” and the picture is intra-coded, the decoder uses defined intra skip values. Thus, one of ordinary skill in the art would understand that performing the intra skip mode on an intra-coded picture uses defined intra skip values regardless of the availability of reference pictures, because an intra-coded picture does not use reference pictures in the intra decoding process [Official Notice].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Kazui with the explicit flag for indication of significance (skipping) as taught by Yamamoto in order to constrain the bitstream's conformance cropping window to include only non-skipped tiles [See Yamamoto, Abstract]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Kazui and Yamamoto with the use of the values of the prediction for the reconstruction of an intra-prediction skip-mode block as taught by Li in order to improve coding efficiency for intra-BC-predicted blocks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Kazui, Yamamoto, and Li with the known teachings of encoding and decoding for the intra skip mode using defined zero skip values, wherein the values can all be zero, as taught by Tu, in order to help improve the efficiency of encoding inter-layer residual video by allowing macroblock skip modes for intra-coded inter-layer residual video content [See Tu, 0031], and because using efficient skip modes to represent common patterns of values in the inter-layer residual video improves the enhancement layer encoder's rate-distortion performance [See Tu, 0032].
In regards to claim 2, the limitations of claim 1 have been addressed. Kazui discloses wherein a type of the picture region indicates inter prediction ([0128] enhancement layer encoder executes the inter-predictive coding [0038] the picture encoding is skipped for the enhancement layer) and wherein the second decoding method includes setting values of pixels in the picture region equal to values of co-located pixels in a reference picture of the picture region ([0086] In addition, regarding a block for which the encoding has been skipped, the base layer decoder 22 may copy a block, which is included in a decoded immediately preceding picture and located at the same position as the block for which the encoding has been skipped, to the block for which the encoding has been skipped. [0087] Then, the base layer decoder 22 synthesizes the reproduced blocks with each other in the order of the blocks to be encoded for each of the color-difference components, thereby reproducing the color-difference components. The base layer decoder 22 causes the reproduced color-difference components to be stored in the buffer 20.).
In regards to claim 3, the limitations of claim 1 have been addressed. Kazui discloses wherein a type of the picture region indicates inter prediction and a reference picture does not exist, and wherein the second decoding method includes setting values of pixels in the picture region equal to a predetermined value ([0061] According to another modified example, the base layer encoder 13 may compare, for each CU or CTU, the sum of absolute values of differential values between pixels corresponding to each other with a predetermined threshold other than 0, and skip the encoding for a CU or CTU whose sum of absolute values of differential values is smaller than the predetermined threshold. The predetermined threshold may be set to, for example, a value that makes acceptable the degradation, caused by the replacement of any pixel value within the CU or CTU with a pixel value of a past picture, of an image quality. Thus, since the number of blocks for which the encoding is skipped increases, the video image encoding device 1 may reduce the amount of information to be generated due to the encoding. [0062] The enlarger 14 upsamples locally decoded images of two color-difference components of each of the simultaneous encoding pictures in the base layer, thereby generating locally decoded images (hereinafter referred to as enlarged locally decoded images) to be referenced upon the encoding of each of the color-difference components of the enhancement layer.).
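The second decoding method, as mapped to claims 1-3 above, can be summarized in the following sketch. The function name, the flag encoding, and the picture representation are hypothetical illustrations, not taken from any cited reference:

```python
# Illustrative sketch only; hypothetical names and flag encoding, not
# drawn from Kazui, Yamamoto, Li, or Tu. A region-level flag selects
# a decoding method; a skipped intra-type region is filled with a
# fixed predetermined value regardless of reference-picture
# availability.

FIXED_VALUE = 0  # the "predetermined value" (e.g., zero)

def decode_region(flag, region_type, width, height, reference=None):
    """Decode one picture region of width x height samples."""
    if flag == 0:
        # First decoding method: normal intra/inter decoding of the
        # region's coded bits (omitted from this sketch).
        raise NotImplementedError("normal decoding path omitted")
    # Second decoding method: the region is skipped.
    if region_type == "intra":
        # Claim 1 mapping: fixed value, independent of whether a
        # reference picture is available.
        return [[FIXED_VALUE] * width for _ in range(height)]
    if reference is not None:
        # Claim 2 mapping: copy co-located pixels from the
        # reference picture.
        return [row[:] for row in reference]
    # Claim 3 mapping: an inter-type region with no reference
    # picture falls back to the predetermined value.
    return [[FIXED_VALUE] * width for _ in range(height)]
```

Note that the intra branch never inspects `reference` at all, which is the sense in which the predetermined values are independent of reference-picture availability.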
In regards to claim 5, the limitations of claim 1 have been addressed. Kazui discloses wherein the first decoding method includes using intra decoding or inter decoding of corresponding bits from the bitstream ([0085] the target block in non-skip encoding can be either intra-predictive coding or inter-predictive coding).
In regards to claim 6, the limitations of claim 1 have been addressed. Kazui discloses wherein N is greater than 1 ([0061] According to a modified example, the base layer encoder 13 may determine whether or not the encoding is to be skipped for each of CTUs of the pictures other than the simultaneous encoding pictures. In this case, the CTUs are another example of blocks.).
In regards to claim 7, the limitations of claim 6 have been addressed. Kazui discloses wherein a first picture block in the picture region is coded using a coding mode that is different from that of a second picture block in the picture region, wherein the coding mode is an inter-prediction coding mode or an intra-prediction coding mode ([0061] According to a modified example, the base layer encoder 13 may determine whether or not the encoding is to be skipped for each of CTUs of the pictures other than the simultaneous encoding pictures. In this case, the CTUs are another example of blocks. [0085] the target block in non-skip encoding can be either intra-predictive coding or inter-predictive coding).
In regards to claim 13, the limitations of claim 12 have been addressed. Kazui discloses wherein the first coding method includes intra coding ([0085] the target block in non-skip encoding can be either intra-predictive coding or inter-predictive coding).
In regards to claim 15, the limitations of claim 12 have been addressed. Kazui discloses wherein the first coding method codes the N picture blocks and writes a coding bit of the N picture blocks into a bitstream ([0059] Thus, the video image encoding device 1 may execute the non-skip encoding on a CU included in a motion region and skip the encoding on other CUs, thereby reducing the amount of information to be generated due to the encoding. [0085] In addition, the base layer decoder 22 identifies, from the header information, a coding mode applied to the blocks subjected to the non-skip encoding.).
In regards to claim 16, the limitations of claim 12 have been addressed. Kazui discloses wherein the second coding method skips coding the N picture blocks and writing a coding bit of the N picture blocks into a bitstream ([0059] Thus, the video image encoding device 1 may execute the non-skip encoding on a CU included in a motion region and skip the encoding on other CUs, thereby reducing the amount of information to be generated due to the encoding. [0085] In addition, the base layer decoder 22 identifies, from the header information, a coding mode applied to the blocks subjected to the non-skip encoding.).
In regards to claim 17, the limitations of claim 12 have been addressed. Kazui discloses wherein N is greater than 1 ([0061] According to a modified example, the base layer encoder 13 may determine whether or not the encoding is to be skipped for each of CTUs of the pictures other than the simultaneous encoding pictures. In this case, the CTUs are another example of blocks.).
In regards to claim 19, the limitations of claim 1 have been addressed. Kazui discloses wherein the picture region is one of a plurality of picture regions that together form a picture [Fig. 2],
Yamamoto discloses wherein the picture region is one of a plurality of picture regions that together form a picture, and wherein the picture regions in the plurality of picture regions are non-overlapping with each other ([Section 3.4] slice segment with tiles in CTUs).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Kazui with the teachings of Yamamoto in order to constrain the bitstream's conformance cropping window to include only non-skipped tiles [See Yamamoto, Abstract].
Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kazui in view of Yamamoto in further view of Li in even further view of Tu in even further view of Chan et al. (Hereafter, “Chan”) [US 2014/0002594 A1].
In regards to claim 18, the limitations of claim 12 have been addressed. Kazui fails to explicitly disclose wherein the coding criterion is dependent on a current viewport information of the picture.
Chan discloses wherein the coding criterion is dependent on a current viewport information of the picture ([0014] inter-prediction skip mode for coding texture views).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Kazui, Yamamoto, Li, and Tu with the teachings of a skip mode for coding texture views as taught by Chan in order to improve coding efficiency.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kaitlin A Retallick whose telephone number is (571)270-3841. The examiner can normally be reached Monday-Friday 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Kelley can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KAITLIN A RETALLICK/Primary Examiner, Art Unit 2482