DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 8/18/2025 have been fully considered but they are not persuasive.
Applicant argues that the combination of Wang and Wang III does not explicitly teach “…each coded tile group comprises a group identifier (ID) and a plurality of coded tiles.”
In response, the Examiner respectfully disagrees. Wang teaches that destination device 14 (e.g., decapsulation unit 32) receives the bitstream (168). In some examples, destination device 14 receives the bitstream through source device 12 and destination device 14 communicating with each other. In various examples, destination device 14 may receive the bitstream generated by source device 12 at some time after the generation via, for example, a server 37 or medium 36, as discussed above.
After receipt of the bitstream, destination device 14 (e.g., decapsulation unit 32) may determine the grouping of tiles based on syntax elements in the bitstream that indicate how the tiles of a picture have been grouped (170). Example syntax elements for this purpose are described in greater detail below.
Destination device 14 (e.g., decapsulation unit 32) directs the encoded video data to video decoder 34 based on the syntax elements indicating the grouping of tiles (172). For example, in examples in which video decoder 34 comprises a plurality of parallel processing cores, decapsulation unit 32 may provide encoded video data associated with different tile groups to different ones of the parallel processing cores. As another example, decapsulation unit 32 may provide encoded video data associated with a tile group that covers an ROI to video decoder 34, and discard (or otherwise differentially provide to the video decoder) video data associated with one or more tile groups that do not cover the ROI. Destination device 14 (e.g., video decoder 34) decodes the video data according to the partitioning of the picture into tiles (174). For example, destination device 14 (e.g., video decoder 34) may decode the video data according to whether intra-prediction was allowed across tile boundaries, across tile boundaries only within tile groups, from outside of a tile group to within the tile group, or from within a tile group to outside of a tile group. As another example, destination device 14 (e.g., video decoder 34) may decode the video data in an order established by the grouping of tiles. In one example, the decoding order is from the tile group with the smallest tile group ID to the largest tile group ID, while in another the decoding order is from largest tile group ID to smallest. Tile group ID is an example syntax element discussed in greater detail below. In some examples, the decoding order for tile groups is explicitly signaled in the coded video bitstream, e.g., by encapsulation unit 22.
Examples of syntax, semantics and coding for the tile grouping techniques described herein are now provided (for those syntax elements that are not removed and for which no semantics are provided, their semantics may be the same as or similar to the pertinent syntax and semantics presented in JCTVC-F335). In some examples, each tile in a picture is associated with a tile ID, which is equal to the index to the list of all tiles in the picture in tile raster scan order, starting from 0 (for the top-left tile). Tiles are assigned to one or more tile groups, identified by unsigned integer tile group ID values starting from 0. In some examples, tiles are decoded in order according to the tile group ID value, e.g., from smallest to largest tile group ID value. [0120] – [0121].
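For illustration only (this sketch is not part of the cited references' disclosure, and all names are hypothetical), the tile-ID assignment and group-ordered decoding described in Wang [0120] – [0121] could be modeled as follows:

```python
# Illustrative sketch of the tile ID / tile group ID scheme described in
# Wang [0120]-[0121]. Function and variable names are hypothetical.

def decode_order_by_group(tile_group_map):
    """tile_group_map[i] is the tile group ID of the tile whose tile ID is i
    (tile IDs index the tiles of a picture in raster scan order, from 0).
    Returns tile IDs ordered from smallest to largest tile group ID value,
    with ties broken by tile ID (raster order within a group)."""
    return sorted(range(len(tile_group_map)),
                  key=lambda tile_id: (tile_group_map[tile_id], tile_id))

# Example: a 2x2 picture with two tile groups in a "checkerboard" pattern
# (cf. Wang FIG. 3). Group 0 tiles decode before group 1 tiles.
order = decode_order_by_group([0, 1, 1, 0])
# order == [0, 3, 1, 2]
```

The reverse order (largest to smallest tile group ID), also contemplated by Wang, would simply negate or reverse the group component of the sort key.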
FIG. 3 is a conceptual diagram illustrating another example picture 80 that is partitioned into a plurality of tiles, where each of the tiles is assigned to one of two tile groups. In the illustrated example, picture 80 is partitioned into seventy-two tiles by eight vertical tile boundaries and seven horizontal tile boundaries (in addition to the two vertical and two horizontal picture boundaries). Each of the hatched or empty squares illustrated in FIG. 3 is a tile. Of the seventy-two tiles, tiles 82A-E of a first tile group and tiles 84A-D of a second tile group are labeled for ease of illustration. As illustrated in FIG. 3, the tiles of the two groups alternate in a "checkerboard" pattern. Like the groupings illustrated in FIGS. 2A-2D, the grouping of tiles illustrated in FIG. 3 may provide advantages with respect to parallelization efficiency and error resilience. [0073].
Applicant argues that the combination of Wang and Wang III does not explicitly teach “…responsive to the indicator having a first value, decoding the one or more coded tiles in the first coded tile group in a predetermined order, wherein the decoding comprises performing loop filtering operations across tile group boundary between the first coded tile group and a second coded tile group received in the bitstream.”
In response, the Examiner respectfully disagrees. Wang III teaches “in response to the value indicating that the loop filtering operations are not allowed across tile boundaries (304, no), the video coder may code the tiles without performing loop filtering operations on a boundary between tiles of at least one of the pictures (306). Loop filter may be disallowed, for example, in instances where it is desirable to code two or more tiles in parallel. In response to the value indicating that the loop filtering operations are allowed (304, yes), then the video coder may optionally code values representative of one or more boundaries for which the loop filtering operations are (or are not) allowed (308). The video coder may, for example, code a series of flags, with each flag corresponding to a particular boundary, and the value of flag indicating if cross-tile-boundary loop filtering is allowed or disallowed for each boundary. The video coder may also code explicit indications of for which boundaries cross-tile-boundary loop filtering operations are allowed (or not allowed). The explicit indication may, for example, include an index of one or more tiles on the boundary. The video coder may perform the loop filtering operations on at least one boundary between tiles of at least one of the pictures (310).” [0137] – [0138].
FIG. 14 shows a flowchart depicting an example method of controlling loop filtering across tile boundaries according to this disclosure. The techniques shown in FIG. 14 may be implemented by either video encoder 20 or video decoder 30 (generally by a video coder). A video coder may be configured to code, for one or more pictures of video data that are partitioned into tiles, a value representative of whether loop filtering operations are allowed across tile boundaries within the pictures (310). The value may, for example, be one of three possible values, where a first value indicates loop filtering is not allowed across all tile boundaries, a second value indicates loop filtering is allowed across all tile boundaries, and a third value indicates that separate syntax elements for horizontal boundaries and vertical boundaries will be coded separately. In response to the value indicating that the loop filtering operations are not allowed across tile boundaries (312, no), then the video coder may code the tiles without performing the loop filtering operations across boundaries between tiles of at least one of the pictures (314). In response to the value indicating that the loop filtering operations are allowed across all tile boundaries (316, yes), then the video coder may perform the loop filtering operations across at least one of a horizontal tile boundary and a vertical tile boundary (318).
In response to the value indicating that the loop filtering operations are neither disallowed across all tile boundaries nor allowed across all tile boundaries (316, no), then the video coder may code a second value indicating if loop filtering operations are allowed across a tile boundary in the horizontal direction (320). The video coder may also code a third value indicating if loop filtering operations are allowed across a tile boundary in a vertical direction (322). Based on the second and third values, the video coder may perform filtering operations across a horizontal boundary between tiles, a vertical boundary between tiles, or both (324). [0139] – [0140].
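For illustration only (not part of the record; constant and function names are hypothetical), the three-valued control flow of Wang III [0139] – [0140] and FIG. 14 could be modeled as:

```python
# Illustrative sketch of the three-valued loop filter syntax element
# described in Wang III [0139]-[0140]. Names are hypothetical.
DISALLOWED_ALL = 0   # no loop filtering across any tile boundary
ALLOWED_ALL    = 1   # loop filtering allowed across all tile boundaries
PER_DIRECTION  = 2   # separate horizontal and vertical flags follow

def boundaries_to_filter(first_value, horiz_flag=False, vert_flag=False):
    """Return the set of tile boundary directions on which the video coder
    may perform loop filtering, per the flowchart of Wang III FIG. 14."""
    if first_value == DISALLOWED_ALL:
        # Code the tiles without cross-boundary loop filtering (314).
        return set()
    if first_value == ALLOWED_ALL:
        # Filter across horizontal and/or vertical tile boundaries (318).
        return {"horizontal", "vertical"}
    # PER_DIRECTION: consult the second and third values (320)-(324).
    dirs = set()
    if horiz_flag:
        dirs.add("horizontal")
    if vert_flag:
        dirs.add("vertical")
    return dirs
```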
In some video coders, the first value for the first syntax element may indicate that loop filtering operations are allowed across all tile boundaries within the picture, while in other video coders the first value for the first syntax element may indicate that an additional syntax element will be used to identify boundaries for which cross-tile-boundary loop filtering operations are allowed (or disallowed). In video coders where the first value indicates that an additional syntax element will be used to identify such boundaries, the video coder may code a value representative of a horizontal boundary for which the loop filtering operations are allowed and/or code a value representative of a horizontal boundary for which the loop filtering operations are not allowed. The video coder may code a value representative of a vertical boundary for which the loop filtering operations are allowed and/or code a value representative of a vertical boundary for which the loop filtering operations are not allowed. [0142].
In video coders where the first value indicates that an additional syntax element will be used to identify boundaries for which cross-tile-boundary loop filtering operations are allowed (or disallowed), the video coder may code a third value for the first syntax element to indicate that loop filtering operations are allowed across all tile boundaries within the picture. [0144].
Wang III further teaches a value may be signaled indicating whether loop filtering operations are allowed across tile boundaries, e.g., for one or more particular boundaries or for all tiles within a frame or within a sequence. [0038].
As part of controlling loop filtering, video encoder 20 may include in a coded bitstream a value for a syntax element indicating if loop filtering is enabled across tile boundaries, e.g., for one or more particular boundaries or for all tiles within a frame or within a sequence. [0072].
Wang III teaches a value indicating loop filtering is enabled across all tiles within a frame or a picture.
Wang teaches grouping tiles into tile groups, with a tile group ID assigned to each tile group in the picture. The combination of Wang and Wang III would yield a system in which the tiles in a picture or a frame are grouped into a plurality of tile groups and each tile group is assigned an ID. The combination would also comprise a value indicating whether loop filtering operations are allowed across tile boundaries for all tiles within the picture or the frame. Because the value covers the tile boundaries of all tiles within the picture or the frame, it also covers the tile group boundaries.
Applicant argues that Wang teaches away from performing operations across tile group boundaries.
In response, the Examiner respectfully disagrees. The combination of Wang and Wang III does not teach away from the current application. Wang discloses coding video data including techniques for coding pictures partitioned into tiles. [0006]. Wang III discloses loop filtering operations for video coding including controlling loop filtering operations at the boundaries of tiles within pictures of video data. [0008]. The current application is related to video encoding and decoding techniques, which are useful for encoding and decoding a picture partitioned into picture segments referred to as “tiles.” [0006]. Wang, Wang III, and the current application are related to encoding and decoding techniques that are related to pictures partitioned into tiles. Thus, Wang and Wang III are analogous arts.
Further, the motivation to combine Wang and Wang III is that the combination would allow loop filtering across tile boundaries to be enabled when it will improve quality and to be disabled when it may be desirable to enable parallel decoding of slices. [0029] of Wang III.
Even further, Wang teaches that, in some examples, source device 12 may allow limited in-picture prediction across tile boundaries, e.g., based upon the assignment of tiles to tile groups. For example, mode select unit 110 may, in some examples, disallow in-picture prediction from a region covered by a tile group to a region not covered by the tile group, while allowing in-picture prediction from a region not covered by the tile group to the region covered by the tile group. In other examples, mode select unit 110 may allow in-picture prediction from a region covered by a tile group to a region not covered by the tile group, while disallowing in-picture prediction from a region not covered by the tile group to the region covered by the tile group. [0115]. Thus, Wang suggests in-picture prediction from one tile group to another tile group.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-5 and 20-26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2013/0101035 A1) in view of Wang et al. (US 2013/0107973 A1) (hereafter “Wang III”).
Consider claim 1, Wang teaches a method of decoding a picture (destination device (e.g., video decoder) decodes the video data…. [0120] – [0123]), the method comprising: receiving a bit stream (destination device receives the bitstream. [0120]) comprising two or more coded tile groups (after receiving the bitstream, destination device may determine the grouping of tiles…. [0120] – [0123]), wherein each coded tile group comprises a group identifier (ID) and a plurality of coded tiles (each tile in a picture is associated with a tile ID…. [0120] – [0123] and [0073]); obtaining a first coded tile group from the bit stream as a single entity (destination device (e.g., decapsulation unit) directs the encoded video data to the video decoder based on the syntax elements indicating the grouping of tiles. [0120] – [0123]); and decoding the one or more coded tiles in the first coded tile group in a predetermined order (destination device may decode the video data in an order established by the grouping of tiles. In one example, the decoding order is from the tile group with the smallest tile group ID to the largest tile group ID, while in another the decoding order is from the largest tile group ID to smallest. Each tile in a picture is associated with a tile ID, which is equal to the index to the list of all tiles in the picture in tile raster scan order, starting from 0 (for the top-left tile). [0120] – [0123]).
However, Wang does not explicitly teach obtaining an indicator from the bitstream, the indicator specifying whether in-loop filtering operations may be performed across tile group boundaries; responsive to the indicator having a first value, decoding the one or more coded tiles in the first coded tile group in a predetermined order, wherein the decoding comprises performing loop filtering operations across the group boundaries.
Wang III teaches obtaining an indicator from the bitstream, the indicator specifying whether in-loop filtering operations may be performed across tile group boundaries of different coded tile groups ([0136] – [0145] and Fig. 13-14); responsive to the indicator having a first value, decoding the one or more coded tiles in the first coded tile group in a predetermined order ([0127] – [0135], [0136] – [0145] and Fig. 13-14), wherein the decoding comprises performing loop filtering operations across a tile group boundary between the first coded tile group and a second coded tile group received in the bitstream ([0127] – [0135], [0136] – [0145] and Fig. 13-14; [0038], [0072]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of controlling loop filtering operations at tile boundaries because such incorporation allows loop filtering across tile boundaries to be enabled when it will improve coding quality, but also allows it to be disabled when it may be desirable to enable parallel decoding of slices. [0029].
Consider claim 2, Wang teaches decoding the plurality of coded tiles in the first coded tile group in a predetermined order comprises scanning the plurality of coded tiles in the first coded tile group contiguously (destination device may decode the video data in an order established by the grouping of tiles. In one example, the decoding order is from the tile group with the smallest tile group ID to the largest tile group ID, while in another the decoding order is from the largest tile group ID to smallest. Each tile in a picture is associated with a tile ID, which is equal to the index to the list of all tiles in the picture in tile raster scan order, starting from 0 (for the top-left tile). [0120] – [0123]. At the start of each slice, decapsulation unit may create a tile to tile group map (TileGroupMap) based on the syntax provided by the active SPS and PPS. TileGroupMap may comprise NumTiles values, each corresponding to the tile group ID value of one tile, indexed in tile raster scan order. TileGroupMap is the same for all slices of a picture. [0138]. See also [0073]).
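For illustration only (not part of Wang's disclosure; the function name and the checkerboard assignment rule are hypothetical), a TileGroupMap of the kind described in Wang [0138] could be built as:

```python
# Illustrative sketch of a TileGroupMap per Wang [0138]: NumTiles values,
# each giving the tile group ID of one tile, indexed in tile raster scan
# order. The checkerboard rule mirrors Wang FIG. 3; names are hypothetical.

def build_checkerboard_map(cols, rows):
    """Assign each tile to group 0 or 1 in a checkerboard pattern,
    indexed in tile raster scan order (left to right, top to bottom)."""
    return [(r + c) % 2 for r in range(rows) for c in range(cols)]

# 9 columns x 8 rows = seventy-two tiles, as in the picture of FIG. 3.
tile_group_map = build_checkerboard_map(9, 8)
```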
Consider claim 3, Wang teaches the plurality of coded tiles in the first coded tile group are decoded prior to decoding the plurality of coded tiles in a second coded tile group received in the bit stream (Each tile in a picture is associated with a tile ID, which is equal to the index to the list of all tiles in the picture in tile raster scan order, starting from 0 (for the top-left tile). Tiles are assigned to one or more tile groups, identified by unsigned integer tile group ID values starting from 0. In some examples, tiles are decoded in order according to the tile group ID value, e.g., from smallest to largest tile group ID value. [0120] – [0123] and [0073]).
Consider claim 4, Wang teaches decoding each of the two or more coded tile groups in group ID order (Each tile in a picture is associated with a tile ID, which is equal to the index to the list of all tiles in the picture in tile raster scan order, starting from 0 (for the top-left tile). Tiles are assigned to one or more tile groups, identified by unsigned integer tile group ID values starting from 0. In some examples, tiles are decoded in order according to the tile group ID value, e.g., from smallest to largest tile group ID value. [0120] – [0123] and [0073]).
Consider claim 5, Wang teaches receiving an order indicator indicating how the coded tiles in each of the two or more coded tile groups are to be scanned during decoding ([0124] – [0130]); and scanning the plurality of coded tiles in the two or more coded tile groups according to the order indicator ([0124] – [0130]).
Consider claim 20, Wang teaches a decoder (destination device (e.g., video decoder) decodes the video data…. [0120] – [0123]) comprising: communications interface circuitry (receiver in Fig. 1) configured to receive a bit stream (destination device receives the bitstream. [0120]) comprising two or more coded tile groups from an encoder (after receiving the bitstream, destination device may determine the grouping of tiles…. [0120] – [0123]), wherein each coded tile group comprises a group identifier (ID) and a plurality of coded tiles (each tile in a picture is associated with a tile ID…. [0120] – [0123] and [0073]); and processing circuitry configured to: obtain a first coded tile group from the bit stream as a single entity (destination device (e.g., decapsulation unit) directs the encoded video data to the video decoder based on the syntax elements indicating the grouping of tiles. [0120] – [0123]); and decode the plurality of coded tiles in the first coded tile group in a predetermined order (destination device may decode the video data in an order established by the grouping of tiles. In one example, the decoding order is from the tile group with the smallest tile group ID to the largest tile group ID, while in another the decoding order is from the largest tile group ID to smallest. Each tile in a picture is associated with a tile ID, which is equal to the index to the list of all tiles in the picture in tile raster scan order, starting from 0 (for the top-left tile). [0120] – [0123]).
However, Wang does not explicitly teach obtaining an indicator from the bitstream, the indicator specifying whether in-loop filtering operations may be performed across tile group boundaries; responsive to the indicator having a first value, decoding the one or more coded tiles in the first coded tile group in a predetermined order, wherein the decoding comprises performing loop filtering operations across the group boundaries.
Wang III teaches obtaining an indicator from the bitstream, the indicator specifying whether in-loop filtering operations may be performed across tile group boundaries of different coded tile groups ([0136] – [0145] and Fig. 13-14); responsive to the indicator having a first value, decoding the plurality of coded tiles in the first coded tile group in a predetermined order ([0127] – [0135], [0136] – [0145] and Fig. 13-14), wherein the decoding comprises performing loop filtering operations across the group boundary between the first coded tile group and a second coded tile group ([0127] – [0135], [0136] – [0145] and Fig. 13-14, [0038] and [0072]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of controlling loop filtering operations at tile boundaries because such incorporation allows loop filtering across tile boundaries to be enabled when it will improve coding quality, but also allows it to be disabled when it may be desirable to enable parallel decoding of slices. [0029].
Consider claim 21, claim 21 recites a non-transitory computer-readable storage medium having executable instructions stored thereon ([0059] of Wang) that, when executed by a processing circuit in a decoder, cause the decoder to perform the method recited in claim 1. Thus, it is rejected for the same reasons.
Consider claim 22, Wang III teaches the indicator is a flag ([0136]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of controlling loop filtering operations at tile boundaries because such incorporation allows loop filtering across tile boundaries to be enabled when it will improve coding quality, but also allows it to be disabled when it may be desirable to enable parallel decoding of slices. [0029].
Consider claim 23, Wang III teaches the indicator is signaled per tile group ([0128] – [0129], Tables 1 & 2, and [0136]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of controlling loop filtering operations at tile boundaries because such incorporation allows loop filtering across tile boundaries to be enabled when it will improve coding quality, but also allows it to be disabled when it may be desirable to enable parallel decoding of slices. [0029].
Consider claim 24, Wang III teaches one or more of the coded tile groups in the picture is dependent on the content of other coded tile groups in the picture ([0031], [0065], [0101] – [0106], [0134]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of controlling loop filtering operations at tile boundaries because such incorporation allows loop filtering across tile boundaries to be enabled when it will improve coding quality, but also allows it to be disabled when it may be desirable to enable parallel decoding of slices. [0029].
Consider claim 25, Wang III teaches the in-loop filtering operations comprise at least one of a deblocking filter operation and a sample adaptive offset filter operation ([0138]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of controlling loop filtering operations at tile boundaries because such incorporation allows loop filtering across tile boundaries to be enabled when it will improve coding quality, but also allows it to be disabled when it may be desirable to enable parallel decoding of slices. [0029].
Consider claim 26, Wang III teaches obtaining a second indicator from the bitstream before obtaining the indicator, wherein the second indicator specifies whether in-loop filtering operations may be performed across tile group boundaries between two coded tile groups ([0136] – [0145] and Fig. 14); and in response to the second indicator indicating that in-loop filtering operations may be performed across tile group boundaries between two coded tile groups, determining that loop filtering operations are disabled also across tile group boundaries ([0136] – [0145] and Fig. 14).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of controlling loop filtering operations at tile boundaries because such incorporation allows loop filtering across tile boundaries to be enabled when it will improve coding quality, but also allows it to be disabled when it may be desirable to enable parallel decoding of slices. [0029].
Claim 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2013/0101035 A1) in view of Wang et al. (US 2013/0107973 A1) (hereafter “Wang III”) and Curcio et al. (US 2018/0249163 A1).
Consider claim 14, the combination of Wang and Wang III teaches all the limitations in claim 1 but does not explicitly teach at least one of the plurality of coded picture segment groups comprises a cube map, with each face of the cube map representing a corresponding one of the plurality of coded picture segment groups.
Curcio teaches at least one of the two or more coded picture segment groups comprises a cube map, with each face of the cube map representing a corresponding one of the two or more coded picture segment groups (Input images are stitched and projected onto a three-dimensional projection structure, such as a sphere or a cube. There may be a pre-defined set of representation formats of the projected frame, including a cube map representation format. [0089] – [0092]. In tile rectangle based encoding and streaming, each cube face may be separately encoded and encapsulated in its own track (and representation). More than one encoded bit stream for each cube face may be provided, each with different spatial resolution. Players can choose tracks (or representations) to be decoded and played based on the current viewing orientation. High resolution tracks (or representations) may be selected for the cube faces used for rendering of the present viewing orientation, while the remaining cube faces may be obtained from their corresponding low-resolution tracks (or representations). [0101] – [0111]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of projecting the picture onto a cube map and separately encoding each cube face in its own track because such incorporation would allow the players to choose tracks to be decoded and played based on the current viewing orientation. [0106].
Claims 15-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2013/0101035 A1) in view of Wang et al. (US 2013/0107973 A1) (hereafter “Wang III”) and Huang et al. (US 2019/0273923 A1).
Consider claim 15, the combination of Wang and Wang III teaches all the limitations in claim 1 but does not explicitly teach receiving, for each of the plurality of coded picture segment groups, a corresponding delta_QP value.
Huang teaches receiving, for each of the two or more coded picture segment groups, a corresponding delta_quantization parameter (QP) value ([0044] – [0049]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of receiving a corresponding delta_QP value for each of the two or more coded picture segment groups because such incorporation would enable the derivation of a quantization parameter. [0044].
Consider claim 16, Huang teaches the delta_QP value comprises: a difference between a quantization parameter (QP) value of a reference and a QP value for the coded tile group corresponding to the delta_QP; or a difference between a QP value of a previous coded tile group and a QP value of a current coded tile group ([0044] – [0049]).
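For illustration only (not drawn from Huang; all names are hypothetical), the two delta_QP interpretations recited in claim 16 could be modeled as:

```python
# Illustrative sketch of the two delta_QP interpretations in claim 16:
# a difference from a reference QP, or a difference from the previous
# coded tile group's QP. Names are hypothetical, not from Huang.

def qp_from_reference(reference_qp, delta_qp):
    """delta_QP = group QP - reference QP, so group QP = reference + delta."""
    return reference_qp + delta_qp

def qps_from_deltas(first_qp, deltas):
    """Each delta is the current group's QP minus the previous group's QP;
    accumulate to recover the QP of every coded tile group in order."""
    qps = [first_qp]
    for d in deltas:
        qps.append(qps[-1] + d)
    return qps
```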
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of receiving a corresponding delta_QP value for each of the plurality of coded picture segment groups because such incorporation would enable the derivation of a quantization parameter. [0044].
Claim 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2013/0101035 A1) in view of Wang et al. (US 2013/0107973 A1) (hereafter “Wang III”), Huang et al. (US 2019/0273923 A1) and Coban et al. (US 2012/0328004 A1).
Consider claim 17, the combination of Wang, Wang III, and Huang teaches all the limitations in claim 15 but does not explicitly teach determining a reference QP from at least one of a sequence parameter set and a picture parameter set.
Coban teaches determining a reference QP from at least one of a sequence parameter set and a picture parameter set (To reduce an amount of reference QP data that is stored, quantization unit may subsample reference QPs from a particular area. For example, quantization unit may identify an area that includes a number of blocks having a number of associated QPs, which may be used as reference QPs during coding. Quantization unit may select one of the reference QPs as a representative QP of the area. Video encoder may signal the averaging area and/or sub-sampling selection criteria in an encoded bitstream for use by a video decoder. For example, video encoder may include an indication of the averaging area in header information (e.g., a slice header) or a parameter set (e.g., a picture parameter set (PPS) or a sequence parameter set (SPS)) of an encoded bitstream. In another example, video encoder may include an indication of which block to sub-sample when determining a reference QP in a header, parameter set, or the like. [0108] – [0109]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of determining a reference QP from at least one of a sequence parameter set and a picture parameter set because such incorporation would reduce an amount of reference QP data that is stored. [0108] – [0109].
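For illustration only, the parameter-set signaling described in Coban above can be sketched as follows. This is a minimal sketch with hypothetical structures (the dictionaries and field name are not from any cited reference): a reference QP is resolved from the SPS, with a PPS value, when present, taking precedence, mirroring the usual SPS/PPS override relationship.

```python
# Illustrative sketch (hypothetical structures, not from any cited
# reference): resolving a reference QP from a sequence parameter set
# (SPS) or a picture parameter set (PPS). A value signaled in the PPS,
# when present, overrides the SPS-level value.

def resolve_reference_qp(sps: dict, pps: dict) -> int:
    """Return the reference QP, preferring the PPS over the SPS."""
    if "reference_qp" in pps:
        return pps["reference_qp"]
    return sps["reference_qp"]

print(resolve_reference_qp({"reference_qp": 26}, {}))                    # 26
print(resolve_reference_qp({"reference_qp": 26}, {"reference_qp": 30}))  # 30
```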
Claim 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2013/0101035 A1) in view of Wang et al. (US 2013/0107973 A1) (hereafter “Wang III”) and Bruls (US 2016/0295200 A1).
Consider claim 18, the combination of Wang and Wang III teaches all the limitations in claim 1 but does not explicitly teach that the one or more coded picture segments are contiguous in the bitstream and belong to a same viewpoint.
Bruls teaches the plurality of coded tiles are contiguous in the bitstream and belong to a same viewpoint ([0095] – [0097] and [0101] – [0105]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of arranging picture data such that the one or more coded picture segments in the first picture segment group are contiguous in the bitstream and belong to a same viewpoint because such incorporation would allow improved rendering of three dimensional images. [0022].
Claim 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2013/0101035 A1) in view of Wang et al. (US 2013/0107973 A1) (hereafter “Wang III”) and Wang et al. (US 2021/0211664 A1) (hereinafter “Wang IV”).
Consider claim 19, the combination of Wang and Wang III teaches all the limitations in claim 1 but does not explicitly teach that the group ID and the one or more coded picture segments comprised in the first coded picture segment group are obtained as the single entity.
Wang IV teaches the group ID and the plurality of coded tiles comprised in the first coded tile group are obtained as the single entity ([0090] – [0096]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of obtaining the group ID and the one or more coded picture segments comprised in the first coded picture segment group as the single entity because such incorporation would allow the decoder to correctly locate the video data for fast decoding, parallel processing, and other video display mechanisms. Accordingly, computing tile IDs, entry point offsets, and/or CTU addresses allows for implementation of efficient decoding and display mechanisms while reducing the size of the bitstream and hence increasing coding efficiency. [0096].
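For illustration only, the "single entity" arrangement attributed to Wang IV above can be sketched as follows. This is a minimal sketch under an assumed, hypothetical byte layout (the one-byte group ID, one-byte tile count, and two-byte tile lengths are not from any cited reference): the group ID and its coded tiles are parsed together as one unit from the bitstream.

```python
# Illustrative sketch (hypothetical bitstream layout, not from any cited
# reference): a coded tile group obtained as a single entity, where the
# group ID is immediately followed by its coded tiles.
import struct

def read_tile_group(buf: bytes, offset: int = 0):
    """Parse one tile group: a 1-byte group ID, a 1-byte tile count,
    then, for each tile, a 2-byte big-endian length and the payload."""
    group_id = buf[offset]; offset += 1
    count = buf[offset]; offset += 1
    tiles = []
    for _ in range(count):
        (length,) = struct.unpack_from(">H", buf, offset); offset += 2
        tiles.append(buf[offset:offset + length]); offset += length
    return group_id, tiles, offset

# Group ID 7 with two coded tiles (b"abc" and b"xy").
data = bytes([7, 2]) + b"\x00\x03abc" + b"\x00\x02xy"
gid, tiles, end = read_tile_group(data)
print(gid, tiles)   # 7 [b'abc', b'xy']
```

Because the group ID travels with its tiles as one parseable unit, a decoder can locate each group's video data without scanning the whole bitstream, which is the efficiency rationale cited above.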
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAT CHI CHIO whose telephone number is (571)272-9563. The examiner can normally be reached Monday-Thursday 10am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JAMIE J ATALA can be reached at 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TAT C CHIO/ Primary Examiner, Art Unit 2486