Notice of Pre-AIA or AIA Status
The present application is being examined under the pre-AIA first to invent provisions.
Claim Rejection – 35 U.S.C. § 112
The following is a quotation of 35 U.S.C. 112(b):
(B) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of pre-AIA 35 U.S.C. 112, second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3-5 and 15-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention. Claims 3 and 15 recite “for a current coding block that is a first coding block from a left end of a current row of the picture in accordance with a raster scan order, initializing the current symbol probability associated with the current portion based on the previous symbol probability as acquired in context adaptive entropy encoding the previously encoded portion up to an end of a second coding block of a preceding row of the picture”. It is noted that when the current row is the first row of the picture, there is no preceding row and, as a result, no second coding block of a preceding row. It is therefore unclear to one skilled in the art how the claimed method operates in that case. Accordingly, claims 3 and 15, and their dependent claims, are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph. These claim limitations raise a question to one skilled in the relevant art as to whether the inventor or a joint inventor (or, for pre-AIA, the inventor(s)), at the time the application was filed, had possession of the claimed invention.
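For context, the disputed limitation resembles wavefront-style CABAC context propagation, in which probability states saved after a designated coding block of the row above initialize the current row. A minimal sketch (all names hypothetical, not drawn from the claims or cited references) makes the edge case identified in the rejection concrete: the first row has no preceding row, so some fallback initialization must be defined.

```python
def init_row_contexts(row_index, saved_states, default_states):
    """Choose the symbol-probability states used to start encoding a row.

    saved_states[r] holds the probability states captured after the
    second coding block of row r was encoded (wavefront-style
    propagation, as recited for non-first rows).
    """
    if row_index == 0:
        # Edge case raised in the rejection: the first row has no
        # preceding row, hence no "second coding block of a preceding
        # row" exists; a default initialization is one possible fallback.
        return dict(default_states)
    # Non-first row: initialize from the state saved while encoding
    # the preceding row, as the claim language describes.
    return dict(saved_states[row_index - 1])
```

The sketch only illustrates why the claim, read literally, is silent for `row_index == 0`; the fallback shown is an assumption, not something the claims recite.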
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under pre-AIA 35 U.S.C. 103(a) are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).
Claims 2, 6-7, 10-11, 14, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Au (US Patent 6,646,578 B1) (“Au”) in view of Mauchly et al. (US Patent Application Publication 2007/0086528 A1) (“Mauchly”), and further in view of Demircin et al. (US Patent Application Publication 2010/0098155 A1) (“Demircin”).
Regarding claim 2, Au meets the claim limitations as follows.
A method (method) [Au: col. 1, line 7] for transmitting a data stream comprising encoded information related to a picture (video data transmission. Video data is compressed or coded for transmission by taking advantage of the spatial
redundancies within a given frame and the temporal redundancies
between successive frames) [Au: col. 1, line 17-20; Fig. 1], the method comprising: transmitting (video data is transmitted) [Au: col. 1, line 55] the data stream (The blocks are processed and compressed for transmission as the bit stream) [Au: col. 7, line 49-50] comprising the encoded information related to the picture (frame 22
is encoded by the encoder 18) [Au: col. 8, line 66-67; Please see Fig. 1], wherein:the picture is partitioned into portions (The slice 33 in the bit stream 15 contains a picture data 35 representing a sub-set of the macroblocks 24 of the complete picture 22) [Au: col. 8, line 36-38; Fig. 4], each portion including coding blocks arranged in rows and columns (Note: Please see rows and columns of blocks in Figs. 1 & 4);each row of the picture is capable of being transmitted separately; and the encoded information is encoded into the data stream (the bit stream 15 contains a picture data 35 representing a sub-set of the macroblocks 24) [Au: col. 8, line 36-38; Fig. 4] using operations including:generating a residual signal related to a current portion of the portions of the picture ((the residual (difference) blocks) [Au: col. 11, line 37] (prediction error between the source frame and the predicted frame) [Au: col. 1, line 28-29]), entropy encoding (entropy coded) [Au: col. 11, line 8] the residual signal related to the current portion of the portions (coded information of the residual (difference) blocks) [Au: col. 11, line 37], wherein the residual signal related to the current portion is to be entropy encoded ((the residual pixel data as coded by the CAVLC process) [Au: col. 8, line 65]; (The resulting prediction residuals are processed through a frequency domain transform and a quantizer that sets the values of the transform coefficients to discrete values within a pre-specified range. Further compression of the video information is realized by entropy coding the resulting quantized transform coefficients before transmission or storage of the encoded bit stream) [Au: col. 1, line 37-44] – Note: Please see the residual is calculated in Figs. 3 & 4) according to one of at least two modes, wherein the entropy encoding includes (In the H.264 standard, two different entropy-coding modes are supported) [Au: col. 
11, line 10-11]: in accordance with a first mode of the at least two modes (i.e. depending on the mode of each slice) [Au: col. 8, line 2], encoding the residual signal using context adaptive entropy encoding (i.e. The complexity of the CABAC method derives from the need to continually update a large set of context models) [Au: col. 7, line 48-50] including deriving contexts across portion boundaries ((selection from the plurality of decoding tables for subsequent Coefficient_levels is determined solely by a previous decoded coefficient_level) [Au: col. 6, line 42-44] – Note: Au discloses an entropy coding mode that uses the previous decoded coefficient level from a previous slice. Hence it uses the context decoding tables of the previous slice across the slice boundaries; (i.e. the method decodes the Coefficient_levels and Run_before using multiple variable-length decoding tables where a table is selected to decode each symbol based on the context of previously decoded symbols) [Au: col. 3, line 17-21] – Note: Au teaches that the context of a previous slice can be used for the current slice. In other words, the context across boundaries of slices has been used. It is also noted that the context table is based on probability) and initializing a current symbol probability associated with the current portion (i.e. performs an initial table selection from a primary table or a secondary table chosen from the plurality of tables) [Au: col. 19, line 36-38] depending on a saved state of a previous symbol probability of a previously encoded portion ((i.e. selection from the plurality of decoding tables for subsequent Coefficient_levels is determined solely by a previous decoded coefficient_level and an experimentally pre-determined table) [Au: col. 19, line 26-30]; (i.e. 
The first stage in the decoding process includes the parsing and decoding of the entropy coded bitstream 15 symbols that are stored in the buffer 500 to produce the syntax elements 503 used by the other decoder components) [Au: col. 10, line 48-50], and in accordance with a second mode of the at least two modes (i.e. depending on the mode of each slice) [Au: col. 8, line 2], encoding the residual signal using context adaptive entropy encoding (i.e. The complexity of the CABAC method derives from the need to continually update a large set of context models) [Au: col. 7, line 48-50] with restricting the derivation of the contexts so as to not cross the portion boundaries and initializing symbol probabilities independent of any previously encoded portion ((each slice 33 of the frame 22 is encoded by the encoder 18 (see FIG. 1), independently from the other slices 33 in the frame 22) [Au: col. 8, line 66 – col 9, line 8]; (wherein selection from the plurality of decoding tables for the first Coefficient_level is determined solely by local variables) [Au: col. 6, line 38-40] – Note: Au discloses an entropy coding mode that uses only coefficient level from local variables within the slice, but not from the other slices across the slice boundaries).
Au does not explicitly disclose the following claim limitations (Emphasis Added).
each row of the picture is capable of being transmitted separately.
However, in the same field of endeavor, Mauchly discloses the deficient claim limitations as follows:
each row of the picture is capable of being transmitted separately (The final output bitstream of a row is transmitted 55 from the bitstream splicer at the end of each row) [Mauchly: para. 0087; Figs. 1-4].
It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the teachings of Au with Mauchly to program the encoder to implement Mauchly’s method.
Therefore, the combination of Au and Mauchly will enable the coding system to be implemented in parallel [Mauchly: para. 0010].
Moreover, in the same field of endeavor, Demircin further discloses the claim limitations as follows:
deriving of contexts across portion boundaries (i.e. in some embodiments of the invention, the parallel decoding of entropy slices is structured such that information from previously decoded entropy slices may be used to estimate the initial context states for context models in subsequent entropy slices) [Demircin: para. 0024]; derivation of the contexts so as to not cross the portion boundaries (i.e. context model updates are not made across entropy slice boundaries) [Demircin: para. 0011].
It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the teachings of Au and Mauchly with Demircin to program the encoder to implement two different modes for the derivation of contexts, either restricting the derivation of the contexts within a slice or allowing the derivation of the contexts across slice boundaries.
Therefore, the combination of Au and Mauchly with Demircin will enable the video coding system to have flexibility to obtain high coding efficiency by using information across the slice boundaries [Demircin: para. 0023].
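The two entropy-encoding modes discussed above can be sketched as a single initialization choice. This is an illustrative sketch only (names hypothetical), not the method of Au, Mauchly, or Demircin: the first mode carries context state across a slice boundary, while the second initializes each slice independently.

```python
def start_slice_contexts(mode, prev_slice_states, default_states):
    """Return the context (symbol-probability) states used at the
    start of a slice, under one of two modes.

    mode 1: derive contexts across slice boundaries, initializing from
            the saved state of the previously encoded slice.
    mode 2: do not cross slice boundaries; initialize independently of
            any previously encoded slice.
    """
    if mode == 1 and prev_slice_states is not None:
        # Carry the saved probability states across the boundary.
        return dict(prev_slice_states)
    # Slice-independent initialization (also used when no previous
    # slice exists, e.g. the first slice of a picture).
    return dict(default_states)
```

The design point the rejection turns on is simply which dictionary seeds the slice: the cross-boundary mode trades independence (and parallelism) for better-adapted initial probabilities.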
Regarding claim 6, Au meets the claim limitations as set forth in claim 2. Au further meets the claim limitations as follows.
generating prediction parameters related to the current portion of the portions of the picture based on a prediction signal (Depending on the coding mode of each macroblock 24, the predicted macroblock 24c can be generated either temporally (inter coding) or spatially (intra coding). The prediction for an inter coded macroblock 24c is determined by the motion vectors 38 that are associated with that macroblock 24c. The motion vectors 38 indicate the position within the set of previously decoded frames 22 from which each block of pixels will be predicted) [Au: col. 11, line 41-48; Figs. 1-4]; and
encoding the prediction parameters into the data stream (Motion vectors 38 are coded using either median or directional prediction, depending on the partition that is used) [Au: col. 11, line 58-59; Figs. 1-4].
Regarding claim 7, Au meets the claim limitations as set forth in claim 2. Au further meets the claim limitations as follows.
each portion of the portions is a slice or slice segment (a slice 33 contains the macroblocks 24) [Au: col. 7, line 31-32];
the current portion includes information indicating a position within the picture at which entropy encoding of the current portion begins (Each of the slices 33 has the slice header 27 that provides information, such as but not limited to the position of the respective slice 33 in the frame 22) [Au: col. 9, line 1-4]; and
the operations further comprise associating each portion with a continuous subset of the coding blocks in a raster scan order so that subsets of the coding blocks follow each other along the raster scan order in accordance with a slice order (i.e. The slice 33 in the bit stream 15 contains a picture data 35 representing a sub-set of the macroblocks 24 of the complete picture 22. The macroblocks 24 in a slice 33 are ordered in raster scan order. The coded slice 33 includes the slice header 27 and the slice data 35 (coded macro blocks 24). The slice header 27 contains a coded representation of data elements 35 that pertain to the decoding of the slice data that follow the slice header 27.) [Au: col. 8, line 36-48].
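The raster-scan slice association described above has a simple structure: block addresses are enumerated row by row, and each slice takes the next consecutive run of addresses. A minimal sketch (all names hypothetical) of that partitioning:

```python
def slices_in_raster_order(num_rows, num_cols, blocks_per_slice):
    """Partition block addresses, enumerated in raster-scan order
    (left to right, top to bottom), into consecutive slices so that
    the slices' subsets follow one another along the scan order."""
    addresses = [r * num_cols + c
                 for r in range(num_rows) for c in range(num_cols)]
    # Each slice is a continuous subset of the raster-scan sequence.
    return [addresses[i:i + blocks_per_slice]
            for i in range(0, len(addresses), blocks_per_slice)]
```

For a 2x3 picture with three blocks per slice this yields two slices, `[0, 1, 2]` and `[3, 4, 5]`, matching the "continuous subset ... in a raster scan order" language of the claim.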
Regarding claims 10 and 19, Au meets the claim limitations as set forth in claims 2 and 14. Au further meets the claim limitations as follows.
according to the first and second modes (In the H.264 standard, two different entropy-coding modes are supported) [Au: col. 11, line 10-11], continuously updating the symbol probabilities from a beginning to an end of the current portion during the entropy encoding (The complexity of the CABAC method derives from the need to continually update a large set of context models throughout the decoding process, and the arithmetic decoding of symbols) [Au: col. 11, line 19-22].
Moreover, in the same field of endeavor, Demircin further discloses the claim limitations as follows:
continuously updating the symbol probabilities from a beginning to an end of the current portion during the entropy encoding (i.e. Default initial values defined in the H.264 standard are used to initialize the context variables for a context model and the variable values are updated after each bin is encoded) [Demircin: para. 0008].
It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the teachings of Au and Mauchly with Demircin to program the encoder to implement two different modes for the derivation of contexts, either restricting the derivation of the contexts within a slice or allowing the derivation of the contexts across slice boundaries.
Therefore, the combination of Au and Mauchly with Demircin will enable the video coding system to have flexibility to obtain high coding efficiency by using information across the slice boundaries [Demircin: para. 0023].
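The "continuously updating the symbol probabilities" limitation of claims 10 and 19 describes ordinary adaptive coding: the probability estimate is revised after every encoded bin, from the start to the end of the portion. The sketch below (hypothetical names; a simplified exponential update, not the CABAC state machine of the cited references) illustrates the idea:

```python
def encode_portion(bins, p_one=0.5, rate=0.05):
    """Adapt the estimated probability of a '1' bin continuously,
    from the beginning to the end of a portion, updating after each
    bin as context-adaptive coders do."""
    history = []
    for b in bins:
        history.append(p_one)          # probability in effect for this bin
        # Move the estimate toward the observed bin value.
        p_one = (1 - rate) * p_one + rate * (1.0 if b else 0.0)
    return p_one, history
```

After a run of `1` bins the estimate drifts above its starting value, which is exactly the adaptation that makes saving and propagating the state (claims 11 and 20) worthwhile.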
Regarding claims 11 and 20, Au meets the claim limitations as set forth in claims 2 and 14. Au further meets the claim limitations as follows.
saving symbol probabilities as acquired in context adaptive entropy encoding the previously encoded portion up to an end of the previously encoded portion (The first stage in the decoding process includes the parsing and decoding of the entropy coded bitstream 15 symbols that are stored in the buffer 500 to produce the syntax elements 503 used by the other decoder components) [Au: col. 10, line 48-50].
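The saving step of claims 11 and 20 amounts to snapshotting the probability states when a portion finishes, so that a later portion can initialize from them. A minimal sketch (hypothetical names, not the cited references' implementation):

```python
def finish_portion(contexts, saved_states, portion_id):
    """Save the symbol-probability states as they stand at the end of
    the just-encoded portion, keyed by portion, for later reuse."""
    # Snapshot (copy), not a live reference: later updates to the
    # running contexts must not alter the saved state.
    saved_states[portion_id] = dict(contexts)
    return saved_states
```

The copy is the important detail: saving a reference instead of a snapshot would let the ongoing adaptation of the next portion corrupt the state the claim requires to be preserved "up to an end of the previously encoded portion".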
Regarding claim 14, Au meets the claim limitations as follows.
A method (method) [Au: col. 1, line 7] for transmitting a data stream comprising encoded information related to a picture (video data transmission. Video data is compressed or coded for transmission by taking advantage of the spatial
redundancies within a given frame and the temporal redundancies
between successive frames) [Au: col. 1, line 17-20; Fig. 1], wherein the picture is partitioned in portions (The slice 33 in the bit stream 15 contains a picture data 35 representing a sub-set of the macroblocks 24 of the complete picture 22) [Au: col. 8, line 36-38; Fig. 4] and in coding blocks arranged in rows and columns (Note: Please see rows and columns of blocks in Figs. 1 & 4), the method comprising:
transmitting (video data is transmitted) [Au: col. 1, line 55] the data stream (The blocks are processed and compressed for transmission as the bit stream) [Au: col. 7, line 49-50] comprising the encoded information related to the picture (frame 22
is encoded by the encoder 18) [Au: col. 8, line 66-67; Please see Fig. 1], wherein each respective row of the picture is capable of being transmitted once the respective row is encoded, wherein the encoded information is encoded into the data stream (the bit stream 15 contains a picture data 35 representing a sub-set of the macroblocks 24) [Au: col. 8, line 36-38; Fig. 4] using operations including:generating a residual signal related to a current portion of the portions of the picture ((the residual (difference) blocks) [Au: col. 11, line 37] (prediction error between the source frame and the predicted frame) [Au: col. 1, line 28-29]), entropy encoding (entropy coded) [Au: col. 11, line 8] the residual signal related to the current portion of the portions (coded information of the residual (difference) blocks) [Au: col. 11, line 37], wherein the residual signal related to the current portion is to be entropy encoded ((the residual pixel data as coded by the CAVLC process) [Au: col. 8, line 65]; (The resulting prediction residuals are processed through a frequency domain transform and a quantizer that sets the values of the transform coefficients to discrete values within a pre-specified range. Further compression of the video information is realized by entropy coding the resulting quantized transform coefficients before transmission or storage of the encoded bit stream) [Au: col. 1, line 37-44] – Note: Please see the residual is calculated in Figs. 3 & 4) according to one of at least two modes, wherein the entropy encoding includes (In the H.264 standard, two different entropy-coding modes are supported) [Au: col. 11, line 10-11]:in accordance with a first mode of the at least two modes (i.e. depending on the mode of each slice) [Au: col. 8, line 2], encoding the residual signal using context adaptive entropy encoding (i.e. The complexity of the CABAC method derives from the need to continually update a large set of context models) [Au: col. 
7, line 48-50] including deriving contexts across portion boundaries ((selection from the plurality of decoding tables for subsequent Coefficient_levels is determined solely by a previous decoded coefficient_level) [Au: col. 6, line 42-44] – Note: Au discloses an entropy coding mode that uses the previous decoded coefficient level from a previous slice. Hence it uses the context decoding tables of the previous slice across the slice boundaries; (i.e. the method decodes the Coefficient_levels and Run_before using multiple variable-length decoding tables where a table is selected to decode each symbol based on the context of previously decoded symbols) [Au: col. 3, line 17-21] – Note: Au teaches that the context of a previous slice can be used for the current slice. In other words, the context across boundaries of slices has been used. It is also noted that the context table is based on probability) and initializing a current symbol probability associated with the current portion (i.e. performs an initial table selection from a primary table or a secondary table chosen from the plurality of tables) [Au: col. 19, line 36-38] depending on a saved state of a previous symbol probability of a previously encoded portion ((i.e. selection from the plurality of decoding tables for subsequent Coefficient_levels is determined solely by a previous decoded coefficient_level and an experimentally pre-determined table) [Au: col. 19, line 26-30]; (i.e. The first stage in the decoding process includes the parsing and decoding of the entropy coded bitstream 15 symbols that are stored in the buffer 500 to produce the syntax elements 503 used by the other decoder components) [Au: col. 10, line 48-50], and in accordance with a second mode of the at least two modes (i.e. depending on the mode of each slice) [Au: col. 8, line 2], encoding the residual signal using context adaptive entropy encoding (i.e. 
The complexity of the CABAC method derives from the need to continually update a large set of context models) [Au: col. 7, line 48-50] with restricting the derivation of the contexts so as to not cross the portion boundaries and initializing symbol probabilities independent of any previously encoded portion ((each slice 33 of the frame 22 is encoded by the encoder 18 (see FIG. 1), independently from the other slices 33 in the frame 22) [Au: col. 8, line 66 – col 9, line 8]; (wherein selection from the plurality of decoding tables for the first Coefficient_level is determined solely by local variables) [Au: col. 6, line 38-40] – Note: Au discloses an entropy coding mode that uses only coefficient level from local variables within the slice, but not from the other slices across the slice boundaries).
Au does not explicitly disclose the following claim limitations (Emphasis Added).
each respective row of the picture is capable of being transmitted once the respective row is encoded.
However, in the same field of endeavor, Mauchly discloses the deficient claim limitations as follows:
each respective row of the picture is capable of being transmitted once the respective row is encoded (The final output bitstream of a row is transmitted 55 from the bitstream splicer at the end of each row) [Mauchly: para. 0087; Figs. 1-4].
It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the teachings of Au with Mauchly to program the encoder to implement Mauchly’s method.
Therefore, the combination of Au and Mauchly will enable the coding system to be implemented in parallel [Mauchly: para. 0010].
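The "transmitted once the respective row is encoded" limitation describes a pipelined, per-row emission rather than whole-picture buffering. A minimal sketch (hypothetical names; `encode` and `send` stand in for whatever encoder and transport are used) of that behavior:

```python
def transmit_rows(picture_rows, encode, send):
    """Encode each row and hand it to the transport as soon as that
    row is finished, instead of waiting for the whole picture, as in
    the row-wise transmission Mauchly is cited for."""
    for row in picture_rows:
        send(encode(row))  # row leaves the encoder immediately
```

This is the structural property that enables the parallel, low-latency operation the combination rationale points to: downstream consumers can start on row 0 while row 1 is still being encoded.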
Moreover, in the same field of endeavor, Demircin further discloses the claim limitations as follows:
deriving of contexts across portion boundaries (i.e. in some embodiments of the invention, the parallel decoding of entropy slices is structured such that information from previously decoded entropy slices may be used to estimate the initial context states for context models in subsequent entropy slices) [Demircin: para. 0024]; derivation of the contexts so as to not cross the portion boundaries (i.e. context model updates are not made across entropy slice boundaries) [Demircin: para. 0011].
It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the teachings of Au and Mauchly with Demircin to program the encoder to implement two different modes for the derivation of contexts, either restricting the derivation of the contexts within a slice or allowing the derivation of the contexts across slice boundaries.
Therefore, the combination of Au and Mauchly with Demircin will enable the video coding system to have flexibility to obtain high coding efficiency by using information across the slice boundaries [Demircin: para. 0023].
Claims 8-9, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Au (US Patent 6,646,578 B1) (“Au”) in view of Mauchly et al. (US Patent Application Publication 2007/0086528 A1) (“Mauchly”), further in view of Demircin et al. (US Patent Application Publication 2010/0098155 A1) (“Demircin”), and further in view of Bross et al. (High Efficiency Video Coding (HEVC) text specification draft 6, February 1-10, 2012) (“Bross”).
Regarding claim 8, Au, Mauchly, and Demircin meet the claim limitations as set forth in claim 2. Au, and further Demircin, meet the claim limitations as follows.
coding a syntax element portion within the data stream ((the bitstream 15 is organizing into a hierarchy of syntax levels) [Au: col. 8, line 6-7]; (i.e. As shown in FIG. 4, a syntax element value of a slice is entropy encoded using CABAC (400) to generate a bin string representing the syntax element. Consecutive syntax element values are encoded until either sufficient syntax elements have been encoded to fulfill the size criteria for an entropy slice or the last syntax element value in a slice is encoded) [Demircin: para. 0040]), the syntax element portion indicating which of the first and second modes to use for encoding.
Au, Mauchly, and Demircin do not explicitly disclose the following claim limitations (Emphasis Added).
the syntax element portion indicating which of the first and second modes to use for encoding.
However, in the same field of endeavor, Bross discloses the deficient claim limitations as follows:
the syntax element portion indicating which of the first and second modes to use for encoding ((i.e. slice_loop_filter_across_slices_enabled_flag equal to 1 specifies that in-loop filtering operations are performed across slice boundaries; otherwise, the in-loop operations are slice-independent and not applied across slice boundaries) [Bross: page 0077]; (i.e. loop_filter_across_tiles_enabled_flag equal to 1 specifies that in-loop filtering operations are performed across tile boundaries. loop_filter_across_tiles_enabled_flag equal to 0 specifies that in-loop filtering operations are not performed across tile boundaries) [Bross: page 0065]).
It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the teachings of Au, Mauchly, and Demircin with Bross to program the coding system to process syntax elements in the slice header to decide whether or not to restrict the predictive coding within a tile or a slice.
Therefore, the combination of Au, Mauchly, and Demircin with Bross will enable the coding system to support the international video coding standard.
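The claim-8 limitation is a one-bit mode switch carried in the bitstream, analogous to the Bross flags quoted above. A minimal sketch (hypothetical function and mode names; only the flag names echo Bross) of reading such a syntax element and mapping it to a mode:

```python
def select_entropy_mode(header_bits):
    """Read a hypothetical one-bit syntax element from a slice header
    and map it to one of two encoding modes, analogous to flags such
    as slice_loop_filter_across_slices_enabled_flag in Bross."""
    flag = header_bits[0]
    # 1 -> operate across portion boundaries; 0 -> portion-independent.
    return "cross_boundary" if flag == 1 else "independent"
```

The point of the combination rationale is only this: a decoder that parses such a flag can choose between the two context-derivation behaviors on a per-slice (or per-tile) basis.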
Regarding claims 9 and 18, Au, Mauchly, and Demircin meet the claim limitations as set forth in claims 2 and 14. Au, and further Demircin, meet the claim limitations as follows.
wherein the operations further comprise determining a syntax element and writing the syntax element into the data stream (the bitstream 15 is organizing into a hierarchy of syntax levels with the 3 main levels being a sequence level 17, a picture (or frame) level 19, and slice level 21) [Au: col. 8, line 6-9] with operating in one of at least two operating modes depending on the syntax element (In the H.264 standard, two different entropy-coding modes are supported) [Au: col. 11, line 10-11], based at least in part by: according to a first operating mode, coding the syntax element portion for each portion (The encoder 18 (see FIG. 1) emulates the behaviour of the decoder 20 for coded blocks 22 to make sure the encoder 18 of the transmitting participant A,B and the decoder 20 of the receiving participant A, B work from the same reference frames 22b. Further, a deblocking filter 32 may be applied on the reconstructed frame 58 block boundaries, which helps to reduce the visibility of coding artifacts that can be introduced at those boundaries) [Au: col. 10, line 33-40; Fig. 1]; and
according to a second operating mode, inevitably using a different one of the at least two modes other than the first mode (Two different modes are supported in intra coding of macroblocks 24. In the 4x4 Intra mode, each 4x4 block 25 within the macroblock 24 can use a different prediction mode) [Au: col. 12, line 25-28].
Au, Mauchly, and Demircin do not explicitly disclose the following claim limitations (Emphasis Added).
according to a first operating mode; according to a second operating mode.
However, in the same field of endeavor, Bross discloses the deficient claim limitations as follows:
according to a first operating mode ((i.e. slice_loop_filter_across_slices_enabled_flag equal to 1 specifies that in-loop filtering operations are performed across slice boundaries) [Bross: page 0077]; (i.e. loop_filter_across_tiles_enabled_flag equal to 1 specifies that in-loop filtering operations are performed across tile boundaries) [Bross: page 0065]).
according to a second operating mode ((i.e. the in-loop operations are slice-independent and not applied across slice boundaries) [Bross: page 0077]; (loop_filter_across_tiles_enabled_flag equal to 0 specifies that in-loop filtering operations are not performed across tile boundaries) [Bross: page 0065]).
It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the teachings of Au, Mauchly, and Demircin with Bross to program the coding system to process syntax elements in the slice header to decide whether or not to restrict the predictive coding within a tile or a slice.
Therefore, the combination of Au, Mauchly, and Demircin with Bross will enable the coding system to support the international video coding standard.
Regarding claim 12, Au, Mauchly, and Demircin meet the claim limitations as set forth in claim 2. Au, and further Demircin, meet the claim limitations as follows.
in the first and second modes (In the H.264 standard, two different entropy-coding modes are supported) [Au: col. 11, line 10-11], restricting the entropy encoding within tiles (Two different modes are supported in intra coding of macroblocks 24. In the 4x4 Intra mode, each 4x4 block 25 within the macroblock 24 can use a different prediction mode) [Au: col. 12, line 25-28] (Note: a tile can contain a single macroblock; in intra mode, information is entropy encoded within a macroblock) into which the picture is sub-divided.
Au, Mauchly, and Demircin do not explicitly disclose the following claim limitations (Emphasis Added).
tiles into which the picture is sub-divided.
However, in the same field of endeavor, Bross discloses the deficient claim limitations as follows:
tiles into which the picture is sub-divided ((tiles_or_entropy_coding_sync_idc equal to 1 specifies that there may be more than one tile in each picture in the coded video sequence, and no specific synchronization process for context variables is invoked before decoding the first coding tree block of a row of coding treeblocks) [Bross: page 0064]).
It would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the teachings of Au, Mauchly, and Demircin with Bross to program the coding system to process syntax elements in the slice header to decide whether or not to restrict the predictive coding within a tile or a slice.
Therefore, the combination of Au, Mauchly, and Demircin with Bross will enable the coding system to support the international video coding standard.
Allowable Subject Matter
Claims 13 and 21 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. This objection is made on the condition that all other objections and rejections of the related claims are addressed.
The above-identified claims recite an initialization operation performed on unique data at specific locations for entropy coding specific parameters. The prior art fails to teach or render obvious this set of operations.
Reference Notice
The additional prior art made of record in the Notice of References Cited, and not relied upon, is considered pertinent to applicant's disclosure.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Philip Dang whose telephone number is (408) 918-7529. The examiner can normally be reached on Monday-Thursday between 8:30 am - 5:00 pm (PST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sath Perungavoor can be reached on 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Philip P. Dang/Primary Examiner, Art Unit 2488