Prosecution Insights
Last updated: April 19, 2026
Application No. 19/027,831

METHOD AND DEVICE FOR PICTURE ENCODING AND DECODING

Non-Final OA §103
Filed: Jan 17, 2025
Examiner: WONG, ALLEN C
Art Unit: 2488
Tech Center: 2400 (Computer Networks)
Assignee: InterDigital VC Holdings, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 83% (669 granted / 805 resolved; +25.1% vs TC avg, above average)
Interview Lift: +11.8% (moderate; among resolved cases with interview)
Avg Prosecution: 2y 11m (typical timeline)
Career History: 832 total applications across all art units; 27 currently pending

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 41.6% (+1.6% vs TC avg)
§102: 16.5% (-23.5% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)
Comparisons are against the Tech Center average estimate; based on career data from 805 resolved cases.

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 1/17/25 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (US 2018/0205946) in view of Holcomb (US 2018/0115776).
Regarding claim 1, Zhang discloses an encoding method (paragraph [115], fig.10, Zhang discloses an encoder 200 for performing the method of encoding of video image data) comprising: determining a chroma block and a colocated luma block from separate luma and chroma tree partitioning of a picture (paragraph [77], Zhang discloses that QTBT block structure can implement the feature that luma and chroma have separate QTBT (quadtree binary tree) structures, and paragraph [79], Zhang discloses a separate luma QTBT structure in fig.3A, and a separate chroma QTBT structure fig.3B, thus permitting the determination of a chroma block and a colocated luma block from separate luma and chroma tree partitioning of a picture); determining a size for the luma block and a size for the chroma block (paragraph [65], Zhang discloses that in order to set the ratio of the luma and chroma blocks, a size of the coding block is determined in order to determine a size for the luma block and a size for the chroma block; paragraph [67], Zhang discloses that block size is checked, and paragraph [74], Zhang discloses that quadtree leaf node size is determined with a minimum quadtree size, wherein paragraph [76], Zhang discloses for a QTBT (quadtree binary tree) structure, the MinQTSize and MaxBTSize are set for luma and chroma blocks); enabling a cross-component linear model tool for predicting chroma samples based on luma samples (paragraph [68], Zhang discloses the video encoder 200 can code a one bit flag for indicating the enabling of CCLM (cross component linear model) mode, and paragraph [105], Zhang discloses implementing cross-component linear model mode for chroma intra prediction mode for video coding by performing prediction from luma to chroma, thus enabling the cross component linear model tool for compression of video data, and paragraph [108], Zhang discloses implementing plural CCLM modes; paragraph [109], Zhang discloses cross-component linear model mode is enabled); applying quadtree partitioning of video data (paragraph [75], Zhang discloses application of quadtree partitioning of video data); and encoding the block responsive to the enabling of the cross-component linear model tool (paragraph [140], fig.10, Zhang discloses that element 220 entropy encodes the video data into a bitstream, wherein paragraph [68], Zhang discloses the video encoder 200 can code a one bit flag for indicating the enabling of CCLM (cross component linear model) mode, and paragraph [105], Zhang discloses implementing cross-component linear model mode for chroma intra prediction mode for video coding by performing prediction from luma to chroma, thus enabling the cross component linear model tool for compression of video data, and paragraph [108], Zhang discloses implementing plural CCLM modes; paragraph [109], Zhang discloses cross-component linear model mode is enabled).

Zhang does not disclose “enabling a cross-component linear model tool for predicting chroma samples based on luma samples in the condition that the luma block is of size 64 by 64 pixels or 32 by 32 pixels resulting from a quad-tree partitioning, and the chroma block is of size 32 by 32 pixels or 16 by 16 pixels resulting from a quad-tree partitioning”.
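For orientation, the cross-component linear model (CCLM) tool at issue predicts chroma samples from co-located reconstructed luma samples through a linear model pred_C = a * rec_L + b. The sketch below is illustrative only, not taken from Zhang or the claims; the least-squares fit is a simplification (standardized CCLM variants derive the parameters from selected neighboring samples):

```python
# Illustrative CCLM-style sketch: fit a linear model from neighboring
# reconstructed luma/chroma pairs, then predict chroma from luma.

def fit_cclm(neigh_luma, neigh_chroma):
    """Fit a, b minimizing sum((a*L + b - C)^2) over neighbor pairs."""
    n = len(neigh_luma)
    mean_l = sum(neigh_luma) / n
    mean_c = sum(neigh_chroma) / n
    cov = sum((l - mean_l) * (c - mean_c)
              for l, c in zip(neigh_luma, neigh_chroma))
    var = sum((l - mean_l) ** 2 for l in neigh_luma)
    a = cov / var if var else 0.0
    b = mean_c - a * mean_l
    return a, b

def predict_chroma(rec_luma, a, b):
    """Predict chroma samples from reconstructed luma samples."""
    return [a * l + b for l in rec_luma]

# Neighbors where chroma is exactly half of luma: the fit recovers
# a = 0.5, b = 0, so prediction halves the luma values.
a, b = fit_cclm([100, 120, 140, 160], [50, 60, 70, 80])
pred = predict_chroma([110, 130], a, b)  # [55.0, 65.0]
```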
However, Holcomb teaches partitioning data based on the condition that the luma block is of size 64 by 64 pixels or 32 by 32 pixels resulting from a quad-tree partitioning (paragraph [72], Holcomb discloses, according to HEVC video encoding standard for a quadtree syntax, a coding tree unit (CTU) can comprise of a 64x64 luma sample values (CTB or coding tree block) and two 32x32 chroma CTB, in that the CTU can be split into four CUs, wherein each CU can comprise one 32x32 luma coding block and two 16x16 chroma blocks, thus Holcomb discloses the condition that luma blocks can be of 64x64 or 32x32 pixel sizes resulting from quad-tree partitioning, and chroma blocks can be of 32x32 or 16x16 pixel sizes resulting from a quad-tree partitioning), and the chroma block is of size 32 by 32 pixels or 16 by 16 pixels resulting from a quad-tree partitioning (paragraph [72], Holcomb discloses, according to HEVC video encoding standard for a quadtree syntax, a coding tree unit (CTU) can comprise of a 64x64 luma sample values (CTB or coding tree block) and two 32x32 chroma CTB, in that the CTU can be split into four CUs, wherein each CU can comprise one 32x32 luma coding block and two 16x16 chroma blocks, thus Holcomb discloses the condition that luma blocks can be of 64x64 or 32x32 pixel sizes resulting from quad-tree partitioning, and chroma blocks can be of 32x32 or 16x16 pixel sizes resulting from a quad-tree partitioning). 
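The claimed enabling condition pairs specific luma and chroma sizes. Under 4:2:0 subsampling (an assumption consistent with Holcomb's 64x64 luma CTB paired with 32x32 chroma CTBs), each chroma dimension is half the luma dimension, which a minimal sketch can make concrete (the function names are hypothetical):

```python
# Hypothetical sketch of the claimed condition: the tool is enabled when
# the luma block is 64x64 or 32x32 from quad-tree partitioning and the
# co-located chroma block is 32x32 or 16x16.

def chroma_size(luma_size):
    """Co-located chroma block size under 4:2:0 subsampling."""
    w, h = luma_size
    return (w // 2, h // 2)

def cclm_enabled(luma_size, chroma):
    """True only for the (luma, chroma) size pairs recited in the claim."""
    return (luma_size, chroma) in {((64, 64), (32, 32)),
                                   ((32, 32), (16, 16))}
```

One quad-tree split of a 64x64 luma CTB yields four 32x32 luma coding blocks, each with 16x16 chroma blocks, matching the second branch of the condition.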
Since Zhang discloses “enabling a cross-component linear model tool for predicting chroma samples based on luma samples”, and Holcomb discloses “…partitioning data based on the condition that the luma block is of size 64 by 64 pixels or 32 by 32 pixels resulting from a quad-tree partitioning, and the chroma block is of size 32 by 32 pixels or 16 by 16 pixels resulting from a quad-tree partitioning”, therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Zhang and Holcomb together as a whole for ascertaining the limitation “…enabling a cross-component linear model tool for predicting chroma samples based on luma samples in the condition that the luma block is of size 64 by 64 pixels or 32 by 32 pixels resulting from a quad-tree partitioning, and the chroma block is of size 32 by 32 pixels or 16 by 16 pixels resulting from a quad-tree partitioning” so as to improve the encoding efficiency of video data by accurate predictions of video components necessary for compression.

Regarding claim 2, Zhang discloses a non-transitory computer readable medium containing computer program comprising instructions for performing the method of claim 1 when executed by one or more processors (paragraph [174], Zhang discloses utilizing a computer readable medium that stores computer executable instructions of a computer program to be executed by computer or processor).

Regarding claim 3, Zhang discloses an encoding apparatus comprising one or more processors configured to perform the encoding method of claim 1 when executed by the one or more processors (paragraph [174], Zhang discloses utilizing a computer readable medium that stores computer executable instructions of a computer program to be executed by computer or processor).
Regarding claim 4, Zhang discloses a decoding method (paragraph [145], fig.11, Zhang discloses a decoder 300 for performing the method of decoding of video image data) comprising: obtaining a chroma block and a colocated luma block from separate luma and chroma tree partitioning of a picture (paragraph [147], fig.11, Zhang discloses receiving the encoded video data bitstream at element 320, wherein paragraph [77], Zhang discloses that QTBT block structure can implement the feature that luma and chroma have separate QTBT (quadtree binary tree) structures, and paragraph [79], Zhang discloses a separate luma QTBT structure in fig.3A, and a separate chroma QTBT structure fig.3B, thus permitting the determination of a chroma block and a colocated luma block from separate luma and chroma tree partitioning of a picture); determining a size for the luma block and a size for the chroma block (paragraph [65], Zhang discloses that in order to set the ratio of the luma and chroma blocks, a size of the coding block is determined in order to determine a size for the luma block and a size for the chroma block; paragraph [67], Zhang discloses that block size is checked, and paragraph [74], Zhang discloses that quadtree leaf node size is determined with a minimum quadtree size, wherein paragraph [76], Zhang discloses for a QTBT (quadtree binary tree) structure, the MinQTSize and MaxBTSize are set for luma and chroma blocks); enabling a cross-component linear model tool for predicting chroma samples based on luma samples (paragraph [68], Zhang discloses the video encoder 200 can code a one bit flag for indicating the enabling of CCLM (cross component linear model) mode to the decoder 300, and paragraph [105], Zhang discloses implementing cross-component linear model mode for chroma intra prediction mode for video coding by performing prediction from luma to chroma, thus enabling the cross component linear model tool for compression of video data, and paragraph [108], Zhang discloses implementing plural CCLM modes; paragraph [109], Zhang discloses cross-component linear model mode is enabled); applying quadtree partitioning of video data (paragraph [75], Zhang discloses application of quadtree partitioning of video data); and decoding the block responsive to the enabling of the cross-component linear model tool (paragraph [152], fig.11, Zhang discloses decoding the encoded bitstream as encoded by entropy encoder 220 of video encoder 200 of fig.10, wherein paragraph [68], Zhang discloses the video encoder 200 can code a one bit flag for indicating the enabling of CCLM (cross component linear model) mode to the decoder 300, and paragraph [105], Zhang discloses implementing cross-component linear model mode for chroma intra prediction mode for video coding by performing prediction from luma to chroma, thus enabling the cross component linear model tool for compression of video data, and paragraph [108], Zhang discloses implementing plural CCLM modes; paragraph [109], Zhang discloses cross-component linear model mode is enabled).

Zhang does not disclose “enabling a cross-component linear model tool for predicting chroma samples based on luma samples in the condition that the luma block is of size 64 by 64 pixels or 32 by 32 pixels resulting from a quad-tree partitioning, and the chroma block is of size 32 by 32 pixels or 16 by 16 pixels resulting from a quad-tree partitioning”.
However, Holcomb teaches partitioning data based on the condition that the luma block is of size 64 by 64 pixels or 32 by 32 pixels resulting from a quad-tree partitioning (paragraph [72], Holcomb discloses, according to HEVC video encoding standard for a quadtree syntax, a coding tree unit (CTU) can comprise of a 64x64 luma sample values (CTB or coding tree block) and two 32x32 chroma CTB, in that the CTU can be split into four CUs, wherein each CU can comprise one 32x32 luma coding block and two 16x16 chroma blocks, thus Holcomb discloses the condition that luma blocks can be of 64x64 or 32x32 pixel sizes resulting from quad-tree partitioning, and chroma blocks can be of 32x32 or 16x16 pixel sizes resulting from a quad-tree partitioning), and the chroma block is of size 32 by 32 pixels or 16 by 16 pixels resulting from a quad-tree partitioning (paragraph [72], Holcomb discloses, according to HEVC video encoding standard for a quadtree syntax, a coding tree unit (CTU) can comprise of a 64x64 luma sample values (CTB or coding tree block) and two 32x32 chroma CTB, in that the CTU can be split into four CUs, wherein each CU can comprise one 32x32 luma coding block and two 16x16 chroma blocks, thus Holcomb discloses the condition that luma blocks can be of 64x64 or 32x32 pixel sizes resulting from quad-tree partitioning, and chroma blocks can be of 32x32 or 16x16 pixel sizes resulting from a quad-tree partitioning). 
Since Zhang discloses “enabling a cross-component linear model tool for predicting chroma samples based on luma samples”, and Holcomb discloses “…partitioning data based on the condition that the luma block is of size 64 by 64 pixels or 32 by 32 pixels resulting from a quad-tree partitioning, and the chroma block is of size 32 by 32 pixels or 16 by 16 pixels resulting from a quad-tree partitioning”, therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Zhang and Holcomb together as a whole for ascertaining the limitation “…enabling a cross-component linear model tool for predicting chroma samples based on luma samples in the condition that the luma block is of size 64 by 64 pixels or 32 by 32 pixels resulting from a quad-tree partitioning, and the chroma block is of size 32 by 32 pixels or 16 by 16 pixels resulting from a quad-tree partitioning” so as to improve the encoding and decoding efficiency of video data by accurate predictions of video components necessary for compression and decompression.

Regarding claim 5, Zhang discloses a non-transitory computer readable medium containing computer program comprising instructions for performing the decoding method of claim 4 when executed by one or more processors (paragraph [174], Zhang discloses utilizing a computer readable medium that stores computer executable instructions of a computer program to be executed by computer or processor).
Regarding claim 6, Zhang discloses a decoding apparatus (paragraph [145], Zhang discloses a decoder 300 for performing the method of decoding of video image data) comprising one or more processors (paragraph [174], Zhang discloses a computer readable medium storing a computer program comprising instructions to be executed by computer or processor) configured to: obtain a chroma block and a colocated luma block from separate luma and chroma tree partitioning of a picture (paragraph [147], fig.11, Zhang discloses receiving the encoded video data bitstream at element 320, wherein paragraph [77], Zhang discloses that QTBT block structure can implement the feature that luma and chroma have separate QTBT (quadtree binary tree) structures, and paragraph [79], Zhang discloses a separate luma QTBT structure in fig.3A, and a separate chroma QTBT structure fig.3B, thus permitting the determination of a chroma block and a colocated luma block from separate luma and chroma tree partitioning of a picture); determine a size for the luma block and a size for the chroma block (paragraph [65], Zhang discloses that in order to set the ratio of the luma and chroma blocks, a size of the coding block is determined in order to determine a size for the luma block and a size for the chroma block; paragraph [67], Zhang discloses that block size is checked, and paragraph [74], Zhang discloses that quadtree leaf node size is determined with a minimum quadtree size, wherein paragraph [76], Zhang discloses for a QTBT (quadtree binary tree) structure, the MinQTSize and MaxBTSize are set for luma and chroma blocks); enable a cross-component linear model tool for predicting chroma samples based on luma samples (paragraph [68], Zhang discloses the video encoder 200 can code a one bit flag for indicating the enabling of CCLM (cross component linear model) mode to the decoder 300, and paragraph [105], Zhang discloses implementing cross-component linear model mode for chroma intra prediction mode for video coding by performing prediction from luma to chroma, thus enabling the cross component linear model tool for compression of video data, and paragraph [108], Zhang discloses implementing plural CCLM modes; paragraph [109], Zhang discloses cross-component linear model mode is enabled); apply quadtree partitioning of video data (paragraph [75], Zhang discloses application of quadtree partitioning of video data); and decode the block responsive to the enabling of the cross-component linear model tool (paragraph [152], fig.11, Zhang discloses decoding the encoded bitstream as encoded by entropy encoder 220 of video encoder 200 of fig.10, wherein paragraph [68], Zhang discloses the video encoder 200 can code a one bit flag for indicating the enabling of CCLM (cross component linear model) mode to the decoder 300, and paragraph [105], Zhang discloses implementing cross-component linear model mode for chroma intra prediction mode for video coding by performing prediction from luma to chroma, thus enabling the cross component linear model tool for compression of video data, and paragraph [108], Zhang discloses implementing plural CCLM modes; paragraph [109], Zhang discloses cross-component linear model mode is enabled).

Zhang does not disclose “enable a cross-component linear model tool for predicting chroma samples based on luma samples in the condition that the luma block is of size 64 by 64 pixels or 32 by 32 pixels resulting from a quad-tree partitioning, and the chroma block is of size 32 by 32 pixels or 16 by 16 pixels resulting from a quad-tree partitioning”.
However, Holcomb teaches partitioning data based on the condition that the luma block is of size 64 by 64 pixels or 32 by 32 pixels resulting from a quad-tree partitioning (paragraph [72], Holcomb discloses, according to HEVC video encoding standard for a quadtree syntax, a coding tree unit (CTU) can comprise of a 64x64 luma sample values (CTB or coding tree block) and two 32x32 chroma CTB, in that the CTU can be split into four CUs, wherein each CU can comprise one 32x32 luma coding block and two 16x16 chroma blocks, thus Holcomb discloses the condition that luma blocks can be of 64x64 or 32x32 pixel sizes resulting from quad-tree partitioning, and chroma blocks can be of 32x32 or 16x16 pixel sizes resulting from a quad-tree partitioning), and the chroma block is of size 32 by 32 pixels or 16 by 16 pixels resulting from a quad-tree partitioning (paragraph [72], Holcomb discloses, according to HEVC video encoding standard for a quadtree syntax, a coding tree unit (CTU) can comprise of a 64x64 luma sample values (CTB or coding tree block) and two 32x32 chroma CTB, in that the CTU can be split into four CUs, wherein each CU can comprise one 32x32 luma coding block and two 16x16 chroma blocks, thus Holcomb discloses the condition that luma blocks can be of 64x64 or 32x32 pixel sizes resulting from quad-tree partitioning, and chroma blocks can be of 32x32 or 16x16 pixel sizes resulting from a quad-tree partitioning). 
Since Zhang discloses “enable a cross-component linear model tool for predicting chroma samples based on luma samples”, and Holcomb discloses “…partitioning data based on the condition that the luma block is of size 64 by 64 pixels or 32 by 32 pixels resulting from a quad-tree partitioning, and the chroma block is of size 32 by 32 pixels or 16 by 16 pixels resulting from a quad-tree partitioning”, therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Zhang and Holcomb together as a whole for ascertaining the limitation “…enable a cross-component linear model tool for predicting chroma samples based on luma samples in the condition that the luma block is of size 64 by 64 pixels or 32 by 32 pixels resulting from a quad-tree partitioning, and the chroma block is of size 32 by 32 pixels or 16 by 16 pixels resulting from a quad-tree partitioning” so as to improve the encoding and decoding efficiency of video data by accurate predictions of video components necessary for compression and decompression.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALLEN C WONG whose telephone number is (571)272-7341. The examiner can normally be reached on Flex Monday-Thursday 9:30am-7:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sath V Perungavoor, can be reached on 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALLEN C WONG/Primary Examiner, Art Unit 2488

Prosecution Timeline

Jan 17, 2025: Application Filed
Feb 20, 2025: Response after Non-Final Action
Feb 19, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604009: IMAGE ENCODING/DECODING METHOD AND APPARATUS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12598321: ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12587671: VIDEO ENCODING APPARATUS AND A VIDEO DECODING APPARATUS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12581134: FEATURE ENCODING/DECODING METHOD AND DEVICE, AND RECORDING MEDIUM STORING BITSTREAM (granted Mar 17, 2026; 2y 5m to grant)
Patent 12581091: METHODS AND APPARATUS OF ENCODING/DECODING VIDEO PICTURE DATA (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 95% (+11.8% lift)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 805 resolved cases by this examiner. Grant probability derived from career allow rate.
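The headline figures above are internally consistent; a small sketch shows the arithmetic, assuming (as the note suggests) that the grant probability is simply the career allow rate and that the interview figure adds the lift in percentage points:

```python
# Assumed methodology, for illustration only: derive the dashboard's
# headline percentages from the examiner's career counts.

granted, resolved = 669, 805
career_allow_rate = granted / resolved               # ~0.831

grant_probability = round(career_allow_rate * 100)   # 83 (%)
interview_lift = 11.8                                # percentage points
with_interview = round(career_allow_rate * 100 + interview_lift)  # 95 (%)
```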
