Prosecution Insights
Last updated: April 19, 2026
Application No. 19/005,435

IMAGE ENCODING/DECODING METHOD AND DEVICE

Status: Non-Final OA (§102, §103, Double Patenting)

Filed: Dec 30, 2024
Examiner: TARKO, ASMAMAW G
Art Unit: 2482
Tech Center: 2400 (Computer Networks)
Assignee: Industry Academy Cooperation Foundation Of Sejong University
OA Round: 1 (Non-Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 0m
Grant Probability With Interview: 81%

Examiner Intelligence

Career Allow Rate: 72% (above average; 284 granted / 395 resolved; +13.9% vs TC avg)
Interview Lift: +9.3% (a moderate lift, measured across resolved cases with an interview)
Typical Timeline: 3y 0m average prosecution; 24 applications currently pending
Career History: 419 total applications across all art units
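The headline percentages above follow directly from the raw counts. A quick sanity check, assuming (as the projections footnote suggests) that the grant probability is simply the career allow rate and that the with-interview figure adds the reported lift:

```python
# Reproduce the dashboard's rounded figures from the raw counts shown above.
granted, resolved = 284, 395

allow_rate = granted / resolved * 100           # career allow rate, in percent
print(f"Career allow rate: {allow_rate:.1f}%")  # 71.9%, displayed as 72%

# Assumption: "With Interview" = base rate + the reported +9.3% interview lift
interview_lift = 9.3
with_interview = allow_rate + interview_lift
print(f"With interview: {with_interview:.1f}%")  # 81.2%, displayed as 81%
```

Both rounded values match the dashboard, which suggests the with-interview number is an additive adjustment rather than a separately modeled rate.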

Statute-Specific Performance

§101: 3.4% (-36.6% vs TC avg)
§102: 23.9% (-16.1% vs TC avg)
§103: 58.2% (+18.2% vs TC avg)
§112: 4.4% (-35.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 395 resolved cases.

Office Action

Rejections: §102, §103, Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on 12/30/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 15-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by LIM et al. (US 20140307788 A1, hereinafter "Lim").

Regarding claims 15-20: Claims 15-20 are directed to a non-transitory computer-recordable medium (CRM) for storing a bitstream (i.e., content of information), and the body of the claim recites steps/elements that describe how the bitstream is generated. These steps are not performed by an intended computer, and the video is not a form of programming that causes functions to be performed by an intended computer. This shows that the computer-readable medium merely serves as support for the bitstream and provides no functional relationship between the steps/elements that describe the generation of the bitstream and the intended computer system. As a result, the claim limitations that describe the generation of the bitstream are non-functional descriptive material (see MPEP §2111.05) and are afforded no patentable weight. To be given patentable weight, the CRM and the bitstream (i.e., the descriptive material) must be in a functional relationship. A functional relationship can be found where the descriptive material performs some function with respect to the CRM with which it is associated. See MPEP §2111.05(I)(A). When a claimed "computer-readable medium merely serves as a support for information or data, no functional relationship exists." MPEP §2111.05(III). The CRM storing the claimed bitstream in claims 15-20 merely serves as a support for the bitstream and provides no functional relationship between the stored bitstream and the CRM. Therefore, the claimed bitstream, whose scope is implied by the method steps, is non-functional descriptive material and is given no patentable weight. MPEP §2111.05(III). Thus, the claim scope is just a storage medium storing data and is anticipated by Lim, which recites a storage medium storing a bitstream (0049-0050 or 0197; Figures 1 and 8).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-6, 8-13 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over LIM et al. (US 20140307788 A1, hereinafter "Lim") in view of Zhao et al. (US 20170094314 A1, hereinafter "Zhao").

Regarding claims 1, 8 and 15:
Lim teaches an image encoding method (0008-0009; Figure 3), comprising: determining a transform kernel of the current block (0033, 0064, 0074; describing extracting prediction mode information for a current block set by the encoder); performing a transform on residual samples of the current block, based on the determined transform kernel (0046-0048, 0058 and 0064; Figures 3-4 and 7; "[0046] … encoder 320 performs transform and quantization on the residual data block for generating transformed and quantized residual data block. In this case, the transform uses various schemes for transforming spatial domain signals to frequency domain signals such as Hadamard transform, discrete cosine transform, …"); and encoding transform kernel information indicating the transform kernel of the current block and the at least one coefficient of the current block (Figures 3 and 7; 0057; "[0057] … prediction unit 310 of video encoding apparatus 300 by using information necessary for the prediction delivered from bitstream decoder 710"; the prediction mode information for the current block contains the index information for the transform mode of the current block or kernel as disclosed), the transform kernel information (0028 and 0030; Figure 1A; describing multiple directional intra prediction modes where a predictive motion vector is determined and the predictive motion vector index (i.e., transform mode information) is transmitted; note vertical: 0 and horizontal: 1).
Lim does not disclose generating at least one coefficient of the current block by performing a transform on residual samples of the current block, based on the determined transform kernel; wherein the transform comprises a primary transform and a secondary transform, the at least one coefficient is generated by performing the secondary transform on a result of the primary transform on the residual samples using the determined transform kernel, and the transform kernel information indicates one of predefined transform kernel sets comprising a horizontal transform kernel and a vertical transform kernel.

Zhao, however, in the same field of endeavor, shows an image encoding method, comprising: generating at least one coefficient of the current block by performing a transform on residual samples of the current block, based on the transform kernel (0077; "[0077] Following intra-predictive or inter-predictive coding using the PUs of a CU, video encoder 20 may calculate residual data for the TUs of the CU. … Video encoder 20 may form the TUs to include quantized transform coefficients representative of the residual data for the CU. That is, video encoder 20 may calculate the residual data (in the form of a residual block), transform the residual block to produce a block of transform coefficients, and then quantize the transform coefficients to form quantized transform coefficients. Video encoder 20 may form a TU including the quantized transform coefficients, as well as other syntax information (e.g., splitting information for the TU)"); and wherein the transform comprises a primary transform and a secondary transform (0225, 0109, 0122 and 0133; Figures 12, 4 and 5; "[0225] FIG. 12 is a flow diagram illustrating a first example encoding of video data that may implement techniques described in this disclosure. As described, the example techniques of FIG. 12 may be performed by encoder 20. In the example of FIG.
12, an encoder (e.g., video encoder 20) forms a residual video block (1002). ... The encoder applies a first transform to the residual video block to generate a first coefficient block (1004). For example, the first transform converts the residual video block from a pixel domain to a frequency domain. For instance, the encoder may apply a DCT or DST on the residual video block. The encoder applies a second transform to at least part of the first coefficient block to generate a second coefficient block (1006). For example, the second transform is a non-separable transform. For instance, the encoder may apply a KLT on the second coefficient block. Next, the encoder quantizes the second coefficient block for entropy encoding (1008).”, “[0109] … an Enhanced Multiple Transforms (EMT) technique is proposed for both intra and inter prediction residual. In EMT, a CU-level flag may be signaled to indicate whether only the conventional DCT-2 or other non-DCT2 type transforms are used. If the CU-level is signaled as 1, a two-bit TU-level index may be further signaled for each TU inside the current CU to indicate which horizontal/vertical transform from a transform subset is used for the current TU. The transform subset may contain two transforms selected from DST-VII, DCT-VIII, DST-V and DST-I, and selection may be based on the intra prediction mode and whether it is a horizontal or a vertical transform subset.”), the at least one coefficient is generated by performing the secondary transform on a result of the primary transform on the residual samples using the determined transform kernel (0225; Figure 12; “[0225] FIG. 12 is a flow diagram illustrating a first example encoding of video data that may implement techniques described in this disclosure. As described, the example techniques of FIG. 12 may be performed by encoder 20. In the example of FIG. 12, an encoder (e.g., video encoder 20) forms a residual video block (1002). 
… The encoder applies a first transform to the residual video block to generate a first coefficient block (1004). For example, the first transform converts the residual video block from a pixel domain to a frequency domain. For instance, the encoder may apply a DCT or DST on the residual video block. The encoder applies a second transform to at least part of the first coefficient block to generate a second coefficient block (1006). For example, the second transform is a non-separable transform. For instance, the encoder may apply a KLT on the second coefficient block. Next, the encoder quantizes the second coefficient block for entropy encoding (1008).”), and the transform kernel information indicates one of predefined transform kernel sets comprising a horizontal transform kernel and a vertical transform kernel (0105; “[0105] However, as described in X. Zhao et al., “Video coding with rate-distortion optimized transform,” IEEE Trans. Circuits Syst. Video Technol., vol. 22, no. 1, pp. 138-151, January 2012, more transforms may be used and such examples may explicitly signal an index (e.g., encode data indicative of the index) to the transforms from a pre-defined set of transform candidates which are derived from off-line training process. Similar to MDDT, in such examples, each intra prediction direction may have its unique set of pairs of transforms. An index may be signaled to specify which transform pair is chosen from the set. For example, there may be up to four vertical KLTs and up to four horizontal KLTs for smallest block sizes 4×4. Therefore, in this example, 16 combinations may be chosen. For larger block sizes, a smaller number of combinations may be used. The techniques proposed in this disclosure may apply to both intra and inter prediction residual. In this disclosure, intra prediction residual refers to residual data generated using intra prediction. … inter prediction residual refers to residual data generated using inter prediction. 
For inter prediction residual, up to 16 combinations of KLTs may be chosen and an index to one of the combinations (four for 4×4 and sixteen for 8×8) may be signaled for each block", wherein KLT is the Karhunen-Loeve transform).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lim with the teachings of Zhao in order to efficiently process a video signal by hierarchically partitioning the units used for coding, prediction and transform; to enhance coding efficiency by applying a primary transform and a secondary non-separable transform that exploit the spatial distribution characteristics of the residual signals; to efficiently transmit coded block pattern information in the course of hierarchically partitioning a transform unit; and to yield a predictable result.

The image decoding method of claim 8 is the reverse of the encoding method of claim 1. Lim further shows a method for decoding an image (0008 and 0009; Figure 3), and therefore Lim in view of Zhao shows all reverse steps of the encoding method as disclosed above; claim 8 is rejected for the same reasons of obviousness as used above.

The non-transitory computer-recordable medium of claim 15 is drawn to a recording medium storing a bitstream corresponding to the image encoding method of claim 1. Therefore, claim 15 corresponds to image encoding method claim 1 and is rejected for the same reasons of obviousness as used above.

Regarding claims 2, 9 and 16: Zhao shows the image encoding method wherein the transform kernel information is index information indicating at least one of predefined transform kernel sets (0105, 0109 and 0186; Figures 7A-7B; "[0105] However, as described in X.
Zhao et al., “Video coding with rate-distortion optimized transform,” IEEE Trans. Circuits Syst. Video Technol., vol. 22, no. 1, pp. 138-151, January 2012, more transforms may be used and such examples may explicitly signal an index (e.g., encode data indicative of the index) to the transforms from a pre-defined set of transform candidates which are derived from off-line training process. Similar to MDDT, in such examples, each intra prediction direction may have its unique set of pairs of transforms. An index may be signaled to specify which transform pair is chosen from the set. For example, there may be up to four vertical KLTs and up to four horizontal KLTs for smallest block sizes 4×4. ... The techniques proposed in this disclosure may apply to both intra and inter prediction residual. In this disclosure, intra prediction residual refers to residual data generated using intra prediction. … inter prediction residual refers to residual data generated using inter prediction. For inter prediction residual, up to 16 combinations of KLTs may be chosen and an index to one of the combinations (four for 4×4 and sixteen for 8×8) may be signaled for each block.”, “[0186] … according to FIG. 7A, for luma intra prediction modes (IPM) greater than 34, the same transform set index for the intra mode 68-IPM may be applied. However, to utilize the symmetry between intra prediction mode IPM and 68-IPM, at the encoder/decoder, the transform coefficient block may be transposed before/after doing the secondary transform. More specifically, in the example of FIG. 7B, intra prediction mode (IPM) ranges from 0 to 66. As illustrated in FIG. 7B, intra mode IPM and 68-IPM are symmetric. For instance, intra mode 18 (horizontal prediction) and 50 (vertical prediction) are symmetric. Since IPM and 68-IPM are symmetric, the non-separable transform applied on these two modes has some connection. 
For instance, if we transpose the residual block predicted from mode 50 (vertical prediction), the residual statistics should be very similar to the residual blocks predicted from mode 18 (horizontal prediction). ..."). The motivation used in the rejection of claims 1, 8 and 15 to combine Lim and Zhao also applies to the rejection of claims 2, 9 and 16.

Regarding claims 3, 10 and 17: Lim further teaches the image encoding method of claim 1, wherein the transform kernel information is encoded for each of coding units (0055; "… information required to decode an encoded bit string within encoded data (i.e. bitstream) and the same information includes, for example size information of coding unit (CU), prediction unit (PU), transform unit (TU), information").

Regarding claims 4, 11 and 18: Lim further teaches the image encoding method of claim 1, wherein the transform kernel information is encoded when a size of the current block is less than or equal to a predefined size (0064; describing the current block having a predetermined size).

Regarding claims 5, 12 and 19: Zhao shows the image encoding method of claim 1, wherein the transform kernel information is encoded when a non-zero transform coefficient exists in the current block (0174; "[0174] … when a secondary transform is enabled, the particular mode may be disabled for some conditions but enabled for other conditions. The conditions may include, but are not limited to, block size, number of non-zero transform coefficients, whether coding is for the luma or chroma component, the neighboring prediction modes …"). The motivation used in the rejection of claims 1, 8 and 15 to combine Lim and Zhao also applies to the rejection of claims 5, 12 and 19.

Regarding claims 6, 13 and 20:
Lim further teaches the image encoding method of claim 1, wherein the transform kernel information is encoded when a transform skip mode is not performed on the current block (0010-0012 and 0062; generate a predicted block for the current block based on the extracted prediction information when the extracted prediction mode information is not indicative of the SKIP mode).

Claim Rejections - 35 USC § 103

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Lim in view of Zhao as applied to claims 1 and 8 above, and further in view of Joshi et al. (US 20130114730 A1, hereinafter "Joshi").

Regarding claims 7 and 14: Lim in view of Zhao shows all of the limitations of the image encoding method of claim 1, but does not show wherein the horizontal transform kernel and the vertical transform kernel are determined independently. However, in the same field of endeavor, Joshi shows wherein the horizontal transform kernel and the vertical transform kernel are determined independently (0161; "[0161] .. the selected transform skip mode is not signaled to video decoder 30. Instead, the determination of whether to skip a transform for the video block in a given direction is boundary-dependent. An indication of whether a transform is skipped is, .. derived based on the determined boundaries of the video block. The choice of whether a transform is skipped is independent in the horizontal and vertical directions."). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine Joshi's teaching of independently determining the horizontal and vertical kernels with the teachings of Lim in view of Zhao in order to obtain a predictable result by using the skipping mode.

The image decoding method of claim 14 is the reverse of encoding method claim 8, and is rejected for the same reasons of obviousness as used above.
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,225,240 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the pending application are broader than the patented claims (please refer below to the tabular analysis of the claims).

Claim chart: 19/005,435 vs. US 12,225,240 B2

19/005,435, claim 1: An image encoding method, comprising: determining a transform kernel of a current block; generating at least one coefficient of the current block by performing a transform on residual samples of the current block, based on the determined transform kernel; and encoding transform kernel information indicating the transform kernel of the current block and the at least one coefficient of the current block, wherein the transform comprises a primary transform and a secondary transform, the at least one coefficient is generated by performing the secondary transform on a result of the primary transform on the residual samples using the determined transform kernel, and the transform kernel information indicates one of predefined transform kernel sets comprising a horizontal transform kernel and a vertical transform kernel.

US 12,225,240 B2, claim 1: An image encoding method, comprising: performing prediction for a current block using a prediction mode for the current block; determining a transform kernel of the current block; generating at least one coefficient of the current block by performing a transform on residual samples of the current block, based on the transform kernel; and encoding transform kernel information indicating the transform kernel of the current block and the at least one coefficient of the current block, wherein the transform comprises a primary transform and a secondary transform, the at least one coefficient is generated by performing the secondary transform on a result of the primary transform on the residual samples using the transform kernel of the current block, the transform kernel information indicates one of predefined transform kernel sets as the transform kernel of the current block, each of the predefined transform kernel sets comprises a predefined horizontal transform kernel and a predefined vertical transform kernel, the transform kernel of the current block comprises a horizontal transform kernel and a vertical transform kernel, and the horizontal transform kernel and the vertical transform kernel of the transform kernel of the current block are determined based on a size of the current block, whether inter prediction is used for the prediction for the current block or not and whether intra prediction is used for the prediction for the current block or not.

19/005,435, claim 2: The image encoding method of claim 1, wherein the transform kernel information is index information indicating at least one of predefined transform kernel sets.
US 12,225,240 B2, claim 2: The image encoding method of claim 1, wherein the transform kernel information is index information indicating at least one of the predefined transform kernel sets.

19/005,435, claim 3: The image encoding method of claim 1, wherein the transform kernel information is encoded for each of coding units.
US 12,225,240 B2, claim 3: The image encoding method of claim 1, wherein the transform kernel information is encoded for each of coding units.

19/005,435, claim 4: The image encoding method of claim 1, wherein the transform kernel information is encoded when a size of the current block is less than or equal to a predefined size.
US 12,225,240 B2, claim 4: The image encoding method of claim 1, wherein the transform kernel information is encoded when the size of the current block is less than or equal to a predefined size.

Pending claims 5-7, 8-14 and 15-20 correspond to patented claims 5-7, 8-14 and 15-20, respectively.

Double Patenting

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-13 of U.S. Patent No. 11,863,798 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because, while the claims slightly differ in language, the scope is similar and is an obvious variant of the language used (please refer below to the tabular analysis of the claims).

Claim chart: 19/005,435 vs. US 11,863,798 B2

19/005,435, claim 1: An image encoding method, comprising: determining a transform kernel of a current block; generating at least one coefficient of the current block by performing a transform on residual samples of the current block, based on the determined transform kernel; and encoding transform kernel information indicating the transform kernel of the current block and the at least one coefficient of the current block, wherein the transform comprises a primary transform and a secondary transform, the at least one coefficient is generated by performing the secondary transform on a result of the primary transform on the residual samples using the determined transform kernel, and the transform kernel information indicates one of predefined transform kernel sets comprising a horizontal transform kernel and a vertical transform kernel.

US 11,863,798 B2, claim 1: An image encoding method, comprising: … determining a transform kernel of the current block; generating at least one coefficient of the current block by performing a transform on residual samples of the current block, based on the transform kernel; and encoding transform kernel information indicating the transform kernel of the current block and the at least one coefficient of the current block, wherein the transform comprises a primary transform and a secondary transform, wherein the at least one coefficient is generated by performing the secondary transform on a result of the primary transform on the residual samples using the transform kernel, wherein the transform kernel information is an index indicating one of predefined transform kernel sets … comprises a horizontal transform kernel and a vertical transform kernel …

19/005,435, claim 2: The image encoding method of claim 1, wherein the transform kernel information is index information indicating at least one of predefined transform kernel sets.
US 11,863,798 B2, claim 1: An image encoding method ... wherein the transform kernel information is an index indicating one of predefined transform kernel sets …

19/005,435, claim 3: The image encoding method of claim 1, wherein the transform kernel information is encoded for each of coding units.
US 11,863,798 B2, claim 2: The image encoding method of claim 1, wherein the transform kernel information is encoded for each of coding units.

19/005,435, claim 4: The image encoding method of claim 1, wherein the transform kernel information is encoded when a size of the current block is less than or equal to a predefined size.
US 11,863,798 B2, claim 3: The image encoding method of claim 1, wherein the transform kernel information is selectively encoded based on whether a size of the current block is less than or equal to a predefined size.

19/005,435, claim 5: The image encoding method of claim 1, wherein the transform kernel information is encoded when a non-zero transform coefficient exists in the current block.
US 11,863,798 B2, claim 4: The image encoding method of claim 1, wherein the transform kernel information is selectively encoded based on whether a non-zero transform coefficient exists in the current block.

19/005,435, claim 6: The image encoding method of claim 1, wherein the transform kernel information is encoded when a transform skip mode is not performed on the current block.
US 11,863,798 B2, claim 5: The image encoding method of claim 1, wherein … the transform kernel information is encoded in a case that it is determined that the transform skip kernel is not performed on the current block.

19/005,435, claim 7: The image encoding method of claim 1, wherein the horizontal transform kernel and the vertical transform kernel are determined independently.
US 11,863,798 B2, claim 6: The image encoding method of claim 1, wherein the horizontal transform kernel and the vertical transform kernel are determined independently.

Pending claims 8-14 and 15-20 correspond to patented claims 7-12 and 13, respectively.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASMAMAW TARKO, whose telephone number is (571) 272-9205. The examiner can normally be reached Monday-Friday, 9:00 AM-5:00 PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chris Kelley, can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ASMAMAW G TARKO/
Patent Examiner, Art Unit 2482
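The two-stage transform scheme that the rejection maps to Zhao's Figure 12 flow (a separable primary transform, then a non-separable secondary transform on the low-frequency coefficients) can be sketched as follows. This is an illustrative sketch only, not code from either reference: the DCT-II is one common separable primary kernel, a random orthonormal matrix stands in for the trained non-separable KLT that Zhao describes, and the 4x4 low-frequency sub-block size is an assumption for illustration.

```python
import numpy as np

def dct2_matrix(n):
    # Orthonormal DCT-II basis, a common separable primary transform kernel.
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def primary_transform(residual, h_kernel, v_kernel):
    # Separable primary transform: vertical kernel on columns,
    # horizontal kernel on rows (the "horizontal/vertical kernel pair").
    return v_kernel @ residual @ h_kernel.T

def secondary_transform(coeffs, sec_kernel, sub=4):
    # Non-separable secondary transform (a KLT in Zhao's description),
    # applied only to the top-left low-frequency sub-block: the sub-block
    # is flattened to a vector, multiplied by one matrix, and restored.
    out = coeffs.copy()
    vec = out[:sub, :sub].reshape(-1)            # 4x4 block -> 16-vector
    out[:sub, :sub] = (sec_kernel @ vec).reshape(sub, sub)
    return out

rng = np.random.default_rng(0)
residual = rng.standard_normal((8, 8))           # toy residual block
d8 = dct2_matrix(8)
prim = primary_transform(residual, d8, d8)       # primary (DCT) stage
# Stand-in orthonormal "KLT" for the secondary stage (illustrative only).
klt = np.linalg.qr(rng.standard_normal((16, 16)))[0]
final = secondary_transform(prim, klt)           # secondary stage
```

Because both stages here are orthonormal, the coefficient energy equals the residual energy, which is why the encoder can quantize `final` directly and the decoder can invert both stages losslessly before quantization error is introduced.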

Prosecution Timeline

Dec 30, 2024
Application Filed
Dec 09, 2025
Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12529288: SYSTEMS AND METHODS FOR ESTIMATING RIG STATE USING COMPUTER VISION
Granted Jan 20, 2026 (2y 5m to grant)

Patent 12511768: METHOD AND APPARATUS FOR DEPTH IMAGE ENHANCEMENT
Granted Dec 30, 2025 (2y 5m to grant)

Patent 12506865: SYSTEMS AND METHODS FOR REDUCING A RECONSTRUCTION ERROR IN VIDEO CODING BASED ON A CROSS-COMPONENT CORRELATION
Granted Dec 23, 2025 (2y 5m to grant)

Patent 12498482: CAMERA APPARATUS
Granted Dec 16, 2025 (2y 5m to grant)

Patent 12469164: VEHICLE EXTERNAL DETECTION DEVICE
Granted Nov 11, 2025 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 81% (+9.3%)
Median Time to Grant: 3y 0m
PTA Risk: Low

Based on 395 resolved cases by this examiner. Grant probability is derived from the career allow rate.
