Prosecution Insights
Last updated: April 19, 2026
Application No. 19/022,954

METHOD, APPARATUS, AND MEDIUM FOR VISUAL DATA PROCESSING

Non-Final OA: §102, §103, §112
Filed: Jan 15, 2025
Examiner: SULLIVAN, TYLER
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: Bytedance Inc.
OA Round: 1 (Non-Final)
Grant Probability: 66% (Favorable)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 66% (above average)
251 granted / 380 resolved (+8.1% vs TC avg)

Interview Lift: +31.6% (strong), among resolved cases with an interview vs. without

Typical Timeline: 2y 7m avg prosecution, 31 currently pending

Career History: 411 total applications across all art units
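The headline allow rate above can be reproduced from the granted/resolved counts on the card. A minimal sketch (Python; the variable names are ours — the interview-lift and pendency figures cannot be recomputed from this panel alone, so only the allow rate is checked):

```python
# Sanity check of the Examiner Intelligence panel above; variable names
# are ours. The 66% career allow rate follows directly from the
# 251-granted / 380-resolved counts shown on the card.
granted, resolved = 251, 380
allow_rate_pct = granted / resolved * 100
print(f"Career allow rate: {allow_rate_pct:.1f}%")  # 66.1%, displayed as 66%
```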

Statute-Specific Performance

§101:  8.5% (-31.5% vs TC avg)
§103: 45.6% (+5.6% vs TC avg)
§102:  2.8% (-37.2% vs TC avg)
§112: 30.3% (-9.7% vs TC avg)

TC avg = Tech Center average estimate • Based on career data from 380 resolved cases
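Each "vs TC avg" delta above is the examiner's statute-specific rate minus the Tech Center average, so the implied baseline can be recovered from the panel's own figures. A minimal sketch (Python; the dict layout and variable names are ours — the ~40% baseline is derived from the panel, not stated in it):

```python
# Recover the implied Tech Center baseline from the statute panel above.
# The panel shows (examiner rate, delta vs TC avg); subtracting the delta
# from the rate yields the TC average estimate.
panel = {
    "101": (8.5, -31.5),
    "103": (45.6, +5.6),
    "102": (2.8, -37.2),
    "112": (30.3, -9.7),
}
for statute, (rate, delta) in panel.items():
    tc_avg = rate - delta
    print(f"Section {statute}: examiner {rate}%, TC avg ~ {tc_avg:.1f}%")
# Every statute implies the same ~40.0% baseline, i.e. all four deltas are
# measured against a single Tech Center estimate.
```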

Office Action

Grounds of rejection: §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55 (Chinese PCT Application CN2022/106139, filed July 16, 2022).

Information Disclosure Statement

The information disclosure statement (IDS) submitted on January 15, 2025 was filed before the mailing date of the First Action on the Merits (this Office Action). The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the Examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Claim Interpretation – Functional Analysis

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that use the word "means" or "step" or a generic placeholder but are nonetheless not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) recite(s) sufficient structure, materials, or acts to entirely perform the recited function. Such claim limitation(s) is/are: "cause the processor to perform …" in claim 18. The Examiner notes that the claimed "non-transitory memory with instructions thereon" and the claimed "processor" are being afforded status as connoting sufficient structure to one of ordinary skill in the art. Because this/these claim limitation(s) is/are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are not being interpreted to cover only the corresponding structure, material, or acts described in the specification as performing the claimed function, and equivalents thereof. If applicant intends to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to remove the structure, materials, or acts that perform the claimed function; or (2) present a sufficient showing that the claim limitation(s) does/do not recite sufficient structure, materials, or acts to perform the claimed function.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1 – 15 and 18 – 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 1, the claim recites "conversion," which has indefinite metes and bounds: both processes (e.g., encoding) and their inverses (e.g., decoding) are encompassed, and the steps of the claim do not distinguish between the two. Further, the claim raises an essential-steps issue, as it omits the encoder deriving the partition information and the decoder receiving the tile partition information.

Regarding claims 18 – 20, see claim 1, whose steps are performed by the claimed apparatus (claim 18), program (claim 19), and product-by-process (claim 20); these claims are similarly rejected.

Regarding claims 2 – 15, the dependent claims do not cure the deficiencies of their respective independent claims and are similarly rejected.
While claims 16 and 17 are not rejected, the inclusion of only one of these dependent claims (not both simultaneously) would overcome the rejection of claim 1; reciting the other would result in improper dependency issues (e.g., a decoder depending on an encoder, i.e., the inverse process depending on the forward process, or vice versa).

Regarding claim 18, the recited "apparatus for visual data processing" has indefinite metes and bounds: the preamble is ordinarily not afforded patentable weight, and the claim is additionally indefinite if the recitation is intended as a functional recitation of the claimed "apparatus."

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim 20 is rejected under 35 U.S.C. 102(a)(1) or 102(a)(2) as being anticipated by Jia, et al. (WO 2024/002497 A1, referred to as "Jia" throughout).
Regarding claim 20, Jia teaches a non-transitory computer-readable recording medium storing a bitstream of visual data which is generated by a method performed by an apparatus for visual data processing [Jia Figures 1 – 3 and 23 – 24 (see at least reference characters 2400, 2410, and 2416) as well as Page 15 line 4 – Page 16 line 11 (processors for encoding / decoding with memory storing instructions), Page 62 line 23 – Page 63 line 17 (system with programming to generate a bitstream), and Page 99 line 32 – Page 100 line 30 (non-transitory medium storing a program for execution to generate / read a bitstream), where there are no structural differences in the bitstream; see MPEP § 2113 I], wherein the method comprises [the method steps do not carry patentable weight, as the claim is a product-by-process claim in which only the bitstream (product) generated / created is given weight (see MPEP § 2113 I and II)].

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 – 20 are rejected under 35 U.S.C. 103 as being unpatentable over Jia, et al. (WO 2024/002497 A1, referred to as "Jia" throughout), and further in view of Ikonin, et al. (US PG PUB 2023/0262243 A1, referred to as "Ikonin" throughout) and Alshina, et al. (US PG PUB 2023/0353766 A1, referred to as "Alshina" throughout).

Regarding claim 1, see claim 18, which is the apparatus performing the steps of the claimed method. Regarding claim 19, see claim 18, which is the apparatus performing the steps of the claimed program. Regarding claim 20, while the method steps are not afforded patentable weight (product-by-process claim), in the sole interest of expediting prosecution, the claim, when afforded patentable weight, is rejected similarly to claim 1 as the method claimed, with Jia Figures 1 – 3 and 23 – 24 (see at least reference characters 2400, 2410, and 2416) as well as Page 15 line 4 – Page 16 line 11 (processors for encoding / decoding with memory storing instructions), Page 62 line 23 – Page 63 line 17 (system with programming to generate a bitstream), and Page 99 line 32 – Page 100 line 30 (non-transitory medium storing a program for execution to generate / read a bitstream) teaching the non-transitory medium storing a program to perform a method to generate a bitstream.
Regarding claim 18, Jia teaches a neural network (NN) based coding / decoding approach that uses quantized latents, pads the region / tile partitions, and computes statistics affecting the context model / coding method used. Ikonin teaches additional considerations in using padding and the use of auto-regressive processes in NNs. Alshina teaches signaling considerations on the size of regions / tiles for processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jia to include auto-regressive configurations as taught by Ikonin and to signal partitioning information as taught by Alshina.

The combination teaches a processor [Jia Figures 1 – 4 (see at least reference characters 230, 302, and 2410) as well as Page 15 line 21 – Page 16 line 11 (processors for encoding / decoding)] and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor [Jia Figures 1 – 3 and 23 – 24 (see at least reference characters 260, 304, 2400, 2410, and 2416) as well as Page 15 line 4 – Page 16 line 11 (processors for encoding / decoding with memory storing instructions), Page 62 line 23 – Page 63 line 17 (system with programming to generate a bitstream), and Page 99 line 32 – Page 100 line 30 (non-transitory medium storing a program for execution to generate / read a bitstream)], cause the processor to perform acts comprising:

obtaining, for a conversion between visual data and a bitstream of the visual data [Jia Figures 1 – 6 (subfigures included; see at least reference characters 20 and 30), 9 (see at least reference characters 901, 902, 904, 905, and 906), and 21 – 25 as well as Page 13 line 22 – Page 14 line 19 (encoder / decoder), Page 18 line 3 – Page 19 line 10 (encoding / decoding picture (obvious variant of images, see Page 1 lines 15 – 32) and video as the claimed visual data), Page 31 line 31 – Page 32 line 25 (auto-en(de)coder / en(de)coder network), and Page 36 line 18 – Page 37 line 2 (decoder / encoder as NN / NN in an encoding / decoding framework)], region information indicating sizes of a plurality of regions in a quantized latent representation of the visual data [Jia Figures 15 – 20 (partitions into tiles) and 25 – 26 as well as Pages 68 – 76 (the tables in Pages 70 – 76 are included, which contain information regarding the size (width and height) of tiles, and Pages 68 – 69 describe partitioning images / latents into tiles and tile size considerations) and Pages 77 – 79 (see Page 77 lines 23 – 29 (size using width and height) and Page 79 lines 1 – 18 (see at least the "image_tile.size" syntax element))];

selecting, based on the region information, a set of target neighboring samples from a plurality of candidate neighboring samples of a current sample in the quantized latent representation [Jia Figures 10 – 16 and 17 – 21 (subfigures included) as well as Page 37 lines 7 – 21 (padding (target samples) based on current samples in the latent for NN processing), Page 51 (based on region sizes / determination of region sizes for NN processing), and Page 52 lines 8 – 35 (target samples available, including nearest neighbor in the same region / tile); Alshina Figures 16 – 17, 21, and 23 as well as Paragraphs 292 – 303 (see at least reflection padding techniques, which use samples in the same region / tile)], the set of target neighboring samples being in the same region as the current sample [Jia Figures 10 – 14 and 17 – 21 (subfigures included) as well as Page 37 lines 7 – 21 (padding (target samples) based on current samples in the latent for NN processing), Page 51 (based on region sizes / determination of region sizes for NN processing), and Page 52 lines 8 – 35 (target samples available, including nearest neighbor in the same region / tile); Alshina Figures 16 – 17, 21, and 23 as well as Paragraphs 292 – 303 (see at least reflection padding techniques, which use samples in the same region / tile to process latents – combinable with Jia)];

determining the current sample based on the set of target neighboring samples [Jia Figures 14 – 21 (subfigures included – see at least bitstream1 and bitstream2 generated) as well as Page 32 line 18 – Page 33 line 24 (samples processed in the quantized latent representation with context modelling present), Page 37 lines 7 – 21 (padding (target samples) based on current samples in the latent for NN processing), Page 51 (based on region sizes / determination of region sizes for NN processing), Page 52 lines 8 – 35 (target samples available, including nearest neighbor in the same region / tile), and Page 86 lines 9 – 30 (padding to reconstruct current samples based on target neighbor samples); Alshina Figures 16 – 17, 21, and 23 as well as Paragraphs 292 – 303 (see at least reflection padding techniques, which use samples in the same region / tile – to combine with Jia); Ikonin Figures 3 and 13 as well as Paragraphs 135 – 139 (probability modelling for the entropy coding / decoding to determine the current sample uses an auto-regressive process) and 188 – 192 (autoregressive modelling / processing used in a decoder half / affects encoding the current sample)]; and

performing the conversion based on the current sample [see the Jia citations above regarding "conversion" and additionally Jia Figure 9 as well as Page 32 line 18 – Page 33 line 24 (samples processed in the quantized latent representation with context modelling present) in combination with Ikonin Figures 3 and 13 as well as Paragraphs 135 – 139 (probability / context modelling for the entropy coding / decoding to determine the current sample uses an auto-regressive process) and 188 – 192 (autoregressive modelling / processing used in a decoder half / affects the context model)].
The motivation to combine Ikonin with Jia is to combine features in the same / related field of invention of neural network processing and side information for video compression [Ikonin Paragraphs 3 – 5] in order to improve compression using neural networks [Ikonin Paragraphs 7 – 8 and 107, where the Examiner observes KSR Rationales (D) or (F) are also applicable]. The motivation to combine Alshina with Ikonin and Jia is to combine features in the same / related field of invention of neural networks for image encoding / decoding [Alshina Paragraphs 2 – 4] in order to improve efficiency in NN processing [Alshina Paragraphs 7 – 10, where the Examiner observes KSR Rationales (D) or (F) are also applicable]. This is the motivation to combine Jia, Ikonin, and Alshina that will be used throughout the rejection.

Regarding claim 2, the combination of Jia, Ikonin, and Alshina (see claim 18 for the combination statement) teaches wherein the current sample is a quantized latent sample of the visual data [Jia Figure 9 as well as Page 30 line 20 – Page 31 line 24 (samples of the quantized latent currently processed in the encoder / decoder) and Page 32 line 26 – Page 33 line 24 (samples processed in the quantized latent representation)]. See claim 1 for the motivation to combine Jia, Ikonin, and Alshina.
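The mapped limitations of claims 1 and 18 (selecting target neighboring samples that lie in the same region as the current sample, with the remaining kernel positions filled by a predetermined value, cf. claims 9 and 10) can be illustrated in code. This is a hypothetical sketch of the claimed technique only, not the applicant's or the cited references' actual implementation; the function name, the raster-scan causality rule, and the zero fill value are our assumptions:

```python
import numpy as np

def context_window(latent, y, x, region_mask, k=3, fill=0.0):
    """Gather a k x k causal context for the sample at (y, x).

    Hypothetical illustration: only already-decoded neighbors (raster
    order) that lie in the SAME region/tile as the current sample are
    kept; every other kernel position gets the predetermined fill value.
    """
    h, w = latent.shape
    half = k // 2
    ctx = np.full((k, k), fill, dtype=latent.dtype)
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            if (dy, dx) >= (0, 0):          # causal: skip current and future samples
                continue
            ny, nx = y + dy, x + dx
            inside = 0 <= ny < h and 0 <= nx < w
            if inside and region_mask[ny, nx] == region_mask[y, x]:
                ctx[dy + half, dx + half] = latent[ny, nx]
    return ctx

latent = np.arange(16, dtype=float).reshape(4, 4)
regions = np.repeat([[0, 0, 1, 1]], 4, axis=0)  # two vertical tiles
ctx = context_window(latent, 1, 2, regions)     # current sample is in tile 1
# Cross-tile neighbors (column 1, tile 0) stay at the fill value; the
# same-tile neighbors latent[0, 2] and latent[0, 3] are kept.
```

Because the context never crosses a tile boundary in this sketch, samples in different regions could in principle be determined in parallel, which is the behavior recited in claim 3.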
Regarding claim 3, Jia teaches a neural network (NN) based coding / decoding approach using quantized latents and pads the regions / tiles partitions and computes statistics affecting the context model / coding method used. Ikonin teaches additional considerations in using padding and the use of auto-regressive processed in NNs. Alshina teaches signaling considerations on the size of regions / tiles for processing. It would have been obvious to one of ordinary skill art before the effective filing date of the claimed invention to modify the teachings of Jia to include auto-regressive configurations as taught by Ikonin and to signal partitioning information as taught by Alshina. The combination teaches wherein the determination of the current sample and a determination of a further sample in the quantized latent representation is allowed to be performed in parallel [Jia Figures 15 – 21 as well as Page 41 lines 4 – 26 (wavefront processing to process regions / tiles in parallel in further combination / view of Ikonin Paragraph 154), Page 45 lines 13 – 35 (parallel processing regions / tiles and samples therein), Page 58 lines 17 – 35 (parallel processing of tiles / samples in latents), and Page 66 lines 1 – 31 (tile independently coded / decoded and done in parallel)], and the further sample is located in a region different from a region in which the current sample located []. See claim 1 for the motivation to combine Jia, Ikonin, and Alshina. Regarding claim 4, Jia teaches a neural network (NN) based coding / decoding approach using quantized latents and pads the regions / tiles partitions and computes statistics affecting the context model / coding method used. Ikonin teaches additional considerations in using padding and the use of auto-regressive processed in NNs. Alshina teaches signaling considerations on the size of regions / tiles for processing. 
It would have been obvious to one of ordinary skill art before the effective filing date of the claimed invention to modify the teachings of Jia to include auto-regressive configurations as taught by Ikonin and to signal partitioning information as taught by Alshina. The combination teaches wherein the current sample is determined by using an auto-regressive process [Jia Figure 9 as well as Page 32 line 18 – Page 33 line 24 (samples processed in quantized latent representation and context modelling present) in combination with Ikonin Figures 3 and 13 as well as Paragraphs 135 – 139 (probability modelling for the entropy coding / decoding to determine the current sample uses and auto-regressive process) and 188 – 192 (autoregressive modelling / processing used in a decoder half / affects encoding the current sample)]. See claim 1 for the motivation to combine Jia, Ikonin, and Alshina. Regarding claim 5, Jia teaches a neural network (NN) based coding / decoding approach using quantized latents and pads the regions / tiles partitions and computes statistics affecting the context model / coding method used. Ikonin teaches additional considerations in using padding and the use of auto-regressive processed in NNs. Alshina teaches signaling considerations on the size of regions / tiles for processing. It would have been obvious to one of ordinary skill art before the effective filing date of the claimed invention to modify the teachings of Jia to include auto-regressive configurations as taught by Ikonin and to signal partitioning information as taught by Alshina. 
The combination teaches wherein the auto-regressive process is a context model or a multistage context model [Jia Figure 9 as well as Page 32 line 18 – Page 33 line 24 (samples processed in quantized latent representation and context modelling present) in combination with Ikonin Figures 3 and 13 as well as Paragraphs 135 – 139 (probability / context modelling for the entropy coding / decoding to determine the current sample uses and auto-regressive process) and 188 – 192 (autoregressive modelling / processing used in a decoder half / affects the context model)]. See claim 1 for the motivation to combine Jia, Ikonin, and Alshina. Regarding claim 6, Jia teaches a neural network (NN) based coding / decoding approach using quantized latents and pads the regions / tiles partitions and computes statistics affecting the context model / coding method used. Ikonin teaches additional considerations in using padding and the use of auto-regressive processed in NNs. Alshina teaches signaling considerations on the size of regions / tiles for processing. It would have been obvious to one of ordinary skill art before the effective filing date of the claimed invention to modify the teachings of Jia to include auto-regressive configurations as taught by Ikonin and to signal partitioning information as taught by Alshina. 
The combination teaches wherein the region information is determined [See claim 1 or 18 “region information” limitation for citations] based on at least one of the following: a depth of a transform that is performed to obtain a latent representation of the visual data [Jia Page 41 lines 4 – 18 (levels of wavefront processing) and Page 59 line 7 – Page 60 line 4 or alternatively Ikonin Paragraphs 114 – 116 (depth of kernels / filters to perform transforms)], the number of regions in the plurality of regions [Jia Page 90 lines 14 – 30], the sizes of the plurality of regions [Jia Figures 14 – 18 (subfigures included) as well as Page 84 line 1 – Page 85 line 8 (size / height / width signaled and used in a tile / region size determination)], positions of the plurality of regions [Jia Figures 14 – 18 (subfigures included) as well as Page 84 line 1 – Page 85 line 8 (start positions of the region / tile used for size determinations)], a size of the latent representation [Jia Figures 6 – 11 as well as Page 35 line 12 – Page 36 line 17], a size of the quantized latent representation [Jia Figures 6 – 11 as well as Page 35 line 12 – Page 36 line 17], a size of a reconstruction of the visual data [Jia Figures 6 – 11 as well as Page 35 line 12 – Page 36 line 17], a color format of the visual data [Jia Page 17 lines 28 – 34], a color component of the visual data [Jia Page 59 line 7 – Page 60 line 4 and Page 97 line 30 – Page 99 line 31], or information regarding whether the visual data is resized [Jia Page 89 line 20 – Page 90 line 3 and Page 93 line 25 – Page 94 line 19 (signaling resizing ratio); Alshina Paragraphs 287 – 294 (resizing determinations and signaling)]. See claim 1 for the motivation to combine Jia, Ikonin, and Alshina. Regarding claim 7, Jia teaches a neural network (NN) based coding / decoding approach using quantized latents and pads the regions / tiles partitions and computes statistics affecting the context model / coding method used. 
Ikonin teaches additional considerations in using padding and the use of auto-regressive processed in NNs. Alshina teaches signaling considerations on the size of regions / tiles for processing. It would have been obvious to one of ordinary skill art before the effective filing date of the claimed invention to modify the teachings of Jia to include auto-regressive configurations as taught by Ikonin and to signal partitioning information as taught by Alshina. The combination teaches wherein the bitstream comprises at least one indication associated with the number of regions in the plurality of regions [Jia Page 81 line 27 – Page 82 line 29 (signaling the number of tiles / regions)], or wherein the bitstream comprises at least one indication associated with the sizes of the plurality of regions [Jia Figures 14 – 18 (subfigures included) as well as Page 84 line 1 – Page 85 line 8 (size / height / width signaled and used in a tile / region size determination)]. See claim 1 for the motivation to combine Jia, Ikonin, and Alshina. Regarding claim 8, Jia teaches a neural network (NN) based coding / decoding approach using quantized latents and pads the regions / tiles partitions and computes statistics affecting the context model / coding method used. Ikonin teaches additional considerations in using padding and the use of auto-regressive processed in NNs. Alshina teaches signaling considerations on the size of regions / tiles for processing. It would have been obvious to one of ordinary skill art before the effective filing date of the claimed invention to modify the teachings of Jia to include auto-regressive configurations as taught by Ikonin and to signal partitioning information as taught by Alshina. 
The combination teaches determining statistical information of the current sample based on the set of target neighboring samples [Jia Figures 6, 9, and 16 as well as Page 34 lines 1 – 23 (computing statistical properties / information including means and variances), Page 55 line 23 – Page 56 line 5 (combine with the use of target samples in Page 52 lines 21 – 25 (target samples in the latent / partition)), and Page 66 line 8 – Page 67 line 14 (encoding / decoding the current sample based on the statistical information computed); Ikonin Figures 12 – 13 as well as Paragraphs 187 – 192 (code segment included in which the side information / statistics are used to determine current samples)]; and determining the current sample based on the statistical information [Jia Figures 6, 9, 11, 16 – 20, and 23 – 24 as well as Page 52 lines 21 – 25 (target samples in the latent / partition) and Page 66 line 8 – Page 67 line 14 (encoding / decoding the current sample based on the statistical information computed); Ikonin Figures 12 – 13 as well as Paragraphs 187 – 192 (code segment included in which the side information / statistics are used to determine current samples); Alshina Figures 4 – 6 as well as Paragraphs 186 and 213 – 221 (mean and variance computed as statistical properties used to compute samples)]. See claim 1 for the motivation to combine Jia, Ikonin, and Alshina. Regarding claim 9, Jia teaches a neural network (NN) based coding / decoding approach using quantized latents and pads the regions / tiles partitions and computes statistics affecting the context model / coding method used. Ikonin teaches additional considerations in using padding and the use of auto-regressive processed in NNs. Alshina teaches signaling considerations on the size of regions / tiles for processing. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jia to include auto-regressive configurations as taught by Ikonin and to signal partitioning information as taught by Alshina. The combination teaches wherein the plurality of candidate neighboring samples are dependent on a processing kernel used to process the current sample [Jia Figures 6 – 9, 11, and 16 – 19 (subfigures included) as well as Page 49 line 22 – Page 51 line 12 (crop / pad latent to fit the processing kernel size / dimensions to combine with Page 52 line 21 – Page 53 line 22)]. See claim 1 for the motivation to combine Jia, Ikonin, and Alshina. Regarding claim 10, Jia teaches a neural network (NN) based coding / decoding approach that uses quantized latents, pads the region / tile partitions, and computes statistics affecting the context model / coding method used. Ikonin teaches additional considerations in using padding and the use of auto-regressive processes in NNs. Alshina teaches signaling considerations on the size of regions / tiles for processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jia to include auto-regressive configurations as taught by Ikonin and to signal partitioning information as taught by Alshina.
The combination teaches determining values for a part of samples in the processing kernel based on values for the set of target neighboring samples [Jia Figures 6 – 9, 11, and 16 – 19 (subfigures included) as well as Page 49 line 22 – Page 51 line 12 (crop / pad latent to fit the processing kernel size / dimensions to combine with Page 52 line 21 – Page 53 line 22 (target samples in the latent / partition))]; determining values for the rest of the samples in the processing kernel based on a predetermined value [Jia Figures 6 – 9, 11, and 16 – 19 (subfigures included) as well as Page 49 line 22 – Page 51 line 12 (crop / pad latent to fit the processing kernel size / dimensions to combine with Page 52 line 21 – Page 53 line 22 (constant / zero fill values)); Alshina Paragraphs 51 – 55, 92 – 96, and 255 – 259 (filling / padding with 0 values where one of ordinary skill understands zero as a constant / fixed term)]; and determining the statistical information based on values for the samples in the processing kernel [Jia Figures 6, 9, and 16 as well as Page 34 lines 1 – 23 (computing statistical properties / information including means and variances), Page 55 line 23 – Page 56 line 5 (combine with the use of target samples in Page 52 lines 21 – 25 (target samples in the latent / partition)), and Page 66 line 8 – Page 67 line 14 (encoding / decoding the current sample based on the statistical information computed); Ikonin Figures 12 – 13 as well as Paragraphs 187 – 192 (code segment included in which the side information / statistics are used to determine current samples and the autoregressive approach / model based on NN / kernels)]. See claim 1 for the motivation to combine Jia, Ikonin, and Alshina. Regarding claim 11, Jia teaches a neural network (NN) based coding / decoding approach that uses quantized latents, pads the region / tile partitions, and computes statistics affecting the context model / coding method used.
Ikonin teaches additional considerations in using padding and the use of auto-regressive processes in NNs. Alshina teaches signaling considerations on the size of regions / tiles for processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jia to include auto-regressive configurations as taught by Ikonin and to signal partitioning information as taught by Alshina. The combination teaches wherein the predetermined value is constant [Jia Figure 11 as well as Page 52 lines 21 – 35 (constant / zero fill values); Alshina Paragraphs 51 – 55, 92 – 96, and 255 – 259 (filling / padding with 0 values where one of ordinary skill understands zero as a constant / fixed term)]. See claim 1 for the motivation to combine Jia, Ikonin, and Alshina. Regarding claim 12, Jia teaches a neural network (NN) based coding / decoding approach that uses quantized latents, pads the region / tile partitions, and computes statistics affecting the context model / coding method used. Ikonin teaches additional considerations in using padding and the use of auto-regressive processes in NNs. Alshina teaches signaling considerations on the size of regions / tiles for processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jia to include auto-regressive configurations as taught by Ikonin and to signal partitioning information as taught by Alshina. The combination teaches wherein the predetermined value is 0 [Jia Figure 11 as well as Page 52 lines 21 – 35 (constant or zero fill values); Alshina Paragraphs 51 – 55, 92 – 96, and 255 – 259 (filling / padding with 0 values)]. See claim 1 for the motivation to combine Jia, Ikonin, and Alshina.
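For orientation, the mechanism mapped against claims 10 – 12 above (a causal processing kernel whose unavailable positions are filled with a predetermined constant of 0, from which mean / variance statistics are derived) can be illustrated with a short sketch. This is a hypothetical Python illustration written for this summary; the function names are invented here, and the code is not drawn from Jia, Ikonin, or Alshina.

```python
import numpy as np

def kernel_window(latent, y, x, k=5, fill=0.0):
    """Gather a k x k causal window around (y, x). Positions that are
    out of bounds or not yet decoded (i.e., at or after the current
    sample in raster order) take the predetermined constant fill value."""
    h = k // 2
    win = np.full((k, k), fill, dtype=latent.dtype)  # predetermined value (0)
    for dy in range(-h, h + 1):
        for dx in range(-h, h + 1):
            yy, xx = y + dy, x + dx
            # only previously decoded, in-bounds samples are target neighbors
            causal = dy < 0 or (dy == 0 and dx < 0)
            if causal and 0 <= yy < latent.shape[0] and 0 <= xx < latent.shape[1]:
                win[dy + h, dx + h] = latent[yy, xx]
    return win

def window_stats(win):
    """Statistical information for the current sample: mean and
    (population) variance over the samples in the processing kernel."""
    return float(win.mean()), float(win.var())

latent = np.arange(16, dtype=float).reshape(4, 4)
mu, var = window_stats(kernel_window(latent, 1, 1))  # stats drive the context model
```

In an auto-regressive entropy model of this general kind, the resulting mean and variance would parameterize the probability model used to code the current quantized latent sample; the zero fill is what makes the first sample of each region / tile independently decodable.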
Regarding claim 13, Jia teaches a neural network (NN) based coding / decoding approach that uses quantized latents, pads the region / tile partitions, and computes statistics affecting the context model / coding method used. Ikonin teaches additional considerations in using padding and the use of auto-regressive processes in NNs. Alshina teaches signaling considerations on the size of regions / tiles for processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jia to include auto-regressive configurations as taught by Ikonin and to signal partitioning information as taught by Alshina. The combination teaches wherein the transform comprises one of the following: an analysis transform [Ikonin Figures 1 – 3 as well as Paragraphs 135 – 137 (analysis / synthesis transform)], a wavelet-based forward transform [Jia Figure 15 as well as Page 41 lines 4 – 26 (wavefront processing, an obvious variant of the claimed wavelet transform) or alternatively Ikonin Figures 1 – 3 as well as Paragraph 132 (suggests using a wavelet transform)], or a discrete cosine transform (DCT) [Jia Page 29 line 25 – Page 30 line 3 (DCT suggested to use as a transform); Ikonin Figures 5 – 7 and 21 as well as Paragraphs 142 – 144, 150 (using DCT transforms) and 250 – 257 (transforms to use including DCTs)]. See claim 1 for the motivation to combine Jia, Ikonin, and Alshina. Regarding claim 14, Jia teaches a neural network (NN) based coding / decoding approach that uses quantized latents, pads the region / tile partitions, and computes statistics affecting the context model / coding method used. Ikonin teaches additional considerations in using padding and the use of auto-regressive processes in NNs. Alshina teaches signaling considerations on the size of regions / tiles for processing.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jia to include auto-regressive configurations as taught by Ikonin and to signal partitioning information as taught by Alshina. The combination teaches wherein the statistical information [See claim 1 for citations] comprises at least one of the following: a mean value [See next limitation for citations], or a variance [Jia Figures 6 (subfigures included) and 9 as well as Page 33 lines 1 – 24 (statistical side information of quantized latents computed included means or variances) and Page 60 lines 5 – 21 (means and variances computed and affects entropy tables / contexts used); Alshina Figures 4 – 6 as well as Paragraphs 186 and 219 – 221 (mean and variance computed as statistical properties)]. See claim 1 for the motivation to combine Jia, Ikonin, and Alshina. Regarding claim 15, Jia teaches a neural network (NN) based coding / decoding approach that uses quantized latents, pads the region / tile partitions, and computes statistics affecting the context model / coding method used. Ikonin teaches additional considerations in using padding and the use of auto-regressive processes in NNs. Alshina teaches signaling considerations on the size of regions / tiles for processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jia to include auto-regressive configurations as taught by Ikonin and to signal partitioning information as taught by Alshina. The combination teaches wherein the visual data comprise a picture of a video or an image [Jia Figures 4 – 5 (see at least reference characters 20 and 30) as well as Page 18 line 3 – Page 19 line 10 (encoding / decoding picture (an obvious variant of images; see Page 1 lines 15 – 32) and video)]. See claim 1 for the motivation to combine Jia, Ikonin, and Alshina.
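The claim 13 mapping above refers to the discrete cosine transform. As a point of reference, the 1-D DCT-II with orthonormal scaling, the analysis transform conventionally used in block-based image / video codecs, can be sketched as follows. This is standard textbook mathematics written out for this summary, not code drawn from Jia, Ikonin, or Alshina.

```python
import math

def dct_ii(x):
    """Naive orthonormal 1-D DCT-II:
    X[k] = s(k) * sum_i x[i] * cos(pi * (i + 1/2) * k / n),
    with s(0) = sqrt(1/n) and s(k>0) = sqrt(2/n)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

# A constant block concentrates all energy in the DC coefficient, which is
# why transform coding compacts smooth image regions into few coefficients.
coeffs = dct_ii([1.0, 1.0, 1.0, 1.0])  # ~[2.0, 0.0, 0.0, 0.0]
```

The energy-compaction property shown in the comment is what makes the DCT (and, analogously, wavelet analysis transforms) a natural alternative or complement to learned analysis transforms in the cited NN-based coding frameworks.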
Regarding claim 16, Jia teaches a neural network (NN) based coding / decoding approach that uses quantized latents, pads the region / tile partitions, and computes statistics affecting the context model / coding method used. Ikonin teaches additional considerations in using padding and the use of auto-regressive processes in NNs. Alshina teaches signaling considerations on the size of regions / tiles for processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jia to include auto-regressive configurations as taught by Ikonin and to signal partitioning information as taught by Alshina. The combination teaches wherein the conversion includes encoding the visual data into the bitstream [Jia Figures 1 – 6 (subfigures included and see at least reference character 20) and 9 (see at least reference characters 901, 902, 904, 905, and 906) as well as Page 13 line 22 – Page 14 line 19 (encoder), Page 31 line 31 – Page 32 line 25 (auto-encoder / encoder network), and Page 36 line 18 – Page 37 line 2 (encoder as NN / NN in an encoding framework)]. See claim 1 for the motivation to combine Jia, Ikonin, and Alshina. Regarding claim 17, Jia teaches a neural network (NN) based coding / decoding approach that uses quantized latents, pads the region / tile partitions, and computes statistics affecting the context model / coding method used. Ikonin teaches additional considerations in using padding and the use of auto-regressive processes in NNs. Alshina teaches signaling considerations on the size of regions / tiles for processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jia to include auto-regressive configurations as taught by Ikonin and to signal partitioning information as taught by Alshina.
The combination teaches wherein the conversion includes decoding the visual data from the bitstream [Jia Figures 1 – 6 (subfigures included and see at least reference character 30) and 9 (see at least reference characters 901, 902, 904, 905, and 906) as well as Page 13 line 22 – Page 14 line 19 (decoder), Page 31 line 31 – Page 32 line 25 (auto-decoder / decoder network), and Page 36 line 18 – Page 37 line 2 (decoder as NN / NN in a decoding framework)]. See claim 1 for the motivation to combine Jia, Ikonin, and Alshina. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. References considered which may raise ODP issues based on amendments to the claims include: Esenlik, et al. (US PG PUB 2024/0430482 A1, referred to as “Esenlik” throughout) and Esenlik, et al. (US PG PUB 2024/0430428 A1, referred to as “Esen” throughout). Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tyler W Sullivan whose telephone number is (571)270-5684. The examiner can normally be reached IFP. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Czekaj, can be reached at (571)272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /TYLER W. SULLIVAN/ Primary Examiner, Art Unit 2487

Prosecution Timeline

Jan 15, 2025
Application Filed
Jan 08, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594884
TRAILER ALIGNMENT DETECTION FOR DOCK AUTOMATION USING VISION SYSTEM AND DYNAMIC DEPTH FILTERING
2y 5m to grant Granted Apr 07, 2026
Patent 12593027
INTRA PREDICTION FOR SQUARE AND NON-SQUARE BLOCKS IN VIDEO COMPRESSION
2y 5m to grant Granted Mar 31, 2026
Patent 12563211
VIDEO DATA ENCODING AND DECODING USING A CODED PICTURE BUFFER WHOSE SIZE IS DEFINED BY PARAMETER DATA
2y 5m to grant Granted Feb 24, 2026
Patent 12542894
Method, An Apparatus and a Computer Program Product for Implementing Gradual Decoding Refresh
2y 5m to grant Granted Feb 03, 2026
Patent 12541880
CAMERA CALIBRATION METHOD, AND STEREO CAMERA DEVICE
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
66%
Grant Probability
98%
With Interview (+31.6%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 380 resolved cases by this examiner. Grant probability derived from career allow rate.
