Prosecution Insights
Last updated: April 19, 2026
Application No. 18/726,297

FEATURE ENCODING/DECODING METHOD AND DEVICE, AND RECORDING MEDIUM IN WHICH BITSTREAM IS STORED

Non-Final OA: §102, §103

Filed: Jul 02, 2024
Examiner: LOTFI, KYLE M
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: LG Electronics Inc.
OA Round: 1 (Non-Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 71%

Examiner Intelligence

Career Allow Rate: 64% (grants 64% of resolved cases; 226 granted / 355 resolved; +5.7% vs TC avg)
Interview Lift: +7.2% (moderate lift, based on resolved cases with interview)
Typical Timeline: 2y 8m avg prosecution; 22 currently pending
Career History: 377 total applications across all art units

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 25.8% (-14.2% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 355 resolved cases.

Office Action

§102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 10-14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kim, WO 2021/0201642, published October 7, 2021, for which US 2023/0156212 A1 is used as a translation.

Regarding claim 1, Kim discloses: a feature decoding method performed by a feature decoding apparatus, the feature decoding method comprising: obtaining parameter information related to a plurality of layers from a bitstream (see [0495], which discloses that the "feature_coding" parameter may include information on the number of feature channels for encoding/decoding; see figure 32 for layers); and reconstructing feature information of the plurality of layers based on the parameter information (see [0518], which discloses reconstructing feature data from extracted feature data), wherein the parameter information comprises at least one of information about a number of feature channels (see [0495] and the "pca_feature_num" syntax) or information about a quantization parameter (QP) (see [0313]-[0334]).

Regarding claim 10, Kim discloses: the feature decoding method of claim 1, wherein the parameter information is obtained from a picture level of the bitstream (see figure 42, "feature_coding_header", and the description in [0581]: "Referring to FIG. 42-(a), Feature_coding_header is a header of a sequence, group, or frame unit according to the factor coding unit, and may contain head information about payload. The video/image data may be configured for each sequence, group, or frame unit (or level)" [emphasis added]).

Regarding claim 11, Kim discloses: the feature decoding method of claim 10, wherein the information about the number of feature channels indicates the same value with respect to the plurality of layers (see [0495]; this syntax denotes a same value across layers).

Feature encoding method claim 12 is directed to an encoding method analogous to the decoding method of claim 1. Therefore, encoding method claim 12 corresponds to decoding method claim 1 and is rejected for the same reasoning as set forth above.

Computer-readable recording medium claim 13 is directed to non-functional descriptive matter. A computer-readable recording medium storing a bitstream does not perform any functions, and any prior art disclosing a computer-readable recording medium storing a bitstream reads on such a claim. In this regard, claim 13 is anticipated by Kim in [0111], which discloses: "The bitstream may be transmitted over a network or may be stored in a digital storage medium."

Method claim 14 is directed to a method of transmitting a bitstream generated by a feature encoding method that corresponds to the feature decoding method of claim 1. Therefore, method claim 14 is rejected for the same reasoning as set forth above with respect to claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Kim, in view of Gan, "[VCM] On YOLOv3 as a Shared Backbone for Feature Map Compression," ISO/IEC JTC 1/SC 29/WG 2 m57386, July 2021.

Regarding claim 2, Kim discloses the limitations of claim 1, upon which claim 2 depends. Kim does not disclose: the feature decoding method of claim 1, wherein the information about the number of feature channels comprises information about the number of feature channels of a first layer and information about the number of feature channels of a second layer, wherein the information about the number of feature channels of the first layer indicates a value different from the number of feature channels of the second layer, and wherein the first layer and the second layer are included in the plurality of layers, indicating that a layer with a different number of feature channels is present among the plurality of layers.

However, Gan discloses the limitations of claim 2 in an analogous art (see Sec. 2, paragraph below Table 1: while the Darknet53 backbone consists of three layers containing 256, 512, and 1024 channels respectively, the YOLOv3 backbone proposed in [7] and used in this contribution has half as many channels; see Table 1, column "YOLOv3 backbone," listing channel counts 128, 256, 512; also see Sec. 5, second paragraph: the dimensionality of the resulting feature maps is 512x34x19, 256x68x38, and 128x136x76).

It would have been obvious to one having ordinary skill in the art before the time of the applicant's effective filing date to include channel counts per layer in the coding information of Kim, as disclosed in Gan, in order to more precisely signal the feature dimensions of each layer. Combining these elements would have merely entailed combining the respective teachings of the two references, without altering their respective functions, and would have had predictable results for one of ordinary skill in the art. See MPEP 2143.I.A.

Claims 5 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Kim, in view of Rosewarne, "[VCM] Coding results of feature map compression for object tracking," ISO/IEC JTC 1/SC 29/WG 2 m56666, Online, April 2021.

Regarding claim 5, Kim discloses the limitations of claim 1, upon which claim 5 depends. Kim does not disclose: the feature decoding method of claim 1, wherein the information about the QP comprises information about a QP of a first layer and information about a QP of a second layer, wherein the information about the QP of the first layer indicates a value different from the information about the QP of the second layer, and wherein the first layer and the second layer are included in the plurality of layers.

However, Rosewarne discloses this limitation in an analogous art. Rosewarne discloses in Section 6, "Feature Map Quantization and Packaging," that feature maps are grouped according to layer, that a quantization range is established for each layer, and that the ranges are signaled in SEI.

It would have been obvious to one having ordinary skill in the art before the time of the Applicant's effective filing date to group the quantization parameter settings and information according to layer, as disclosed in Rosewarne, as different layers are likely to have different quantization values according to their resolutions, making layer-level QP indication most computationally efficient.

Regarding claim 8, the combination of Kim in view of Rosewarne discloses the limitations of claim 5, upon which claim 8 depends. This combination, specifically Rosewarne, further discloses: the feature decoding method of claim 5, wherein the information about the QP of the first layer indicates a QP value of the first layer (see Section 6: "Symmetric quantization using the quantization range is performed, with the resulting values mapped into the sample range such that a 0.0 value maps to the mid-tone for the bit-depth of the sample space.").

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Kim, in view of Rosewarne, in view of Leontaris, US 2013/0028316 A1.

Regarding claim 9, the combination of Kim in view of Rosewarne discloses the limitations of claim 5, upon which claim 9 depends. This combination does not disclose: the feature decoding method of claim 5, wherein the information about the QP of the first layer indicates a difference between a QP value of the second layer and a QP value of the first layer.

However, in a related art directed to multi-layer video coding, Leontaris discloses multi-pass coding of video starting at a base layer, and estimating correlation and complexity relationships between the layers on an intermediate pass (see figure 8). Leontaris discloses coding an enhancement layer QP value based on a difference with a base layer value.

It would have been obvious to one having ordinary skill in the art before the time of the Applicant's effective filing date to incorporate the feature disclosed in Leontaris of coding a QP difference value within the layers of the feature map video latent representation in Kim, in order to reduce coding overhead, as a difference value requires fewer bits than an absolute value.

Allowable Subject Matter

Claims 3, 4, 6, and 7 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

Regarding claim 3, the prior art does not disclose or make obvious: the feature decoding method of claim 2, wherein the information about the number of feature channels of the first layer is obtained based on first channel information obtained from the bitstream indicating that the number of feature channels of the first layer and the number of feature channels of the second layer are different from each other. The closest prior art is Kim, whose "feature_coding()" flag, disclosed in [0495], indicates a number of feature channels used for coding. Neither Kim nor any other prior art of record discloses or suggests a first channel information that indicates there is a difference in the number of feature channels between two layers, and based upon which first channel information a flag such as the feature_coding flag is obtained.

Regarding claim 6, the prior art does not disclose or make obvious: the feature decoding method of claim 5, wherein the information about the QP of the first layer is obtained based on first QP information obtained from the bitstream indicating that the QP value of the first layer and the QP value of the second layer are different from each other.
None of the prior art of record discloses or suggests information about the QP of a first layer obtained based on first QP information from the bitstream indicating that two layers have differing QP values. The closest prior art is Rosewarne, "[VCM] Coding results of feature map compression for object tracking," ISO/IEC JTC 1/SC 29/WG 2 m56666, Online, April 2021, which discloses in Section 6, "Feature Map Quantization and Packaging," that feature maps are grouped according to layer, that a quantization range is established for each layer, and that these quantization ranges are signaled in SEI.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE M LOTFI, whose telephone number is (571) 272-8762. The examiner can normally be reached 9:00-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian Pendleton, can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KYLE M LOTFI/
Examiner, Art Unit 2425
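The per-layer parameter signaling at issue in claims 2, 5, and 9 can be illustrated with a small sketch. The syntax-element names below (`num_layers`, `num_channels`, `base_qp`, `delta_qp`) are hypothetical, invented for illustration only; they do not come from the application, Kim, Gan, or Rosewarne. The sketch shows the concept the examiner attributes to Leontaris for claim 9: signaling each layer's QP as a difference from a base value, so that small deltas cost fewer bits than absolute QPs.

```python
# Illustrative sketch (hypothetical syntax): per-layer channel counts
# and delta-coded QPs, as discussed for claims 2, 5, and 9.

def encode_params(layers, base_qp):
    """Pack per-layer (channels, qp) pairs into (name, value) syntax elements.

    Each layer's QP is signaled as a delta from base_qp (claim 9 concept);
    channel counts may differ per layer (claim 2 concept).
    """
    elements = [("num_layers", len(layers)), ("base_qp", base_qp)]
    for idx, (channels, qp) in enumerate(layers):
        elements.append((f"num_channels[{idx}]", channels))
        # A small signed delta needs fewer bits than an absolute QP value.
        elements.append((f"delta_qp[{idx}]", qp - base_qp))
    return elements

def decode_params(elements):
    """Reconstruct per-layer (channels, qp) tuples from the syntax elements."""
    values = dict(elements)
    base_qp = values["base_qp"]
    return [
        (values[f"num_channels[{i}]"], base_qp + values[f"delta_qp[{i}]"])
        for i in range(values["num_layers"])
    ]

# Channel counts echo Gan's example backbone (512, 256, 128); QPs vary per layer.
layers = [(512, 30), (256, 32), (128, 34)]
encoded = encode_params(layers, base_qp=30)
assert decode_params(encoded) == layers
```

The round trip confirms that per-layer values are recoverable from the base QP plus deltas; in the decoded list, layer 1's QP of 32 is carried in the bitstream as `delta_qp[1] = 2` rather than as the absolute value.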

Prosecution Timeline

Jul 02, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598317
HYBRID SPATIO-TEMPORAL NEURAL MODELS FOR VIDEO COMPRESSION
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12593070
SYSTEMS AND METHODS FOR SIGNALING SOURCE PICTURE TIMING INFORMATION FOR TEMPORAL SUBLAYERS IN VIDEO CODING
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12587646
NETWORK BASED IMAGE FILTERING FOR VIDEO CODING
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12581061
MATRIX BASED INTRA PREDICTION WITH MODE-GLOBAL SETTINGS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12574527
METHODS FOR ENCODING AND DECODING FEATURE DATA, AND DECODER
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 64%
With Interview: 71% (+7.2%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 355 resolved cases by this examiner. Grant probability derived from career allow rate.
