Prosecution Insights
Last updated: April 19, 2026
Application No. 19/073,504

Encoding Method and Apparatus, and Decoding Method and Apparatus

Non-Final OA: §102, §103, §112
Filed: Mar 07, 2025
Examiner: BILLAH, MASUM
Art Unit: 2486
Tech Center: 2400 (Computer Networks)
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80%, above average (335 granted / 419 resolved; +22.0% vs TC avg)
Interview Lift: +21.4%, a strong lift, measured across resolved cases with an interview
Typical Timeline: 2y 6m average prosecution; 31 applications currently pending
Career History: 450 total applications across all art units
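The headline figures above are simple ratios. A quick check of the arithmetic, assuming the allow rate is granted divided by resolved and the "vs TC avg" figure is a percentage-point difference (variable names here are illustrative, not from the dashboard):

```python
# Dashboard arithmetic check: 335 granted out of 419 resolved cases.
granted, resolved = 335, 419

allow_rate = granted / resolved        # fraction of resolved cases granted
tc_avg_estimate = allow_rate - 0.220   # dashboard reports +22.0% vs TC avg

print(f"Career allow rate: {allow_rate:.1%}")   # rounds to the 80% headline
print(f"Implied TC average: {tc_avg_estimate:.1%}")
```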

Statute-Specific Performance

§101: 3.9% (−36.1% vs TC avg)
§102: 14.2% (−25.8% vs TC avg)
§103: 60.5% (+20.5% vs TC avg)
§112: 11.2% (−28.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 419 resolved cases.

Office Action

§102, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

DETAILED ACTION

This Office Action is in response to application 19/073,504, filed on 03/07/2025. Claims 1-20 have been examined and are pending in this application.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 12/01/2025 and 03/17/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 10, and 11 recite "image content indicates a relative value of a first expected number of bits based on encoding the (i+k)th coding unit". This wording is unclear and relative, because it does not define which quantity or parameter the "value" is relative to.
The entire claim therefore reads as a result to be achieved, devoid of clear technical steps.

Regarding claims 2, 3, and 12-15, it is unclear whether the wording "complexity" refers to the non-encoded input image or to the residual after hybrid loop compensation.

In claims 4, 6, 7, 16, and 18-20, the wording "block bits" and "lossless coded bits" is introduced for the given coding unit without definition and is therefore unclear. In particular, it is unclear which compression method is adopted and how these "bits" arise from it.

In claims 4, 8, 16, and 20, the wording "combination of the set moment and the complexity level" is likewise undefined and unclear, in particular whether "set moment" is to be understood as referring to the current block under decoding.

The remaining claims, which depend directly or indirectly from claims 1, 10, and 11, are rejected under 35 U.S.C. 112(b) for the same reasons.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 8-12, 14, 15, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Guo et al. (US 2010/0104010 A1).

Regarding claim 1, Guo discloses "a method comprising: obtaining consecutive coding units in a bitstream" [see figs. 1-2 and para. 0018; In the present invention, the conventional RC algorithm is divided into an UpdateQP part 20 and an UpdateModel part 18; the UpdateQP part 20 is arranged before the IME stage 10, and the UpdateModel part 18 is arranged behind the Entropy stage 16. In the UpdateQP part 20, calculating QP needs the information of the remaining bits.
However, the exact number of the bits used by the first macro block (MB0) is unknown until its four stages are completed. In this embodiment, the bits used by MB0 are finally obtained by the UpdateQP part 20 of MB4];

"obtaining a quantization parameter (QP) value of an (i+k)th coding unit" [see para. 0018, above] "in the consecutive coding units based on a first number of coded bits of an ith coding unit in the consecutive coding units" [see para. 0019; After the first macro block (MB0) has output data, such as curbuMAD, curbuHeaderBits and curbuTextureBits, the data may be used to predict the fifth macro block (MB4). When the data of MB0 is used to predict MB4, the value of the remaining bits is incorrect because three macro blocks are interposed between them. Therefore, the bit numbers of the three intermediate macro blocks must be estimated before adjusting the value of the remaining bits, whereby the values of the distributable bits can be more accurately estimated, as shown in Equation (2):

T_{r,l} = T_{r,l-4} − [(m_{hdr,l-4} + m_{tex,l-4}) × 3 × MADratio1]  (2)

wherein T denotes the number of bits, r denotes the remaining bits, l denotes the ordinal number of the macro block, m denotes the number of bits really generated, hdr denotes the header, tex denotes texture, and MADratio1 denotes a first coefficient. Equation (2) can predict the value of the remaining bits: the number of remaining bits of the current macro block is equal to the number of remaining bits of the fourth macro block before the current macro block, minus triple the number of bits really used by that macro block. If this triple is multiplied by the first coefficient, the prediction will be more accurate. The calculation of the first coefficient is expressed by Equation (3):

MADratio1 = MADPBUact / MADPd  (3)]

"and based on image content of the (i+k)th coding unit" [see para. 0020; wherein MADPBUact is the real MAD (Mean Absolute Difference) of the preceding macro block, i.e. the MAD of the fourth macro block before the current macro block, and MADPd is the predicted MAD of the current macro block. MAD is an index to verify whether the predicted value is correct in video encoding. The greater the MAD, the less accurate the predicted value; it implies that the images are currently moving faster. Thus, MAD can be used to correct the predicted number of the remaining bits. The larger the MAD value, the more bits the three intermediate macro blocks require; the smaller the MAD value, the fewer bits they require. The calculation of MADPd is expressed by Equation (4):

MADPd = C1 × MADPFAVG × MADratio2 + C2  (4)

wherein C1 and C2 are parameters defined by the RC algorithm for H.264/AVC and obtained from the UpdateModel part 18, MADPFAVG is the average value of all the MADs of the preceding frame, and MADratio2 is a second coefficient used to correct the MADs of the preceding and current macro blocks];

"wherein i is a positive integer" [i = cur − 4], "wherein k is a positive integer greater than or equal to 2" [k = 4], "and wherein the image content indicates a relative value of a first expected number of bits based on encoding the (i+k)th coding unit" [see para. 0019]; "and decoding the (i+k)th coding unit based on the QP value" [see para. 0015; FIG. 2 is a diagram schematically showing the hardware scheduling of a 4-stage pipeline encoder according to one embodiment of the present invention].

Regarding claim 2, Guo discloses "wherein the image content comprises a complexity level of the (i+k)th coding unit" [see para. 0022; the remaining bits can be predicted according to the abovementioned equations. Then, the bits required by the current macro block can be predicted with Equation (7) of the H.264/AVC RC algorithm:

b̃_l = T_r × σ̃²_{l,i} / Σ_{k=l}^{N_unit} σ̃²_{k,i}  (7)

wherein σ̃ denotes MAD, and σ̃_{l,i} denotes the MAD of the lth MB of the ith frame. MAD is an index to predict complexity in the RC algorithm. The bit number of the current macro block is equal to the predicted MAD of the current macro block divided by the sum of the MADs of all the other macro blocks and then multiplied by the value of the remaining bits. In other words, the bits are distributed according to the ratio of the complexity of the current macro block to the complexity of the remaining macro blocks].
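The remaining-bits prediction described in the cited passages (Equations (2) through (4)) can be sketched in a few lines. This is a minimal illustration under simple scalar assumptions; every function name and sample value below is hypothetical, not taken from the Guo reference.

```python
# Sketch of Guo's pipelined rate-control correction: because MB(l) is
# scheduled before MB(l-1)..MB(l-3) finish, the remaining-bit budget is
# estimated from MB(l-4) and corrected by a MAD ratio.

def predict_mad(c1: float, c2: float, mad_prev_frame_avg: float,
                mad_ratio2: float) -> float:
    """Equation (4): predicted MAD of the current macro block."""
    return c1 * mad_prev_frame_avg * mad_ratio2 + c2

def predict_remaining_bits(t_r_prev4: float, m_hdr_prev4: float,
                           m_tex_prev4: float, mad_prev4_actual: float,
                           mad_pred_current: float) -> float:
    """Equations (2)-(3): remaining bits for macro block l, subtracting an
    estimate for the three in-flight blocks between MB(l-4) and MB(l)."""
    mad_ratio1 = mad_prev4_actual / mad_pred_current            # Equation (3)
    return t_r_prev4 - (m_hdr_prev4 + m_tex_prev4) * 3 * mad_ratio1  # Eq. (2)

# Toy numbers: 12000 bits left after MB(l-4), which really spent
# 200 header + 1800 texture bits; MAD predicts similar complexity ahead.
mad_pred = predict_mad(c1=1.0, c2=0.0, mad_prev_frame_avg=5.0, mad_ratio2=1.0)
t_r = predict_remaining_bits(12000, 200, 1800,
                             mad_prev4_actual=5.0, mad_pred_current=mad_pred)
print(t_r)  # 12000 - 2000*3*1.0 = 6000.0
```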
Regarding claim 8, the Examiner takes Official Notice that the features of claim 8 are commonly known in the art, implicit, or obtainable or updatable from a mathematical equation. It therefore would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to incorporate what is known in the art into the teaching of the Guo reference, because the combination would significantly increase the efficiency of obtaining a QP value of a coding block, and the coding time corresponding to that block, in a coding process.

Regarding claim 9, Guo discloses "further comprising: decoding the bitstream to obtain an image" [see para. 0015; FIG. 2 is a diagram schematically showing the hardware scheduling of a 4-stage pipeline encoder according to one embodiment of the present invention]; "and displaying the image" [commonly known in the art, or implicit after the coding process].

Regarding claims 10 and 11, these claims are rejected under the same art and evidentiary limitations as determined for the method of claim 1, and for the decoder.

Regarding claim 12, claim 12 is rejected under the same art and evidentiary limitations as determined for the method of claim 2.

Regarding claim 14, Guo discloses "further comprising: dividing the (i+k)th coding unit into sub-blocks" [see para. 0028; The arithmetic and logic unit 34 includes seven adders, two multipliers, a 16-cycle sequence divider, a 4-stage pipeline divider, a 16-cycle radical calculator, and a QP generator, whereby updating QP needs only 100 cycles and updating models needs only 260 cycles. In other words, one macro block consumes only 360 cycles, and a QCIF-size frame consumes only 35,640 cycles]; "obtaining texture complexity levels of the sub-blocks, wherein the texture complexity levels are one of set complexity levels" [see para. 0019, reproduced above: the data output by MB0 is used to predict MB4, and the bit numbers of the three intermediate macro blocks are estimated before adjusting the value of the remaining bits]; "obtaining a first texture complexity level of the (i+k)th coding unit based on the texture complexity levels" [see para. 0019]; "and determining the complexity level based on the first texture complexity level" [see para. 0022 and Equation (7), reproduced above: the bits are distributed according to the ratio of the complexity of the current macro block to the complexity of the remaining macro blocks].
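The MAD-proportional bit allocation in Equation (7), on which the claim 2, 14, and 15 mappings rely, can be sketched as follows. The budget for the current macro block is the remaining bits scaled by the ratio of its predicted complexity (MAD squared) to the total complexity of all not-yet-coded blocks. Names and values are illustrative assumptions, not the reference's implementation.

```python
# Sketch of Equation (7) of the H.264/AVC RC algorithm as quoted in the
# Office Action: bits are distributed in proportion to squared MAD.

def allocate_bits(remaining_bits: float, mads: list[float]) -> float:
    """Bits for the first macro block in `mads` (the current one);
    the rest of the list holds the remaining, not-yet-coded blocks."""
    weights = [m * m for m in mads]              # sigma-tilde squared
    return remaining_bits * weights[0] / sum(weights)

# Current MB has predicted MAD 4; three remaining MBs have MAD 2 each,
# so the current block gets 16/(16+4+4+4) = 4/7 of the 7000-bit budget.
print(allocate_bits(7000, [4.0, 2.0, 2.0, 2.0]))  # 4000.0
```

A more complex block (larger MAD) thus automatically draws a larger share of the remaining budget, which is the rate-control behavior the examiner maps onto the claimed "complexity level".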
Regarding claim 15, Guo discloses "wherein determining the complexity level comprises processing the texture complexity levels according to a set rule to determine the complexity level" [see para. 0022 and Equation (7), reproduced above: the bits are distributed according to the ratio of the complexity of the current macro block to the complexity of the remaining macro blocks].

Regarding claim 20, claim 20 is rejected under the same art and evidentiary limitations as determined for the method of claim 8.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Guo et al. (US 2010/0104010 A1) in view of Adiletta et al. (US 2005/0276329 A1).

Regarding claim 3, Guo discloses all the limitations of claim 2, analyzed as previously discussed with respect to that claim. Guo does not explicitly disclose "further comprising obtaining the complexity level of the (i+k)th coding unit from the bitstream, wherein the complexity level comprises at least one of a luminance complexity level or a chrominance complexity level". However, Adiletta, from the same or a similar field of endeavor, teaches this limitation [see para. 0080; The raw, analog video is input to the video port 36 of the VCDU and converted into luminance and chrominance data types, where the luminance roughly corresponds to the intensity at that point, and the chrominance corresponds to the color. The digital data consists of eight bits of luminance (Y), eight bits of chrominance-blue (Cb) and eight bits of chrominance-red (Cr).
Raw, analog video data are received by the color decoder 33 and translated to digital YUV format according to the CCIR601 standard, at either an NTSC format of 720 pixels × 480 scan lines at 29.97 frames per second or a PAL format of 720 pixels × 576 lines at 25 frames per second. The pixel data arrive as a stream of horizontal scan lines. The scan lines arrive in interlaced order (first all consecutive even lines from top to bottom, followed by all consecutive odd lines from top to bottom)].

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Guo with the teachings of Adiletta, in order to improve the efficiency of obtaining a QP value of a coding block: the complexity level would be determined for luminance and chrominance using the mathematical calculation mentioned in the above paragraphs [see para. 0080].

Regarding claim 13, claim 13 is rejected under the same art and evidentiary limitations as determined for the method of claim 3.

Allowable Subject Matter

Claims 4-7 and 16-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Vos et al. (US 2008/0112632 A1).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Masum Billah, whose telephone number is (571) 270-0701. The examiner can normally be reached Monday through Friday, 9 AM to 5 PM ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jamie J. Atala, can be reached at (571) 272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MASUM BILLAH/
Primary Patent Examiner, Art Unit 2486

Prosecution Timeline

Mar 07, 2025: Application Filed
Mar 07, 2026: Non-Final Rejection under §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603983: APPARATUS AND METHOD FOR GENERATING OBJECT-BASED STEREOSCOPIC IMAGES (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597123: RAIL FEATURE IDENTIFICATION SYSTEM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597258: ALERT DIRECTIVES AND FOCUSED ALERT DIRECTIVES IN A BEHAVIORAL RECOGNITION SYSTEM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591954: DEPTH INFORMATION DETECTOR, TIME-OF-FLIGHT CAMERA, AND DEPTH IMAGE ACQUISITION METHOD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12581101: TEMPLATE MATCHING REFINEMENT FOR AFFINE MOTION (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 99% (+21.4%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 419 resolved cases by this examiner. Grant probability derived from career allow rate.
