Prosecution Insights
Last updated: April 19, 2026
Application No. 18/875,673

METHOD FOR IMAGE ENCODING

Non-Final OA (§102, §103)
Filed
Dec 16, 2024
Examiner
XU, XIAOLAN
Art Unit
2488
Tech Center
2400 — Computer Networks
Assignee
MBDA UK Limited
OA Round
1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability with Interview: 87%

Examiner Intelligence

Career Allow Rate: 74% (247 granted / 334 resolved), +16.0% vs TC average (above average)
Interview Lift: +13.3% for resolved cases with an interview (moderate lift)
Typical Timeline: 2y 11m average prosecution; 37 applications currently pending
Career History: 371 total applications across all art units

Statute-Specific Performance

§101: 6.3% (-33.7% vs TC avg)
§102: 20.0% (-20.0% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)
Based on career data from 334 resolved cases; comparisons are against a Tech Center average estimate.

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim 7 is rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kadono et al. (Pub. No. US 2004/0076237 A1).

Regarding claim 7, Kadono discloses One or more non-transitory computer-readable medium having stored thereon a program ([0247] recording a program implementing the steps of … method to a floppy disk or other computer-readable data recording medium; [0251]; [0257] The software for … can be stored to any computer-readable data recording medium (such as a CD-ROM disc, floppy disk, or hard disk drive)). See MPEP 2111.05(III): when determining the scope of the claims, “encoded data in a bitstream” is not given patentable weight, because “encoded data in a bitstream” is non-functional descriptive material.
It is merely static data that imparts no function (unlike an executable computer program, which performs a function). It does not have any functional relationship with the intended computer system. Thus, the computer-readable data recording medium disclosed in Kadono meets claim 7.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Hannuksela (US 20050185719 A1) in view of THOUKYDIDES et al. (US 20170185474 A1).

Regarding claim 1. Hannuksela discloses A method for encoding data defining an image (abstract, A method of video encoding), the method including the step of providing metadata associated with the image (abstract, the picture header for the frame; [0008] the most vital information is gathered in the picture header), encoding the metadata into binary code to form a metadata string ([0018] The coded parameter data is arranged in a so-called picture header; [0022] picture headers in video bitstreams), and repeating the metadata string (abstract, repeating part, but not all, of the data, the repeated part including the picture header for the frame; [0023] a repeat of the picture header for at least INTRA-frames; [0058] the encoder is arranged to send repeats of the picture header). However, Hannuksela does not explicitly disclose repeating the metadata string a number of times.
THOUKYDIDES discloses repeating a frame a number of times ([0057] if at least three probe response frames have been received from a particular BSS then it is possible to implement N-modular redundancy (majority logic) decoding as a simple form of forward error correction (FEC), to recover portions of the original frame; figure 11, [0075] Multiple probe response frames from the AP are collected using a wireless sniffer. FIG. 11 is a diagram illustrating the initial 65 octets of seven frames. Partial packet recovery is performed across these received frames using 7-modular redundancy).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Hannuksela according to the invention of THOUKYDIDES, to repeat the metadata string a number of times, in order to correctly receive the data (THOUKYDIDES [0039]).

Regarding claim 2. Hannuksela discloses The method of claim 1, further comprising the steps of: segmenting the image into image blocks, each image block in a portion having a uniform block size ([0012] Picture data is coded on a block-by-block basis, each block representing 8×8 pixels of luminance or chrominance); applying a frequency-based transform to each of the image blocks, thereby providing transformed image data in which the image data is represented as coefficients defining a linear combination of predetermined basis functions having different spatial frequencies (figure 4, unit 103 DCT, [0055] In INTRA-mode, the video signal from the input 101 is input directly to a DCT transformer 103 which transforms the pixel data into DCT coefficients; [0056] In INTER mode, the prediction error is DCT transformed); quantising the coefficients ([0055] The DCT coefficients are then passed to a quantiser 104 which quantises the coefficients; [0056] The prediction error is DCT transformed and quantised); and converting the quantised coefficients into binary code ([0057] The video coder 100 produces header information (e.g. a temporal reference flag TR 112a to indicate the number of the frame being coded, an INTRA/INTER flag 112b to indicate the mode of coding performed (I or P/B), a quantising index 112c (i.e. the details of the quantiser used), the quantised DCT coefficients 112d and the motion vectors 112e for the picture being coded). These are coded and multiplexed together by the variable length coder (VLC) 113).

Regarding claim 3. Hannuksela in view of THOUKYDIDES discloses The method according to claim 2, wherein the metadata string is repeated at least three times (Hannuksela [0058] the encoder is arranged to send repeats of the picture header; THOUKYDIDES figure 11, [0075] Multiple probe response frames from the AP are collected using a wireless sniffer. FIG. 11 is a diagram illustrating the initial 65 octets of seven frames. Partial packet recovery is performed across these received frames using 7-modular redundancy). The same motivation has been stated in claim 1.

Regarding claim 4. Hannuksela in view of THOUKYDIDES discloses The method according to claim 2, wherein the metadata string is repeated at least five times (Hannuksela [0058] the encoder is arranged to send repeats of the picture header; THOUKYDIDES figure 11, [0075] Multiple probe response frames from the AP are collected using a wireless sniffer. FIG. 11 is a diagram illustrating the initial 65 octets of seven frames. Partial packet recovery is performed across these received frames using 7-modular redundancy). The same motivation has been stated in claim 1.

Regarding claim 5.
Hannuksela discloses A method of decoding a bitstream to reconstruct an image (abstract, A method of decoding an encoded video signal), the method comprising the steps of identifying, in the bitstream, a metadata string containing bits relating to metadata associated with the image (abstract, receiving coded data representing frames of a video signal; examining the coded data to detect header data); determining the metadata string that is repeated (abstract, detecting a repeat of the header data). However, Hannuksela does not explicitly disclose determining a number of times the metadata string is repeated; and, for each bit in the metadata string, applying a voting procedure to determine a value of each said bit.

THOUKYDIDES discloses determining a number of times a frame is repeated ([0057] if at least three probe response frames have been received from a particular BSS then it is possible to implement N-modular redundancy (majority logic) decoding as a simple form of forward error correction (FEC), to recover portions of the original frame; figure 11, [0075] Multiple probe response frames from the AP are collected using a wireless sniffer. FIG. 11 is a diagram illustrating the initial 65 octets of seven frames. Partial packet recovery is performed across these received frames using 7-modular redundancy; [0059]); and, for each bit in the frame, applying a voting procedure to determine a value of each said bit ([0067] The N-modular redundancy decoding can be applied to arbitrary groupings of bits. The corresponding bits in each of the received versions of the frame are compared, and the value for each group that occurs in the most versions is selected; [0068] Use of a smaller group size, down to individual bits, will increase the probability of being able to recover the frame from a certain number of received versions). The same motivation has been stated in claim 1.

Regarding claim 6. The same analysis has been stated in claim 1.
Furthermore, Hannuksela discloses A method of encoding a series of image frames including at least a current frame and a preceding frame (abstract, A method of video encoding), each of the frames being encoded according to the method of claim 1 (see rejection of claim 1).

Regarding claim 7. The same analysis has been stated in claim 1.

Regarding claim 8. The same analysis has been stated in claim 1.

Regarding claim 9. The same analysis has been stated in claim 1.

Regarding claim 10. Hannuksela in view of THOUKYDIDES discloses The method according to claim 2, wherein the metadata string is repeated at least seven times (Hannuksela [0058] the encoder is arranged to send repeats of the picture header; THOUKYDIDES figure 11, [0075] Multiple probe response frames from the AP are collected using a wireless sniffer. FIG. 11 is a diagram illustrating the initial 65 octets of seven frames. Partial packet recovery is performed across these received frames using 7-modular redundancy). The same motivation has been stated in claim 1.

Regarding claim 11. The same analysis has been stated in claim 5.

Regarding claim 12. The same analysis has been stated in claim 5.

Regarding claim 13. The same analysis has been stated in claim 6.

Regarding claim 14. The same analysis has been stated in claim 6.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOLAN XU, whose telephone number is (571) 270-7580. The examiner can normally be reached Mon. to Fri., 9am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, SATH V. PERUNGAVOOR, can be reached at (571) 272-7455.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XIAOLAN XU/
Primary Examiner, Art Unit 2488
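The claim 2 pipeline cited against Hannuksela (segment the image into uniform 8×8 blocks, apply a DCT, quantise the coefficients, then code them) can be sketched in a few lines. This is a generic illustration of block-DCT coding, not code from either reference; the quantiser step of 16 and the function names are assumptions for illustration only.

```python
import math

def dct_1d(x):
    """Orthonormal 1-D DCT-II of a sequence x."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def dct_2d(block):
    """2-D DCT of a square block: transform the rows, then the columns."""
    rows = [dct_1d(row) for row in block]
    cols = [dct_1d(list(col)) for col in zip(*rows)]
    return [list(row) for row in zip(*cols)]

def quantise(coeffs, q_step=16):
    """Quantise DCT coefficients to integers (step size is illustrative)."""
    return [[round(v / q_step) for v in row] for row in coeffs]

# A flat 8x8 block of value 128 carries all of its energy in the DC term.
block = [[128] * 8 for _ in range(8)]
q = quantise(dct_2d(block))
assert q[0][0] == 64  # DC coefficient 1024, divided by q_step 16
assert sum(abs(v) for row in q for v in row) == 64  # every AC term is 0
```

The quantised coefficient grid is what a variable-length coder (the VLC 113 in Hannuksela's figure 4) would then turn into the binary bitstream.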
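The §103 combination turns on repeating a metadata string a number of times (claims 1, 3, 4, 10) and recovering it by a per-bit voting procedure (claim 5), i.e. N-modular redundancy decoding as described in THOUKYDIDES [0057] and [0067]-[0068]. A minimal sketch of that scheme, with function names and the 3-copy repeat count chosen for illustration rather than taken from either reference:

```python
def repeat_metadata(bits: str, n: int = 3) -> str:
    """Encoder side: transmit the metadata string n times back-to-back."""
    return bits * n

def majority_vote(copies: list[str]) -> str:
    """Decoder side: for each bit position, keep the value seen in the
    majority of the received copies (N-modular redundancy decoding)."""
    decoded = []
    for position_bits in zip(*copies):
        ones = position_bits.count("1")
        decoded.append("1" if ones * 2 > len(copies) else "0")
    return "".join(decoded)

# One flipped bit in each received copy, at different positions,
# still decodes to the original string.
original = "10110010"
received = ["00110010", "10100010", "10110011"]
assert majority_vote(received) == original
```

With an odd number of copies and at most one corrupted copy per bit position, the vote always recovers the transmitted bit, which is why the dependent claims recite odd repeat counts (three, five, seven).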

Prosecution Timeline

Dec 16, 2024
Application Filed
Jan 22, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598315: IMAGE ENCODING/DECODING METHOD AND DEVICE FOR DETERMINING SUB-LAYERS ON BASIS OF REQUIRED NUMBER OF SUB-LAYERS, AND BIT-STREAM TRANSMISSION METHOD
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12586255: CONFIGURABLE POSITIONS FOR AUXILIARY INFORMATION INPUT INTO A PICTURE DATA PROCESSING NEURAL NETWORK
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12587652: IMAGE CODING DEVICE AND METHOD
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12581120: Method and Apparatus for Signaling Tile and Slice Partition Information in Image and Video Coding
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12581092: TEMPORAL INITIALIZATION POINTS FOR CONTEXT-BASED ARITHMETIC CODING
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 87% (+13.3%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 334 resolved cases by this examiner. Grant probability derived from career allow rate.
