Prosecution Insights
Last updated: April 19, 2026
Application No. 18/257,236

METHOD AND APPARATUS FOR ENCAPSULATING IMAGE DATA IN A FILE FOR PROGRESSIVE RENDERING

Status: Final Rejection §102
Filed: Jun 13, 2023
Examiner: DAVIS, CHENEA
Art Unit: 2421
Tech Center: 2400 — Computer Networks
Assignee: Canon Kabushiki Kaisha
OA Round: 2 (Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 10m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 72% (378 granted / 525 resolved; +14.0% vs TC avg), above average
Interview Lift: +16.5% in resolved cases with interview (strong)
Avg Prosecution: 2y 10m; 23 applications currently pending
Total Applications: 548 across all art units
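The headline figures above can be cross-checked with a few lines of arithmetic. This is an illustrative sketch only; the function name is hypothetical, and the "with interview" figure is simply the career rate plus the quoted +16.5% lift (which rounds to the dashboard's 88%).

```python
# Sanity check of the examiner statistics quoted above
# (378 granted out of 525 resolved cases).

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

rate = allow_rate(378, 525)      # 72.0
with_interview = rate + 16.5     # interview lift quoted by the dashboard
print(f"{rate:.0f}% career, ~{with_interview:.1f}% with interview")
```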

Statute-Specific Performance

§101: 13.7% (-26.3% vs TC avg)
§103: 48.2% (+8.2% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 525 resolved cases.

Office Action

§102
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This office action is in response to communications filed 9/15/2025. Claims 3 and 5 are amended. Claims 8 and 18 are cancelled. Claims 1-7, 9-17 and 19-22 are pending in this action. The Examiner notes that several claims are labeled as currently amended; however, the amendments of other claims that are not mentioned above appear to be previously amended and have been treated as such.

Response to Arguments

Applicant's arguments filed 9/15/2025 have been fully considered but they are not persuasive. In response to Applicant's arguments on page 8 that "Hannuksela does not contemplate generating several versions of a main image using a set of sub-images among which one or more sub-images is provided in different versions, enabling progressive refinement. Accordingly, Hannuksela does not disclose the claimed step of 'generating progressive information defining the set of consecutive progressive steps for generating at least one version of the main image, each progressive step being associated with a set of sub-images versions required to generate a corresponding version of the main image'", the Examiner respectfully disagrees. The Examiner notes that Hannuksela teaches at least:

[0065] Scalable video coding may refer to coding structure where one bitstream can contain multiple representations of the content, for example, at different bitrates, resolutions or frame rates. In these cases the receiver can extract the desired representation depending on its characteristics (e.g. resolution that matches best the display device). Alternatively, a server or a network element can extract the portions of the bitstream to be transmitted to the receiver depending on e.g. the network characteristics or processing capabilities of the receiver.
A meaningful decoded representation can be produced by decoding only certain parts of a scalable bit stream. A scalable bitstream typically consists of a "base layer" providing the lowest quality video available and one or more enhancement layers that enhance the video quality when received and decoded together with the lower layers. In order to improve coding efficiency for the enhancement layers, the coded representation of that layer typically depends on the lower layers. E.g. the motion and mode information of the enhancement layer can be predicted from lower layers. Similarly the pixel data of the lower layers can be used to create prediction for the enhancement layer.

[0118] Any number of image items can be included in the same file. Given a collection of images stored by using the 'meta' box approach, it sometimes is essential to qualify certain relationships between images. Examples of such relationships include indicating a cover image for a collection, providing thumbnail images for some or all of the images in the collection, and associating some or all of the images in a collection with an auxiliary image such as an alpha plane. A cover image among the collection of images is indicated using the 'pitm' box. A thumbnail image or an auxiliary image is linked to the primary image item using an item reference of type 'thmb' or 'auxl', respectively.

[0119] HEIF supports derived images. An item is a derived image, when it includes a 'dimg' item reference to another item. A derived image is obtained by performing a specified operation, such as rotation, to specified input images. The operation performed to obtain the derived image is identified by the item_type of the item. The image items used as input to a derived image may be coded images, e.g. with item type 'hvc1', or they may be other derived image items. HEIF includes the specification of the clean aperture (i.e. cropping) operation, a rotation operation for multiple-of-90-degree rotations, and an image overlay operation. The image overlay 'iovl' derived image locates one or more input images in a given layering order within a larger canvas. The derived image feature of HEIF is extensible so that external specifications as well as later versions of HEIF itself can specify new operations.

Therefore, given the broadest reasonable interpretation of the limitation of "generating progressive information defining the set of consecutive progressive steps for generating at least one version of the main image, each progressive step being associated with a set of sub-images versions required to generate a corresponding version of the main image", the teachings of Hannuksela reasonably meet the limitation as claimed, and the rejection of record is maintained.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-7, 9-17 and 19-22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hannuksela (of record).
Regarding claims 1, 17 and 19-22, Hannuksela discloses a method of encapsulating image data in a media file (see Hannuksela, at least at abstract, [0103], and other related text), the image data being related to a main image to be generated based on a plurality of sub-images (see Hannuksela, at least at [0158], and other related text), wherein the method comprises: obtaining the plurality of sub-images, each sub-image being provided in at least one version corresponding to at least one progressive step of a set of consecutive progressive steps (see Hannuksela, at least at [0065]-[0066], [0082], [0085], [0142], [0155]-[0158], [0215], and other related text); generating descriptive metadata for describing information about the main image and the at least one version of the plurality of sub-images (see Hannuksela, at least at [0115], [0118]-[0121], and other related text); encapsulating the at least one version of the plurality of sub-images and the descriptive metadata in the media file (see Hannuksela, at least at [0103]-[0104], [0112]-[0114], [0123], and other related text); wherein the method further comprises: generating a progressive information defining the set of consecutive progressive steps for generating at least one version of the main image, each progressive step being associated with a set of sub-images versions required to generate a corresponding version of the main image (see Hannuksela, at least at [0118]-[0120], [0149], [0158], and other related text); and embedding the progressive information in the descriptive metadata (see Hannuksela, at least at [0115], [0118]-[0121], and other related text).

Regarding claim 2, Hannuksela discloses wherein each sub-image represents a different subset of the layers of the main image (see Hannuksela, at least at [0158], and other related text).
Regarding claim 3, Hannuksela discloses wherein sub-images are input images, at least one input image being associated with at least one alternative version of input image (see Hannuksela, at least at [0158], and other related text).

Regarding claim 4, Hannuksela discloses wherein the progressive information comprises data for identifying, for each progressive step, a position in the file of the sub-image version data associated with the progressive step (see Hannuksela, at least at [0065]-[0066], [0082]-[0083], [0085], [0146]-[0147], and other related text).

Regarding claim 5, Hannuksela discloses wherein the position indicates a last byte of sub-image version data in the file associated with the progressive step (see Hannuksela, at least at [0102], and other related text).

Regarding claim 6, Hannuksela discloses wherein the position comprises an offset and a length to indicate the sub-image version data associated with the progressive step (see Hannuksela, at least at [0083]-[0084], and other related text).

Regarding claim 7, Hannuksela discloses wherein the position comprises a list of extents of the sub-image version data associated with the progressive step (see Hannuksela, at least at [0046], [0081]-[0084], [0112], and other related text).

Regarding claim 9, Hannuksela discloses wherein sub-image versions being described as image items, the progressive information comprises data for identifying, for each progressive step, a list of the image item identifiers identifying the image items associated with the progressive step (see Hannuksela, at least at [0082], [0095], [0115]-[0116], [0134], [0139], [0142], [0146], [0174], and other related text).
Regarding claim 10, Hannuksela discloses wherein input image versions being described as image items, the progressive information comprises for each progressive step a number of image items comprised in the progressive step (see Hannuksela, at least at [0082], [0095], [0115]-[0116], [0134], [0139], [0142], [0146], [0174], and other related text).

Regarding claim 11, Hannuksela discloses wherein at least one input image version being composed of a plurality of layers, the progressive information further comprises data for identifying a layer identifier associated with the image item identifier of the input image version (see Hannuksela, at least at [0046], [0065]-[0066], [0082], [0085], [0095], [0115]-[0116], [0134], [0139], [0142], [0146], [0174], and other related text).

Regarding claim 12, Hannuksela discloses wherein generating a progressive information comprises generating a progressive rendering data structure comprising data for determining, for each progressive rendering step, the number of image items to use for the reconstruction of the main image, wherein the number of image items for a progressive rendering step is described as a difference from the previous step (see Hannuksela, at least at [0146]-[0148], [0197], and other related text).

Regarding claim 13, Hannuksela discloses wherein the progressive rendering data structure further comprises a number of progressive rendering steps (see Hannuksela, at least at [0146]-[0148], [0197], and other related text).

Regarding claim 14, Hannuksela discloses wherein the progressive information characterizes a construction of the main image so that its quality is gradually improved (see Hannuksela, at least at [0146]-[0148], [0197], and other related text).

Regarding claim 15, Hannuksela discloses wherein the progressive information is associated with the main image (see Hannuksela, at least at [0146]-[0148], [0197], and other related text).
Regarding claim 16, Hannuksela discloses wherein the main image is part of an entity group, and the progressive information is associated with at least one entity of the group (see Hannuksela, at least at abstract, [0016], [0124]-[0125], and other related text).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHENEA DAVIS whose telephone number is (571)272-9524 and whose email address is CHENEA.SMITH@USPTO.GOV. The examiner can normally be reached M-F: 8:00 am - 4:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nathan Flynn, can be reached at 571-272-1915. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHENEA DAVIS/ Primary Examiner, Art Unit 2421
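The disputed limitation is easier to reason about concretely: "progressive information" is an ordered list of consecutive steps, each naming the sub-image versions required to build that version of the main image. The sketch below is purely illustrative of the claim language, not code from the application or from HEIF; the class names and item IDs are hypothetical.

```python
# Hypothetical model of the claimed "progressive information": consecutive
# steps, each associated with the sub-image versions (as item IDs) needed
# to reconstruct the corresponding version of the main image.
from dataclasses import dataclass, field


@dataclass
class ProgressiveStep:
    step: int
    item_ids: list[int]  # sub-image versions required at this step


@dataclass
class ProgressiveInfo:
    steps: list[ProgressiveStep] = field(default_factory=list)

    def items_for_version(self, version: int) -> set[int]:
        """Union of all sub-image versions needed through `version`.
        Steps are consecutive, so earlier steps are prerequisites."""
        needed: set[int] = set()
        for s in self.steps[: version + 1]:
            needed.update(s.item_ids)
        return needed


info = ProgressiveInfo(steps=[
    ProgressiveStep(0, [1, 2, 3, 4]),  # low-quality versions of four tiles
    ProgressiveStep(1, [5, 6]),        # refined versions of tiles 1-2
    ProgressiveStep(2, [7, 8]),        # refined versions of tiles 3-4
])
print(sorted(info.items_for_version(1)))  # [1, 2, 3, 4, 5, 6]
```

A reader could use a structure like this to decide, for each download milestone, which byte ranges of the file must already be present before rendering an intermediate version of the image, which is the behavior the claims tie to the embedded metadata.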

Prosecution Timeline

Jun 13, 2023: Application Filed
Jun 12, 2025: Non-Final Rejection (§102)
Sep 15, 2025: Response Filed
Dec 11, 2025: Final Rejection (§102, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604057: STREAMING SYSTEM AND METHOD (granted Apr 14, 2026; 2y 5m to grant)
Patent 12581147: SYSTEMS AND METHODS FOR CONTROLLING QUALITY OF CONTENT (granted Mar 17, 2026; 2y 5m to grant)
Patent 12581169: UNDER-ADDRESSABLE ADVERTISEMENT MEASUREMENT (granted Mar 17, 2026; 2y 5m to grant)
Patent 12556762: METHODS AND APPARATUS TO CALIBRATE RETURN PATH DATA FOR AUDIENCE MEASUREMENT (granted Feb 17, 2026; 2y 5m to grant)
Patent 12549790: INTEGRATION OF PLATFORMS FOR MULTI-PLATFORM CONTENT ACCESS (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72% (88% with interview, +16.5%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 525 resolved cases by this examiner. Grant probability derived from career allow rate.
