Prosecution Insights
Last updated: April 19, 2026
Application No. 18/808,837

IMAGE DATA ENCODING/DECODING METHOD AND APPARATUS

Final Rejection §DP
Filed: Aug 19, 2024
Examiner: RAHMAN, MOHAMMAD J
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: B1 Institute of Image Technology, Inc.
OA Round: 2 (Final)

Grant Probability: 79% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 79%, above average (685 granted / 868 resolved; +20.9% vs TC avg)
Interview Lift: +10.7% for resolved cases with interview (moderate)
Avg Prosecution: 2y 5m (typical timeline)
Career History: 909 total applications across all art units; 41 currently pending
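The headline percentages in this panel are simple arithmetic on the examiner's resolved-case counts. As a quick sanity check, the sketch below reproduces the dashboard figures from the raw numbers above (the function name is illustrative, not taken from any real tool):

```python
# Reproduce the examiner-dashboard arithmetic from the raw counts shown above.
# Names here are illustrative only; this is not code from the actual product.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

base = allow_rate(685, 868)   # 685 granted out of 868 resolved cases
with_interview = base + 10.7  # reported interview lift, in percentage points

print(round(base))            # 79 -> matches the 79% career allow rate
print(round(with_interview))  # 90 -> matches the 90% "with interview" figure
```

Note that the projected grant probability is simply the career allow rate; the "+10.7%" interview lift is applied additively in percentage points.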

Statute-Specific Performance

§101: 6.3% (-33.7% vs TC avg)
§103: 56.0% (+16.0% vs TC avg)
§102: 3.0% (-37.0% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 868 resolved cases.

Office Action

§DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Detailed Action

Response to Amendment

This Office Action is in response to the correspondence filed on 11/30/2025. Applicant's argument, filed on 11/30/2025, has been entered and carefully considered. Claims 1-4 are pending. The double patenting rejection over US 11,792,526 B2 is retained based on the arguments submitted on 11/30/2025. The 35 U.S.C. 112(b) (or pre-AIA 35 U.S.C. 112, second paragraph) rejection is withdrawn based on the claim amendments submitted on 11/30/2025. Based on the arguments submitted on 11/30/2025, the 35 U.S.C. § 103 rejections are withdrawn.

Response to Arguments

Applicant's arguments in the 11/30/2025 Remarks have been fully considered but are not persuasive, for the following reasons. Regarding the claims, Applicant on pages 5-9 argues "a detailed claim-by-claim analysis". While Applicant's argument points are understood, the examiner respectfully disagrees (MPEP 804: "The public should ... be able to act on the assumption that upon the expiration of the patent it will be free to use not only the invention claimed in the patent but also modifications or variants which would have been obvious to those of ordinary skill in the art at the time the invention was made, taking into account the skill in the art and prior art other than the invention claimed in the issued patent". Thus, variants of the invention claimed in the patent that would be obvious to one of ordinary skill in the art are subject to a nonstatutory double patenting rejection; the examples presented below further clarify the rejection.

Example 1. Conflicting Patent: US 11,706,531 B2 (Application 17/985,396); Instant Application: 18/808,837.

Claim 1 of US 11,706,531 B2: A method for decoding a 360-degree image, the method comprising: receiving a bitstream in which the 360-degree image is encoded, the bitstream including data of an extended 2-dimensional image, the extended 2-dimensional image including a 2-dimensional image and a predetermined extension region, and the 2-dimensional image being projected from an image with a 3-dimensional projection structure and including at least one face; and reconstructing the extended 2-dimensional image by decoding the data of an extended 2-dimensional image, wherein, based on a projection format for the 3-dimensional projection structure is a first projection format, a size of the extension region is determined based on first width information for specifying a width of the extension region on a left side of the face and second width information for specifying a width of the extension region on a right side of the face, both the first width information and the second width information being obtained from the bitstream, wherein, based on a projection format for the 3-dimensional projection structure is a second projection format, a size of the extension region is determined based on first width information for specifying a width of the extension region on a left side of the face, second width information for specifying a width of the extension region on a right side of the face, third height information for specifying a height of the extension region on a top side of the face and fourth height information for specifying a height of the extension region on a bottom side of the face, all of the first width information, the second width information, the third height information and the fourth height information being obtained from the bitstream, wherein sample values of the extension region are determined differently according to a padding method selected from a plurality of padding methods, wherein the reconstructing the extended 2-dimensional image comprises generating a prediction image, wherein the prediction image is generated by selecting one prediction mode among a plurality of prediction modes including intra prediction and inter prediction, and performing prediction based on the selected prediction mode, and information on the selected prediction mode is obtained from the bitstream.

Claim 1 of the instant application: A method for decoding a 360-degree image, the method comprising: receiving a bitstream in which the 360-degree image is encoded, the bitstream including data of an extended 2-dimensional image, the extended 2-dimensional image including a 2-dimensional image and a predetermined extension region, and the 2-dimensional image being projected from an image with a 3-dimensional projection structure and including at least one face; and reconstructing the extended 2-dimensional image based on a residual image and a predicted image generated by performing prediction the residual image being obtained by parsing syntax elements included in the bitstream, wherein a size of the extension region to be padded is determined based on first width information of the extension region on a left side of the face and second width information of the extension region on a right side of the face, both the first width information and the second width information being obtained from the bitstream, wherein the first width information and the second width information are restricted to indicate an even number of luma samples according to a color format of the 2-dimensional image, and wherein sample values of the extension region are determined differently according to a padding method selected from a plurality of padding methods.

US 11,706,531 B2 discloses all the elements of the claim of 18/808,837, but US 11,706,531 B2 does not appear to explicitly disclose, in the cited section, luma samples according to a color format of the 2-dimensional image. However, Yamamoto et al. (US 10,225,567 B2), hereinafter Yamamoto, from the same or similar endeavor teaches luma samples according to a color format of the 2-dimensional image (Column 29, lines 54-67). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify US 11,706,531 B2 to incorporate the teachings of Yamamoto to improve the accuracy of the predicted image (Yamamoto, Abstract). Similar reasoning/motivation of modification can be applied/extended to the other related/dependent claims.

Example 2. Conflicting Patent: US 12,483,794 B2 (Application 18/733,853); Instant Application: 18/808,837.

Claim 1 of US 12,483,794 B2: A method for decoding a 360-degree image, the method comprising: receiving a bitstream in which the 360-degree image is encoded, the bitstream including data of an extended 2-dimensional image, the extended 2-dimensional image including a 2-dimensional image and a predetermined extension region, and the 2-dimensional image being projected from an image with a 3-dimensional projection structure and including at least one face; and reconstructing the extended 2-dimensional image by decoding the data of an extended 2-dimensional image, wherein, based on a projection format for the 3-dimensional projection structure is a first projection format, a size of the extension region is determined based on first width information for specifying a width of the extension region on a left side of the face and second width information for specifying a width of the extension region on a right side of the face, both the first width information and the second width information being obtained from the bitstream, wherein, based on a projection format for the 3-dimensional projection structure is a second projection format, a size of the extension region is determined based on first width information for specifying a width of the extension region on a left side of the face, second width information for specifying a width of the extension region on
a right side of the face, third height information for specifying a height of the extension region on a top side of the face and fourth height information for specifying a height of the extension region on a bottom side of the face, all of the first width information, the second width information, the third height information and the fourth height information being obtained from the bitstream, wherein the first projection format and the second projection format are selected from a plurality of projection formats including an ERP format in which the 360-degree image is projected in a two-dimensional plane and a CMP format in which the 360-degree image is projected in a cube, wherein sample values of the extension region are determined differently according to a padding method selected from a plurality of padding methods, wherein, in case of the second projection format, the padding method for the left side of the face, the padding method for the right side of the face, the padding method for the top side of the face and the padding method for the bottom side of the face are selected independently from each other, wherein the reconstructing the extended 2-dimensional image comprises generating a prediction image and a residual image, and wherein the prediction image is generated by performing intra prediction.

Claim 1 of the instant application: A method for decoding a 360-degree image, the method comprising: receiving a bitstream in which the 360-degree image is encoded, the bitstream including data of an extended 2-dimensional image, the extended 2-dimensional image including a 2-dimensional image and a predetermined extension region, and the 2-dimensional image being projected from an image with a 3-dimensional projection structure and including at least one face; and reconstructing the extended 2-dimensional image based on a residual image and a predicted image generated by performing prediction the residual image being obtained by parsing syntax elements included in the bitstream, wherein a size of the extension region to be padded is determined based on first width information of the extension region on a left side of the face and second width information of the extension region on a right side of the face, both the first width information and the second width information being obtained from the bitstream, wherein the first width information and the second width information are restricted to indicate an even number of luma samples according to a color format of the 2-dimensional image, and wherein sample values of the extension region are determined differently according to a padding method selected from a plurality of padding methods.

US 12,483,794 B2 discloses all the elements of the claim of 18/808,837, but US 12,483,794 B2 does not appear to explicitly disclose, in the cited section, luma samples according to a color format of the 2-dimensional image. However, Yamamoto et al. (US 10,225,567 B2), hereinafter Yamamoto, from the same or similar endeavor teaches luma samples according to a color format of the 2-dimensional image (Column 29, lines 54-67). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify US 12,483,794 B2 to incorporate the teachings of Yamamoto to improve the accuracy of the predicted image (Yamamoto, Abstract).
Similar reasoning/motivation of modification can be applied/extended to the other related/dependent claims.

Example 3. Conflicting Patent: US 12,483,794 B2 (Application 18/733,853); Instant Application: 18/808,837.

Claim 1 of US 12,483,794 B2: A method for decoding a 360-degree image, the method comprising: receiving a bitstream in which the 360-degree image is encoded, the bitstream including data of an extended 2-dimensional image, the extended 2-dimensional image including a 2-dimensional image and a predetermined extension region, and the 2-dimensional image being from an image with a 3-dimensional projection structure and including one or more faces; generating a prediction image by referring to syntax information obtained from the received bitstream; obtaining a decoded image by adding the generated prediction image to a residual image, the residual image being obtained by inverse-quantizing and inverse transforming quantized transform coefficients from the bitstream; and reconstructing the decoded image into the 360-degree image according to a projection format, wherein the projection format is selectively determined based on identification information, among a plurality of pre-defined projection formats including an ERP format in which the 360-degree image is projected in a two-dimensional plane or a CMP format in which the 360-degree image is projected in a cube, wherein a size of the extension region is variably determined based on at least one of first information indicating a width of the extension region or second information indicating a height of the extension region, independently of a size of the 2-dimensional image, wherein the extension region is not included in the image with the 3-dimensional projection structure, and wherein at least one of the identification information, the first information or the second information is obtained from the bitstream.

Claim 6 of US 12,483,794 B2: The method of claim 1, wherein the first information includes at least one of width information of the extension region on a left side of the face or width information of the extension region on a right side of the face, and wherein the second information includes at least one of height information of the extension region on a top side of the face or height information of the extension region on a bottom side of the face.

Claim 1 of the instant application: A method for decoding a 360-degree image, the method comprising: receiving a bitstream in which the 360-degree image is encoded, the bitstream including data of an extended 2-dimensional image, the extended 2-dimensional image including a 2-dimensional image and a predetermined extension region, and the 2-dimensional image being projected from an image with a 3-dimensional projection structure and including at least one face; and reconstructing the extended 2-dimensional image based on a residual image and a predicted image generated by performing prediction the residual image being obtained by parsing syntax elements included in the bitstream, wherein a size of the extension region to be padded is determined based on first width information of the extension region on a left side of the face and second width information of the extension region on a right side of the face, both the first width information and the second width information being obtained from the bitstream, wherein the first width information and the second width information are restricted to indicate an even number of luma samples according to a color format of the 2-dimensional image, and wherein sample values of the extension region are determined differently according to a padding method selected from a plurality of padding methods.

US 12,483,794 B2 discloses all the elements of the claim of 18/808,837, but US 12,483,794 B2 does not appear to explicitly disclose, in the cited section, luma samples according to a color format of the 2-dimensional image. However, Yamamoto et al. (US 10,225,567 B2), hereinafter Yamamoto, from the same or similar endeavor teaches luma samples according to a color format of the 2-dimensional image (Column 29, lines 54-67). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify US 12,483,794 B2 to incorporate the teachings of Yamamoto to improve the accuracy of the predicted image (Yamamoto, Abstract). Similar reasoning/motivation of modification can be applied/extended to the other related/dependent claims. So, it is obvious to one of ordinary skill in the art that the claims of all the cited patents are variants of the instant application). Therefore, the rejection is maintained.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first-inventor-to-file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1)-706.02(l)(3) for applications not subject to examination under the first-inventor-to-file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-4 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of US Patent 11,792,526 B2.
Although the claims at issue are not identical, they are not patentably distinct from each other because the subject matter claimed in the instant application is anticipated by the Conflicting Patent and is covered by the Patent, since the Patent and the application claim common subject matter. Below is a list of limitations that perform the same function; different terminology may, however, be used in the two sets to describe the limitations. Claim 1 is used as an example to analyze the common subject matter.

Conflicting Patent: US 11,792,526 B2; Instant Application: 18/808,837.

Claim 1 of US 11,792,526 B2: A method for decoding a 360-degree image performed by an image decoding apparatus, the method comprising: receiving a bitstream in which the 360-degree image is encoded, the bitstream including data of an extended 2-dimensional image, the extended 2-dimensional image including a 2-dimensional image and a predetermined extension region, and the 2-dimensional image being projected from an image with a 3-dimensional projection structure and the 2-dimensional image including at least one face; reconstructing the extended 2-dimensional image by decoding the data of an extended 2-dimensional image, wherein a size of the extension region is determined based on one or more syntax elements obtained from the bitstream, wherein the one or more syntax elements comprise width information indicating a width of the extension region, the width information being obtained from the bitstream and the width information is restricted to indicate an even number of luma samples according to a color format of the 2-dimensional image, wherein the number of syntax elements is determined differently based on a projection format for the 3-dimensional projection structure, the projection format being one among a plurality of projection formats including an ERP format in which the 360-degree image is projected in a two-dimensional plane and a CMP format in which the 360-degree image is projected in a cube, wherein sample values of the extension region are determined differently according to a padding method selected from a plurality of padding methods, wherein sample values of the extension region are determined by copying sample values of the face in a horizontal direction, a sample of the face and a corresponding sample of the extension region being located horizontally, wherein the reconstructing the extended 2-dimensional image comprises generating a prediction image, wherein the prediction image is generated by selecting one prediction mode among more than one prediction modes including intra prediction and inter prediction, and performing prediction based on the selected prediction mode, and wherein the prediction image is generated by inter prediction.

Claim 1 of the instant application: A method for decoding a 360-degree image, the method comprising: receiving a bitstream in which the 360-degree image is encoded, the bitstream including data of an extended 2-dimensional image, the extended 2-dimensional image including a 2-dimensional image and a predetermined extension region, and the 2-dimensional image being projected from an image with a 3-dimensional projection structure and including at least one face; and reconstructing the extended 2-dimensional image based on a residual image and a predicted image generated by performing prediction the residual image being obtained by parsing syntax elements included in the bitstream, wherein a size of the extension region to be padded is determined based on first width information of the extension region on a left side of the face and second width information of the extension region on a right side of the face, both the first width information and the second width information being obtained from the bitstream, wherein the first width information and the second width information are restricted to indicate an even number of luma samples according to a color format of the 2-dimensional image, and wherein sample values of the extension region are determined differently according to a padding method selected from a plurality of padding methods.

As demonstrated, the claim of US Patent 11,792,526 B2 anticipates the features of the claim of instant application 18/808,837. Similar rejections can be presented for US 11696035 B2, US 11778332 B2, US 11778331 B2, US 11792524 B2, US 11792523 B2, US 11792522 B2, US 11812155 B2, US 11838640 B2, US 11838639 B2, US 11997391 B2, US 12126787 B2, US 12126786 B2, US 12149672 B2, US 12206829 B2, US 12219263 B2, US 11601677 B2, US 11758191 B2, US 11758190 B2, US 11758189 B2, US 11818396 B2, US 11831917 B1, US 11831916 B1, US 11831915 B1, US 11831914 B2, US 11949913 B2, US 12167037 B2, US 12177482 B2, US 12225232 B2, US 12225231 B2, US 12250406 B2, US 11463672 B2, US 11412137 B2. A nonstatutory-type (35 U.S.C. 101) double patenting rejection can be overcome by amending the conflicting claims so they are no longer coextensive in scope, or by filing a terminal disclaimer.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD J RAHMAN whose telephone number is (571)270-7190.
The examiner can normally be reached Monday-Friday, 9AM-5PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Czekaj, can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Mohammad J Rahman/
Primary Examiner, Art Unit 2487
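The claims at issue describe padding an "extension region" around a projected face, with the widths restricted to an even number of luma samples when the color format subsamples chroma. Purely as an illustrative sketch of that kind of operation (all names are hypothetical; this is not an implementation of any claim in the application or the cited patents):

```python
# Hypothetical sketch of horizontal edge-copy padding of a face, with the
# even-luma-width restriction the claims tie to chroma-subsampled formats.
# Not an implementation of the claimed method; names are invented.

def pad_face_horizontal(face, left_w, right_w, chroma_format="4:2:0"):
    """face: list of rows of luma samples; left_w/right_w in luma samples."""
    if chroma_format in ("4:2:0", "4:2:2"):
        # Horizontally subsampled chroma: one chroma sample per 2 luma
        # samples, so extension widths must be even to stay aligned.
        if left_w % 2 or right_w % 2:
            raise ValueError("extension widths must be an even number of luma samples")
    padded = []
    for row in face:
        # One of several possible padding methods: copy the edge sample
        # values outward in the horizontal direction.
        padded.append([row[0]] * left_w + row + [row[-1]] * right_w)
    return padded

face = [[10, 20, 30], [40, 50, 60]]
print(pad_face_horizontal(face, 2, 2))
# [[10, 10, 10, 20, 30, 30, 30], [40, 40, 40, 50, 60, 60, 60]]
```

The sketch shows only why an even-width constraint arises for 4:2:0/4:2:2 content; the claims additionally signal the widths per side in the bitstream and select among multiple padding methods.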

Prosecution Timeline

Aug 19, 2024: Application Filed
Sep 18, 2025: Non-Final Rejection (§DP)
Nov 30, 2025: Response Filed
Mar 06, 2026: Final Rejection (§DP) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604001: SYSTEMS AND METHODS FOR BLOCK PARTITIONING AND INTERLEAVED CODING ORDER FOR MULTIVIEW VIDEO CODING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12593050: SYSTEMS AND METHODS FOR MULTIPLE BIT RATE CONTENT ENCODING (granted Mar 31, 2026; 2y 5m to grant)
Patent 12593028: ENCODER WHICH GENERATES PREDICTION IMAGE TO BE USED TO ENCODE CURRENT BLOCK (granted Mar 31, 2026; 2y 5m to grant)
Patent 12587656: INTRA PREDICTION MODE DERIVATION-BASED INTRA PREDICTION METHOD AND DEVICE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12587647: IMAGE DATA ENCODING/DECODING METHOD AND APPARATUS (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview: 90% (+10.7%)
Median Time to Grant: 2y 5m
PTA Risk: Moderate

Based on 868 resolved cases by this examiner. Grant probability derived from career allow rate.
