Prosecution Insights
Last updated: April 19, 2026
Application No. 19/317,151

IMAGE DATA ENCODING/DECODING METHOD AND APPARATUS

Non-Final OA: §103, §DP
Filed
Sep 03, 2025
Examiner
CATTUNGAL, ROWINA J
Art Unit
2425
Tech Center
2400 — Computer Networks
Assignee
B1 Institute of Image Technology, Inc.
OA Round
1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 75% (above average; 393 granted / 521 resolved; +17.4% vs TC avg)
Interview Lift: +13.0% (moderate)
Avg Prosecution: 2y 6m (typical timeline); 33 applications currently pending
Total Applications: 554 across all art units
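The headline figures above follow directly from the raw counts. The sketch below is a minimal illustration, assuming the dashboard rounds the 393/521 allow rate to a whole percentage and applies the interview lift additively; the function names are ours, not the tool's.

```python
# Reproducing the dashboard's headline figures from the raw counts shown
# above. The combination rule (base rate plus additive interview lift) is
# an assumption about how the 88% figure is derived.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_pct: float, lift_pct: float) -> float:
    """Grant probability after an examiner interview, capped at 100%."""
    return min(base_pct + lift_pct, 100.0)

base = allow_rate(393, 521)               # 393 granted / 521 resolved
print(round(base))                         # 75
print(round(with_interview(base, 13.0)))   # 88
```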

Statute-Specific Performance

§101: 5.1% (-34.9% vs TC avg)
§103: 54.5% (+14.5% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 10.2% (-29.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 521 resolved cases

Office Action

§103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to the application filed 09/03/2025, in which claims 1-6 are pending.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on applications KR10-2016-0127890, KR10-2016-0129389, and KR10-2017-0090619. It is noted, however, that applicant has not filed certified copies of the KR10-2016-0127890, KR10-2016-0129389, and KR10-2017-0090619 applications as required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 03/20/2026, 02/26/2026, 02/04/2026, 10/15/2025, and 09/17/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-7 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-7 of copending Application No. 19/317,147 (reference application) in view of Hannuksela et al. (US 2017/0347026 A1) (hereinafter Hannuksela II). Although the claims at issue are not identical, they are not patentably distinct from each other because the examined application claim is obvious over the conflicting copending claim. The difference between the instant and conflicting copending claims is the addition of the limitation "and wherein the bitstream comprises information on yaw rotation of the image." However, Hannuksela II discloses this limitation (para[0288]-[0289] teaches that yaw rotates around one coordinate axis (e.g., the Y-axis) and may be defined to be in the range of 0, inclusive, to 360, exclusive; para[0358] teaches decoding the video bitstream, or at least the part of the bitstream representing the viewport or the spatial region, based on the at least one value indicative of the quality of the viewport or the spatial region). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize this limitation in the method of the conflicting copending claim, since Hannuksela II teaches including in a manifest information on available media content, in which a viewport or a spatial region is indicated, the viewport being a portion of a 360-degree video, and inter-view prediction can be utilized in multiview video coding to take advantage of inter-view correlation and improve compression efficiency.

Claims 1-7 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-7 of copending Application No. 19/320,777 (reference application) in view of Hannuksela II. Although the claims at issue are not identical, they are not patentably distinct from each other because the examined application claim is obvious over the conflicting copending claim. The difference is again the added limitation "and wherein the bitstream comprises information on yaw rotation of the image," which Hannuksela II discloses (para[0288]-[0289]; para[0358], as discussed above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize this limitation in the method of the conflicting copending claim for the same reasons.

Claims 1-7 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-7 of copending Application No. 19/411,190 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the instant claims are anticipated by the conflicting copending claims.
The difference between the instant examined claims and the conflicting copending claims is that the conflicting copending claims are narrower in scope and fall within the scope of the examined claims.

Claims 1-7 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-7 of copending Application No. 19/411,192 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the instant claims are anticipated by the conflicting copending claims, which are narrower in scope and fall within the scope of the examined claims.

Claims 1-7 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-7 of copending Application No. 19/411,193 (reference application), and over claims 1-7 of copending Application No. 19/414,697 (reference application), on the same basis: although the claims at issue are not identical, they are not patentably distinct from each other because the instant claims are anticipated by the conflicting copending claims, which are narrower in scope and fall within the scope of the examined claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

15. Claims 1-3 and 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Hannuksela et al. (US 2017/0085917 A1) in view of Hannuksela et al. (US 2017/0347026 A1) (hereinafter Hannuksela II).

Regarding claim 1, Hannuksela discloses a method for processing an image (Para[0422] & Fig. 14 teach a method comprising coding or decoding samples of a border region of a 360-degree panoramic picture), the method comprising: obtaining image resizing information for the image based on a received bitstream (Para[0318] teaches that the encoder and/or the decoder may derive a horizontal scale factor (e.g., stored in variable ScaleFactorHor) and a vertical scale factor (e.g., stored in variable ScaleFactorVer) for inter-layer prediction, for a pair of an enhancement layer and its reference layer, for example based on the reference layer location offsets for the pair; resampling may be pre-defined, for example in a coding standard, and/or indicated by the encoder in the bitstream (e.g., as an index among pre-defined resampling processes or filters) and/or decoded by the decoder from the bitstream); reconstructing the image by decoding the bitstream (Para[0379] teaches that it is indicated in the bitstream whether sample locations outside a picture boundary are handled in inter-layer resampling; Para[0444] teaches that a decoder may decode the mapping from reference layer location offsets parsed from the bitstream, decoding the inter-layer predicted bitstream); and performing image resizing for the reconstructed image based on the image resizing information (Fig. 10 illustrates enhancement layer 1030 having a larger size than upsampled base layer 1010 and base layer 1020), wherein the image resizing information comprises offset factors for each direction of the reconstructed image (para[0332] teaches that scaled reference layer offsets may be considered to specify the horizontal and vertical offsets between the sample in the current picture that is collocated with the top-left luma sample of the reference region in a decoded picture in a reference layer, and the horizontal and vertical offsets between the sample in the current picture that is collocated with the bottom-right luma sample of that reference region; Fig. 10 shows enhancement layer 1030 extended above, left, right, and below the base layer by an offset scaling value; para[0394] teaches that a decoder may decode scaled reference layer offset values instead of or in addition to reference region offset values, whereby the values indicate that an enhancement layer picture corresponds to a region in the reference-layer picture that crosses the picture boundary to the opposite side of the reference-layer picture).

Hannuksela does not explicitly disclose wherein the bitstream comprises information on yaw rotation of the image. However, Hannuksela II discloses this limitation (para[0288]-[0289] teaches that yaw rotates around one coordinate axis (e.g., the Y-axis) and may be defined to be in the range of 0, inclusive, to 360, exclusive; para[0358] teaches decoding the video bitstream, or at least the part of the bitstream representing the viewport or the spatial region, based on the at least one value indicative of the quality of the viewport or the spatial region). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine Hannuksela's method (in which the reference region is determined to cross a picture boundary of the 360-degree panoramic source picture, the residual data of the residual block in which the reference region is included is rotated with the sample values of an opposite-side border region, and the variable values are matched with the blocks of the opposite-side border region) with Hannuksela II's method of including in a manifest information on available media content, in which a viewport or a spatial region is indicated, the viewport being a portion of a 360-degree video, in order to provide a system in which inter-view prediction can be utilized in multiview video coding to take advantage of inter-view correlation and improve compression efficiency.
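The "offset factors for each direction" limitation reduces to simple per-side arithmetic on the picture dimensions. A minimal sketch, assuming a sign convention (positive offsets extend the picture, negative offsets crop it) that neither reference fixes; the function name is ours:

```python
# Illustrative only: a resized picture extends (or crops) the reconstructed
# picture by an independent offset on each of the four sides, matching the
# per-direction offset factors described in the cited passages.

def resized_dimensions(width: int, height: int,
                       left: int, right: int,
                       top: int, bottom: int) -> tuple[int, int]:
    """Positive offsets extend the picture; negative offsets crop it."""
    return width + left + right, height + top + bottom

print(resized_dimensions(1920, 1080, 16, 16, 0, 8))  # (1952, 1088)
```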
Regarding claim 2, Hannuksela discloses the method of claim 1, wherein the image resizing is performed further considering scaling factors for both a lateral direction and a longitudinal direction, and the scaling factor for the lateral direction and the scaling factor for the longitudinal direction are obtained independently from each other (Para[0363] teaches that the variables ScaledRefRegionWidthInSamplesY and ScaledRefRegionHeightInSamplesY may be set to the width and height, respectively, of the reference region within the current picture; the horizontal and vertical scale factors for the luma sample array may then be derived as the ratio of ScaledRefRegionWidthInSamplesY to the reference region width (in the luma sample array of the source picture for inter-layer prediction) and the ratio of ScaledRefRegionHeightInSamplesY to the reference region height (in the luma sample array of the source picture for inter-layer prediction), respectively).

Regarding claim 3, Hannuksela discloses the method of claim 1, wherein the image resizing is performed based on a resizing value, and the resizing value is obtained based on the offset factor included in the image resizing information and a decoding setting (para[0364] teaches that the reference layer sample location corresponding to or collocating with (xP, yP) may be derived for a luma sample array on the basis of reference layer location offsets, for example using a process that generates a sample location (xRef16, yRef16) specifying the reference layer sample location in units of 1/16-th sample relative to the top-left sample of the luma component: xRef16 is set equal to (((xP − ScaledRefLayerLeftOffset) * ScaleFactorHor + addHor + (1 << 11)) >> 12) + refOffsetLeft, where addHor is set on the basis of the horizontal phase offset for luma and refOffsetLeft is the left offset of the reference region in units of 1/16-th sample relative to the top-left sample of the luma sample array of the source picture for inter-layer prediction; yRef16 is set equal to (((yP − ScaledRefLayerTopOffset) * ScaleFactorVer + addVer + (1 << 11)) >> 12) + refOffsetTop, where addVer is set on the basis of the vertical phase offset for luma and refOffsetTop is the top offset of the reference region in the same units).

Regarding claim 5, Hannuksela discloses the method of claim 1, wherein the image resizing for a chroma component is performed based on the image resizing for a luma component (Para[0306] teaches that inter-component prediction takes place from the luma component (or sample array) to the chroma components (or sample arrays); Para[0327] teaches that an inter-layer resampling process for obtaining a resampled chroma sample value may be specified identically or similarly to the above-described process for a luma sample value; Para[0363]-[0364] teach that, for a particular direct reference layer with nuh_layer_id equal to rLId, the variables ScaledRefLayerLeftOffset, ScaledRefLayerTopOffset, ScaledRefLayerRightOffset and ScaledRefLayerBottomOffset may be set equal to scaled_ref_layer_left_offset[rLId], scaled_ref_layer_top_offset[rLId], scaled_ref_layer_right_offset[rLId] and scaled_ref_layer_bottom_offset[rLId], respectively, scaled (when needed) to be represented in units of luma samples of the current picture, and that the scale factors for chroma sample arrays may be derived similarly).
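The para[0364] derivation cited for claim 3 is integer fixed-point arithmetic and can be transcribed directly. A sketch under two assumptions: all inputs are pre-derived integers, and a 1:1 scale corresponds to a scale factor of 1 << 16, so that the >> 12 shift leaves positions in 1/16-sample units (our reading of the quoted formulas, not something either reference states in these terms):

```python
# Direct transcription of the fixed-point reference-layer sample-location
# derivation quoted from Hannuksela para[0364]. The (1 << 11) term is a
# rounding offset for the subsequent >> 12 shift.

def ref_layer_luma_location(xP, yP,
                            scaled_left_offset, scaled_top_offset,
                            scale_factor_hor, scale_factor_ver,
                            add_hor, add_ver,
                            ref_offset_left, ref_offset_top):
    """Return (xRef16, yRef16) in 1/16-sample units."""
    xRef16 = (((xP - scaled_left_offset) * scale_factor_hor
               + add_hor + (1 << 11)) >> 12) + ref_offset_left
    yRef16 = (((yP - scaled_top_offset) * scale_factor_ver
               + add_ver + (1 << 11)) >> 12) + ref_offset_top
    return xRef16, yRef16

# Identity check: with zero offsets/phases and a 1 << 16 scale factor,
# sample (10, 4) maps to (160, 64), i.e. (10, 4) in 1/16-sample units.
print(ref_layer_luma_location(10, 4, 0, 0, 1 << 16, 1 << 16, 0, 0, 0, 0))
```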
Regarding claim 6, Hannuksela discloses a method for processing an image (Para[0379] teaches that a method of handling sample locations outside picture boundaries is in use for inter-layer resampling and that the encoder indicates the method in the bitstream), the method comprising: obtaining the image (para[0422] & Fig. 14 teach a method comprising coding or decoding samples of a border region of a 360-degree panoramic picture); encoding the image into a bitstream (Para[0311] teaches that inter-layer prediction may, for example, depend on the coding profile according to which the bitstream or a particular layer within the bitstream is being encoded); and encoding image resizing information for the image into the bitstream (Para[0379] teaches that the encoder indicates in the bitstream the method of handling sample locations outside picture boundaries for inter-layer resampling; the signaling may be specific to inter-layer resampling or combined with the handling of sample locations outside picture boundaries for inter prediction, and the encoder may include one or more such indications, or similar, in the bitstream); wherein the image resizing information is used for performing image resizing for the image when being reconstructed, and wherein the image resizing information comprises offset factors for each direction of the image (Para[0318] teaches that the encoder and/or the decoder may derive a horizontal scale factor (e.g., stored in variable ScaleFactorHor) and a vertical scale factor (e.g., stored in variable ScaleFactorVer) for a pair of an enhancement layer and its reference layer, for example based on the reference layer location offsets for the pair; resampling may be pre-defined, for example in a coding standard, and/or indicated by the encoder in the bitstream (e.g., as an index among pre-defined resampling processes or filters) and/or decoded by the decoder from the bitstream).

Hannuksela does not explicitly disclose wherein the bitstream comprises information on yaw rotation of the image. However, Hannuksela II discloses this limitation (para[0288]-[0289]; para[0358], as discussed for claim 1). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine Hannuksela with Hannuksela II for the reasons given for claim 1.
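The yaw range cited from Hannuksela II (0 inclusive to 360 exclusive) implies wrap-around semantics for the signaled rotation. A trivial sketch; the function name is ours:

```python
def normalize_yaw(deg: float) -> float:
    """Wrap an arbitrary yaw angle into [0, 360), per the cited range."""
    return deg % 360.0

print(normalize_yaw(-90.0))   # 270.0
print(normalize_yaw(360.0))   # 0.0
```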
Regarding claim 7, Hannuksela discloses a method for transmitting a bitstream (Para[0280] teaches portions of the bitstream to be transmitted to the receiver; Para[0379] teaches that the encoder indicates in the bitstream the method of handling sample locations outside picture boundaries for inter-layer resampling), the method comprising: obtaining an image (Para[0422] & Fig. 14 teach a method comprising coding or decoding samples of a border region of a 360-degree panoramic picture); encoding the image into a bitstream (Para[0311] teaches that inter-layer prediction may, for example, depend on the coding profile according to which the bitstream or a particular layer within the bitstream is being encoded); encoding image resizing information for the image into the bitstream (Para[0379], as discussed for claim 6); and transmitting the bitstream, wherein the image resizing information is used for performing image resizing for the image when being reconstructed (Para[0318], as discussed for claim 1), and wherein the image resizing information comprises offset factors for each direction of the image (para[0332], Fig. 10, and para[0394], as discussed for claim 1).

Hannuksela does not explicitly disclose wherein the bitstream comprises information on yaw rotation of the image. However, Hannuksela II discloses this limitation (para[0288]-[0289]; para[0358], as discussed for claim 1). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine Hannuksela with Hannuksela II for the reasons given for claim 1.

16. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Hannuksela et al. (US 2017/0085917 A1) in view of Hannuksela et al. (US 2017/0347026 A1) (hereinafter Hannuksela II), and further in view of Yamamoto et al. (US 2017/0034532 A1).

Regarding claim 4, Hannuksela in view of Hannuksela II discloses the method of claim 3. Hannuksela in view of Hannuksela II does not explicitly disclose wherein the resizing value is calculated as equal to the offset factor multiplied by 2 according to a decoding setting. However, Yamamoto discloses this limitation (Para[0273] teaches that a value obtained by multiplying a syntax value of the corresponding reference region offset information by 2 is set as a reference region offset).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Hannuksela in view of Hannuksela II (in which the reference region is determined to cross a picture boundary of the 360-degree panoramic source picture, the residual data of the residual block in which the reference region is included is rotated with the sample values of an opposite-side border region, the variable values are matched with the blocks of the opposite-side border region, and the orientation of a viewport is indicated in and/or parsed from a manifest) with the method of Yamamoto (in which extended reference layer offset syntax related to the reference layer, reference layer offset syntax, and inter-layer phase offset syntax are used) in order to provide a system with hierarchical image decoding which decodes hierarchically coded data.

Conclusion

17. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROWINA J CATTUNGAL, whose telephone number is (571) 270-5922. The examiner can normally be reached Monday-Thursday, 7:30am-6pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian Pendleton, can be reached at (571) 272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ROWINA J CATTUNGAL/
Primary Examiner, Art Unit 2425

Prosecution Timeline

Sep 03, 2025
Application Filed
Mar 26, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604092
AUTOMATED DEVICE FOR DRILL CUTTINGS IMAGE ACQUISITION
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12604076
ENDOSCOPE SYSTEM, CONTROL METHOD, AND PROGRAM
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12604036
METHOD AND APPARATUS OF ENCODING/DECODING IMAGE DATA BASED ON TREE STRUCTURE-BASED BLOCK DIVISION
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12604037
IMAGE DATA ENCODING/DECODING METHOD AND APPARATUS
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12604038
IMAGE DATA ENCODING/DECODING METHOD AND APPARATUS
Granted Apr 14, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 88% (+13.0%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 521 resolved cases by this examiner. Grant probability derived from career allow rate.
