Prosecution Insights
Last updated: April 19, 2026
Application No. 18/701,256

METHODS AND APPARATUS FOR REAL-TIME GUIDED ENCODING

Status: Non-Final OA (§102)
Filed: Apr 15, 2024
Examiner: LAM, HUNG H
Art Unit: 2639
Tech Center: 2600 — Communications
Assignee: GoPro Inc.
OA Round: 1 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 84% (541 granted / 644 resolved), +22.0% vs TC avg, above average
Interview Lift: +12.5% for resolved cases with interview (moderate lift)
Avg Prosecution: 2y 6m (typical timeline)
Total Applications: 653 across all art units, 9 currently pending
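The headline examiner figures can be reproduced from the raw resolved-case counts. A minimal sketch in Python; the TC-average allow rate here is an assumption, back-solved from the displayed +22.0% delta rather than a published number:

```python
# Reproducing the examiner stats above from raw resolved-case counts.
granted = 541        # applications granted by this examiner
resolved = 644       # total resolved applications (granted + abandoned)
tc_avg_allow = 0.62  # ASSUMED TC 2600 average, back-solved as 0.84 - 0.22

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")              # 84%
print(f"vs TC average: {allow_rate - tc_avg_allow:+.1%}")  # +22.0%
```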

Statute-Specific Performance

§101:  5.0% (-35.0% vs TC avg)
§103: 42.8%  (+2.8% vs TC avg)
§102: 40.3%  (+0.3% vs TC avg)
§112:  2.7% (-37.3% vs TC avg)
Tech Center average is an estimate • Based on career data from 644 resolved cases
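The per-statute deltas are plain differences between this examiner's rejection rates and a Tech Center baseline. A sketch in Python; the baseline values are assumptions back-solved from the deltas shown (all four happen to back-solve to 40.0% here), not published USPTO figures:

```python
# Examiner rejection rates by statute vs. an ASSUMED TC baseline
# (baselines back-solved from the "vs TC avg" deltas above).
examiner_rate = {"101": 0.050, "103": 0.428, "102": 0.403, "112": 0.027}
tc_baseline   = {"101": 0.400, "103": 0.400, "102": 0.400, "112": 0.400}

for statute, rate in examiner_rate.items():
    delta = rate - tc_baseline[statute]
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```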

Office Action (§102)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-10, 12 and 15-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Alvarez (US 2013/0002907).

Regarding claim 1, Alvarez discloses a method for guiding an encoder in real-time, comprising: obtaining real-time information from a first processing element of an image processing pipeline (Fig. 2: 130, ISP_DATA or CTRL_INFO_IN; [0057]); determining an encoding parameter based on the real-time information ([0056; 0059; 0067]: according to Alvarez, the received signal ISP_DATA may be used to improve still picture quality; Fig. 3-7: the CODEC_DATA, optimized coding parameters, or adjusted rate estimate in blocks 224, 222, 246, 248, 276, 278 are based on CTRL_INFO_OUT, PICTURE_DATA_IN, ISP_DATA of ISP 130, and the data of the collect and send-settings steps of ISP 130 in Fig. 5-7); configuring the encoder of a second processing element of the image processing pipeline to generate encoded media based on the encoding parameter (Fig. 2: 132; [0059-0061; 0067]; Fig. 3-7: see CODEC_DATA, optimized coding parameters, or adjusted rate estimate based on CTRL_INFO_OUT, PICTURE_DATA_IN, and/or ISP_DATA of ISP 130; see also the other correction or adjustment data in the collect and send-settings steps of ISP 130 in Fig. 5-7); and providing the encoded media to a decoding device (Fig. 2: PICTURE_DATA_OUT is decoded to be displayed on display 108, and block 110 also transports encoded bitstream 118; [0018; 0020; 0023]).

Regarding claim 2, Alvarez discloses the method of claim 1, further comprising determining an auto exposure setting via the first processing element, and the real-time information comprises the auto exposure setting (Fig. 3: see exposure control block 141; [0019; 0033; 0052]).

Regarding claim 3, Alvarez discloses the method of claim 1, further comprising determining color space conversion statistics via the first processing element, and the real-time information comprises the color space conversion statistics (Fig. 3: see color processor 144, color interpolation; [0019; 0029; 0036-0042]: using statistical and image characteristic data to improve performance; [0033; 0058]: color conversion).

Regarding claim 4, Alvarez discloses the method of claim 1, further comprising stabilizing an image via the first processing element, and the real-time information comprises motion vectors (Fig. 3: stabilization block 146; [0019; 0031-0034; 0044; 0080]).

Regarding claim 5, Alvarez discloses the method of claim 4, further comprising estimating motion based on the motion vectors via the second processing element (abstract; [0019; 0042; 0055; 0080]).

Regarding claim 6, Alvarez discloses the method of claim 1, further comprising reducing temporal noise via the first processing element, and the real-time information comprises temporal filter parameters (Fig. 3: noise reduction block 146; [0026; 0029; 0031-0041]: temporal noise).

Regarding claim 7, Alvarez discloses the method of claim 1, further comprising detecting a presence of a face via the first processing element, and the real-time information comprises facial detection parameters ([0039; 0079]).

Regarding claim 8, Alvarez discloses an encoding device, comprising: a camera configured to capture at least a first image (Fig. 1: see camera 106 and IMAGE_DATA_IN); an image processing pipeline (100) comprising: a first processing element and an encoding element (Fig. 1-2: see ISP and/or CODEC; [0024]); and a first non-transitory computer-readable medium (Fig. 1: memory 104) comprising a first set of instructions that, when executed by the first processing element (ISP/block 130), causes the first processing element to: perform a first correction to the first image to generate a corrected first image (Fig. 3-4: see one of a plurality of correction or adjustment processes in one of blocks 140-153 of ISP 130 to generate PICTURE_DATA_IN or ISP_DATA; [0025-0031; 0036; 0055; 0057-0067]); determine a first encoding parameter based on the first correction (Fig. 3-7: the CODEC_DATA, optimized coding parameters, or adjusted rate estimate in blocks 224, 222, 246, 248, 276, 278 are based on CTRL_INFO_OUT, PICTURE_DATA_IN, ISP_DATA of ISP 130, and the data of the collect and send-settings steps of ISP 130 in Fig. 5-7); and a third non-transitory computer-readable medium (Fig. 1: memory 104) comprising a third set of instructions that, when executed by the encoding element (CODEC 132), causes the encoding element to generate encoded media based on the corrected first image and the first encoding parameter (abstract; [0005; 0037]: pictures are encoded in real time using image-signal-processing-related information; Fig. 2 shows codec 132 performing codec operations based on ISP_DATA, PICTURE_DATA_IN, CTRL_INFO_OUT; Fig. 5-7 also show codec 132 adjusting rate control or optimizing coding parameters based on the data of the collect and send-settings steps of ISP 130).

Regarding claim 9, Alvarez discloses the encoding device of claim 8, where the first processing element comprises an image signal processor and the first correction comprises at least one of: an auto exposure, a color correction, or a white balance (Fig. 3: see exposure control 141, color processor 144).
Regarding claim 10, Alvarez discloses the encoding device of claim 9, further comprising: a second processing element connected to the first processing element (Fig. 3-4: see one of the processing blocks 140-153 of ISP 130) and the encoding element (Fig. 3-4: CODEC 132); and a second non-transitory computer-readable medium comprising a second set of instructions that, when executed by the second processing element (see one of the processing blocks 140-153 of ISP 130), causes the second processing element to: perform a second correction to the corrected first image (see one of the processing blocks 140-153 of ISP 130 that performs one of the compression, adjustment, or correction operations on IMAGE_DATA_IN; [0026-0053]); determine a second encoding parameter based on the second correction; and where the third set of instructions further causes the encoding element to generate the encoded media based on the second encoding parameter (Fig. 3-7: see CODEC_DATA, optimized coding parameters, or adjusted rate estimate based on CTRL_INFO_OUT, PICTURE_DATA_IN, and/or ISP_DATA of ISP 130; see also the other correction or adjustment data in the collect and send-settings steps of ISP 130 in Fig. 5-7).

Regarding claim 12, Alvarez discloses the encoding device of claim 8, further comprising a memory buffer, where the first processing element writes the first encoding parameter to the memory buffer (Fig. 5-7; [0026-0053]: information such as magnification, rate of change, F-stop, shutter speed, auto-focus, focus speed, and focal length is collected in blocks 208, 236, 266 and sent to block/CODEC 132; the information is therefore inherently buffered in a memory in order to be collected and sent to CODEC 132) and the encoding element reads the first encoding parameter from the memory buffer in-place (Fig. 5-7: CODEC 132 reads/gets the zoom settings in step 214, AE ACTIVE in 244, CODEC statistics in 252, or AF ACTIVE in 274).
Regarding claim 15, Alvarez discloses an encoding device, comprising: a camera configured to capture a primary data stream (Fig. 1-2: camera 106 and IMAGE_DATA_IN); a codec (Fig. 1-3: ISP/CODEC 130/132) configured to encode the primary data stream based on a supplemental data stream (abstract; [0005; 0037]: pictures are encoded in real time using image-signal-processing-related information; Fig. 2 shows codec 132 performing codec operations based on ISP_DATA, PICTURE_DATA_IN, CTRL_INFO_OUT; Fig. 5-7 also show codec 132 adjusting rate control or optimizing coding parameters based on the data of the collect and send-settings steps of ISP 130); an image processing pipeline comprising a first processing element (Fig. 1-3: ISP 130); and a first non-transitory computer-readable medium (Fig. 1: memory 126) comprising a first set of instructions that, when executed by the first processing element, causes the first processing element to: perform a first correction to at least a portion of the primary data stream (Fig. 3: see one of a plurality of correction or adjustment processes in one of blocks 140-153 of ISP 130; [0031]); and generate a first parameter of the supplemental data stream based on the first correction (Fig. 3-7: see CODEC_DATA, optimized coding parameters, or adjusted rate estimate based on CTRL_INFO_OUT, PICTURE_DATA_IN, and/or ISP_DATA of ISP 130; see also the data of the collect and send-settings steps of ISP 130 in Fig. 5-7).

Regarding claim 16, Alvarez discloses the encoding device of claim 15, where the primary data stream is captured according to a first real-time constraint (Fig. 2-3: see the CTRL_INFO_OUT and ISP_DATA constraints; [0019; 0040]: CTRL_INFO_OUT comprises a number of data types: capture parameters, rate control information, exposure setting, flash control output; [0023]: see also the signal to delay activating a shutter of the image sensor 101) and the primary data stream is encoded according to a second real-time constraint (abstract; [0005; 0037]: pictures are encoded in real time using image-signal-processing-related information; Fig. 2 shows codec 132 performing codec operations based on ISP_DATA, PICTURE_DATA_IN, CTRL_INFO_OUT; Fig. 5-7 also show codec 132 adjusting rate control or optimizing coding parameters based on the data of the collect and send-settings steps of ISP 130).

Regarding claim 17, Alvarez discloses the encoding device of claim 16, where the first real-time constraint comprises a frame rate ([0035]; see also rate control block 184) and the second real-time constraint comprises a latency ([0023]: see the use of image stabilization logic to delay activating a shutter of the image sensor 101).

Regarding claim 18, Alvarez discloses the encoding device of claim 15, where the first parameter comprises at least one of a quantization parameter, a compression parameter, a bit rate parameter, or a group of pictures (GOP) size (Fig. 3: see picture scaling block 152, resize 153, or picture data compression by block/ISP 130; [0026; 0031; 0035]).

Regarding claim 19, Alvarez discloses the encoding device of claim 15, where the first correction comprises at least one of image stabilization or temporal noise reduction (Fig. 3: see stabilization block 146, noise reduction 143).

Regarding claim 20, Alvarez discloses the encoding device of claim 15, where the supplemental data stream is updated in real-time ([0037]).

Claim 1 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Bennett (US 2013/0021483).
Regarding claim 1, Bennett discloses a method for guiding an encoder in real-time, comprising: obtaining real-time information from a first processing element of an image processing pipeline (Fig. 1: 103, 104, 105; Fig. 3: 301); determining an encoding parameter based on the real-time information ([0022; 0041]); configuring the encoder of a second processing element of the image processing pipeline to generate encoded media based on the encoding parameter (Fig. 1: 107; Fig. 3: 302; [0041-0042]); and providing the encoded media to a decoding device (Fig. 1: 106; Fig. 2: 265).

Allowable Subject Matter

Claims 11 and 13-14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

Regarding claim 11, the prior art of Alvarez discloses a camera comprising a first circuit and a second circuit. The first circuit may be configured to perform image signal processing using encoding-related information. The second circuit may be configured to encode image data using image-signal-processing-related information. The prior art of Bennett discloses a video encoding system that processes image data captured by an image sensor and comprises at least one encoder that receives motion information describing motion of the imaging sensor during capture of the image data, and that encodes the image data with the assistance of the motion information. The prior art of Chen (US 8315307) discloses a method that involves using a removable unidirectional predicted temporal scaling frame communication along with intra-coded frames and/or inter-coded frames.
The prior art of Christopoulos (US 2001/0047517) discloses a method and apparatus for performing intelligent transcoding of multimedia data between two or more network elements in a client-server or client-to-client service provision environment. The prior art of Shen (US 2006/0109900) discloses a transcoder and transcoding method that combines an inverse-quantized DCT block with one or more transcoding matrices.

However, none of the prior art, alone or in combination, teaches or fairly suggests the encoding device of claim 8 further in combination with: "where the camera is configured to capture a second image and the first correction to the first image is further based on the second image".

Regarding claim 13, the prior art of Alvarez, Bennett, Chen (US 8315307), Christopoulos (US 2001/0047517), and Shen (US 2006/0109900) is as summarized above for claim 11.
However, none of the prior art, alone or in combination, teaches or fairly suggests the encoding device of claim 12 further in combination with: "where the memory buffer is characterized by a single data rate mode and a double data rate mode, and where the first processing element writes the first encoding parameter to the memory buffer in the single data rate mode".

Regarding claim 14, the prior art of Alvarez, Bennett, Chen (US 8315307), Christopoulos (US 2001/0047517), and Shen (US 2006/0109900) is as summarized above for claim 11.
However, none of the prior art, alone or in combination, teaches or fairly suggests the encoding device of claim 12 further in combination with: "where the memory buffer is characterized by a single data rate mode and a double data rate mode, and where the encoding element reads the first encoding parameter from the memory buffer in the single data rate mode".

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUNG H LAM, whose telephone number is (571) 272-7367. The examiner can normally be reached 9AM-5PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TWYLER HASKINS, can be reached at (571) 272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HUNG H LAM/
Primary Examiner, Art Unit 2639
12/26/25

Prosecution Timeline

Apr 15, 2024
Application Filed
Jan 06, 2026
Non-Final Rejection — §102
Apr 15, 2026
Examiner Interview Summary
Apr 15, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604080
APPARATUS AND METHOD WITH IMAGE GENERATION
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12604107
IMAGE CAPTURE DEVICES WITH REDUCED STITCH DISTANCES
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12604101
SEMICONDUCTOR DEVICE AND IMAGE PROCESSING SYSTEM
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12593142
DIFFRACTION-GATED REAL-TIME ULTRA-HIGH-SPEED MAPPING PHOTOGRAPHY SYSTEM AND METHOD
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12581209
IMAGE SENSOR INCLUDING COLOR SEPARATING LENS ARRAY AND ELECTRONIC DEVICE INCLUDING THE IMAGE SENSOR
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84% (96% with interview, +12.5%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 644 resolved cases by this examiner. Grant probability derived from career allow rate.
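The projection figures fit a simple additive model: the interview lift is added, in percentage points, to the base grant probability. That model is an inference from the numbers shown, not a documented methodology; 84% + 12.5 points = 96.5%, which the dashboard evidently displays as 96%:

```python
# ASSUMED model: interview lift adds percentage points to the base rate.
base_pct = 84.0  # career allow rate, in percent
lift_pct = 12.5  # interview lift, in percentage points

with_interview_pct = min(base_pct + lift_pct, 100.0)  # cap at 100%
print(f"Grant probability: {base_pct:.0f}%")         # 84%
print(f"With interview: {with_interview_pct:.1f}%")  # 96.5%
```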
