Prosecution Insights
Last updated: April 19, 2026
Application No. 18/464,433

LINE-BASED COMPRESSION FOR DIGITAL IMAGE DATA

Status: Non-Final Office Action (§103)
Filed: Sep 11, 2023
Examiner: PHILIPPE, GIMS S
Art Unit: 2424
Tech Center: 2400 (Computer Networks)
Assignee: Texas Instruments Incorporated
OA Round: 4 (Non-Final)

Grant Probability: 85% (Favorable)
Expected OA Rounds: 4-5
Time to Grant: 3y 0m
Grant Probability with Interview: 87%

Examiner Intelligence

Career Allow Rate: 85% (above average; 878 granted / 1030 resolved; +27.2% vs TC avg)
Interview Lift: +1.5% (minimal), based on resolved cases with interview
Typical Timeline: 3y 0m average prosecution
Career History: 1065 total applications across all art units; 35 currently pending

Statute-Specific Performance

§101: 6.7% (-33.3% vs TC avg)
§103: 39.9% (-0.1% vs TC avg)
§102: 26.8% (-13.2% vs TC avg)
§112: 4.2% (-35.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 1030 resolved cases.

Office Action (§103)
DETAILED ACTION

1. Applicant's amendment filed on November 7, 2025 has been fully considered and entered, but the arguments are moot in view of the new grounds of rejection.

Notice of Pre-AIA or AIA Status

The present application is being examined under the pre-AIA first to invent provisions.

Claim Rejections - 35 USC § 103

2. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

3. The following is a quotation of pre-AIA 35 U.S.C. 103(a), which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.

4. Claims 1-2, 4, 8-11, 13, and 17-20 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Joch et al. (US Patent Application Publication No. 2005/0117646) in view of Seo et al. (US Patent No. 8,208,545), and further in view of Yang et al. (US Patent Application Publication No. 2004/0008778).

Regarding claims 1 and 10, Joch discloses a system and method comprising: receiving reference frame data from storage (see Joch [0053], processor 36 of Fig. 4, and paragraph [0059]); and decompressing the reference frame data (see Joch [0053]-[0054]) by at least: selecting an entropy code for decoding (see Joch [0054]); and computing a pixel predictor (see Joch [0058]). Joch is silent about decoding a pixel residual based on the pixel predictor using the entropy code or using run mode decoding. However, Seo teaches decoding a pixel residual based on the pixel predictor using the entropy code or using run mode decoding (see Seo col. 2, lines 60-67, and col. 3, lines 1-15). Therefore, it is considered obvious that one skilled in the art at the time of the invention would recognize the advantage of modifying Joch to incorporate Seo's teachings to decode a pixel residual based on the pixel predictor using the entropy code or using run mode decoding. The motivation for performing such a modification in Joch is to maximize compression efficiency while improving the compression rate.

Although the combination of Joch and Seo teaches selecting an entropy code (see Joch [0054] and [0058]), it is silent about selecting "from among multiple entropy codes by reading an indicator of the entropy code from the reference frame data". However, Yang teaches this limitation (see Yang [0046], "a segmented reference frame and a new frame that is to be approximated using the segments from the reference frame and their respective motion vectors", and [0049], "the data introduced to the decoder, including the segmentation of the reference frame, previous motion vectors, the prediction indicator, the entropy-coded top-level motion vectors, and entropy-coded lower-level residual vectors.").
Therefore, it is considered obvious that one skilled in the art, before the effective filing date of the claimed invention, would recognize the advantage of modifying the combination of Joch and Seo to incorporate Yang's teachings to select an entropy code for decoding from among multiple entropy codes by reading an indicator of the entropy code from the reference frame data. The motivation for performing such a modification in the combination of Joch and Seo is to provide a novel way of encoding and decoding motion vectors that saves bits by exploiting the correlations between the motions of adjacent segments, as taught by Yang (see Yang [0019]).

As per claims 19-20, Joch discloses a non-transitory computer-readable medium having executable instructions stored thereon, configured to be executable by one or more processors for causing the one or more processors (see Joch [0053] and [0059]) to: compress first reference frame data to generate compressed reference frame data (see Joch [0018] and [0036]); store the compressed reference frame data to off-chip storage (see Joch [0055]); retrieve the compressed reference frame data from the off-chip storage (see Joch [0053]-[0054]); and decompress the compressed reference frame data by at least: selecting an entropy code for decoding (see Joch [0053]-[0054]); and computing a pixel predictor (see Joch [0058]). As with claims 1 and 10, Joch is silent about decoding a pixel residual based on the pixel predictor using the entropy code or using run mode decoding, but Seo teaches that limitation (see Seo col. 2, lines 60-67, and col. 3, lines 1-15); and the combination of Joch and Seo is silent about selecting "from among multiple entropy codes by reading an indicator of the entropy code from the reference frame data", but Yang teaches that limitation (see Yang [0046] and [0049]). It is therefore considered obvious, for the same reasons and with the same motivations stated above for claims 1 and 10, that one skilled in the art would modify Joch to incorporate the teachings of Seo and Yang.

As per claims 2 and 11, the combination of Joch, Seo, and Yang further teaches wherein receiving the reference frame data comprises retrieving compressed data from the storage (see Joch [0055] and [0061]).
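As a reading aid only, the flow the rejection maps onto these claims (compress reference frame data, store it off-chip, retrieve it, then decompress it by reading an entropy-code indicator, computing a pixel predictor, and decoding pixel residuals) can be sketched in Python. Everything below is a hypothetical illustration: the two toy "entropy codes", the vertical predictor, the stream layout, and all names are assumptions, not the application's claimed implementation or any cited reference's design.

```python
# Hypothetical sketch of the claimed flow: compress reference frame
# data, store it to (simulated) off-chip memory, retrieve it, and
# decompress it by selecting an entropy code via an indicator byte,
# computing a pixel predictor, and decoding pixel residuals.

OFF_CHIP = {}  # stands in for external DRAM

# Toy stand-ins for "multiple entropy codes": each maps one stored
# byte back to a signed residual in [-128, 127].
ENTROPY_DECODERS = {
    0: lambda b: b - 256 if b > 127 else b,  # raw two's-complement byte
    1: lambda b: (b >> 1) ^ -(b & 1),        # zigzag-mapped byte
}

def compress(frame, code=1):
    # Residual = pixel minus the pixel directly above (vertical
    # predictor); the first line predicts from zero. The stream leads
    # with one indicator byte naming the entropy code used.
    out, prev = [code], [0] * len(frame[0])
    for line in frame:
        for p, q in zip(line, prev):
            r = ((p - q + 128) & 0xFF) - 128  # wrap residual to [-128, 127]
            out.append(((r << 1) ^ (r >> 31)) & 0xFF if code else r & 0xFF)
        prev = line
    return bytes(out)

def store(key, frame):
    OFF_CHIP[key] = compress(frame)  # compress, then store off-chip

def load(key, width):
    data = OFF_CHIP[key]                # retrieve from off-chip storage
    decode = ENTROPY_DECODERS[data[0]]  # select code via the indicator
    frame, prev, pos = [], [0] * width, 1
    while pos < len(data):
        line = []
        for q in prev:                  # q = pixel predictor (above)
            line.append((q + decode(data[pos])) & 0xFF)  # predictor + residual
            pos += 1
        frame.append(line)
        prev = line
    return frame
```

The indicator-driven selection in load() mirrors the claim language of choosing "from among multiple entropy codes by reading an indicator ... from the reference frame data"; a production codec would of course use real entropy codes (e.g., Golomb or arithmetic coding) rather than these one-byte maps.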
As per claims 4 and 13, the combination of Joch, Seo, and Yang further teaches wherein receiving the reference frame data comprises receiving the reference frame data from off-chip memory (see Joch [0053]-[0055]).

As per claims 8 and 17, the combination further teaches constructing a current pixel based on the pixel predictor and the pixel residual after decoding the pixel residual using the entropy code (see Seo col. 2, lines 60-67, and col. 3, lines 1-15).

As per claims 9 and 18, the combination further teaches generating a picture based on the decompressed reference frame data and displaying the picture (see Joch [0017] and [0046]).

As per claim 21, the combination further teaches wherein the decompressing of the reference frame data is performed by an encoder (see Joch [0017], [0046], and [0067]).

5. Claims 6 and 15 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Joch et al. (US Patent Application Publication No. 2005/0117646) in view of Seo et al. (US Patent No. 8,208,545) and Yang et al. (US Patent Application Publication No. 2004/0008778) as applied to claims 1 and 10 above, and further in view of Uramoto et al. (US Patent No. 5,400,087).

Regarding claims 6 and 15, most of the limitations of these claims are addressed in the above rejection of claims 1 and 10. The combination of Joch, Seo, and Yang is silent about computing a minimum absolute difference for the current pixel and computing the pixel predictor for the current pixel based on the minimum absolute difference. However, Uramoto teaches these limitations (see Uramoto col. 34, lines 46-68, and col. 35, lines 1-5).
Therefore, it is considered obvious that one skilled in the art at the time of the invention would recognize the advantage of modifying the combination of Joch, Seo, and Yang to incorporate Uramoto's teachings to compute a minimum absolute difference for the current pixel and to compute the pixel predictor for the current pixel based on the minimum absolute difference. The motivation for performing such a modification in the proposed combination is to determine the displacement vector corresponding to the minimum absolute value sum as the motion vector.

6. Claims 5, 7, 14, and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. These claims are allowable over the prior art of record since the cited references, taken individually or in combination, fail to teach or suggest decompressing reference frame data wherein receiving the reference frame data comprises retrieving the reference frame data from off-chip memory, wherein the reference frame data is first reference frame data, wherein receiving the first reference frame data comprises retrieving compressed reference frame data from the off-chip memory, and wherein the method further comprises, before retrieving the first reference frame data: compressing second reference frame data to generate the compressed reference frame data; and storing the compressed reference frame data to the off-chip memory.

7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to GIMS S PHILIPPE, whose telephone number is (571) 272-7336. The examiner can normally be reached on a Maxi Flex schedule. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benjamin Bruckart, can be reached at 571-272-3982. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/GIMS S PHILIPPE/
Primary Examiner, Art Unit 2424
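Uramoto's minimum-absolute-difference criterion, as characterized in the rejection of claims 6 and 15 above, amounts to choosing the displacement whose sum of absolute differences (SAD) against the reference is smallest. A brute-force sketch follows; the function name, block size, and search range are illustrative assumptions, not anything taken from Uramoto or the application.

```python
# Hypothetical sketch of minimum-absolute-difference matching: the
# displacement whose sum of absolute differences (SAD) is minimal is
# taken as the motion vector.

def best_motion_vector(cur, ref, by, bx, bs=2, search=1):
    # Exhaustively test displacements (dy, dx) of the bs x bs block at
    # (by, bx) in `cur` against `ref`, keeping the minimum-SAD match.
    best_cost, best_mv = None, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = by + dy, bx + dx
            if not (0 <= y0 <= len(ref) - bs and 0 <= x0 <= len(ref[0]) - bs):
                continue  # candidate block falls outside the reference frame
            cost = sum(
                abs(cur[by + y][bx + x] - ref[y0 + y][x0 + x])
                for y in range(bs)
                for x in range(bs)
            )
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv
```

For a frame whose content has shifted one pixel to the right relative to the reference, the search returns (0, -1): the best match for a current block lies one column to the left in the reference.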

Prosecution Timeline

Sep 11, 2023: Application Filed
Nov 22, 2024: Non-Final Rejection (§103)
Feb 26, 2025: Response Filed
Apr 25, 2025: Final Rejection (§103)
Jun 30, 2025: Response after Non-Final Action
Jul 23, 2025: Request for Continued Examination
Jul 29, 2025: Response after Non-Final Action
Aug 06, 2025: Non-Final Rejection (§103)
Nov 07, 2025: Response Filed
Feb 04, 2026: Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603979: PROCESSING IMAGES USING NEURAL STYLE TRANSFER NETWORK (2y 5m to grant; granted Apr 14, 2026)
Patent 12597272: METHOD FOR DETERMINING THE POSITION OF AN OBJECT WITH RESPECT TO A ROAD MARKING LINE OF A ROAD (2y 5m to grant; granted Apr 07, 2026)
Patent 12592073: IMAGE PROCESSING DEVICE AND IN-VEHICLE CONTROL DEVICE (2y 5m to grant; granted Mar 31, 2026)
Patent 12581093: METHOD AND APPARATUS FOR VIDEO CODING USING AN IMPROVED IN-LOOP FILTER (2y 5m to grant; granted Mar 17, 2026)
Patent 12581098: TRANSPORTING HEIF-FORMATTED IMAGES OVER REAL-TIME TRANSPORT PROTOCOL (2y 5m to grant; granted Mar 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 85%
With Interview: 87% (+1.5%)
Median Time to Grant: 3y 0m
PTA Risk: High

Based on 1030 resolved cases by this examiner. Grant probability derived from the career allow rate.
