Prosecution Insights
Last updated: April 19, 2026
Application No. 18/168,120

SMART BIT ALLOCATION ACROSS CHANNELS OF TEXTURE DATA COMPRESSION

Non-Final OA §103
Filed: Feb 13, 2023
Examiner: NGUYEN, ANH TUAN V
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: Meta Platforms Technologies, LLC
OA Round: 3 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 73%, above average (355 granted / 489 resolved; +10.6% vs TC avg)
Interview Lift: +19.2% for resolved cases with interview
Avg Prosecution: 2y 11m (38 currently pending)
Total Applications: 527 across all art units

Statute-Specific Performance

§101: 8.3% (-31.7% vs TC avg)
§103: 67.6% (+27.6% vs TC avg)
§102: 4.9% (-35.1% vs TC avg)
§112: 12.3% (-27.7% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 489 resolved cases.
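The per-statute deltas above are internally consistent with a single Tech Center baseline. A quick arithmetic check on the figures in the table (the subtraction convention, delta = rate - baseline, is an assumption about how the dashboard computes its deltas):

```python
# Statute rates and "vs TC avg" deltas taken from the table above.
rates  = {"101": 8.3, "103": 67.6, "102": 4.9, "112": 12.3}
deltas = {"101": -31.7, "103": 27.6, "102": -35.1, "112": -27.7}

# Assuming delta = rate - baseline, every statute implies the same
# Tech Center baseline estimate of 40.0%.
baselines = {s: round(rates[s] - deltas[s], 1) for s in rates}
assert set(baselines.values()) == {40.0}
```

That every statute back-solves to the same 40.0% figure suggests the deltas were all computed against one Tech Center average estimate rather than per-statute averages.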

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/18/2025 has been entered.

Applicant’s amendment/response filed 12/18/2025 has been entered and made of record. Claims 1, 8, and 15 were amended. Claims 1-20 are pending in the application.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-2, 4-9, 11-16, and 18-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over BV et al. (US 2023/0326088) in view of Dos Santos et al. (US 2018/0165869), Teng et al. (US 2017/0347107), and Leontaris et al. (US 2013/0243080).

Regarding claim 1, BV teaches/suggests: A method implemented by a computing system (BV Fig. 14: computer device 1400), the method comprising:

accessing target texture
receiving a target bit rate for encoding the target texture
determining, using a machine-learning model (BV [0030] “Compression network 110 is a machine learning model”), different target bit allocations for encoding different ones of the target texture
encoding the target texture

BV does not teach/suggest of a target physically-based rendering (PBR) texture set.

Dos Santos, however, teaches/suggests a target physically-based rendering (PBR) texture set (Dos Santos [0071] “a mipmap may include a predetermined number of texture sets each including the N+1 textures … such as Physically-Based Rendering (PBR), which requires a large number of textures to describe a material surface”). Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to modify the input image of BV to include the texture set of Dos Santos for compression.

BV as modified by Dos Santos does not teach/suggest: wherein the machine-learning model is configured to provide a renderer-aware bit allocation for different materials across channels of the PBR texture set based on different weights applied by a renderer to different texture components.

Teng, however, teaches/suggests a renderer-aware bit allocation for different materials based on different weights (Teng [0015] “according to spatial texture complexities of the encoding blocks CB.sub.1 to CB.sub.M and protected color pixel counts in the encoding blocks CB.sub.1 to CB.sub.M, the video encoding device allocates appropriate numbers of bits to the encoding blocks CB.sub.1 to CB.sub.M” [0017] “obtain block texture weights BTW.sub.1 to BTW.sub.M corresponding to the encoding blocks CB.sub.1 to CB.sub.M”). The spatial texture complexities meet the different materials. Before the effective filing date of the claimed invention, the substitution of one known element (the texture weights of Teng) for another (the importance map of BV) would have been obvious to one of ordinary skill in the art because such substitution would have yielded predictable results, namely to allocate the appropriate number of bits.

As such, BV as modified by Dos Santos and Teng teaches/suggests: wherein the machine-learning model is configured to provide a renderer-aware bit allocation for different materials across channels of the PBR texture set based on different weights applied by a renderer to different texture components (BV [0033] “the compression network generates a compressed representation 206 of the input image at or near the target bitrate” Dos Santos [0071] “a mipmap may include a predetermined number of texture sets each including the N+1 textures … such as Physically-Based Rendering (PBR), which requires a large number of textures to describe a material surface” Teng [0017] “obtain block texture weights BTW.sub.1 to BTW.sub.M corresponding to the encoding blocks CB.sub.1 to CB.sub.M”).

BV as modified by Dos Santos and Teng does not teach/suggest:

save the bit allocations for the texture components of the pixel region;
adopt one or more of the saved bit allocations to compress one or more texture components of an additional pixel region;

Leontaris, however, teaches/suggests save the bit allocations (Leontaris [0026] “allocate or improve rate control related budgets in encoding frames in the final splice into the compressed coded bitstream … bit allocation for the particular final splice may be decreased and saved bits may be allocated to the neighboring final splice”). Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to modify the bit allocations of BV as modified by Dos Santos and Teng to be saved as taught/suggested by Leontaris to improve the rate control.

As such, BV as modified by Dos Santos, Teng, and Leontaris teaches/suggests:

save the bit allocations for the texture components of the pixel region (BV [0033] “the compression network generates a compressed representation 206 of the input image at or near the target bitrate” Dos Santos [0071] “a mipmap may include a predetermined number of texture sets each including the N+1 textures” Leontaris [0026] “bit allocation for the particular final splice may be decreased and saved bits may be allocated to the neighboring final splice”);

adopt one or more of the saved bit allocations to compress one or more texture components of an additional pixel region (BV [0033] “the compression network generates a compressed representation 206 of the input image at or near the target bitrate” Dos Santos [0071] “a mipmap may include a predetermined number of texture sets each including the N+1 textures” Leontaris [0026] “bit allocation for the particular final splice may be decreased and saved bits may be allocated to the neighboring final splice”);

Regarding claim 2, BV as modified by Dos Santos, Teng, and Leontaris teaches/suggests: The method of Claim 1, further comprising:

receiving training texture components of a training physically-based rendering (PBR) texture set (BV [0048] “training inputs 502 are provided to the compression network 110” Dos Santos [0071] “a mipmap may include a predetermined number of texture sets each including the N+1 textures … such as Physically-Based Rendering (PBR), which requires a large number of textures to describe a material surface”);

encoding each of the training texture components at a plurality of training bitrates (BV [0048] “the compression network 110 generates a compressed representation of the input training image” Dos Santos [0071] “a mipmap may include a predetermined number of texture sets each including the N+1 textures”);

rendering a plurality of reconstructed images associated with a plurality of total training bitrates for encoding the training texture components based on combinations of decoded training texture components at the plurality of training bitrates (BV [0048] “The compressed representation is then provided to reconstruction network 304 which generates a reconstructed image” [0038] “the reconstruction network is a decoder neural network” Dos Santos [0071] “a mipmap may include a predetermined number of texture sets each including the N+1 textures” [In view of BV and Dos Santos, the combined bitrate of the texture set meets the total bitrate.]);

determining a desired reconstructed image for each of the plurality of total training bitrates for encoding the training texture components (BV [0049] “The reconstructed image and the training input are then provided to a discriminator 504 which attempts to distinguish between the original image and the reconstructed image” [0055] “The features learned by the discriminator network for both of its inputs are compared with each other using L1 Loss. This enforces the reconstructed image to be visually close to the input image” Dos Santos [0071] “a mipmap may include a predetermined number of texture sets each including the N+1 textures”); and

extracting a desired training bit allocation across the training texture components associated with the desired reconstructed image for each of the plurality of total training bitrates for encoding the training texture components (BV [0050] “The loss function 506 may include an Equivalence Distortion (ED) Loss. This loss function ensures bits are allocated optimally in the importance map … Let L.sub.E denote the equivalence loss obtained when importance values of different regions between the user input importance map and learned importance map are compared” [0053] “L.sub.whole penalizes the model when the sum of values in learned map exceed the user input map, thereby staying within the limits of available bit budget” Dos Santos [0071] “a mipmap may include a predetermined number of texture sets each including the N+1 textures” Teng [0017] “obtain block texture weights BTW.sub.1 to BTW.sub.M corresponding to the encoding blocks CB.sub.1 to CB.sub.M”).

The same rationale to combine as set forth in the rejection of claim 1 above is incorporated herein.

Regarding claim 4, BV as modified by Dos Santos, Teng, and Leontaris teaches/suggests: The method of Claim 2, further comprising:

determining one or more texture features for each of the training texture components (Dos Santos [0071] “a mipmap may include a predetermined number of texture sets each including the N+1 textures” Teng [0015] “according to spatial texture complexities of the encoding blocks CB.sub.1 to CB.sub.M and protected color pixel counts in the encoding blocks CB.sub.1 to CB.sub.M, the video encoding device allocates appropriate numbers of bits to the encoding blocks CB.sub.1 to CB.sub.M” [The spatial texture complexities meet the texture features.]); and

training the machine-learning model to learn the bit allocation for encoding each of texture components using the one or more texture features for each of the training texture components, the plurality of total training bitrates for encoding the training texture components, and the desired training bit allocation across the training texture components (BV [0031] “The encoder generates a latent space representation of the input image and the importance map network learns an importance map for the image” [0029] “the importance values are used to allocate more bits to more important regions and fewer bits to less important regions” Dos Santos [0071] “a mipmap may include a predetermined number of texture sets each including the N+1 textures” Teng [0017] “obtain block texture weights BTW.sub.1 to BTW.sub.M corresponding to the encoding blocks CB.sub.1 to CB.sub.M”).

The same rationale to combine as set forth in the rejection of claim 1 above is incorporated herein.

Regarding claim 5, BV, Dos Santos, Teng, and Leontaris are silent regarding: The method of Claim 4, wherein the one or more texture features comprises an image variance. However, the concept and advantages of the image variance are well known and expected in the art (Official Notice). It would have been obvious that the spatial texture complexities of BV as modified by Dos Santos, Teng, and Leontaris include the image variance for the bit allocation.

Regarding claim 6, Dos Santos, Teng, and Leontaris are silent regarding: The method of Claim 4, wherein the one or more texture features comprises an image mean. However, the concept and advantages of the image mean are well known and expected in the art (Official Notice). It would have been obvious that the spatial texture complexities of BV as modified by Dos Santos, Teng, and Leontaris include the image mean for the bit allocation.

Regarding claim 7, BV as modified by Dos Santos, Teng, and Leontaris teaches/suggests: The method of Claim 4, further comprising: extracting, using a neural network, the one or more texture features indicating a material for each of the training texture components (BV [0031] “The encoder generates a latent space representation of the input image and the importance map network learns an importance map for the image” Dos Santos [0071] “a mipmap may include a predetermined number of texture sets each including the N+1 textures … such as Physically-Based Rendering (PBR), which requires a large number of textures to describe a material surface” Teng [0015] “according to spatial texture complexities of the encoding blocks CB.sub.1 to CB.sub.M and protected color pixel counts in the encoding blocks CB.sub.1 to CB.sub.M, the video encoding device allocates appropriate numbers of bits to the encoding blocks CB.sub.1 to CB.sub.M”).

The same rationale to combine as set forth in the rejection of claim 1 above is incorporated herein.

Claims 8-9 and 11-14 recite limitation(s) similar in scope to those of claims 1-2 and 4-7, respectively, and are rejected for the same reason(s). BV as modified by Dos Santos, Teng, and Leontaris further teaches/suggests one or more non-transitory computer-readable storage media including instructions (BV Fig. 14: memory 1404); and one or more processors coupled to the storage media (BV Fig. 14: processor 1402).

Claims 15-16 and 18-20 recite limitation(s) similar in scope to those of claims 1-2 and 4-6, respectively, and are rejected for the same reason(s). BV as modified by Dos Santos, Teng, and Leontaris further teaches/suggests a non-transitory computer-readable medium comprising instructions (BV Fig. 14: memory 1404).

Claim(s) 3, 10, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over BV et al. (US 2023/0326088) in view of Dos Santos et al. (US 2018/0165869), Teng et al. (US 2017/0347107), and Leontaris et al. (US 2013/0243080) as applied to claims 1, 8, and 15 above, and further in view of Szilagyi et al. (US 2023/0119164).

Regarding claim 3, BV as modified by Dos Santos, Teng, and Leontaris teaches/suggests: The method of Claim 2, wherein determining the desired reconstructed image for each of the plurality of total training bitrates for encoding the training texture components further comprising:

determining image qualities of the plurality of reconstructed images based on comparisons to the training PBR texture set (BV [0055] “The features learned by the discriminator network for both of its inputs are compared with each other using L1 Loss” Dos Santos [0071] “a mipmap may include a predetermined number of texture sets each including the N+1 textures … such as Physically-Based Rendering (PBR), which requires a large number of textures to describe a material surface”); and

determining the desired reconstructed image for each of the plurality of total training bitrates for encoding the training texture components based on the image qualities (BV [0055] “This enforces the reconstructed image to be visually close to the input image” Dos Santos [0071] “a mipmap may include a predetermined number of texture sets each including the N+1 textures”).

The same rationale to combine as set forth in the rejection of claim 1 above is incorporated herein.

BV as modified by Dos Santos, Teng, and Leontaris further teaches/suggests peak signal to noise ratio values and structural similarity index measurements (BV [0064] “The PSNR and SSIM are reported for each of the compressed reconstruction”). BV, Dos Santos, Teng, and Leontaris are silent regarding that include comparisons of peak signal to noise ratio values and structural similarity index measurements.

Szilagyi, however, teaches/suggests comparisons of peak signal to noise ratio values and structural similarity index measurements (Szilagyi [0190]-[0091] “visualization data optimization process 248 may use the PSNR to measure the quality of reconstruction of lossy image compression codecs … may also use the SSIM approach, to evaluate the change in perception in the structural information between the models”). Before the effective filing date of the claimed invention, it would have been obvious for one of ordinary skill in the art to modify the PSNR and SSIM of BV as modified by Dos Santos, Teng, and Leontaris to be compared as taught/suggested by Szilagyi for quality.

Claims 10 and 17 recite limitation(s) similar in scope to those of claim 3, and are rejected for the same reason(s).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 2014/0254949 – bit budget
US 2014/0303965 – bit budget

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANH-TUAN V NGUYEN whose telephone number is 571-270-7513. The examiner can normally be reached on M-F 9AM-5PM ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JASON CHAN can be reached on 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANH-TUAN V NGUYEN/
Primary Examiner, Art Unit 2619
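For orientation on the technology in dispute: the contested limitation concerns splitting a bit budget across the channels of a PBR texture set according to weights a renderer applies to each texture component. A minimal sketch of weight-proportional allocation follows; the channel names and weights are hypothetical, and this illustrates only the general idea, not the applicant's claimed method or any cited reference's algorithm:

```python
# Hypothetical sketch: split an integer bit budget across PBR texture
# channels in proportion to renderer weights. All names and numbers are
# invented for illustration.

def allocate_bits(total_bits: int, weights: dict[str, int]) -> dict[str, int]:
    """Weight-proportional split of a bit budget across channels."""
    total_weight = sum(weights.values())
    alloc = {ch: total_bits * w // total_weight for ch, w in weights.items()}
    # Give any integer-division remainder to the highest-weighted channel
    # so the full budget is always spent.
    top = max(weights, key=weights.get)
    alloc[top] += total_bits - sum(alloc.values())
    return alloc

# Example: a renderer that weights albedo most heavily (hypothetical weights).
budget = allocate_bits(100_000, {"albedo": 50, "normal": 30,
                                 "roughness": 15, "metallic": 5})
assert sum(budget.values()) == 100_000  # budget is fully allocated
```

Integer weights and floor division keep the split exact; the remainder rule is one simple convention for spending the last few bits.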

Prosecution Timeline

Feb 13, 2023: Application Filed
Mar 08, 2025: Non-Final Rejection — §103
Aug 11, 2025: Examiner Interview Summary
Aug 11, 2025: Applicant Interview (Telephonic)
Aug 13, 2025: Response Filed
Sep 21, 2025: Final Rejection — §103
Dec 16, 2025: Examiner Interview Summary
Dec 16, 2025: Applicant Interview (Telephonic)
Dec 18, 2025: Request for Continued Examination
Jan 06, 2026: Response after Non-Final Action
Jan 09, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591359: ELECTRONIC DEVICE COMPRISING DISPLAY THAT OPTIMALLY DISPLAY CONTENT WITH RESPECT TO CAMERA HOLE, AND METHOD FOR CONTROLLING DISPLAY THEREOF
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12592033: METHOD AND APPARATUS FOR DETECTING PICKED OBJECT, COMPUTER DEVICE, READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12573132: ASSIGNING PRIMITIVES TO TILES IN A GRAPHICS PROCESSING SYSTEM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12573161: Learning Articulated Shape Reconstruction from Imagery
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12561893: COLOR AND INFRA-RED THREE-DIMENSIONAL RECONSTRUCTION USING IMPLICIT RADIANCE FUNCTIONS
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 92% (+19.2%)
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 489 resolved cases by this examiner. Grant probability derived from career allow rate.
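The headline projections can be reproduced from the career counts reported above. The additive treatment of the interview lift is an assumption about how the dashboard combines the two figures:

```python
# Reproduce the headline projections from the examiner's career counts.
# Assumption: grant probability = career allow rate (355 / 489), and the
# stated +19.2% interview lift is applied additively.
granted, resolved = 355, 489
base = granted / resolved          # 0.7259... -> shown as 73%
with_interview = base + 0.192      # stated interview lift

assert round(base * 100) == 73
assert round(with_interview * 100) == 92
```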
