Prosecution Insights
Last updated: April 19, 2026
Application No. 18/946,460

ELECTRONIC DEVICE PROCESSING IMAGE USING AI ENCODING/DECODING, AND METHOD FOR CONTROLLING SAME

Non-Final OA (§103)

Filed: Nov 13, 2024
Examiner: PICON-FELICIANO, ANA J
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 69% (Favorable)
OA Rounds: 1-2
To Grant: 2y 11m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 69% (294 granted / 428 resolved; +10.7% vs TC avg, above average)
Interview Lift: +21.8% for resolved cases with interview (strong)
Typical Timeline: 2y 11m avg prosecution; 31 currently pending
Career History: 459 total applications across all art units

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)

TC averages are estimates • Based on career data from 428 resolved cases
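The headline numbers in this report are simple arithmetic on the career counts. A quick sanity check (assuming the percentages are plain ratios and that the TC-average and interview deltas are additive offsets, which the report does not state explicitly):

```python
# Reproduce the report's headline figures from the raw career counts.
granted, resolved = 294, 428

allow_rate = granted / resolved * 100   # career allow rate
print(round(allow_rate, 1))             # 68.7, shown as 69%

tc_avg = allow_rate - 10.7              # "+10.7% vs TC avg" implies a ~58% baseline
print(round(tc_avg, 1))                 # 58.0

with_interview = allow_rate + 21.8      # "+21.8% interview lift" added to the base rate
print(round(with_interview))            # 90, matching "90% With Interview"
```

The three derived figures line up with the dashboard's 69% / 58%-baseline / 90% readouts, which supports the additive-offset reading.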

Office Action

§103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This Office Action is sent in response to Applicant's Communication received on November 13, 2024 for application 18/946,460. This Office hereby acknowledges receipt of the following, placed of record in the file: Specification, Drawings, Abstract, Oath/Declaration, and Claims.

3. Claims 1-16 are presented for examination.

Priority

4. Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. KR 10-2022-0079440, filed on June 29, 2022.

Information Disclosure Statement

5. The information disclosure statement (IDS) submitted on November 13, 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

8. Claims 1, 2, 7, 8, 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over VAN ROZENDAAL et al. (US 2024/0205427 A1) (hereinafter Van Rozendaal) in view of COBAN et al. (US 2022/0086463 A1) (hereinafter Coban).

Regarding claims 1 and 11, Van Rozendaal discloses an electronic device configured for artificial intelligence (AI) encoding [See Van Rozendaal: at least Figs. 1-9, 11-12, 14 and par. 23 regarding electronic device configured to compress data using machine learning systems and tuning machine learning systems for compressing the data] and a method for processing an image by using artificial intelligence (AI) encoding [See Van Rozendaal: at least Figs. 1-9, 11-12, 14 and par. 23 regarding method for electronic device configured to compress data using machine learning systems and tuning machine learning systems for compressing the data], the electronic device and the method comprising: memory storing a trained first neural network model [See Van Rozendaal: at least Figs. 1, 4 and 14, and par.
6, 54, 62, 65, 125, 232 regarding Variables (e.g., neural signals and synaptic weights), system parameters associated with a computational device (e.g., a neural network with weights), delays, frequency bin information, task information, among other information, may be stored in a memory block associated with a neural processing unit (NPU) 108, in a memory block associated with a CPU 102, in a memory block associated with a graphics processing unit (GPU) 104, in a memory block associated with a digital signal processor (DSP) 106, in a memory block 118, or distributed across multiple blocks. Instructions executed at the CPU 102 may be loaded from a program memory associated with the CPU 102 and/or a memory block 118.]; a communication interface[See Van Rozendaal: at least Figs. 1, 4 and 14, and par. 65, 112, 125, 230, 234-236 regarding one or more networking interfaces (e.g., wired and/or wireless communications interfaces and the like)…]; and at least one processor [See Van Rozendaal: at least Figs. 1, 4 and 14, and par. 5-6, 62-63, 125, 226-233, 238, 252 regarding a central processing unit (CPU) 102 or a multi-core CPU configured to perform one or more of the functions…The one or more compute components can include a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), and/or an image signal processor (ISP)…]configured to: obtain AI information [See Van Rozendaal: at least Figs. 1-9, 11-12, 14 and par. 128, 139, 145, 157-159, 162, 170 regarding The encoder 502 can receive an image 501 (image xi) as input and can map and/or convert the image 501 (image xi) to a latent code 504 (latent zi) in a latent code space. The image 501 can represent a still image and/or a video frame associated with a sequence of frames (e.g., a video)… As shown in FIG. 6, the neural network compression system 600 can be trained using a training dataset 602. 
The training dataset 602 can be processed by an encoder 606 of a codec 604 to generate a latent space representation 608 (z2) of the training dataset 602..]; and input an image into a first neural network model to obtain an AI-encoded image / obtaining an AI-encoded image by inputting the image into a first neural network model [See Van Rozendaal: at least Figs. 1-9, 11-12, 14 and par. 128, 139, 145, 157-159, 162, 170, 193-201 regarding The encoder 502 can receive an image 501 (image xi) as input and can map and/or convert the image 501 (image xi) to a latent code 504 (latent zi) in a latent code space. The image 501 can represent a still image and/or a video frame associated with a sequence of frames (e.g., a video)… At block 1102, the process 1100 can include receiving, by a neural network compression system, input data for compression by the neural network compression system. At block 1104, the process 1100 can include determining a set of updates for the neural network compression system. In some examples, the set of updates can include updated model parameters (e.g., model parameter updates (θ) and/or quantized model parameter updates (δ)) tuned using the input data. At block 1106, the process 1100 can include generating, by the neural network compression system using a latent prior (e.g., latent prior 506, latent prior 622, latent prior 708, latent prior 806, latent prior 910), a first bitstream (e.g., bitstream 510, bitstream 520, bitstream 710, bitstream 810, bitstream 916) including a compressed version of the input data...]. 
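The encode path the examiner maps onto Van Rozendaal (image into a trained encoder network, latent code out, then quantization and entropy coding into a bitstream) can be illustrated with a toy sketch. The linear "encoder", uniform quantizer, and byte packing below are illustrative stand-ins for the cited neural networks and arithmetic coder, not code from either reference:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(image, weights):
    """Toy stand-in for a learned encoder: one linear layer mapping an
    image to a lower-dimensional latent code (cf. image x_i -> latent z_i)."""
    return image.flatten() @ weights

def ai_encode(image, weights):
    """Map image -> latent, quantize, and pack the latent into a bitstream.
    Rounding and byte packing are placeholders for the quantizer and
    arithmetic coder described in the cited reference."""
    latent = encoder(image, weights)
    quantized = np.round(latent).astype(np.int8)   # quantization step
    bitstream = quantized.tobytes()                # stand-in for entropy coding
    return bitstream

image = rng.standard_normal((8, 8))            # an 8x8 "image"
weights = rng.standard_normal((64, 16)) * 0.1  # 64 pixels -> 16 latent dims
bitstream = ai_encode(image, weights)
print(len(bitstream))                          # 16 bytes: one per latent dim
```

Real learned codecs replace the single linear layer with deep convolutional networks and the byte packing with arithmetic coding under a learned prior, but the dataflow is the same.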
Van Rozendaal does not explicitly disclose obtain / obtaining AI decoding information of an external device and context information of the electronic device; identify / identifying operation setting information associated with AI encoding based on the AI decoding information of the external device and the context information of the electronic device; and input an image into a first neural network model to which the operation setting information is applied to obtain an AI-encoded image / obtaining an AI-encoded image by inputting the image into a first neural network model to which the operation setting information associated with the AI encoding is applied.

However, obtaining AI decoding information of an external device and context information of the electronic device; identifying operation setting information associated with the AI encoding based on the AI decoding information and inputting an image to a first neural network to which the operation setting information is applied to obtain an AI-encoded image was well known in the art at the time the invention was filed as evident from the teaching of Coban [See Coban: at least Figs. 1-9 and par. 28, 60, 99, 122-146 regarding At block 702, the process 700 includes generating, by a first convolutional layer of an encoder sub-network of a neural network system, output values associated with a luminance channel of a frame. For example, as described above with respect to FIG. 6, convolutional layer 680 of encoder sub-network 610 of a neural network system outputs values associated with Y-channel (e.g., luminance channel) inputs for a frame. At block 704, the process 700 includes generating, by a second convolutional layer of the encoder sub-network, output values associated with at least one chrominance channel of the frame. In the example of FIG. 6, convolutional layer 681 of encoder sub-network 610 outputs values associated with UV-channel inputs 604 (e.g., at least one chrominance channel) of the frame. At block 706, the process 700 includes generating a combined representation of the frame by combining the output values associated with the luminance channel of the frame and the output values associated with the at least one chrominance channel of the frame. In the corresponding structure of FIG. 6, the output values of convolutional layers 681 and 680 are combined at merging structure 699…].

Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Van Rozendaal with Coban teachings by including “obtain / obtaining AI decoding information of an external device and context information of the electronic device; identify / identifying operation setting information associated with AI encoding based on the AI decoding information of the external device and the context information of the electronic device; and input an image into a first neural network model to which the operation setting information is applied to obtain an AI-encoded image / obtaining an AI-encoded image by inputting the image into a first neural network model to which the operation setting information associated with the AI encoding is applied” because this combination has the benefit of providing improved image quality, reduced processing resource usage, or both [See Coban: at least par. 47-52].

Further on, when combined with Coban teachings, Van Rozendaal and Coban teach or suggest obtain / obtaining a compressed image by encoding the AI-encoded image [See Van Rozendaal: at least Figs. 1-9, 11-12, 14 and par.
128, 139, 145, 157-159, 162, 170, 193-217 regarding At block 1106, the process 1100 can include generating, by the neural network compression system using a latent prior (e.g., latent prior 506, latent prior 622, latent prior 708, latent prior 806, latent prior 910), a first bitstream (e.g., bitstream 510, bitstream 520, bitstream 710, bitstream 810, bitstream 916) including a compressed version of the input data. At block 1108, the process 1100 can include generating, by the neural network compression system using the latent prior and a model prior (e.g., model prior 714, model prior 816 tra, model prior 912, model prior p[δ]), a second bitstream (e.g., bitstream 510, bitstream 520, bitstream 712, bitstream 811, bitstream 918) including a compressed version of the updated model parameters (e.g., quantized model parameter updates (δ)). See Coban: at least Figs. 6-7 and par. 142-144 regarding At block 708, the process 700 includes generating encoded video data based on the combined representation of the frame. In the example of FIG. 6, the combined values generated by merging structure 699 are then processed by additional convolutional layers and 614 as well as 612, 613, and 614 as well as GDN layers 616, 617. Quantizer 632 and encoder 634 are then used to generate encoded video data based on the combined representation of the frame from merging structure 699.]; and transmit the compressed image and AI encoding information associated with the first neural network model to the external device through the communication interface / transmitting the compressed image and AI encoding information associated with the first neural network model to the external device[See Van Rozendaal: at least Figs. 1-9, 11-12, 14 and par. 128, 139, 145, 157-159, 162, 170, 193-217 regarding At block 1110, the process 1100 can include outputting the first bitstream and the second bitstream for transmission to a receiver. 
In some examples, the receiver can include a decoder (e.g., decoder 514, decoder 610, decoder 718, decoder 814, decoder 922)… See Coban: at least par. 144 regarding In some examples, the process 700 includes transmitting the encoded video data over a transmission medium to at least one device.].

Regarding claim 7, Van Rozendaal discloses an electronic device configured for artificial intelligence (AI) decoding [See Van Rozendaal: at least Figs. 1-9, 11-14 and par. 4, 6-7, 23 regarding electronic device configured to decompress data using machine learning systems and tuning machine learning systems for compressing the data], the electronic device comprising: memory storing a trained decoder neural network model [See Van Rozendaal: at least Figs. 1, 4 and 14, and par. 6, 54, 62, 65, 125, 232 regarding Variables (e.g., neural signals and synaptic weights), system parameters associated with a computational device (e.g., a neural network with weights), delays, frequency bin information, task information, among other information, may be stored in a memory block associated with a neural processing unit (NPU) 108, in a memory block associated with a CPU 102, in a memory block associated with a graphics processing unit (GPU) 104, in a memory block associated with a digital signal processor (DSP) 106, in a memory block 118, or distributed across multiple blocks. Instructions executed at the CPU 102 may be loaded from a program memory associated with the CPU 102 and/or a memory block 118.]; a communication interface [See Van Rozendaal: at least Figs. 1, 4 and 14, and par. 65, 112, 125, 230, 234-236 regarding one or more networking interfaces (e.g., wired and/or wireless communications interfaces and the like)…]; and at least one processor [See Van Rozendaal: at least Figs. 1, 4 and 14, and par.
5-6, 62-63, 125, 226-233, 238, 252 regarding a central processing unit (CPU) 102 or a multi-core CPU configured to perform one or more of the functions…The one or more compute components can include a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), and/or an image signal processor (ISP)…] configured to: receive a compressed image and AI encoding information through the communication interface [See Van Rozendaal: at least Figs. 1-9, 11-14 and par. 130, 139-141, 149-151,154, 159-162, 170, 174, 218 regarding The decoder 514 can receive encoded bitstream 510 from the arithmetic encoder 508 and use the latent prior 506 to decode latent code 504 (latent zi) in the encoded bitstream 510… At block 1302, the process 1300 includes receiving a compressed version of an updated neural network compression system (and/or one or more parameters of the neural network compression system) and a compressed version of encoded image content…], obtain a reconstructed image by decoding the compressed image and input the reconstructed image into a decoder neural network model to obtain an AI-decoded image [See Van Rozendaal: at least Figs. 1-9, 11-14 and par. 130, 139-141, 149-151,154, 159-162, 170, 174, 221 regarding The decoder 514 can decode latent code 504 (latent zi) into approximate reconstruction image 516 (reconstruction x i ^ ). In some cases, the decoder 514 can implement a learnable function parameterized by θ. For example, the decoder 514 can implement function pθ(x|z). 
The learnable function implemented by the decoder 514 can be shared and/or made available at both the sender side (e.g., the encoder 502 and/or the arithmetic encoder 508) and the receiver side (e.g., the arithmetic decoder 512 and/or the decoder 514)… At block 1304, the process 1300 can include decompressing, using a shared probabilistic model, the compressed version of the updated neural network compression system into an updated neural network compression system model. At block 1306, the process 1300 can include decompressing, using the updated probabilistic model and the updated neural network compression system model, the compressed version of the encoded image content into a latent space representation. At block 1308, the process 1300 can include generating reconstructed image content using the updated neural network compression system model and the latent space representation…].

Van Rozendaal does not explicitly disclose identify operation setting information associated with AI decoding based on the AI encoding information, obtain a reconstructed image by decoding the compressed image, and input the reconstructed image into a decoder neural network model to which the operation setting information associated with the AI decoding is applied to obtain an AI-decoded image.

However, identifying operation setting information associated with AI decoding based on the AI encoding information; obtaining a reconstructed image by decoding the compressed image; and inputting the reconstructed image into a decoder neural network model to which the operation setting information associated with the AI decoding is applied to obtain an AI-decoded image was well known in the art at the time the invention was filed as evident from the teaching of Coban [See Coban: at least Figs. 1-9 and par. 28, 60, 99, 122-146, 148-159 regarding At block 802, the process 800 includes obtaining an encoded frame.
The encoded frame can, for example, include encoded video data generated in block 708 above, or in accordance with operations of any other similar process to generate an encoded frame. In the example of FIG. 6 AD 638 receives both frames of encoded video data as compressed transmitted binary data 636, as well as entropy model data generated from transmitted binary data 646 which is used to improve the quality of the decoded video data. At block 804, the process 800 includes generating, by a first convolutional layer of a decoder sub-network of a neural network system, reconstructed output values associated with a luminance channel of the encoded frame. In the example of FIG. 6, after inverse processing of the data using convolutional layers 661, 662, and 663 corresponding to convolutional layers 614, 613, and 612, as well as IGDN layers 665 and 666, the video data is split into data to be output as reconstructed Y data to be output at reconstructed Y channel 670, and reconstructed UV data to be output as reconstructed UV channel 672. At block 806, the process 800 includes generating, by a second convolutional layer of the decoder sub-network, reconstructed output values associated with at least one chrominance channel of the encoded frame. At block 808, the process 800 includes generating an output frame including the reconstructed output values associated with the luminance channel and the reconstructed output values associated with the at least one chrominance channel...] 
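The complementary decode path mapped above (receive the bitstream together with the sender's AI encoding information, use that information to configure the decoder, and reconstruct the image) can be sketched the same way. The dict-based side information and single linear "decoder" layer are hypothetical illustrations, not the claimed implementation:

```python
import numpy as np

def ai_decode(bitstream, encoding_info, decoder_weights):
    """Toy stand-in for the claimed decode path: read the sender's AI
    encoding information, use it to configure the decoder, then map the
    entropy-decoded latent back to an image."""
    latent_dims = encoding_info["latent_dims"]   # operation setting taken from encoding info
    latent = np.frombuffer(bitstream, dtype=np.int8).astype(np.float64)
    assert latent.size == latent_dims            # configuration must match the bitstream
    flat = latent @ decoder_weights              # decoder network (one linear layer here)
    side = encoding_info["image_side"]
    return flat.reshape(side, side)              # reconstructed (AI-decoded) image

rng = np.random.default_rng(1)
bitstream = rng.integers(-5, 5, 16, dtype=np.int8).tobytes()
info = {"latent_dims": 16, "image_side": 8}     # hypothetical side information
weights = rng.standard_normal((16, 64)) * 0.1   # 16 latent dims -> 64 pixels
image_hat = ai_decode(bitstream, info, weights)
print(image_hat.shape)                          # (8, 8)
```

The point of the sketch is only the dataflow: the decoder's configuration is derived from side information sent with the bitstream, which is the role the claims assign to the "operation setting information".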
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Van Rozendaal with Coban teachings by including “identify operation setting information associated with AI decoding based on the AI encoding information, obtain a reconstructed image by decoding the compressed image, and input the reconstructed image into a decoder neural network model to which the operation setting information associated with the AI decoding is applied to obtain an AI-decoded image” because this combination has the benefit of providing improved image quality, reduced processing resource usage, or both [See Coban: at least par. 47-52].

Further on, when combined with Coban teachings, Van Rozendaal and Coban teach or suggest transmit AI decoding information related to the decoder neural network model to an external device [See Van Rozendaal: at least Figs. 1-9, 11-12, 14 and par. 50, 128, 139, 145, 157-159, 162, 170, 193-217, 223 regarding In block 1310, the process 1300 can include outputting the reconstructed image content... See Coban: at least par. 150-159 regarding At block 808, the process 800 includes generating an output frame including the reconstructed output values associated with the luminance channel and the reconstructed output values associated with the at least one chrominance channel. In the example of FIG. 6, after inverse processing of the data using convolutional layers 661, 662, and 663 corresponding to convolutional layers 614, 613, and 612, as well as IGDN layers 665 and 666, the video data is split into data to be output as reconstructed Y data to be output at reconstructed Y channel 670, and reconstructed UV data to be output as reconstructed UV channel 672].

Regarding claim 2, Van Rozendaal and Coban teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim.
Further on, Van Rozendaal and Coban teach or suggest further comprising: a display [See Van Rozendaal: at least par. 23, 64, 65, 125 regarding In some aspects, the apparatus further includes a display for displaying one or more images, notifications, and/or other displayable data], wherein the context information of the electronic device comprises: at least one of a performance information of the electronic device or a state information of the electronic device [See Van Rozendaal: at least par. 58, 97, 166 regarding In some cases, an RD-AE can be fine-tuned on data to be transmitted to a receiver, such as a decoder. In some examples, by fine-tuning the RD-AE on a datapoint, the RD-AE can obtain high compression (e.g., Rate/Distortion) performance. An encoder associated with the RD-AE can send the RD-AE model or part of the RD-AE model to a receiver (e.g., a decoder) for the receiver to decode a bitstream including compressed data transmitted by the encoder…], and wherein the performance information of the electronic device comprises: at least one of information on an image size that can be processed, information on a scanning rate of the display, information on a number of pixels of the display, or information on a parameter of the first neural network model [See Van Rozendaal: at least par. 58, 97-98, 166 regarding In some cases, the RD-AE can also be fine-tuned for a particular datapoint to be sent to and decoded by a receiver. In some examples, by fine-tuning the RD-AE on a datapoint, the RD-AE can obtain a high compression (Rate/Distortion) performance. An encoder associated with the RD-AE can send the AE model or part of the AE model to a receiver (e.g., a decoder) to decode the bitstream. In some cases, a neural network compression system can reconstruct an input instance (e.g., an input image, video, audio, etc.) from a (quantized) latent representation. The neural network compression system can also use a prior to losslessly compress the latent representation.
In some cases, the neural network compression system can determine a test-time data distribution is known and relatively low entropy (e.g. a camera watching a static scene, a dash cam in an autonomous car, etc.), and can be fine-tuned or adapted to such distribution. The fine-tuning or adaptation can lead to improved rate/distortion (RD) performance.], and wherein the state information of the electronic device comprises: at least one of information on a ratio of a remaining power, information on a capacity of the remaining power, or information on available time [See Van Rozendaal: at least par. 104, 167 regarding When coding video data, the latent code space can have a t variable or position, with the t variable in a representing a timestamp block of video data (in addition to the spatial x- and y-coordinates). By using the two dimensions of the horizontal and vertical pixel positions, the vector can describe an image patch in the image x…The neural network compression system 800 can be trained end-to-end. In some cases, the RDM loss can be minimized at inference time end-to-end. In some examples, a certain amount of compute can be spent once (e.g., fine-tuning the model) and high compression ratios can be subsequently obtained without extra cost to the receiver side…], and wherein the operation setting information associated with the AI encoding comprises: at least one of information on a number of layers of the first neural network model, information on a number of channels for each layer of the first neural network model, information on a filter size of the first neural network model, information on stride of the first neural network model, information on pulling of the first neural network model, or the information on the parameter of the first neural network model [See Van Rozendaal: at least par. 95, 114-118, 137, 149 regarding The received bitstream 415 can be input into the arithmetic decoder 426 to obtain one or more codes z from the bitstream. 
The arithmetic decoder 426 may extract a decompressed code z based on a probability distribution P(z) generated by the code model 424 over a set of possible codes and information associating each generated code z with a bitstream… In some cases, the RD-AE system can be fine-tuned on a single datapoint being compressed and sent to a receiver for decompression… The bitstream 712 can include compressed data representing the fine-tuned parameters of the encoder 702 and the latent prior 708, which the arithmetic decoder 716 can use to obtain the fine-tuned parameters of the encoder 702 and the latent prior 708… See Coban: at least Figs. 1-9 and par. 28, 60, 99, 122-146, 148-159 regarding At block 802, the process 800 includes obtaining an encoded frame. The encoded frame can, for example, include encoded video data generated in block 708 above, or in accordance with operations of any other similar process to generate an encoded frame. In the example of FIG. 6 AD 638 receives both frames of encoded video data as compressed transmitted binary data 636, as well as entropy model data generated from transmitted binary data 646 which is used to improve the quality of the decoded video data. At block 804, the process 800 includes generating, by a first convolutional layer of a decoder sub-network of a neural network system, reconstructed output values associated with a luminance channel of the encoded frame...].

Regarding claim 8, Van Rozendaal and Coban teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim.
Further on, Van Rozendaal and Coban teach or suggest wherein the operation setting information associated with the AI decoding comprises: at least one of information on a number of layers of the decoder neural network model, information on a number of channels for each layer of the decoder neural network model, information on a filter size of the decoder neural network model, information on stride of the decoder neural network model, information on pulling of the decoder neural network model, or information on a parameter of the decoder neural network model[See Van Rozendaal: at least par. 95, 114-118, 137, 149 regarding The received bitstream 415 can be input into the arithmetic decoder 426 to obtain one or more codes z from the bitstream. The arithmetic decoder 426 may extract a decompressed code z based on a probability distribution P(z) generated by the code model 424 over a set of possible codes and information associating each generated code z with a bitstream… In some cases, the RD-AE system can be fine-tuned on a single datapoint being compressed and sent to a receiver for decompression… The bitstream 712 can include compressed data representing the fine-tuned parameters of the encoder 702 and the latent prior 708, which the arithmetic decoder 716 can use to obtain the fine-tuned parameters of the encoder 702 and the latent prior 708…See Coban: at least Figs. 1-9 and par. 28, 60, 99, 122-146, 148-159 regarding At block 802, the process 800 includes obtaining an encoded frame. The encoded frame can, for example, include encoded video data generated in block 708 above, or in accordance with operations of any other similar process to generate an encoded frame. In the example of FIG. 6 AD 638 receives both frames of encoded video data as compressed transmitted binary data 636, as well as entropy model data generated from transmitted binary data 646 which is used to improve the quality of the decoded video data. 
At block 804, the process 800 includes generating, by a first convolutional layer of a decoder sub-network of a neural network system, reconstructed output values associated with a luminance channel of the encoded frame...].

Regarding claim 12, Van Rozendaal and Coban teach all of the limitations of claim 11, and are analyzed as previously discussed with respect to that claim.

Further on, Van Rozendaal and Coban teach or suggest wherein the context information of the electronic device comprises: at least one of a performance information of the electronic device or a state information of the electronic device [See Van Rozendaal: at least par. 58, 97, 166 regarding In some cases, an RD-AE can be fine-tuned on data to be transmitted to a receiver, such as a decoder. In some examples, by fine-tuning the RD-AE on a datapoint, the RD-AE can obtain high compression (e.g., Rate/Distortion) performance. An encoder associated with the RD-AE can send the RD-AE model or part of the RD-AE model to a receiver (e.g., a decoder) for the receiver to decode a bitstream including compressed data transmitted by the encoder…], and wherein the performance information of the electronic device comprises: at least one of information on an image size that can be processed, information on a scanning rate of a display, information on a number of pixels of the display, or information on a parameter of the first neural network model [See Van Rozendaal: at least par. 58, 97-98, 166 regarding In some cases, the RD-AE can also be fine-tuned for a particular datapoint to be sent to and decoded by a receiver. In some examples, by fine-tuning the RD-AE on a datapoint, the RD-AE can obtain a high compression (Rate/Distortion) performance. An encoder associated with the RD-AE can send the AE model or part of the AE model to a receiver (e.g., a decoder) to decode the bitstream. In some cases, a neural network compression system can reconstruct an input instance (e.g., an input image, video, audio, etc.)
from a (quantized) latent representation. The neural network compression system can also use a prior to losslessly compress the latent representation. In some cases, the neural network compression system can determine a test-time data distribution is known and relatively low entropy (e.g. a camera watching a static scene, a dash cam in an autonomous car, etc.), and can be fine-tuned or adapted to such distribution. The fine-tuning or adaptation can lead to improved rate/distortion (RD) performance.], and wherein the state information of the electronic device comprises: at least one of information on a ratio of a remaining power, information on a capacity of the remaining power, or information on available time[See Van Rozendaal: at least par. 104, 167 regarding When coding video data, the latent code space can have a t variable or position, with the t variable in a representing a timestamp block of video data (in addition to the spatial x- and y-coordinates). By using the two dimensions of the horizontal and vertical pixel positions, the vector can describe an image patch in the image x…The neural network compression system 800 can be trained end-to-end. In some cases, the RDM loss can be minimized at inference time end-to-end. In some examples, a certain amount of compute can be spent once (e.g., fine-tuning the model) and high compression ratios can be subsequently obtained without extra cost to the receiver side…], and wherein the operation setting information associated with the AI encoding comprises: at least one of information on a number of layers of the first neural network model, information on a number of channels for each layer of the first neural network model, information on a filter size of the first neural network model, information on stride of the first neural network model, information on pulling of the first neural network model, or the information on the parameter of the first neural network model[See Van Rozendaal: at least par. 
95, 114-118, 137, 149 regarding The received bitstream 415 can be input into the arithmetic decoder 426 to obtain one or more codes z from the bitstream. The arithmetic decoder 426 may extract a decompressed code z based on a probability distribution P(z) generated by the code model 424 over a set of possible codes and information associating each generated code z with a bitstream… In some cases, the RD-AE system can be fine-tuned on a single datapoint being compressed and sent to a receiver for decompression… The bitstream 712 can include compressed data representing the fine-tuned parameters of the encoder 702 and the latent prior 708, which the arithmetic decoder 716 can use to obtain the fine-tuned parameters of the encoder 702 and the latent prior 708… See Coban: at least Figs. 1-9 and par. 28, 60, 99, 122-146, 148-159 regarding At block 802, the process 800 includes obtaining an encoded frame. The encoded frame can, for example, include encoded video data generated in block 708 above, or in accordance with operations of any other similar process to generate an encoded frame. In the example of FIG. 6 AD 638 receives both frames of encoded video data as compressed transmitted binary data 636, as well as entropy model data generated from transmitted binary data 646 which is used to improve the quality of the decoded video data. At block 804, the process 800 includes generating, by a first convolutional layer of a decoder sub-network of a neural network system, reconstructed output values associated with a luminance channel of the encoded frame...]. Allowable Subject Matter 9. Claims 3-6, 9-10 and 13-16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Conclusion 10. 
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANA J PICON-FELICIANO whose telephone number is (571)272-5252. The examiner can normally be reached Monday-Friday 9:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Kelley can be reached at 571 272 7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Ana Picon-Feliciano/Examiner, Art Unit 2482 /CHRISTOPHER S KELLEY/Supervisory Patent Examiner, Art Unit 2482
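The Van Rozendaal passages the examiner relies on describe a rate-distortion autoencoder (RD-AE) pipeline: an encoder quantizes a latent representation, a learned prior P(z) drives lossless entropy coding of the codes, and a decoder reconstructs the input, with fine-tuning trading a one-time compute cost for better rate/distortion. As a rough, toy illustration of that rate-plus-distortion arithmetic only (not the claimed invention and not either reference's actual implementation; `encode`, `decode`, and `rate_bits` are hypothetical names introduced here):

```python
import math

def encode(latent, step=0.5):
    # Toy stand-in for an RD-AE encoder's quantization of a latent vector
    # into integer codes z.
    return [round(x / step) for x in latent]

def decode(codes, step=0.5):
    # Toy decoder: dequantize codes back to a reconstructed latent.
    return [c * step for c in codes]

def rate_bits(codes, prior):
    # An arithmetic coder approaches -sum(log2 P(z_i)) bits under the
    # learned prior; unseen codes get a tiny floor probability.
    return -sum(math.log2(prior.get(c, 1e-9)) for c in codes)

latent = [0.9, -0.4, 0.1, 2.1]
codes = encode(latent)                     # quantized codes: [2, -1, 0, 4]
recon = decode(codes)                      # reconstruction: [1.0, -0.5, 0.0, 2.0]
distortion = sum((a - b) ** 2 for a, b in zip(latent, recon))
prior = {0: 0.4, -1: 0.15, 1: 0.15, 2: 0.1, 4: 0.05}
rd_loss = rate_bits(codes, prior) + 10.0 * distortion  # rate + lambda * distortion
```

Fine-tuning the model on a single datapoint, as the reference describes, amounts to minimizing this kind of RD loss for that datapoint before transmitting the (compressed) parameters alongside the bitstream.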

Prosecution Timeline

Nov 13, 2024
Application Filed
Jan 10, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598287
DISPLAY DEVICE, METHOD, COMPUTER PROGRAM CODE, AND APPARATUS FOR PROVIDING A CORRECTION MAP FOR A DISPLAY DEVICE, METHOD AND COMPUTER PROGRAM CODE FOR OPERATING A DISPLAY DEVICE
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12593021
ELECTRONIC APPARATUS AND METHOD FOR CONTROLLING THEREOF
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12567163
IMAGING SYSTEM AND OBJECT DEPTH ESTIMATION METHOD
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12561788
FLUORESCENCE MICROSCOPY METROLOGY SYSTEM AND METHOD OF OPERATING FLUORESCENCE MICROSCOPY METROLOGY SYSTEM
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12554122
TECHNIQUES FOR PRODUCING IMAGERY IN A VISUAL EFFECTS SYSTEM
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
69%
Grant Probability
90%
With Interview (+21.8%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 428 resolved cases by this examiner. Grant probability derived from career allow rate.
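The projected figures above are consistent with simply adding the observed interview lift to the examiner's career allow rate (294 granted of 428 resolved). This is an assumption about how the dashboard derives its numbers, not a documented formula; a quick check of the arithmetic:

```python
# Career allow rate from the examiner's record: 294 granted / 428 resolved.
career_allow_rate = 294 / 428            # ~0.687, i.e. the 69% shown above

# Assumed model: the +21.8% interview lift is added directly.
interview_lift = 0.218
with_interview = career_allow_rate + interview_lift

print(round(career_allow_rate * 100))    # 69
print(round(with_interview * 100))       # 90, matching "With Interview"
```

Under this additive reading, the 90% "with interview" projection is just 68.7% + 21.8%, rounded; a multiplicative or regression-based model would give slightly different numbers.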
