Prosecution Insights
Last updated: April 19, 2026
Application No. 18/928,182

ENHANCED SYSTEMS AND METHODS FOR SYNTHETIC APERTURE RADAR IMAGE COMPRESSION WITH IMPROVED PHASE RECOVERY AND UNWRAPPING

Non-Final Office Action: §102 / §103
Filed: Oct 28, 2024
Examiner: DHOOGE, DEVIN J
Art Unit: 2677
Tech Center: 2600 (Communications)
Assignee: AtomBeam Technologies Inc.
OA Round: 2 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 2-3
Estimated Time to Grant: 3y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (above average; 50 granted / 71 resolved; +8.4% vs Tech Center avg)
Interview Lift: +42.9% (strong; based on resolved cases with interview)
Typical Timeline: 3y 5m avg prosecution; 48 applications currently pending
Career History: 119 total applications across all art units

Statute-Specific Performance

Statute | Rate | vs TC Avg
§101 | 8.2% | -31.8%
§103 | 49.4% | +9.4%
§102 | 35.8% | -4.2%
§112 | 5.7% | -34.3%

Tech Center averages are estimates. Based on career data from 71 resolved cases.

Office Action

Rejections under §102 and §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Amendment

This communication is in response to the action filed on 01/26/2026. Claims 1-20 are pending.

Response to Arguments

Applicant's arguments filed on 01/26/2026 on pages 6-12, under REMARKS, with respect to the 35 U.S.C. 103 rejections of claims 1-20 have been fully considered and are persuasive. The rejections of the claims have been withdrawn. However, upon further consideration, a new ground of rejection is made in view of the NPL reference "SPB-Net: A Deep Network for SAR Imaging and Despeckling with Downsampled Data."

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless -

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 9, 11-13, and 19 are rejected under 35 U.S.C. §
102(a)(1) as being anticipated by SPB-Net: A Deep Network for SAR Imaging and Despeckling with Downsampled Data to XIONG et al. (hereinafter “XIONG”).

As per claim 1, XIONG discloses a system for compressing synthetic aperture radar (SAR) images with enhanced phase recovery (a computing system and corresponding method of operation to compress a synthetic aperture radar image and to allow the phase to be recovered with minimal loss; page 9238, Introduction; page 9240; page 9253), comprising:

a computing device comprising at least a memory and a processor (the computing system performing said method comprises a computer, which would contain a processor and a memory component to store programs, data, and instructions related to the method of operation, to be executed by the processor; page 9283; page 9247, paragraph IV);

a plurality of programming instructions stored in the memory and operable on the processor, wherein the plurality of programming instructions, when operating on the processor, cause the computing device to (instructions related to the method of operation are stored on the memory component of the computer and executed by the processor; page 9283; page 9247, paragraph IV):

receive an input SAR image comprising complex-valued data (the system receives input SAR image data; figs 4-5; page 9238, Introduction);

perform one or more preprocessing operations on the input SAR image (the system applies preprocessing using a conventional observation matrix model according to a geometry relationship in order to reconstruct (preprocess) the SAR image; figs 4-5; page 9238, Introduction);

transform the preprocessed SAR image into a frequency domain representation (the system performs transformations using the Nyquist rate as the threshold for downsampling to observe the SAR image in its frequency domain; page 9238, Introduction paragraph 2);

implement a
multi-stage compression technique using one or more neural networks to process amplitude information (the compression is performed by a multi-stage neural network compression model and processes phase and amplitude information; page 9238, Introduction paragraph 2);

extract phase information from the input SAR image (phase information and data/features are extracted via the model from the input SAR images; figs 2-3, 6; page 9243, columns 1-2; page 9245, paragraph (29));

utilize at least one feature fusion mechanism to enhance information integration across different components of the SAR image data (the computing system includes a pixel-wise addition feature in order to perform feature fusion to obtain particular desirable feature fusions, and further integrates the desirable features into the SAR images by performing the pixel-to-pixel feature fusion process; figs 3-4; page 9243, columns 1-2; page 9244, column 1);

employ a phase processing neural network that utilizes both compressed amplitude information and phase information to produce processed phase data (the CNN model processes the SAR image data, which include phase data of the SAR image that has been geometrically reconstructed (preprocessed); pages 9238-9239, column 2);

implement a context recovery subsystem with one or more loss functions optimized for both amplitude and phase recovery (the computing system includes a loss function in order to compare SAR images after processing to determine feature preservation, wherein the features include phase and amplitude; page 9248, column 1);

generate at least one compressed representation of the processed SAR image data (the CNN model for SAR image compression compresses a SAR image and its corresponding features; page 9238, columns 1-2);

jointly train the neural networks and other trainable components using a combined loss function that optimizes both amplitude and phase recovery (the CNN network is trained using the loss
function to optimize recovery of phase and amplitude, and the model is trained using the CNN framework; page 9238, columns 1-2; page 9239, column 2; page 9247, columns 1-2; page 9248, column 1); and

reconstruct the SAR image from the compressed representation with enhanced phase information (the SAR images are reconstructed after compression and the phase information is enhanced after reconstruction; page 9238, columns 1-2; page 9239, columns 1-2).

As per claim 2, XIONG discloses the system of claim 1, wherein the complex-valued data comprises in-phase and quadrature components (the computing system processes image data related to quadrature components; page 9249, column 1).

As per claim 3, XIONG discloses the system of claim 1, wherein the one or more preprocessing operations include at least one of radiometric calibration, geometric calibration, speckle filtering, or region of interest extraction (the computing system performs de-speckling using a speckle removal process which acts substantially as a speckle filter; fig 8; page 9248, column 2; page 9249, columns 1-2).

As per claim 9, XIONG discloses the system of claim 1, wherein the context recovery subsystem implements separate loss functions for different frequency groups of the SAR image data (the system of functions provided in equation block (60) provides a method of determining loss for a particular parameter in the SAR image after reconstruction, would be adaptable to any feature/parameter as desired, and is applied to all frequency domains of the SAR image data; page 9238, columns 1-2; page 9248, columns 1-2).
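The "combined loss function that optimizes both amplitude and phase recovery" recited in claim 1 (and the per-frequency-group variant of claim 9) can be sketched as follows. This is an illustrative sketch only; the mean-squared-error terms and the weight `alpha` are assumptions, not taken from the application or from XIONG.

```python
import numpy as np

def combined_loss(original: np.ndarray, reconstructed: np.ndarray,
                  alpha: float = 0.5) -> float:
    """Jointly penalize amplitude and phase error of complex SAR data.

    `alpha` (an assumed weighting) balances the amplitude term against the
    phase term. The phase difference is wrapped into (-pi, pi] so that,
    e.g., +pi and -pi do not register as a 2*pi discrepancy.
    """
    amp_err = np.mean((np.abs(original) - np.abs(reconstructed)) ** 2)
    phase_diff = np.angle(original) - np.angle(reconstructed)
    phase_diff = np.angle(np.exp(1j * phase_diff))  # wrap to (-pi, pi]
    phase_err = np.mean(phase_diff ** 2)
    return alpha * amp_err + (1.0 - alpha) * phase_err

# A perfect reconstruction yields zero loss:
x = np.array([1 + 1j, 2 - 1j, -0.5 + 0.3j])
assert combined_loss(x, x) == 0.0
```

A per-frequency-group variant (claim 9) would simply apply this loss separately to each group of frequency-domain coefficients and sum the results.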
As per claim 11, XIONG discloses a method for compressing synthetic aperture radar (SAR) images with enhanced phase recovery (a computing system and corresponding method of operation to compress a synthetic aperture radar image and to allow the phase to be recovered with minimal loss; page 9238, Introduction; page 9240; page 9253), comprising the steps of:

receiving an input SAR image comprising complex-valued data (the system receives input SAR image data; figs 4-5; page 9238, Introduction);

performing one or more preprocessing operations on the input SAR image (the system applies preprocessing using a conventional observation matrix model according to a geometry relationship in order to reconstruct (preprocess) the SAR image; figs 4-5; page 9238, Introduction);

transforming the preprocessed SAR image into a frequency domain representation (the system performs transformations using the Nyquist rate as the threshold for downsampling to observe the SAR image in its frequency domain; page 9238, Introduction paragraph 2);

implementing a multi-stage compression technique using one or more neural networks to process amplitude information (the compression is performed by a multi-stage neural network compression model and processes phase and amplitude information; page 9238, Introduction paragraph 2);

extracting phase information from the input SAR image (phase information and data/features are extracted via the model from the input SAR images; figs 2-3, 6; page 9243, columns 1-2; page 9245, paragraph (29));

utilizing at least one feature fusion mechanism to enhance information integration across different components of the SAR image data (the computing system includes a pixel-wise addition feature in order to perform feature fusion to obtain particular desirable feature fusions, and further integrates the desirable features into the SAR images by performing the pixel-to-pixel feature fusion process; figs 3-4; page 9243, columns 1-2; page 9244,
column 1);

employing a phase processing neural network that utilizes both compressed amplitude information and phase information to produce processed phase data (the CNN model processes the SAR image data, which include phase data of the SAR image that has been geometrically reconstructed (preprocessed); pages 9238-9239, column 2);

implementing a context recovery subsystem with one or more loss functions optimized for both amplitude and phase recovery (the computing system includes a loss function in order to compare SAR images after processing to determine feature preservation, wherein the features include phase and amplitude; page 9248, column 1);

generating at least one compressed representation of the processed SAR image data (the CNN model for SAR image compression compresses a SAR image and its corresponding features; page 9238, columns 1-2);

jointly training the neural networks and other trainable components using a combined loss function that optimizes both amplitude and phase recovery (the CNN network is trained using the loss function to optimize recovery of phase and amplitude, and the model is trained using the CNN framework; page 9238, columns 1-2; page 9239, column 2; page 9247, columns 1-2; page 9248, column 1); and

reconstructing the SAR image from the compressed representation with enhanced phase information (the SAR images are reconstructed after compression and the phase information is enhanced after reconstruction; page 9238, columns 1-2; page 9239, columns 1-2).

As per claim 12, XIONG discloses the method of claim 11, wherein the complex-valued data comprises in-phase and quadrature components (the computing system processes image data related to quadrature components; page 9249, column 1).
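Claim 12's recitation of in-phase and quadrature (I/Q) components can be illustrated with a minimal sketch of how complex-valued SAR pixels carry the amplitude and phase that the claimed pipeline processes. The data here are toy values, not from the record.

```python
import numpy as np

# In-phase (I) and quadrature (Q) channels of a toy 2x2 SAR sample grid.
i_channel = np.array([[1.0, 0.0], [0.0, -1.0]])
q_channel = np.array([[0.0, 1.0], [1.0, 0.0]])

# Complex-valued pixel = I + jQ; amplitude and phase fall out directly.
complex_image = i_channel + 1j * q_channel
amplitude = np.abs(complex_image)   # unit amplitude for every toy pixel
phase = np.angle(complex_image)     # 0, pi/2, pi/2, pi

assert np.allclose(amplitude, 1.0)
assert np.isclose(phase[0, 1], np.pi / 2)
```

Separating amplitude (`np.abs`) from phase (`np.angle`) is the precondition for the claimed split pipeline, which compresses the two components with different networks.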
As per claim 13, XIONG discloses the method of claim 11, wherein the one or more preprocessing operations comprises at least one of radiometric calibration, geometric calibration, speckle filtering, or region of interest extraction (the computing system performs de-speckling using a speckle removal process which acts substantially as a speckle filter; fig 8; page 9248, column 2; page 9249, columns 1-2).

As per claim 19, XIONG discloses the method of claim 11, wherein the context recovery subsystem implements separate loss functions for different frequency groups of the SAR image data (the system of functions provided in equation block (60) provides a method of determining loss for a particular parameter in the SAR image after reconstruction, would be adaptable to any feature/parameter as desired, and is applied to all frequency domains of the SAR image data; page 9238, columns 1-2; page 9248, columns 1-2).

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 4 and 14 are rejected under 35 U.S.C. § 103 as being obvious over SPB-Net: A Deep Network for SAR Imaging and Despeckling with Downsampled Data to XIONG et al. (hereinafter “XIONG”) in view of US 2017/0048537 A1 to BOUFOUNOS et al. (hereinafter “BOUFOUNOS”).

As per claim 4, XIONG discloses the system of claim 1. Modified XIONG fails to disclose wherein the frequency domain representation is obtained using a discrete cosine transform.

BOUFOUNOS discloses wherein the frequency domain representation is obtained using a discrete cosine transform (the system uses a matrix A which is a discrete cosine transform during data processing; paragraph [0049]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify XIONG so that the frequency domain representation is obtained using a discrete cosine transform, as taught by BOUFOUNOS. The suggestion/motivation for doing so would have been to provide the ability to use a discrete cosine transform, as suggested by BOUFOUNOS at paragraph [0049]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine BOUFOUNOS with modified XIONG to obtain the invention as specified in claim 4.
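The discrete cosine transform at issue in claims 4 and 14 can be sketched from its textbook definition. The orthonormal DCT-II below is a generic illustration, not BOUFOUNOS's matrix A.

```python
import numpy as np

def dct_ii(x: np.ndarray) -> np.ndarray:
    """Orthonormal 1-D DCT-II, the transform family named in claim 4.

    X[k] = s(k) * sum_n x[n] * cos(pi * (2n + 1) * k / (2N)),
    with s(0) = sqrt(1/N) and s(k) = sqrt(2/N) otherwise.
    """
    n = len(x)
    idx = np.arange(n)
    # basis[k, j] = cos(pi * (2j + 1) * k / (2n))
    basis = np.cos(np.pi * (2 * idx[None, :] + 1) * idx[:, None] / (2 * n))
    scale = np.full(n, np.sqrt(2.0 / n))
    scale[0] = np.sqrt(1.0 / n)
    return scale * (basis @ x)

# A constant signal concentrates all energy in the DC coefficient:
coeffs = dct_ii(np.ones(8))
assert np.isclose(coeffs[0], np.sqrt(8))
assert np.allclose(coeffs[1:], 0.0)
```

This energy-compaction property is why a DCT-based frequency domain representation is attractive before compression: smooth regions collapse into a few coefficients.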
As per claim 14, XIONG discloses the method of claim 11. Modified XIONG fails to disclose wherein the frequency domain representation is obtained using a discrete cosine transform.

BOUFOUNOS discloses wherein the frequency domain representation is obtained using a discrete cosine transform (the system uses a matrix A which is a discrete cosine transform during data processing; paragraph [0049]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify XIONG so that the frequency domain representation is obtained using a discrete cosine transform, as taught by BOUFOUNOS. The suggestion/motivation for doing so would have been to provide the ability to use a discrete cosine transform, as suggested by BOUFOUNOS at paragraph [0049]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine BOUFOUNOS with modified XIONG to obtain the invention as specified in claim 14.

Claims 5, 8, 15, and 18 are rejected under 35 U.S.C. § 103 as being obvious over SPB-Net: A Deep Network for SAR Imaging and Despeckling with Downsampled Data to XIONG et al. (hereinafter “XIONG”) in view of US 2024/0273691 A1 to CHEN et al. (hereinafter “CHEN”).

As per claim 5, XIONG discloses the system of claim 1. Modified XIONG fails to disclose wherein the multi-stage compression technique comprises: a first neural network for initial amplitude compression; and a second neural network for refined amplitude compression.
CHEN discloses wherein the multi-stage compression technique comprises: a first neural network for initial amplitude compression (a first neural network model is trained using the training neural network and produces RESNET50 for compressed sensing reconstruction; abstract; fig 1; paragraphs [0026-0028], [0031], [0033]); and a second neural network for refined amplitude compression (a data set is created to train an additional network, as there is no limit on the number of models which can be trained as part of the neural network model, including a holographic reconstruction algorithm to reconstruct the complex amplitude U; abstract; fig 1; paragraphs [0033], [0050-0051]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify XIONG to have first and second neural networks for amplitude compression, one more refined than the other, as taught by CHEN. The suggestion/motivation for doing so would have been to provide the ability to train various neural network models using a training neural network so that, based on the data samples provided, the models are trained to accomplish various specified tasks, as suggested by CHEN at paragraphs [0026] and [0033]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CHEN with modified XIONG to obtain the invention as specified in claim 5.

As per claim 8, XIONG discloses the system of claim 1. XIONG fails to disclose wherein the phase processing neural network performs phase unwrapping.

CHEN discloses wherein the phase processing neural network performs phase unwrapping (as seen in figure 1 and described in the abstract, the neural network model performs phase unwrapping; abstract; fig 1; paragraphs [0021-0025]).
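The phase unwrapping recited in claims 8 and 18 is, at bottom, the removal of 2*pi discontinuities from wrapped phase. The classical (non-neural) sketch below uses `np.unwrap` to illustrate the operation itself; CHEN's neural implementation is not reproduced here.

```python
import numpy as np

# A linearly increasing true phase, observed only modulo 2*pi ("wrapped").
true_phase = np.linspace(0.0, 4.0 * np.pi, 50)
wrapped = np.angle(np.exp(1j * true_phase))  # folded into (-pi, pi]

# Classic unwrapping adds multiples of 2*pi to remove the jumps between
# consecutive samples; claims 8/18 recite performing this step with a
# phase-processing neural network instead.
unwrapped = np.unwrap(wrapped)

assert np.allclose(unwrapped, true_phase)
```

The classical approach succeeds here because adjacent samples differ by less than pi; neural unwrapping is typically motivated by noisy or undersampled phase where that assumption fails.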
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify XIONG to perform the phase unwrapping of CHEN. The suggestion/motivation for doing so would have been that the quality of phase data is significantly improved and automatic, accurate compensation of digital holographic phase aberration is achieved, as suggested at paragraph [0024] of CHEN. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CHEN with XIONG to obtain the invention as specified in claim 8.

As per claim 15, XIONG discloses the method of claim 11. Modified XIONG fails to disclose wherein the multi-stage compression technique comprises: a first neural network for initial amplitude compression; and a second neural network for refined amplitude compression.

CHEN discloses wherein the multi-stage compression technique comprises: a first neural network for initial amplitude compression (a first neural network model is trained using the training neural network and produces RESNET50 for compressed sensing reconstruction; abstract; fig 1; paragraphs [0026-0028], [0031], [0033]); and a second neural network for refined amplitude compression (a data set is created to train an additional network, as there is no limit on the number of models which can be trained as part of the neural network model, including a holographic reconstruction algorithm to reconstruct the complex amplitude U; abstract; fig 1; paragraphs [0033], [0050-0051]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify XIONG to have first and second neural networks for amplitude compression, one more refined than the other, as taught by CHEN.
The suggestion/motivation for doing so would have been to provide the ability to train various neural network models using a training neural network so that, based on the data samples provided, the models are trained to accomplish various specified tasks, as suggested by CHEN at paragraphs [0026] and [0033]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CHEN with modified XIONG to obtain the invention as specified in claim 15.

As per claim 18, XIONG discloses the method of claim 11. XIONG fails to disclose wherein the phase processing neural network performs phase unwrapping.

CHEN discloses wherein the phase processing neural network performs phase unwrapping (as seen in figure 1 and described in the abstract, the neural network model performs phase unwrapping; abstract; fig 1; paragraphs [0021-0025]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify XIONG to perform the phase unwrapping of CHEN. The suggestion/motivation for doing so would have been that the quality of phase data is significantly improved and automatic, accurate compensation of digital holographic phase aberration is achieved, as suggested at paragraph [0024] of CHEN. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CHEN with XIONG to obtain the invention as specified in claim 18.

Claims 6-7 and 16-17 are rejected under 35 U.S.C. § 103 as being obvious over SPB-Net: A Deep Network for SAR Imaging and Despeckling with Downsampled Data to XIONG et al.
(hereinafter “CHIU”).

As per claim 6, XIONG discloses the system of claim 1. Modified XIONG fails to disclose wherein the feature fusion mechanism comprises a Channel-wise Transformer Fusion Block (CTFB).

CHIU discloses wherein the feature fusion mechanism comprises a Channel-wise Transformer Fusion Block (CTFB) (the computing system includes a modality fusion module 205 adapted to output a fused modality into the channel attenuation module 210, functioning substantially as a CTFB; paragraphs [0036-0037], [0040]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify XIONG so that the feature fusion mechanism comprises the channel-wise transformer fusion block of CHIU. The suggestion/motivation for doing so would have been to provide the functionality of a CTFB in order to fuse features of the input data, as suggested by CHIU at paragraph [0040]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CHIU with modified XIONG to obtain the invention as specified in claim 6.

As per claim 7, XIONG in view of CHIU discloses the system of claim 6. Modified XIONG fails to disclose wherein the CTFB includes a self-attention mechanism with position embedding.
CHIU discloses wherein the CTFB includes a self-attention mechanism with position embedding (the images analyzed are embedded with data such as a semantic embedding space having known locations, and a location of the image associated with the projected, fused features is predicted by determining the nearest embedded, fused appearance and semantic features to the projected, fused features of the image in the semantic embedding space, based on the similarity measures computed for the projected, fused features of the image; paragraphs [0006], [0040]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify XIONG so that the CTFB includes the self-attention mechanism with position embedding of CHIU. The suggestion/motivation for doing so would have been to provide the location of the image in the embedded data of the image to provide information to the user, as suggested by CHIU at paragraph [0006]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CHIU with modified XIONG to obtain the invention as specified in claim 7.

As per claim 16, XIONG discloses the method of claim 11. Modified XIONG fails to disclose wherein the feature fusion mechanism comprises a channel-wise transformer fusion block (CTFB).

CHIU discloses wherein the feature fusion mechanism comprises a channel-wise transformer fusion block (CTFB) (the computing system includes a modality fusion module 205 adapted to output a fused modality into the channel attenuation module 210, functioning substantially as a CTFB; paragraphs [0036-0037], [0040]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify XIONG so that the feature fusion mechanism comprises the channel-wise transformer fusion block of CHIU. The suggestion/motivation for doing so would have been to provide the functionality of a CTFB in order to fuse features of the input data, as suggested by CHIU at paragraph [0040]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CHIU with XIONG to obtain the invention as specified in claim 16.

As per claim 17, XIONG in view of CHIU discloses the method of claim 16. Modified XIONG fails to disclose wherein the CTFB includes a self-attention mechanism with position embedding.

CHIU discloses wherein the CTFB includes a self-attention mechanism with position embedding (the images analyzed are embedded with data such as a semantic embedding space having known locations, and a location of the image associated with the projected, fused features is predicted by determining the nearest embedded, fused appearance and semantic features to the projected, fused features of the image in the semantic embedding space, based on the similarity measures computed for the projected, fused features of the image; paragraphs [0006], [0040]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify XIONG so that the CTFB includes the self-attention mechanism with position embedding of CHIU. The suggestion/motivation for doing so would have been to provide the location of the image in the embedded data of the image to provide information to the user, as suggested by CHIU at paragraph [0006].
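The channel-wise transformer fusion block (CTFB) disputed in claims 6-7 and 16-17 combines feature fusion with channel-wise reweighting. The sketch below pairs pixel-wise fusion with a softmax channel-attention step; both design choices are illustrative assumptions, not CHIU's module or the application's actual architecture.

```python
import numpy as np

def channel_attention_fuse(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Fuse two (C, H, W) feature maps, then reweight channels.

    Pixel-wise addition fuses the inputs (the mechanism the Office Action
    reads onto XIONG's fusion); a softmax over per-channel means then plays
    the role of the channel-wise attention in the claimed CTFB. Both steps
    are illustrative assumptions.
    """
    fused = feat_a + feat_b                       # pixel-wise fusion
    desc = fused.mean(axis=(1, 2))                # one descriptor per channel
    weights = np.exp(desc) / np.exp(desc).sum()   # softmax channel weights
    return weights[:, None, None] * fused         # reweight each channel

a = np.ones((3, 4, 4))
b = np.ones((3, 4, 4))
out = channel_attention_fuse(a, b)
assert out.shape == (3, 4, 4)
# Equal channel statistics give equal weights (1/3 each): every value is 2/3.
assert np.allclose(out, 2.0 / 3.0)
```

A transformer-style CTFB would replace the pooled-descriptor softmax with self-attention over channel tokens plus position embeddings, which is the further limitation of claims 7 and 17.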
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CHIU with XIONG to obtain the invention as specified in claim 17.

Claims 10 and 20 are rejected under 35 U.S.C. § 103 as being obvious over SPB-Net: A Deep Network for SAR Imaging and Despeckling with Downsampled Data to XIONG et al. (hereinafter “XIONG”) in view of US 2025/0218053 A1 to GALVIN (hereinafter “GALVIN”).

As per claim 10, XIONG discloses the system of claim 1. Modified XIONG fails to disclose wherein generating the compressed representation comprises: creating a first compressed bitstream based on a latent space representation; and creating a second compressed bitstream based on hyperprior latent feature summarization.

GALVIN discloses wherein generating the compressed representation comprises: creating a first compressed bitstream based on a latent space representation (the original input data is compressed into an original compressed bitstream, and this bitstream is then decompressed and reconstructed; figs 19-20; paragraphs [0205], [0207-0208], [0229]); and creating a second compressed bitstream based on hyperprior latent feature summarization (the reconstructed, decompressed bitstream is upsampled to increase the quality and resolution of the images using CNN networks and is then compressed again into an upsampled bitstream; figs 19-20; paragraphs [0205], [0207-0208], [0229]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify XIONG to create two compressed bitstreams, one based on a latent space representation and a second based on hyperprior latent features, as taught by GALVIN.
The suggestion/motivation for doing so would have been to provide the ability to upsample and improve the data quality of the original input data, as suggested by GALVIN at paragraph [0205]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine GALVIN with modified XIONG to obtain the invention as specified in claim 10.

As per claim 20, XIONG discloses the method of claim 11. Modified XIONG fails to disclose wherein generating the compressed representation comprises: creating a first compressed bitstream based on a latent space representation; and creating a second compressed bitstream based on hyperprior latent feature summarization.

GALVIN discloses wherein generating the compressed representation comprises: creating a first compressed bitstream based on a latent space representation (the original input data is compressed into an original compressed bitstream, and this bitstream is then decompressed and reconstructed; figs 19-20; paragraphs [0205], [0207-0208], [0229]); and creating a second compressed bitstream based on hyperprior latent feature summarization (the reconstructed, decompressed bitstream is upsampled to increase the quality and resolution of the images using CNN networks and is then compressed again into an upsampled bitstream; figs 19-20; paragraphs [0205], [0207-0208], [0229]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to further modify XIONG to create two compressed bitstreams, one based on a latent space representation and a second based on hyperprior latent features, as taught by GALVIN. The suggestion/motivation for doing so would have been to provide the ability to upsample and improve the data quality of the original input data, as suggested by GALVIN at paragraph [0205].
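The two-bitstream structure of claims 10 and 20 (a latent bitstream plus a hyperprior bitstream summarizing latent statistics) can be sketched with toy quantization. The uniform rounding and the per-block standard deviation used as the "hyperprior" are illustrative assumptions, not GALVIN's scheme or the application's entropy model.

```python
import numpy as np

def encode_two_bitstreams(latent: np.ndarray, block: int = 4):
    """Produce (latent_bits, hyper_bits) from a 1-D latent vector.

    Bitstream 1: the quantized latent itself.
    Bitstream 2: a hyperprior summarizing the latent's per-block scale,
    which a real entropy coder would use to model the latent's
    distribution before decoding bitstream 1.
    """
    n = len(latent) - len(latent) % block
    blocks = latent[:n].reshape(-1, block)
    hyper_bits = np.round(blocks.std(axis=1), 2)  # coarse per-block scale
    latent_bits = np.round(latent)                # quantized latent
    return latent_bits, hyper_bits

latent = np.array([0.2, 1.9, -0.1, 3.3, 0.4, 0.6, 0.5, 0.45])
lat_bits, hyp_bits = encode_two_bitstreams(latent)
assert lat_bits.shape == (8,)
assert hyp_bits.shape == (2,)     # one scale summary per 4-sample block
assert hyp_bits[0] > hyp_bits[1]  # first block varies more than the second
```

The point of the second stream is that it is far smaller than the first yet lets the decoder pick a well-matched probability model per region, improving the rate of the main latent stream.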
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine GALVIN with modified XIONG to obtain the invention as specified in claim 20.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. These references include the following:

"Automatic Target Recognition on Synthetic Aperture Radar Imagery: A Survey"
"A Comprehensive Survey of Machine Learning Applied to Radar Signal Processing"

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVIN JACOB DHOOGE, whose telephone number is (571) 270-0999. The examiner can normally be reached 7:30-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Devin Dhooge/
USPTO Patent Examiner, Art Unit 2677

/ANDREW W BEE/
Supervisory Patent Examiner, Art Unit 2677
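For context on the technology at issue in the claim 10/20 rejections: a two-bitstream arrangement of a latent stream plus a hyperprior stream follows the general pattern of learned scale-hyperprior image compression. The sketch below is a toy numerical illustration of that general pattern only, not the application's or GALVIN's actual implementation; the transforms (`W`, `Wh`, `Ws`) are random stand-ins for learned networks, and the entropy coder is replaced by an ideal Gaussian code-length calculation.

```python
import math
import random

def matvec(M, v):
    """Dense matrix-vector product over plain lists."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def gaussian_bin_prob(q, sigma):
    """Probability mass of the integer bin [q-0.5, q+0.5] under N(0, sigma^2)."""
    cdf = lambda t: 0.5 * (1.0 + math.erf(t / (sigma * math.sqrt(2.0))))
    return max(cdf(q + 0.5) - cdf(q - 0.5), 1e-12)

def code_length_bits(symbols, sigmas):
    """Ideal entropy-coded length (bits) of quantized symbols under per-symbol Gaussians."""
    return sum(-math.log2(gaussian_bin_prob(q, s)) for q, s in zip(symbols, sigmas))

def two_stream_compress(x, W, Wh, Ws):
    """Toy scale-hyperprior compressor: returns the two quantized 'bitstreams'
    and their ideal coded sizes in bits."""
    y = matvec(W, x)                                   # analysis transform -> latent
    z = matvec(Wh, [abs(v) for v in y])                # hyper-analysis: summarize latent stats
    q_z = [round(v) for v in z]                        # quantized hyperprior (second bitstream)
    bits_z = code_length_bits(q_z, [1.0] * len(q_z))   # coded under a fixed unit-scale prior
    sigmas = [math.exp(v) for v in matvec(Ws, q_z)]    # hyper-synthesis: scales for the latent
    q_y = [round(v) for v in y]                        # quantized latent (first bitstream)
    bits_y = code_length_bits(q_y, sigmas)             # coded under the predicted scales
    return q_y, q_z, bits_y, bits_z

rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(64)]                          # toy stand-in for image samples
W  = [[rng.gauss(0, 0.3) for _ in range(64)] for _ in range(16)]  # random "learned" transforms
Wh = [[rng.gauss(0, 0.2) for _ in range(16)] for _ in range(4)]
Ws = [[rng.gauss(0, 0.2) for _ in range(4)] for _ in range(16)]
q_y, q_z, bits_y, bits_z = two_stream_compress(x, W, Wh, Ws)
```

The design point mirrored here is that the hyperprior stream is small but lets the decoder reconstruct the probability model (the per-element scales) used to code the much larger latent stream, so encoder and decoder stay in sync without transmitting the model itself.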

Prosecution Timeline

Oct 28, 2024
Application Filed
Oct 21, 2025
Non-Final Rejection — §102, §103
Jan 26, 2026
Response Filed
Feb 25, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602773
Deep-Learning-based T1-Enhanced Selection of Linear Coefficients (DL-TESLA) for PET/MR Attenuation Correction
2y 5m to grant · Granted Apr 14, 2026
Patent 12579780
HYPERSPECTRAL TARGET DETECTION METHOD OF BINARY-CLASSIFICATION ENCODER NETWORK BASED ON MOMENTUM UPDATE
2y 5m to grant · Granted Mar 17, 2026
Patent 12524982
NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM, VISUALIZATION METHOD AND INFORMATION PROCESSING APPARATUS
2y 5m to grant · Granted Jan 13, 2026
Patent 12517146
IMAGE-BASED DECK VERIFICATION
2y 5m to grant · Granted Jan 06, 2026
Patent 12505673
MULTIMODAL GAME VIDEO SUMMARIZATION WITH METADATA
2y 5m to grant · Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

2-3
Expected OA Rounds
70%
Grant Probability
99%
With Interview (+42.9%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 71 resolved cases by this examiner. Grant probability derived from career allow rate.
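The projections above reduce to simple cohort arithmetic. The sketch below shows plausible definitions for the career allow rate (50 granted of 71 resolved, roughly 70%) and for an interview lift expressed in percentage points. The page does not show the underlying with/without-interview cohort split behind the +42.9% figure, so the `rate_with` and `rate_without` inputs here are illustrative assumptions, not this examiner's actual cohort data.

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases that granted."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point difference in allow rate between cases resolved
    with and without an examiner interview."""
    return rate_with - rate_without

career = allow_rate(50, 71)           # ~0.704, displayed as 70%
lift = interview_lift(0.99, 0.561)    # illustrative cohort rates -> ~+0.429 (+42.9 points)
```

Defining the lift in absolute percentage points (rather than as a ratio) matches how the page reports it: a 70% baseline plus interview selection effects yielding a near-99% rate within the interviewed cohort.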
