Prosecution Insights
Last updated: April 19, 2026
Application No. 18/067,430

METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR SELECTIVE IMAGE COMPRESSION

Non-Final OA (§101, §103)
Filed
Dec 16, 2022
Examiner
JAMES, DOMINIQUE NICOLE
Art Unit
2666
Tech Center
2600 — Communications
Assignee
HERE Global B.V.
OA Round
3 (Non-Final)
76%
Grant Probability
Favorable
3-4
OA Rounds
3y 4m
To Grant
99%
With Interview

Examiner Intelligence

Grants 76% — above average
76%
Career Allow Rate
16 granted / 21 resolved
+14.2% vs TC avg
Strong +38% interview lift
+38.5%
Interview Lift
resolved cases with an interview vs. without
Typical timeline
3y 4m
Avg Prosecution
27 currently pending
Career history
48
Total Applications
across all art units

Statute-Specific Performance

§101
19.5%
-20.5% vs TC avg
§103
51.5%
+11.5% vs TC avg
§102
14.6%
-25.4% vs TC avg
§112
14.3%
-25.7% vs TC avg
Black line = Tech Center average estimate • Based on career data from 21 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

This action is in response to the application filed on November 13, 2025. Claims 1, 8, 11, 17, and 20 are amended and claim 21 is cancelled. Thus, claims 1, 3-11, 13-20, and 22 are pending for examination in this application.

Response to Amendments

Applicant's remarks and amendments filed November 13, 2025, have been entered. Applicant's amendments regarding the 35 U.S.C. 112(b) rejections previously set forth in the Final Office Action mailed July 29, 2025, regarding claims 1-20 are persuasive. Accordingly, the 35 U.S.C. 112(b) rejections are withdrawn. Applicant's arguments regarding the 35 U.S.C. 101 rejections previously set forth in the Office Action mailed July 29, 2025, are not persuasive. Accordingly, the 35 U.S.C. 101 rejections are maintained.

Response to Arguments

Applicant's arguments filed November 13, 2025, have been fully considered but they are not persuasive.

Argument: On page 11, the applicant alleges, "The bolded portions were not deemed part of an abstract idea nor extra solution activity within the Final Office Action. At least in view of the bolded portions, the rejections of Claim 1 under 35 U.S.C. 101 are overcome."

Response: The examiner respectfully disagrees. The bolded portions are the limitations from dependent claim 21 that were previously rejected under 35 U.S.C. 101 in the Final Rejection mailed July 29, 2025. Amending independent claim 1 to incorporate previously rejected dependent claim 21 does not overcome the 101 rejection, as the added limitation simply provides information on data contents.
The examiner suggests including limitations supported by the Specification at Paragraph [0066], regarding, "Optionally, the low-frequency images can be reconstructed by the map services provider, or the low-frequency images can be reconstructed at the mobile device or vehicle," to overcome the 101 rejection of record.

Argument: On pages 11-12, the applicant alleges, "Applicant submits that the 'receiving, dividing, applying, converting, establishing, and storing' elements are each part of a digital image compression process that cannot reasonably considered to be a mental process," and, "Applicant respectfully disagrees as it is evident from the process recited in Claim 1 that the operations recited are done so to generate a compressed image from the original image."

Response: The examiner respectfully disagrees. The limitations of claim 1 do not explicitly state that the "receiving, dividing, applying, converting, establishing, and storing" elements are part of a digital image compression process. The limitations are recited in a way that these steps are viewed as image processing steps that can be performed in the human mind, with the resulting image stored as a compressed file.

Argument: On page 14, the applicant alleges, "Nowhere within Yang is it taught or suggested 'to identify image information of relatively higher importance than a majority of the original image based on image detection' where 'the image information of relatively higher importance than a majority of the original image comprises at least one sign within the image information.'"

Response: The examiner respectfully disagrees. Isenmann was relied upon to teach identifying image information of relatively higher importance than a majority of the original image. Yang was relied upon only to teach "wherein the image information of relatively higher importance than a majority of the original image comprises at least one sign within the image information, wherein the at least one sign is identified through image detection." Yang (4.1) teaches means to calculate the probability of each pixel belonging to a certain semantic type, including a traffic sign board, through semantic classification, which is considered to comprise at least one sign within the image information that is identified through image detection. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teachings of Yang into Isenmann by using Yang's semantic classification to identify a traffic sign board as the image information of relatively higher importance in the method of Isenmann, because providing semantic classification to identify image information of relatively higher importance reduces the amount of map data required, thereby reducing the cost of map transmission, storage, and management in practical applications. Therefore, the combination of the references teaches the limitation.

Argument: On pages 15-17, the applicant alleges, "The combination of Rajesh, Pan, and Isenmann with Yang is improper."

Response: The examiner respectfully disagrees. As to the arguments against the reference combination, the test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. Because each reference is thoughtfully and conscientiously combined based on the art at the time of the claimed invention, and on one of ordinary skill in the art's knowledge of said teachings, the rejection is maintained. Further, as to the incorrect assertion that the stated motivation is not supported by any of the cited references, the Examiner has provided support by citation for each and every motivation, as proper under KSR rationale (G). Each motivation is taken directly from the art of record, and therefore, the rejection is respectfully maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-7, 11, 13-16, 20, and 22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claims 1, 11, and 20, these claims recite the following limitations, which are found to be abstract ideas not reciting a practical application or significantly more, with claim 1 being exemplary:

divide the original image into subdivisions of the original image, wherein the subdivisions of the original image are a predefined pixel width by a predefined pixel height (abstract idea as a mental process, as a human mind is able to divide an image);

apply a transformation to the subdivisions of the original image (abstract idea as a mental process, as a human mind is able to transform an image);

convert the subdivisions of the original image into a frequency domain (abstract idea as a mental process, as a human mind is able to convert an image);

identify, within the original image, image information of relatively higher importance than a majority of the original image based at least in part on image detection (abstract idea as a mental process, as a human mind is able to identify an important area in an image), wherein the image information of relatively higher importance than a majority of the original image comprises at least one sign within the image information, wherein the at least one sign is identified through image detection (insignificant pre/post-solution extra activity);

establish a low-frequency component for the subdivisions (insignificant pre/post-solution extra activity);

store the low-frequency component as a value for the subdivisions of the original image as a compressed image file (insignificant pre/post-solution extra activity).

This judicial exception is not integrated into a practical application for the following reasons. Claims 1, 11, and 20 all recite the additional element of "and store the image information of relatively higher importance than the majority of the original image with the compressed image file"; however, this limitation also recites insignificant pre/post-solution extra activity that does not integrate the abstract idea into a practical application. Claim 1 recites the additional elements of "at least one processor" and "at least one memory including computer program code," and Claim 20 recites "a non-transitory computer-readable storage medium." While these limitations include additional elements, they are not sufficient to recite a practical application of the abstract ideas recited in the claims, as they amount to mere generic computer elements and thus to no more than a recitation of the words "apply it" (or an equivalent), or no more than a mere instruction to implement an abstract idea or other exception on a computer. See MPEP 2106.05(f).

Further, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and in combination, the above-recited additional elements from claim 1 and claim 20 do not add significantly more (also known as an "inventive concept") to the exception. Rather, the additional elements disclosed above perform well-understood, routine, conventional computer functions. Therefore, independent claims 1, 11, and 20 are directed towards an abstract idea without a practical application or significantly more.

Regarding claims 4-6 and 14-16, the limitations are merely directed towards abstract ideas recited in the independent claims, specifically mathematical concepts, without reciting significantly more or a practical application.
Regarding claims 3, 13, and 22, the limitations are merely directed toward abstract ideas, specifically a mental process, as the human mind is capable of identifying bounding boxes and image information of relatively higher importance than the majority of the original image, without reciting a practical application or significantly more.

Regarding claim 7, the limitations are merely directed towards insignificant pre/post-solution extra activity that does not integrate the abstract idea recited in claim 1 into a practical application.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1, 3, 5-6, 11, 13, 15-16, 20, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Rajesh et al. (T2CI-GAN: Text to Compressed Image generation using Generative Adversarial Network), hereinafter referred to as RAJESH, in view of Pan et al. (CN 113709494 A), hereinafter referred to as PAN, further in view of Yang et al. (CN 108802785 B), hereinafter referred to as YANG, and further in view of Isenmann et al. (US 20150365683 A1), hereinafter referred to as ISENMANN.

Regarding claim 1, RAJESH discloses:

receive an original image from an image sensor ([pg. 3, 2.1 JPEG Compression] Firstly, the RGB channels of the image {where the image is the received image}; [pg. 2, Fig. 1] shows a source image);

divide the original image into subdivisions of the original image ([pg. 4, 2.1 JPEG Compression] Then each channel is divided into 8x8 non-overlapping pixel blocks.; [pg. 2, Fig. 1] shows a source image divided into 8x8 blocks), wherein the subdivisions of the original image are a predefined pixel width by a predefined pixel height ([pg. 4, 2.1 JPEG Compression] Then each channel is divided into 8x8 non-overlapping pixel blocks. {where the predefined width and height is 8 pixels});

apply a transformation to the subdivisions of the original image ([pg. 4, 2.1 JPEG Compression] Forward Discrete Cosine Transform (DCT) is applied on each block in each channel to convert the 8x8 pixel block (let's say P(x; y)) from spatial domain to frequency domain.; [pg. 2, Fig. 1] shows forward DCT);

convert the subdivisions of the original image into a frequency domain ([pg. 4, 2.1 JPEG Compression] Forward Discrete Cosine Transform (DCT) is applied on each block in each channel to convert the 8x8 pixel block from spatial domain to frequency domain.; [pg. 2, Fig. 1] shows forward DCT);

establish a low-frequency component for the subdivisions ([pg. 4, 2.1 JPEG Compression] Each DCT block, i.e., F(u; v), is quantized to keep only the low frequency coefficients.; [pg. 2, Fig. 1] shows quantization);

store the low-frequency component as a value for the subdivisions of the original image as a compressed image file ([pg. 2, Fig. 1] shows the low-frequency components {the outcome of the quantization step above} stored as "Compressed Image Data").

RAJESH does not explicitly state at least one processor and at least one memory including computer program code. However, PAN teaches an apparatus comprising at least one processor and at least one memory including computer program code ([pg. 5, last paragraph] the processor calls the executable program code stored in the memory), the at least one memory and computer program code configured to, with the processor, cause the apparatus to perform the claimed operations.

Therefore, it would have been obvious to persons of ordinary skill in the art, before the effective filing date of the claimed invention, to combine RAJESH's storing of low-frequency components with PAN's apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and computer program code configured to, with the processor, cause the apparatus to perform the claimed operations. The motivation to combine the teachings of RAJESH and PAN is that both references teach methods of image compression and reconstruction, where PAN's addition enhances RAJESH by providing a means to implement the proposed methodology (PAN [pg. 5]).

RAJESH in view of PAN does not explicitly state the image corresponding to a geographical location, or "based at least in part on image detection, wherein the image information of relatively higher importance than a majority of the original image comprises at least one sign within the image information, wherein the at least one sign is identified through image detection." However, YANG teaches the image corresponding to a geographical location ([pg. 4, step 4] the monocular vision module transmits the collected road original information to the image processing module), and teaches image detection wherein the image information of relatively higher importance than a majority of the original image comprises at least one sign within the image information, wherein the at least one sign is identified through image detection (see Yang pg. 5, (4.1): by the machine learning method, each pixel of the image is classified. In one example, through the PSPnet network, with the city typical data set used for training the network, the network calculates the probability of each pixel belonging to a certain semantic type and outputs the maximum-probability semantics. As shown in FIG. 4, semantic classification includes lane line, traffic sign board, traffic light, traffic lamp, tree, street lamp, and so on. The result of the pixel-level semantic classification is shown in FIG. 5, wherein 1, 2 is a street lamp post, 3 is a traffic lamp post, 4, 5, 6 is lane line, 7, 9 is traffic light, 8 is traffic sign board.).

Therefore, it would have been obvious to persons of ordinary skill in the art, before the effective filing date of the claimed invention, to combine RAJESH in view of PAN's apparatus for storing the low-frequency components with YANG's image corresponding to a geographical location and calculation of the probability of each pixel belonging to a certain semantic type. The motivation to combine the teachings of RAJESH in view of PAN and YANG is that all references teach image analysis, where YANG ties the image analysis to (semi-)automated driving using geographical information. YANG enhances RAJESH in view of PAN's apparatus by satisfying the intelligent vehicle high-precision positioning requirement, reducing the cost of the positioning system, and improving the robustness of vehicle positioning in the urban dynamic change scene (YANG [pg. 2, Contents of the Invention]).

RAJESH in view of PAN and further in view of YANG does not explicitly state wherein the apparatus is further caused to: identify, within the original image, image information of relatively higher importance than a majority of the original image; and store the image information of relatively higher importance than the majority of the original image with the compressed image file. However, ISENMANN teaches wherein the apparatus is further caused to: identify, within the original image, image information of relatively higher importance than a majority of the original image based at least in part on image detection ([Paragraph 0008] Therefore, at least one image region of the digital image is thus stored in a compressed manner, it being possible to store certain important regions of the image with as much detail as possible depending on the desired amount of detail (level of detail) of the region concerned.); and store the image information of relatively higher importance than the majority of the original image with the compressed image file ([Paragraph 0009] Different resolutions are therefore used in order to store important and less important image regions. As a result, the important regions of the image can remain very detailed in a targeted manner.).

Therefore, it would have been obvious to persons of ordinary skill in the art, before the effective filing date of the claimed invention, to combine RAJESH in view of PAN and further in view of YANG's apparatus with ISENMANN's identifying, within the original image, image information of relatively higher importance than a majority of the original image, and storing that image information with the compressed image file. The motivation to combine the teachings of RAJESH in view of PAN, YANG, and ISENMANN is that the references teach image compression and storage. ISENMANN enhances the apparatus because the image is compressed with regions of increased user interest stored at a high resolution and other "less important regions" stored at a lower resolution, thereby saving storage space (ISENMANN [Paragraphs 0010 and 0011]).
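For orientation, the JPEG-style pipeline that the rejection maps claim 1 onto (divide into 8x8 blocks, apply a forward DCT, keep only the lowest-frequency coefficient as the block's stored value, and carry an identified important region, such as a detected sign, alongside the compressed data) can be sketched as follows. This is a minimal illustrative sketch, not code from any reference of record; the function names and the `(x0, y0, x1, y1)` ROI format are assumptions.

```python
import math

N = 8  # JPEG-style block size (the predefined pixel width and height)

def dct2(block):
    """2-D DCT-II of an NxN block with JPEG-style normalization."""
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = 1 / math.sqrt(2) if u == 0 else 1.0
            cv = 1 / math.sqrt(2) if v == 0 else 1.0
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = 0.25 * cu * cv * s
    return out

def compress(image, roi=None):
    """Divide a grayscale image (dimensions divisible by 8) into 8x8 blocks,
    transform each block to the frequency domain, and keep only the
    lowest-frequency (DC) coefficient as that block's stored value;
    optionally carry a full-resolution ROI patch alongside."""
    h, w = len(image), len(image[0])
    dc_map = []
    for by in range(0, h, N):
        row = []
        for bx in range(0, w, N):
            block = [[image[by + y][bx + x] for x in range(N)] for y in range(N)]
            coeffs = dct2(block)
            # With this normalization, DC / 8 equals the block mean,
            # so the stored value stays in the 0..255 pixel range.
            row.append(round(coeffs[0][0] / N))
        dc_map.append(row)
    roi_patch = None
    if roi is not None:
        x0, y0, x1, y1 = roi  # assumed (left, top, right, bottom), exclusive ends
        roi_patch = [r[x0:x1] for r in image[y0:y1]]
    return dc_map, roi_patch
```

For example, a 16x16 image of constant value 100 compresses to a 2x2 map of 100s, one value per 8x8 block, while the ROI patch is retained at full resolution.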
Regarding claim 3, RAJESH in view of PAN, further in view of YANG, and further in view of ISENMANN teaches the apparatus of claim 1, wherein causing the apparatus to identify, within the original image, the image information of relatively higher importance than the majority of the original image comprises causing the apparatus to perform the following. RAJESH in view of PAN and further in view of YANG does not explicitly state identify one or more bounding boxes within the original image, the image information of relatively higher importance than the majority of the original image being contained within the one or more bounding boxes. However, ISENMANN further teaches identifying one or more bounding boxes within the original image, the image information of relatively higher importance than the majority of the original image being contained within the one or more bounding boxes ([Paragraph 0010] A key aspect of the invention is that the digital image is compressed by regions of increased user interest ("important regions") being stored at a high resolution and other regions ("less important regions") being stored at a lower resolution. {where the important regions or regions of interest are interpreted as bounded regions}).

Therefore, it would have been obvious to persons of ordinary skill in the art, before the effective filing date of the claimed invention, to combine RAJESH in view of PAN and further in view of YANG's apparatus with ISENMANN's identification of one or more bounding boxes within the original image, the image information of relatively higher importance than the majority of the original image being contained within the one or more bounding boxes. The motivation to combine the teachings of RAJESH in view of PAN, YANG, and ISENMANN is that the references teach image compression and storage. ISENMANN enhances the apparatus because the image is compressed with regions of increased user interest stored at a high resolution and other "less important regions" stored at a lower resolution, thereby saving storage space (ISENMANN [Paragraphs 0010 and 0011]).

Regarding claim 5, RAJESH further teaches the apparatus of claim 1, wherein the predefined pixel width is eight pixels and the predefined pixel height is eight pixels, wherein the subdivisions of the original image are eight-by-eight pixel blocks ([pg. 4, 2.1 JPEG Compression] Then each channel is divided into 8x8 non-overlapping pixel blocks. {where the predefined width and height is 8 pixels}).

Regarding claim 6, RAJESH further teaches the apparatus of claim 5, wherein causing the apparatus to apply the transformation to the subdivisions of the original image comprises causing the apparatus to apply a Discrete Cosine Transformation to the subdivisions of the original image to convert values of each subdivision to a frequency domain ([pg. 4, 2.1 JPEG Compression] Forward Discrete Cosine Transform (DCT) is applied on each block in each channel to convert the 8x8 pixel block from spatial domain to frequency domain.; [pg. 2, Fig. 1] shows forward DCT).

As per claim 11, Claim 11 claims a method comprising the same limitations as Claim 1. Therefore, the rejection and rationale are analogous to that made in Claim 1.

As per claim 13, Claim 13 claims the same limitations as Claim 3 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 3.

As per claim 15, Claim 15 claims the same limitations as Claim 5 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 5.

As per claim 16, Claim 16 claims the same limitations as Claim 6 and is dependent on a similarly rejected independent claim.
Therefore, the rejection and rationale are analogous to that made in Claim 6.

As per claim 20, Claim 20 claims a computer program product comprising at least one non-transitory computer-readable storage medium having computer-executable program code instructions stored therein, the computer-executable program code instructions comprising program code instructions to perform the same limitations as Claim 1. Therefore, the rejection and rationale are analogous to that made in Claim 1. PAN further teaches the computer program product comprising at least one non-transitory computer-readable storage medium having computer-executable program code instructions stored therein (PAN [pg. 14, Embodiment 6] The embodiment of the invention claims a computer program product, the computer program product comprises a non-transitory computer-readable storage medium storing a computer program). ISENMANN further teaches generating a compressed image file (see Isenmann Paragraph [0049], "The image can now be stored in a certain file format in compressed form. A distinction is made in the process between lossless and lossy compression methods.").

Regarding claim 22, YANG further teaches the apparatus of claim 1, wherein causing the apparatus to identify image information of relatively higher importance than a majority of the original image based at least in part on image detection comprises causing the apparatus to: identify, using a trained machine learning model, image information of relatively higher importance than a majority of the original image including a sign (Section 4.1: by the machine learning method, each pixel of the image is classified. In one example, through the PSPnet network, with the city typical data set used for training the network, the network calculates the probability of each pixel belonging to a certain semantic type and outputs the maximum-probability semantics. As shown in FIG. 4, semantic classification includes lane line, traffic sign board, traffic light, traffic lamp, tree, street lamp, and so on. The result of the pixel-level semantic classification is shown in FIG. 5, wherein 1, 2 is a street lamp post, 3 is a traffic lamp post, 4, 5, 6 is lane line, 7, 9 is traffic light, 8 is traffic sign board.).

Therefore, it would have been obvious to persons of ordinary skill in the art, before the effective filing date of the claimed invention, to further add the limitations taught by YANG, wherein the original image from the image sensor is captured along a road of a first functional class at the geographical location, and wherein the machine learning model is trained using image data from a geographic region within a predetermined degree of similarity to the geographical location and captured along road segments of the first functional class. The motivation to further include the teachings of YANG is that the network calculates the probability of each pixel belonging to a certain semantic type and outputs the maximum-probability semantics, where semantic classification includes lane line, traffic sign board, traffic light, traffic lamp, tree, street lamp, and so on (YANG [pg. 5, Step 4.1]).

ISENMANN further teaches specifying the image information of relatively higher importance using a bounding box ([Paragraph 0010] A key aspect of the invention is that the digital image is compressed by regions of increased user interest ("important regions") being stored at a high resolution and other regions ("less important regions") being stored at a lower resolution. {where the important regions or regions of interest are interpreted as bounded regions}). Therefore, it would have been obvious to persons of ordinary skill in the art, before the effective filing date of the claimed invention, to combine RAJESH in view of YANG's method with ISENMANN's identification of one or more bounding boxes within the original image, the image information of relatively higher importance than the majority of the original image being contained within the one or more bounding boxes. The motivation to combine the teachings of RAJESH in view of YANG and ISENMANN is that the references teach image compression and storage. ISENMANN enhances the apparatus because the image is compressed with regions of increased user interest stored at a high resolution and other "less important regions" stored at a lower resolution, thereby saving storage space (ISENMANN [Paragraphs 0010 and 0011]).

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Rajesh et al. (T2CI-GAN: Text to Compressed Image generation using Generative Adversarial Network), hereinafter referred to as RAJESH, in view of Pan et al. (CN 113709494 A), hereinafter referred to as PAN, further in view of Yang et al. (CN 108802785 B), hereinafter referred to as YANG, further in view of Isenmann et al. (US 20150365683 A1), hereinafter referred to as ISENMANN, and further in view of Jamali et al. (Robust Watermarking in Non-ROI of Medical Images Based on DCT-DWT), hereinafter referred to as JAMALI.

Regarding claim 4, RAJESH further teaches the apparatus of claim 2, wherein causing the apparatus to apply the transformation to the subdivisions of the original image comprises causing the apparatus to selectively apply a Discrete Cosine Transformation to each subdivision of the original image to convert values of each subdivision to a frequency domain ([pg. 4, 2.1 JPEG Compression] Forward Discrete Cosine Transform (DCT) is applied on each block in each channel to convert the 8x8 pixel block from spatial domain to frequency domain.; [pg. 2, Fig. 1] shows forward DCT). RAJESH does not explicitly state not including the image information of relatively higher importance than a majority of the original image. However, JAMALI teaches not including the image information of relatively higher importance than a majority of the original image ([pg. 1201, A. ROI Region Extraction Using Saliency Detection] This phase can be considered as a preprocessing stage where we employ a saliency detection method to extract important part of an image. [B. Embedding Scheme] Embedding watermark into whole image can effect on quality of image more than hiding it into some blocks of the image. Also important parts of an image remain intact based on the above mentioned block selection method. In the embedding phase, after transforming the selected blocks into the wavelet domain, blocks of horizontal, vertical and diagonal coefficients are DCT transformed.).

Therefore, it would have been obvious to persons of ordinary skill in the art, before the effective filing date of the claimed invention, to combine RAJESH in view of PAN and further in view of YANG's apparatus with JAMALI's exclusion of the image information of relatively higher importance than a majority of the original image. The motivation to combine the teachings of RAJESH in view of PAN, YANG, and JAMALI is that the references use DCT, where JAMALI adds region-of-interest extraction so that image analysis is performed according to how important the information is, such as medical information in medical imaging. JAMALI enhances the apparatus because ROIs are very important in medical images and special attention must be paid to these parts to keep the content intact (JAMALI [pg. 1201, II. Proposed Method]).
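The ROI-sparing selectivity that JAMALI is cited for (transforming only the blocks that do not cover the important region, so that region stays intact) reduces to a rectangle-overlap test per block. A small illustrative sketch, not code from any reference of record; the function name and the `(x0, y0, x1, y1)` ROI format are assumptions:

```python
def blocks_outside_roi(width, height, roi, n=8):
    """Return the top-left (x, y) coordinates of the n x n blocks that do
    NOT overlap the region of interest, so a transform (e.g. a DCT) can be
    applied selectively outside it while the important region is left intact.
    roi is assumed to be (x0, y0, x1, y1) with exclusive right/bottom edges."""
    x0, y0, x1, y1 = roi
    selected = []
    for by in range(0, height, n):
        for bx in range(0, width, n):
            # Standard axis-aligned rectangle overlap test for this block.
            overlaps = bx < x1 and bx + n > x0 and by < y1 and by + n > y0
            if not overlaps:
                selected.append((bx, by))
    return selected
```

For a 16x16 image with the ROI covering the top-left 8x8 block, the remaining three blocks are selected for transformation and the ROI block is skipped.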
As per claim 14, Claim 14 claims the same limitations as Claim 4 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in Claim 4. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Rajesh et al. (T2CI-GAN: Text to Compressed Image generation using Generative Adversarial Network), hereinafter referred to as RAJESH in view of Pan et al. (CN 113709494 A), hereinafter referred to as PAN and further in view of Yang et al. (CN 108802785 B) hereinafter referred to as YANG and further in view of Isenmann et al. (US 20150365683 A1), hereinafter referred to as ISENMANN and further in view of Kurniawan et al. (Implementation of Image Compression Using Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT)), hereinafter referred to as KURNIAWAN. Regarding claim 7, RAJESH in view of PAN and further in view of YANG teaches the apparatus of claim 1, RAJESH further teaches wherein a value for each pixel of the Portable Graphics Format image file comprises a value corresponding to the low-frequency component for a corresponding subdivision of the original image ([pg. 4, 2.1 JPEG Compression] Then each channel is divided into 8x8 non-overlapping pixel blocks. Forward Discrete Cosine Transform (DCT) is applied on each block in each channel to convert the 8 x 8 pixel block (let's say P(x; y)) from spatial domain to frequency domain. Each DCT block, i.e., F(u; v), is quantized to keep only the low frequency coefficients.). RAJESH does not explicitly state wherein the compressed image file comprises a Portable Graphics Format image file. However, KURNIAWAN teaches wherein the compressed image file comprises a Portable Graphics Format image file ([pg. 13952, Lossless and Lossy Compression] The images in file formats like .png and .gif must be in lossless compression formats.). 
Therefore, it would have been obvious to persons of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teaching of RAJESH in view of PAN and further in view of YANG’s apparatus with the teaching of KURNIAWAN’s wherein the compressed image file comprises a Portable Graphics Format image file.

The motivation to combine the teachings of RAJESH in view of PAN and further in view of YANG and KURNIAWAN is that the references teach image compression using DCT, where KURNIAWAN enhances the apparatus by using a Portable Graphics Format, which uses lossless compression and thereby avoids the drawback of lossy compression, in which images cannot be fully reconstructed due to degradation of the data (KURNIAWAN [pg. 13952, Lossless and Lossy Compression]).

Allowable Subject Matter

Claims 8-10 and 17-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOMINIQUE JAMES, whose telephone number is (703) 756-1655. The examiner can normally be reached 9:00 am - 6:00 pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DOMINIQUE JAMES/
Examiner, Art Unit 2666

/MING Y HON/
Primary Examiner, Art Unit 2666
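As an aside on the technology at issue: the JPEG step RAJESH is repeatedly cited for (forward DCT on each 8x8 pixel block, then keeping only the low-frequency coefficients) can be sketched in NumPy. The hard k x k cutoff below is a simplification of this author's, standing in for JPEG's actual quantization table, which scales coefficients rather than zeroing them outright.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix: rows are cosine frequency vectors.
    k = np.arange(n).reshape(-1, 1)   # frequency index u
    m = np.arange(n).reshape(1, -1)   # spatial index x
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)        # DC row has its own normalization
    return C

def forward_dct(block, C):
    # Spatial domain P(x, y) -> frequency domain F(u, v).
    return C @ block @ C.T

def keep_low_freq(F, k):
    # Crude stand-in for quantization: zero all but the top-left
    # k x k (lowest-frequency) coefficients.
    out = np.zeros_like(F)
    out[:k, :k] = F[:k, :k]
    return out

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)  # one 8x8 pixel block
C = dct_matrix(8)
F = forward_dct(block, C)
reconstructed = C.T @ F @ C               # all coefficients kept: exact inverse
approx = C.T @ keep_low_freq(F, 4) @ C    # lossy: low frequencies only
```

With the orthonormal scaling, the DC coefficient F(0, 0) equals 8 times the block mean, and keeping every coefficient reconstructs the block exactly; discarding the high-frequency corner is what makes the scheme lossy.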
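KURNIAWAN's distinction between lossy DCT-based compression and PNG's lossless format can likewise be illustrated. PNG's compression layer is DEFLATE, the algorithm implemented by Python's standard zlib module; this sketch shows only the bit-exact round trip that lossless compression guarantees, and does not produce an actual PNG file.

```python
import zlib

# Stand-in for filtered scanline data: repetitive, as graphics often are.
raw = bytes(range(256)) * 32
packed = zlib.compress(raw, level=9)   # DEFLATE, as used inside PNG
restored = zlib.decompress(packed)

print(restored == raw)         # True: lossless, bit-exact reconstruction
print(len(packed) < len(raw))  # True: repetitive data compresses well
```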

Prosecution Timeline

Dec 16, 2022
Application Filed
Jan 31, 2025
Non-Final Rejection — §101, §103
Apr 29, 2025
Response Filed
Jul 25, 2025
Final Rejection — §101, §103
Sep 29, 2025
Response after Non-Final Action
Nov 13, 2025
Request for Continued Examination
Nov 24, 2025
Response after Non-Final Action
Mar 05, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591976
CELL SEGMENTATION IMAGE PROCESSING METHODS
2y 5m to grant Granted Mar 31, 2026
Patent 12567138
REGISTRATION METROLOGY TOOL USING DARKFIELD AND PHASE CONTRAST IMAGING
2y 5m to grant Granted Mar 03, 2026
Patent 12548159
SCENE PERCEPTION SYSTEMS AND METHODS
2y 5m to grant Granted Feb 10, 2026
Patent 12462681
Detection of Malfunctions of the Switching State Detection of Light Signal Systems
2y 5m to grant Granted Nov 04, 2025
Patent 12462346
MACHINE LEARNING BASED NOISE REDUCTION CIRCUIT
2y 5m to grant Granted Nov 04, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+38.5%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 21 resolved cases by this examiner. Grant probability derived from career allow rate.
