Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 13, 2026 has been entered.
Status of Claims
Claims 1 and 7 are amended.
Claims 1 – 15 remain pending.
Response to Arguments
Applicant's arguments filed 02/13/2026 with respect to claims 1 – 15 have been considered but are moot because the new grounds of rejection do not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Wu, US Patent Application Publication No. US-20210383176-A1 (hereinafter Wu), in view of Kang, ‘Semantics-to-Signal Scalable Image Compression with Learned Revertible Representations’ (hereinafter Kang), and further in view of Zhang, Patent Application Publication No. CN-102495725-A (hereinafter Zhang).
Regarding claim 1, Wu discloses, comprising the steps of: a) extracting from the image semantic information at a semantic layer (Wu in [0005] discloses, “extracting multi-level semantic information of an object in the harmonized image based on the context feature information”); b) extracting from the image structure information at a structure layer; c) extracting from the image signal information at a signal layer (Wu in Fig. 2 discloses context feature extraction. Additionally, Wu in [0048] discloses that the context feature includes structural information as well as color and texture (signal information): “The context feature information of the image may represent overall structural information of the image and context semantic information of the image surrounding the harmonized region. The overall structural information of the image may be pixel-level underlying features, and may relate to an overall color feature of the image, for example, a video frame, an overall texture feature of the image”); and d) compressing the semantic information, the structure information, and the signal information respectively into a semantic stream, a structure (Wu in [0174] discloses encoding, which equates to compressing: “a third harmonization subunit 4034 , configured to harmonize an extracted semantic feature and a middle-layer context feature at the same level, the middle-layer context feature at the same level being a middle-layer context feature outputted by an encoding convolutional layer in a skip connection with the decoding convolutional layer”. Additionally, Wu in [0048] discloses that the context feature includes structural information as well as color and texture (signal information)); wherein Steps a), b) and c) (Wu discloses conducting steps a), b) and c) in [0005], [0048] and Fig. 2 as described above).
Wu does not disclose the limitations further recited in the claim, shown in strikethrough above.
Kang further discloses a computer-implemented method for scalable compression of a digital image (Kang in [Abstract] discloses, “Image/video compression and communication need to serve both human vision and machine vision. To address this need, we propose a scalable image compression solution”), and a bitstream (Kang in [Section - 1; Paragraph - 6] discloses, “we design a layered compression network to compress the multiple features into a scalable bitstream”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to integrate the technique of Kang into the system of Wu because image compression would allow redundancy to be removed, resulting in more efficient encoding.
The combined teachings of Wu and Kang as a whole are not relied upon for the following limitations as further recited. Zhang discloses that bitstream image layers such as those recited in steps a), b) and c) are conducted independently from each other and in parallel (Zhang in [0045] discloses parallel processing wherein image/video retrieval is divided into a plurality of sub-stages and processed in parallel, wherein feature detection equates to extracting the structure and signal layers and feature description equates to the semantic layer: “the processing process of the feature extraction algorithm for image/video retrieval is divided into a plurality of sub-stages, data is transmitted in each stage in a stream mode, and different data is processed in parallel in different stages. The feature extraction algorithm for image/video retrieval can be divided into two stages of feature detection and feature description”. Additionally, in [0013] Zhang suggests performing the detection process independently).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to integrate the technique of Zhang into the system of Wu in view of Kang because independent extraction enables parallel computation, improving processing speed.
Summary of Citations (Zhang)
Paragraph [0012]; “The pipeline-level parallel technology is that the processing process of the feature extraction algorithm for image/video retrieval is divided into a plurality of sub-stages, data is transmitted in each stage in a stream mode, and different data is processed in parallel in different stages. The feature extraction algorithm for image/video retrieval can be divided into two stages of feature detection and feature description”.
Paragraph [0013]; “the invention adopts a task-level parallel technology to divide an image into a plurality of data blocks and divide limited resources (threads) into a plurality of groups, and each group independently processes the detection and description work of one data block”.
Summary of Citations (Wu)
Paragraph [0005]; “extracting multi-level semantic information of an object in the harmonized image based on the context feature information. The method may further include performing image reconstruction based on the context feature information and the multi-level semantic information to obtain a reconstructed image”.
Paragraph [0048]; “The context feature information of the image may represent overall structural information of the image and context semantic information of the image surrounding the harmonized region. The overall structural information of the image may be pixel-level underlying features, and may relate to an overall color feature of the image, for example, a video frame, an overall texture feature of the image, an overall space layout feature of the image, and the like”.
Paragraph [0174]; “a third harmonization subunit 4034 , configured to harmonize an extracted semantic feature and a middle-layer context feature at the same level, the middle-layer context feature at the same level being a middle-layer context feature outputted by an encoding convolutional layer in a skip connection with the decoding convolutional layer”.
[Image: media_image1.png, 322 × 426, greyscale]
Summary of Citations (Kang)
[Abstract]; “Image/video compression and communication need to serve both human vision and machine vision. To address this need, we propose a scalable image compression solution. We assume that machine vision needs less information that is related to semantics, whereas human vision needs more information that is to reconstruct signal. We then propose semantics-to-signal scalable compression, where partial bitstream is decodeable for machine vision and the entire bitstream is decodeable for human vision”.
[Fig – 1 Description]; “Image signal, i.e. pixels, is transformed into a set of features”.
[Section - 1; Paragraph - 6]; “we design a layered compression network to compresses the multiple features into a scalable bitstream”.
Regarding claim 14, the combination of Wu, Kang, and Zhang as a whole teaches claim 1, and Wu teaches claim 14 on the same grounds of rejection set forth in the Final Office Action of 11/20/2025.
Claims 2 – 4 are rejected under 35 U.S.C. 103 as being unpatentable over Wu in view of Kang and Zhang, and further in view of Abramson, Patent Application Publication No. WO-2023139395-A1 (hereinafter Abramson).
Regarding claims 2 – 4, the combination of Wu, Kang, and Zhang as a whole teaches claim 1 but fails to teach the further limitations recited in claims 2 – 4. Abramson teaches claims 2 – 4 on the same grounds of rejection and motivation established in the Non-Final Office Action of 01/30/2025.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Wu in view of Kang and Zhang, and further in view of Chang, ‘Conceptual Compression via Deep Structure and Texture Synthesis’ (hereinafter Chang).
Regarding claim 5, the combination of Wu, Kang, and Zhang as a whole teaches claim 1 but fails to teach the further limitations recited in claim 5. Chang teaches claim 5 on the same grounds of rejection and motivation established in the Non-Final Office Action of 01/30/2025.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Wu in view of Kang and Zhang, and further in view of Huang, Patent Application Publication No. CN-114998379-A (hereinafter Huang).
Regarding claim 6, the combination of Wu, Kang, and Zhang as a whole teaches claim 1 but fails to teach the further limitations recited in claim 6. Huang teaches claim 6 on the same grounds of rejection and motivation established in the Non-Final Office Action of 01/30/2025.
Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Wu in view of Liu, Patent Application Publication No. CN-114330400-A (hereinafter Liu).
Regarding claim 7, Wu discloses a computer-implemented method for reconstructing a digital image from multiple bitstreams including a semantic stream, a structure stream, and a signal stream (Wu in [0005] discloses, “The method may further include performing image reconstruction based on the context feature information and the multi-level semantic information to obtain a reconstructed image”. Furthermore, Wu in [0048] discloses that the context feature includes structural information as well as color and texture (signal information), “The context feature information of the image may represent overall structural information of the image and context semantic information of the image surrounding the harmonized region. The overall structural information of the image may be pixel-level underlying features, and may relate to an overall color feature of the image, for example, a video frame, an overall texture feature of the image”); the method comprising the steps of: a) decoding, from the semantic stream, semantic information of the digital image (Wu in [0081] discloses, “for any one of the plurality of decoding convolutional layers, performing, by using the decoding convolutional layer, semantic extraction on a feature outputted by a previous layer”); b) decoding, from the structure stream, structure information of the digital image (Wu in [0082 – 0083] discloses, “the middle-layer context feature at the same level being a middle-layer context feature outputted by an encoding convolutional layer in a skip connection with the decoding convolutional layer ... The downsampling convolutional neural network may include a context encoder and a decoder” wherein the context feature includes structure information (disclosed in [0048])); c) combining the structure information and the semantic information to obtain a perceptual reconstruction of the image (Wu in [0005] discloses, “The method may further include performing image reconstruction based on the context feature information and the multi-level semantic information to obtain a reconstructed image”. Furthermore, Wu in [0048] discloses that the context feature includes structural information); d) decoding, from the signal stream, signal information of the digital image (Wu in [0082 – 0083] discloses, “the middle-layer context feature at the same level being a middle-layer context feature outputted by an encoding convolutional layer in a skip connection with the decoding convolutional layer ... The downsampling convolutional neural network may include a context encoder and a decoder” wherein the context feature includes color and texture (signal information) (disclosed in [0048])); and e) reconstructing the image using the signal information based on the perceptual reconstruction (Wu in [0005] discloses, “The method may further include performing image reconstruction based on the context feature information and the multi-level semantic information to obtain a reconstructed image” wherein the context feature includes color and texture (signal information) (disclosed in [0048])); wherein Steps a), b) and d) are (Wu in [0005], [0048] and [0082 – 0083] discloses Steps a), b) and d) as described above).
Wu is not relied upon for the following limitations as shown in the strike-through above.
Liu discloses independent, parallel decoding (Liu in [0005] and [0014] discloses, “a first decoding unit to decode the first divided image to obtain a first decoding result, and simultaneously controlling a second decoding unit to decode the second divided image to obtain a second decoding result; and combining the first decoding result and the second decoding result to obtain the final decoding result”. The separate decoding units processing different image segments simultaneously equate to performing independent, parallel decoding).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to integrate the technique of Liu into the system of Wu because independent extraction enables parallel computation, improving the efficiency of the process.
Summary of Citations (Liu)
Paragraph [0005]; “after the two-dimensional code image is segmented, performing two-way parallel decoding to obtain a decoding result”.
Paragraph [0014]; “a first decoding unit to decode the first divided image to obtain a first decoding result, and simultaneously controlling a second decoding unit to decode the second divided image to obtain a second decoding result; and combining the first decoding result and the second decoding result to obtain the final decoding result”.
Summary of Citations (Wu)
Paragraph [0005]; “The method may further include performing context feature extraction on the harmonized image to obtain context feature. The method may further include performing image reconstruction based on the context feature information and the multi-level semantic information to obtain a reconstructed image”.
Paragraph [0048]; “The context feature information of the image may represent overall structural information of the image and context semantic information of the image surrounding the harmonized region. The overall structural information of the image may be pixel-level underlying features, and may relate to an overall color feature of the image, for example, a video frame, an overall texture feature of the image, an overall space layout feature of the image, and the like”.
Paragraph [0081]; “for any one of the plurality of decoding convolutional layers, performing, by using the decoding convolutional layer, semantic extraction on a feature outputted by a previous layer”.
Paragraph [0082 – 0083]; “the middle-layer context feature at the same level being a middle-layer context feature outputted by an encoding convolutional layer in a skip connection with the decoding convolutional layer ... The downsampling convolutional neural network may include a context encoder and a decoder”.
Regarding claim 15, claim 15 is a non-transitory computer-readable storage medium claim corresponding to method claim 7. Therefore, the rejection analysis of claim 7 applies to claim 15. Wu in [0008] discloses, "In addition, an embodiment of the present disclosure further provides a non-transitory computer-readable storage medium having processor executable instructions stored thereon".
Summary of Citations (Wu)
Paragraph [0008]; "In addition, an embodiment of the present disclosure further provides a non-transitory computer-readable storage medium having processor executable instructions stored thereon".
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Wu in view of Liu, and further in view of Abramson.
Regarding claim 8, the combination of Wu and Liu, as a whole, teaches claim 7 but is not relied upon to teach claim 8. Abramson teaches claim 8 on the same grounds of rejection and motivation established in the Final Office Action of 11/20/2025.
Claims 9 – 11 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Wu in view of Liu and Chang, and further in view of Jiang, Patent Application Publication No. CN-114283080-A (hereinafter Jiang).
Regarding claims 9 – 11 and 13, the combination of Wu and Liu, as a whole, teaches claim 7 but is not relied upon to teach claim 9. Chang and Jiang teach claim 9 on the same grounds of rejection and motivation established in the Final Office Action of 11/20/2025. With respect to claims 10, 11 and 13, Chang and Jiang in the combination further teach these claims as set forth in the grounds of rejection from the Non-Final Office Action of 01/30/2025.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Wu in view of Liu, Chang and Jiang, and further in view of Chen, Patent Application Publication No. CN-113822147-A (hereinafter Chen).
Regarding claim 12, the combination of Wu, Liu, Chang, and Jiang, as a whole, teaches claim 10 but is not relied upon to teach claim 12. Chen teaches claim 12 on the same grounds of rejection and motivation set forth in the Non-Final Office Action of 01/30/2025.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZAID MUHAMMAD SALEH whose telephone number is (703)756-1684. The examiner can normally be reached M-F 8 am - 5 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le, can be reached at (571)272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZAID MUHAMMAD SALEH/
Examiner, Art Unit 2668
03/10/2026
/VU LE/Supervisory Patent Examiner, Art Unit 2668