Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/07/2026 has been entered.
Claims 1-3, 5-10, and 13-16 are currently pending in U.S. Patent Application No. 18/112,403 and an Office Action on the merits follows.
Response to Arguments/Amendments
Applicant’s remarks filed 01/07/2026 have been fully considered and are responded to below.
The previous claim objection is removed in view of the claim amendments.
Regarding the 35 U.S.C. 103 rejections, the Applicant’s remarks have been fully considered but are moot because the new grounds of rejection regarding the amended limitation no longer rely on the combination of references presented in the Non-Final Rejection. A change in scope necessitated by the Applicant’s amendments has led to an updated search revealing new art.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 9-10, 13-16, and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 9 recites “wherein reconstructing the feature map comprises reconstructing…a channel of the reconstructed feature map…that is decoded at a different resolution.” As presented, it is unclear what (i.e., the decoded feature map, the inverse-converted feature map, or something else) is being compared with the reconstructed channel to determine a “different resolution”. Further clarification is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (US 2024/0223790; hereinafter “Yang”) in view of Bhagavathy et al. (US 2012/0294369; hereinafter “Bhagavathy”).
Regarding Claim 1, Yang discloses a feature map encoding method, comprising:
extracting a feature map from an input image (Fig. 2, [0129], Yang discloses inputting an image into a feature extraction module to extract a feature map.);
determining an encoding feature map based on the extracted feature map (Fig. 3A, [0132-0133], Yang discloses a feature extraction module ga to obtain feature map y. The Examiner notes that the term “determining” is broadly interpreted as selecting or choosing, as supported by [0450] of the Applicant’s specification (cited from the corresponding US Publication US 2023/0281960), and therefore the Examiner asserts that Yang’s extraction of feature map y and subsequent conversion by quantization is analogous to determining/selecting.);
generating a converted feature map by performing conversion on the encoding feature map; and performing encoding on the converted feature map (Fig. 2, [0129], Yang discloses performing quantizing on the feature map to obtain a quantized feature map, which then undergoes entropy encoding.),
wherein the feature map extracted from the input image has a width, a height, and a channel size (Fig. 3A, [0008], Yang discloses the feature map having M channels x W width x H height.),
wherein determining the encoding feature map comprises determining only a subset of channels of the extracted feature map as the encoding feature map (Fig. 4, [0147-0149], Yang discloses obtaining an initial feature map with M channels, and subsequently encoding the initial feature map to obtain a first feature map which contains M1 channels where M1 < M.),
Yang does not disclose metadata of the encoding feature map, and wherein the metadata of the encoding feature map includes information related to the encoding feature map and information related to a feature map to be reconstructed from the encoding feature map.
Bhagavathy discloses metadata of the encoding feature map, and wherein the metadata of the encoding feature map includes information related to the encoding feature map and information related to a feature map to be reconstructed from the encoding feature map ([0036], [0039], Fig. 1, Bhagavathy teaches down sampling high resolution frames to low resolution frames and obtaining metadata (which is used to guide post-processing), which are encoded together using encoder 152.).
Yang and Bhagavathy are considered to be analogous to the claimed invention as they are in the same field of using deep learning to extract image features. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang by incorporating Bhagavathy’s disclosure of metadata, such that image metadata is encoded alongside the feature map obtained by Yang and subsequently used to guide image reconstruction. The motivation for this combination is the ability to include relevant metadata information that is used to guide post-processing steps.
Regarding Claim 9, Yang discloses a feature map decoding method, comprising:
decoding, from a bitstream, a decoded feature map (Fig. 2, [0130], Yang discloses an entropy decoding module which parses and decodes a feature map.);
performing inverse conversion on the decoded feature map (Fig. 2, [0130], Yang discloses performing dequantization (i.e., inverse conversion) on a decoded feature map.); and
reconstructing a feature map based on the inverse-converted decoded feature map (Fig. 2, [0130], Yang discloses inputting a dequantized feature map (i.e., inverse-converted decoded feature map) to a feature decoding module to reconstruct a feature map.),
wherein the reconstructed feature map has a width, a height, and a channel size ([0240], Yang discloses decoding a bitstream to obtain a two-dimensional feature map with M2 channels. The Examiner notes that the initial feature map has dimensions of M channels x W width x H height, and that the reconstructed feature map likewise has a width and a height.), and
wherein reconstructing the feature map comprises reconstructing, based on the decoded feature map ([0166], Yang discloses reconstructing a second feature map at a target resolution lower than (i.e., different from) the resolution of the initial image. The Examiner notes that the reconstructed second feature map disclosed by Yang consists of M2 channels, and if the second feature map is reconstructed at a lower resolution, then the M2 channels are also at a lower resolution.).
Yang does not disclose metadata of the decoded feature map, wherein the metadata of the decoded feature map includes information related to the decoded feature map and information related to a feature map to be reconstructed from the decoded feature map.
Bhagavathy discloses metadata of the decoded feature map, wherein the metadata of the decoded feature map includes information related to the decoded feature map and information related to a feature map to be reconstructed from the decoded feature map ([0039], Fig. 1, Bhagavathy teaches using a decoder 153 to decode low resolution frames and metadata, wherein the metadata is used to guide post-processing.).
Yang and Bhagavathy are considered to be analogous to the claimed invention as they are in the same field of using deep learning networks to extract image features. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang such that the decoding process performed by Yang also incorporated metadata information as taught by Bhagavathy in order to guide reconstruction. The motivation for this combination is the ability to include metadata information relevant to post-processing steps.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Bhagavathy and further in view of Zheng et al. (“Pop-Net: Encoder-Dual Decoder for Semantic Segmentation and Single-View Height Estimation”, DOI: 10.1109/IGARSS.2019.8897927, Publication Year: 2019; hereinafter “Zheng”).
Regarding Claim 2, Yang in view of Bhagavathy teaches the feature map encoding method of claim 1.
Yang in view of Bhagavathy does not explicitly teach wherein determining the encoding feature map comprises determining the encoding feature map for a single-layer feature map, and wherein the determined encoding feature map has a resolution that is the same as or different from a resolution of the single-layer feature map.
Zheng discloses wherein determining the encoding feature map comprises determining the encoding feature map for a single-layer feature map, and wherein the determined encoding feature map has a resolution that is the same as or different from a resolution of the single-layer feature map (Zheng discloses utilizing a feature pyramid network to obtain single-layer feature maps C2-C5 (which, considered together, form a multi-layer feature map). The Examiner notes that the resolutions of feature maps C2-C5 are different.).
Yang, Bhagavathy, and Zheng are considered to be analogous to the claimed invention as they are in the same field of using deep learning to extract image features. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang in view of Bhagavathy such that the feature extraction module taught by Yang in view of Bhagavathy is replaced by the feature pyramid network disclosed by Zheng. The motivation for this combination is the ability to combine low-level detail information and high-level semantic information, which can aid in improving image processing tasks.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Bhagavathy and further in view of Ahn and Lee (US 2024/0056575; hereinafter “Ahn”).
Regarding Claim 3, Yang in view of Bhagavathy teaches the feature map encoding method of claim 1.
Yang in view of Bhagavathy does not explicitly teach wherein determining the encoding feature map comprises determining the encoding feature map based on a quantization parameter.
Ahn discloses determining the encoding feature map comprises determining the encoding feature map based on a quantization parameter (Fig. 9, [0122], Ahn discloses a feature map selection unit which can select a feature map based on a compression rate (i.e., a quantization parameter).).
Yang, Bhagavathy, and Ahn are considered to be analogous to the claimed invention as they are in the same field of using deep learning to extract image features. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang in view of Bhagavathy such that the deciding/selection of a feature map utilized the logic of the feature map selection unit disclosed by Ahn. The motivation for this combination is the ability to provide further context and control when determining an appropriate feature map.
Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Bhagavathy and further in view of Choi et al. (US 2016/0323592; hereinafter “Choi-I”).
Regarding Claim 7, Yang in view of Bhagavathy teaches the feature map encoding method of claim 1.
Yang in view of Bhagavathy does not explicitly disclose wherein the metadata of the encoding feature map further includes information about a feature map reconstruction mode, and wherein the information about the feature map reconstruction mode indicates one of an intra-layer resolution adjustment mode or a resolution non-adjustment mode.
Choi-I discloses wherein the metadata of the encoding feature map further includes information about a feature map reconstruction mode ([0068-0069], [0103], Choi-I discloses encoding prediction mode information with an image sequence, which is used by a decoder to determine how the image sequence should be reconstructed. The Examiner notes that the claimed “feature map” and the image processed by Choi-I are equivalent in that they are both represented by data in the same format, and therefore similar reconstruction methods are applicable, which supports the motivation and combination made below.), and wherein the information about the feature map reconstruction mode indicates one of an intra-layer resolution adjustment mode or a resolution non-adjustment mode ([0068], [0103], Choi-I discloses utilizing an intra or inter prediction mode to reconstruct an image, by referring to images in the same layer (intra) or images in another layer (inter).).
Yang, Bhagavathy, and Choi-I are considered to be analogous to the claimed invention as they are in the same field of using encoder and decoder frameworks for image processing. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang in view of Bhagavathy such that the metadata taught by Yang in view of Bhagavathy further included the prediction mode information as taught by Choi-I, such that the prediction mode information indicated what types of methods/modes can be used to reconstruct a feature map, as taught by Choi-I. The motivation for this combination is the ability to utilize the reconstruction mode information to aid in decoding an image.
Regarding Claim 15, Yang in view of Bhagavathy teaches the feature map decoding method of claim 9.
Yang in view of Bhagavathy does not teach wherein the metadata of the decoded feature map further includes information about a feature map reconstruction mode, and wherein the information about the feature map reconstruction mode indicates one of an intra-layer resolution adjustment mode or a resolution non-adjustment mode.
Choi-I discloses wherein the metadata of the decoded feature map further includes information about a feature map reconstruction mode ([0068-0069], [0103], Choi-I discloses encoding prediction mode information with an image sequence, which is used by a decoder to determine how the image sequence should be reconstructed. The Examiner notes that the claimed “feature map” and the image processed by Choi-I are equivalent in that they are both represented by data in the same format, and therefore similar reconstruction methods are applicable, which supports the motivation and combination made below.), and wherein the information about the feature map reconstruction mode indicates one of an intra-layer resolution adjustment mode or a resolution non-adjustment mode ([0068], [0103], Choi-I discloses utilizing an intra or inter prediction mode to reconstruct an image, by referring to images in the same layer (intra) or images in another layer (inter).).
Yang, Bhagavathy, and Choi-I are considered to be analogous to the claimed invention as they are in the same field of using encoder and decoder frameworks for image processing. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang in view of Bhagavathy such that the metadata taught by Yang in view of Bhagavathy further included the prediction mode information as taught by Choi-I, such that the prediction mode information indicated what types of methods/modes can be used to reconstruct a feature map, as taught by Choi-I. The motivation for this combination is the ability to utilize the reconstruction mode information to aid in decoding an image.
Claims 8 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Bhagavathy and further in view of Choi et al. (US 2022/0108127; hereinafter “Choi-II”).
Regarding Claim 8, Yang in view of Bhagavathy teaches the feature map encoding method of claim 1.
Yang in view of Bhagavathy does not explicitly teach wherein the information related to the encoding feature map includes channel size information of the encoding feature map, and the information related to the feature map to be reconstructed includes channel size information of the feature map to be reconstructed.
Choi-II discloses wherein the information related to the encoding feature map includes channel size information of the encoding feature map, and the information related to the feature map to be reconstructed includes channel size information of the feature map to be reconstructed ([0058], [0063-0065], [0078], [0162-0165], Choi-II discloses obtaining feature map information including the channel length (i.e., the channel size) of the feature map. The Examiner notes that this information is obtained both for a feature map to be encoded (i.e., the encoding feature map) and a feature map to be decoded (i.e., the decoded feature map). Furthermore, the feature map information is subsequently used to determine the final k’ feature map channel sizes of a reconstructed feature map.).
Yang, Bhagavathy, and Choi-II are considered to be analogous to the claimed invention as they are in the same field of using deep learning networks to extract image features. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang in view of Bhagavathy such that the metadata information taught by Yang in view of Bhagavathy incorporated the channel size information disclosed by Choi-II. The motivation for this combination is the ability to incorporate additional information into the metadata to help guide image post-processing.
Regarding Claim 16, Yang in view of Bhagavathy teaches the feature map decoding method of claim 9.
Yang in view of Bhagavathy does not teach wherein the information related to the decoded feature map includes channel size information of the decoded feature map, and the information related to the feature map to be reconstructed includes channel size information of the reconstructed feature map.
Choi-II discloses wherein the information related to the decoded feature map includes channel size information of the decoded feature map, and the information related to the feature map to be reconstructed includes channel size information of the reconstructed feature map ([0058], [0063-0065], [0078], [0162-0165], Choi-II discloses obtaining feature map information including the channel length (i.e., the channel size) of the feature map. The Examiner notes that this information is obtained both for a feature map to be encoded (i.e., the encoding feature map) and a feature map to be decoded (i.e., the decoded feature map). Furthermore, the feature map information is subsequently used to determine the final k’ feature map channel sizes of a reconstructed feature map.).
Yang, Bhagavathy, and Choi-II are considered to be analogous to the claimed invention as they are in the same field of using deep learning networks to extract image features. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang in view of Bhagavathy such that the metadata information taught by Yang in view of Bhagavathy incorporated the channel size information disclosed by Choi-II. The motivation for this combination is the ability to incorporate additional information into the metadata to help guide image post-processing.
Regarding Claim 17, Yang in view of Bhagavathy teaches the feature map encoding method of claim 1.
Yang in view of Bhagavathy does not explicitly teach wherein the metadata of the encoding feature map further includes information indicating a difference between the channel size of the feature map to be reconstructed and a channel size of the encoding feature map.
Choi-II discloses wherein the metadata of the encoding feature map further includes information indicating a difference between the channel size of the feature map to be reconstructed and a channel size of the encoding feature map ([0077-0078], [0090-0091], Choi-II discloses obtaining feature map information which includes a delta_channel_idx value, wherein the value indicates a difference between a current feature map channel number (i.e., the channel size of an encoding feature map or the channel size of a decoded feature map) and a reference feature map channel number.).
Yang, Bhagavathy, and Choi-II are considered to be analogous to the claimed invention as they are in the same field of using deep learning networks to extract image features. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang in view of Bhagavathy such that the metadata information taught by Yang in view of Bhagavathy incorporated the logic behind the delta_channel_idx value disclosed by Choi-II, such that a difference is calculated between the channel size of the encoding feature map and the channel size of the feature map to be reconstructed. The motivation for this combination is the ability to incorporate additional information into the metadata to help guide image post-processing.
Regarding Claim 18, Yang in view of Bhagavathy teaches the feature map decoding method of claim 9.
Yang in view of Bhagavathy does not explicitly teach wherein the metadata of the decoded feature map further includes information indicating a difference between the channel size of the feature map to be reconstructed and a channel size of the decoded feature map.
Choi-II discloses wherein the metadata of the decoded feature map further includes information indicating a difference between the channel size of the feature map to be reconstructed and a channel size of the decoded feature map ([0077-0078], [0090-0091], Choi-II discloses obtaining feature map information which includes a delta_channel_idx value, wherein the value indicates a difference between a current feature map channel number (i.e., the channel size of an encoding feature map or the channel size of a decoded feature map) and a reference feature map channel number.).
Yang, Bhagavathy, and Choi-II are considered to be analogous to the claimed invention as they are in the same field of using deep learning networks to extract image features. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang in view of Bhagavathy such that the metadata information taught by Yang in view of Bhagavathy incorporated the logic behind the delta_channel_idx value disclosed by Choi-II, such that a difference is calculated between the channel size of the decoded feature map and the channel size of the feature map to be reconstructed. The motivation for this combination is the ability to incorporate additional information into the metadata to help guide image post-processing.
Claims 10 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Bhagavathy and further in view of Jung et al. (US 2023/0222626; hereinafter “Jung”).
Regarding Claim 10, Yang in view of Bhagavathy teaches the feature map decoding method of claim 9.
Yang in view of Bhagavathy does not explicitly teach wherein the decoded feature map and the feature map to be reconstructed belong to a single layer.
Jung discloses wherein the decoded feature map and the feature map to be reconstructed belong to a single layer (Fig. 30, [0237-0238], Jung discloses reconstructing a feature map, wherein a decoded feature map F’ (i.e., “decoded feature map”) is used to reconstruct various feature maps FP2’, FP3’, FP4’, FP5’. Specifically, FP5’ is reconstructed from F’ and is “identically set as the first feature map corresponding to the P5 layer” (i.e., belong to a single layer as they are directly correlated).).
Yang, Bhagavathy, and Jung are considered to be analogous to the claimed invention as they are in the same field of using deep learning networks to extract image features. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Yang in view of Bhagavathy such that the multi-layer feature maps disclosed by Jung were extracted and utilized in the encoding-decoding process, such that the feature map to be reconstructed and the decoded feature map belong to a single layer. The motivation for this combination is the ability to utilize a multi-layer feature map to combine low-level detail information and high-level semantic information, which can aid in improving image processing tasks.
Regarding Claim 13, Yang in view of Bhagavathy in view of Jung teaches the feature map decoding method of claim 10, wherein reconstructing the feature map comprises reconstructing, in the single layer, a feature map having a resolution identical to a resolution of the decoded feature map (Fig. 30, [0237-0238], Jung discloses reconstructing a feature map, wherein a decoded feature map F’ (i.e., “decoded feature map”) is used to reconstruct various feature maps FP2’, FP3’, FP4’, FP5’. Specifically, FP5’ is reconstructed from F’ and is “identically set (i.e., identical resolution) as the first feature map corresponding to the P5 layer”).
Regarding Claim 14, Yang in view of Bhagavathy in view of Jung teaches the feature decoding method of claim 10.
The current combination of Yang in view of Bhagavathy in view of Jung does not explicitly teach wherein reconstructing the feature map comprises reconstructing, in the single layer, a feature map having a resolution different from a resolution of the decoded feature map.
Jung further teaches wherein reconstructing the feature map comprises reconstructing, in the single layer, a feature map having a resolution different from a resolution of the decoded feature map (Fig. 28, [0214-0216], Jung discloses using a decoded feature map F’ and performing upsampling to obtain feature map FP2’.).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to further modify the invention of Yang in view of Bhagavathy in view of Jung such that the decoded feature map can be used to generate a feature map of a different resolution. The motivation for this combination is the ability to further modify how a multi-layer feature map is used to extract information from an image.
Allowable Subject Matter
Claims 5-6 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Kim et al. (US 2023/0105112)
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PROMOTTO TAJRIAN ISLAM whose telephone number is (703)756-5584. The examiner can normally be reached Monday - Friday 8:30 am - 5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan Park can be reached at (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PROMOTTO TAJRIAN ISLAM/ Examiner, Art Unit 2669
/CHAN S PARK/ Supervisory Patent Examiner, Art Unit 2669