DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
2. The information disclosure statements (IDS) were submitted on 11/05/2026. The submissions are in compliance with the provisions of 37 CFR § 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Status
3. Claims 1-20 are currently pending.
Response to Arguments
4. Applicant’s arguments with respect to the rejection(s) of claims 1-20 have been fully considered but are found unpersuasive.
Applicants’ Argument
(i) At Pg. 7/10 of the Remarks, it is argued:
Duan and Camuffo fail to disclose or suggest "updating the first feature map to
obtain a second feature map, wherein updating the first feature map comprises performing an adaptive affine process on the first feature map according to the
rate-distortion trade-off parameter" in claim 1.
(ii) Independent claims 12 and 13 are argued under the same alleged failure of the combined arts of record.
(iii) Dependent claims 2-11 and 14-20 are argued to be allowable on the premise of their dependency from the respective allegedly allowable independent claims.
Examiner’s Rebuttal
To point (i), the argument in chief addressing claim 1 and alleging the failure of Duan and Camuffo to disclose the recited limitation: it must be remarked that one of ordinary skill in the art would have found it obvious to interpret the process recited in claim 1, taught by Duan and similarly found in Camuffo, as representing an inherent feature of the compression-based method, which would be incomprehensible absent an update process performed during the iterative training of an end-to-end encoder/decoder pair as part of the adaptive R-D loss determination, corresponding to the claim recitation, inter alia, as mapped to both references below.
Re Duan, teaching:
updating the first feature map (for an information map, I(X;Y) in a representation Z, Sec.2.2, processed by training iterations, i.e., updated, at an end-to-end prediction model during the feature compression-based method used for feature classification, Sec.3.2.2) to obtain a second feature map, wherein updating the first feature map (by processing data X at a 2-D datapoint compression map for classification, Fig.2, Sec.3.1, i.e., the feature map, and applying an affine transform, thus inferring an update of Z1, per Eq.(6) below
[media_image1.png (greyscale): Duan Eq.(6)]
, at Sec.4.2.3) comprises performing an adaptive affine process on the first feature map (performing the affine transform and its inverse on Z1 before and after rounding at Eq.(6), Fig.5b, Sec.4.2.3) according to the rate-distortion trade-off parameter (performing the rate-distortion (RD) trade-off, Sec.3, by obtaining a rate-distortion function in an (X,Y) mapping domain, at Eq.(2), Sec.2.2, and Eq.(3)
[media_image2.png (greyscale): Duan Eqs.(2)-(3)]
, by using a rate-accuracy constraint at the decoder on the R-D trade-off, Sec.3, Fig.2, or using a rate-distortion Lagrangian, Sec.4.1, and validating the rate-distortion-complexity trade-off, Sec.5.1, Fig.6b).
In Camuffo:
updating the first feature map to obtain a second feature map (the first input data matrix of features F is updated to the subsequent transformation matrix, i.e., mapped data, by the PointNet model in Fig.4 or Fig.7, the updating evidenced by the citation (emphasis added): “applying PointNet recursively, learning local features with a progressively increasing contextual scale based on K-nearest neighbor (KNN) and query-ball searching methods.”, Fig.7 and Sec.5.4.1), wherein updating the first feature map comprises performing an adaptive affine process (where PointNet computes point-wise features by repeatedly applying a transformation h with a symmetric function g, the transformation f, pre Eq.(1), in an affine registration learned transform, hence updating the data points of a first feature map or matrix, Sec.4.1.1) on the first feature map according to the rate-distortion trade-off parameter (updating the feature maps by repeating the affine processing of the point cloud features, per Fig.4, Sec.4, specifically at Sec.4.1.1);
decoding a point cloud from the second feature map (decoding according to the updated feature at Sec.6.1.3 and Fig.15, by reconstructing the whole PS, per Sec.7.5); and
outputting the point cloud (outputting the point cloud (PCs) at Sec.7 or Fig.20 Sec.6.1, Sec.7 or 7.4).
In this claim analysis it has been emphasized that a data matrix is commonly considered a type of structured, mapped data set that organizes raw data into a grid of rows and columns mapped to respective variables or descriptors, e.g., features.
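For illustration only (not part of the record or the cited references), the rate-distortion trade-off referenced in the mapping above is commonly expressed as a Lagrangian loss L = R + λ·D; the function name and the numeric values below are hypothetical:

```python
# Illustrative sketch of a rate-distortion (R-D) Lagrangian objective,
# where the trade-off parameter (lambda) weights distortion against rate.
def rd_loss(rate_bits: float, distortion: float, trade_off: float) -> float:
    """Combine rate and distortion into a single training objective."""
    return rate_bits + trade_off * distortion

# A larger trade-off parameter penalizes distortion more, steering an
# end-to-end encoder/decoder pair toward higher-rate operating points.
low_rate = rd_loss(rate_bits=100.0, distortion=0.5, trade_off=10.0)  # 105.0
high_fid = rd_loss(rate_bits=400.0, distortion=0.1, trade_off=10.0)  # 401.0
```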
To points (ii) and (iii) of the arguments, which rely on the same recited claim limitation, the Examiner extends the rebuttal on the same terms given at point (i) above.
The original rejection on the merits is considered properly applied over the combined art of Duan and Camuffo; hence, it is maintained.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently does not name joint inventors.
5. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhihao Duan et al. (hereinafter Duan), “Balancing the encoder and decoder complexity in image compression for classification,” EURASIP Journal on Image and Video Processing, SpringerOpen (2024), in view of Elena Camuffo et al. (hereinafter Camuffo), “Recent Advancements in Learning Algorithms for Point Clouds: An Updated View,” Sensors 2022, 22, 1357, https://doi.org/10.3390/s22041357.
Re Claim 1. (original) Duan discloses, a method comprising:
obtaining a feature bitstream (decoding a feature compression-based bitstream, Abstract);
decoding a first feature map from the feature bitstream (at a neural network-based decoder, using a number of layers and feature dimensions, i.e., among which a first feature, Sec.2.3);
obtaining a rate-distortion trade-off parameter (obtaining a rate-distortion function at Sec.2.2 Eq. (3) and using a rate-accuracy constraint at decoder on the R-D trade-off, Sec.3, Fig.2 or using a rate-distortion Lagrangian, at Sec.4.1 and validating the rate-distortion complexity trade-off, Sec.5.1 Fig.6b);
updating the first feature map (for an information map, I(X;Y) in a representation Z, Sec.2.2, processed by training iterations, i.e., updated, at an end-to-end prediction model during the feature compression-based method used for feature classification, Sec.3.2.2) to obtain a second feature map, wherein updating the first feature map (by processing data X at a 2-D datapoint compression map for classification, Fig.2, Sec.3.1, i.e., the feature map, and applying an affine transform, thus inferring an update of Z1, per Eq.(6) below
[media_image1.png (greyscale): Duan Eq.(6)]
, at Sec.4.2.3) comprises performing an adaptive affine process on the first feature map (performing the affine transform and its inverse on Z1 before and after rounding at Eq.(6), Fig.5b, Sec.4.2.3) according to the rate-distortion trade-off parameter (performing the rate-distortion (RD) trade-off, Sec.3, by obtaining a rate-distortion function in an (X,Y) mapping domain, at Eq.(2), Sec.2.2, and Eq.(3)
[media_image2.png (greyscale): Duan Eqs.(2)-(3)]
, by using a rate-accuracy constraint at the decoder on the R-D trade-off, Sec.3, Fig.2, or using a rate-distortion Lagrangian, Sec.4.1, and validating the rate-distortion-complexity trade-off, Sec.5.1, Fig.6b);
As part of a similar neural network decoding process, Camuffo expressly teaches affine prediction, repeatedly processing the point-wise features, thus updating the feature maps while learning the data properties, and further outputting the data, as in:
updating the first feature map to obtain a second feature map (the first input data matrix of features F is updated to the subsequent transformation matrix, i.e., mapped data, by the PointNet model in Fig.4 or Fig.7, the updating evidenced by the citation (emphasis added): “applying PointNet recursively, learning local features with a progressively increasing contextual scale based on K-nearest neighbor (KNN) and query-ball searching methods.”, Fig.7 and Sec.5.4.1), wherein updating the first feature map comprises performing an adaptive affine process (where PointNet computes point-wise features by repeatedly applying a transformation h with a symmetric function g, the transformation f, pre Eq.(1), in an affine registration learned transform, hence updating the data points of a first feature map or matrix, Sec.4.1.1) on the first feature map according to the rate-distortion trade-off parameter (updating the feature maps by repeating the affine processing of the point cloud features, per Fig.4, Sec.4, specifically at Sec.4.1.1);
decoding a point cloud from the second feature map (decoding according to the updated feature at Sec.6.1.3 and Fig.15, by reconstructing the whole PS, per Sec.7.5); and
outputting the point cloud (outputting the point cloud (PCs) at Sec.7 or Fig.20 Sec.6.1, Sec.7 or 7.4).
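For illustration only, the PointNet-style computation paraphrased in the Camuffo mapping above — a per-point transformation h aggregated by a symmetric function g — may be sketched as follows; the layer sizes, weights, and toy data are hypothetical and not drawn from Camuffo:

```python
import numpy as np

# Per-point transformation h followed by a symmetric aggregation g, so the
# resulting global feature is invariant to the ordering of the input points.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 8))  # hypothetical per-point MLP weights

def h(points: np.ndarray) -> np.ndarray:
    """Point-wise feature transformation (one linear layer + ReLU)."""
    return np.maximum(points @ W, 0.0)  # (N, 3) -> (N, 8)

def g(features: np.ndarray) -> np.ndarray:
    """Symmetric aggregation: channel-wise max over the point dimension."""
    return features.max(axis=0)  # (N, 8) -> (8,)

pts = rng.standard_normal((16, 3))  # a toy point cloud
global_feature = g(h(pts))
# Permutation invariance: shuffling the points leaves the feature unchanged.
assert np.allclose(global_feature, g(h(pts[::-1])))
```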
According to the teachings in Duan, the neural network training used in video coding is based on decoding compressed features from the bitstream and predicting class labels according to a rate-accuracy trade-off, e.g., the average number of bits per sample, as the rate-distortion trade-off, being part of end-to-end image coding based on intermediate feature compression for object classification purposes. Duan also determines the probabilistic models (Sec.4.2.3, Sec.5.1, or Fig.6b) and a variable-rate compression by applying affine transform parameters produced by a lambda embedding layer. One of ordinary skill in the art would have been motivated to consider the affine feature updating as similarly applied to the neural network training generating point cloud data as determined from Camuffo, specifically applying a CNN architecture to point-wise features by enabling affine registration of data points (Sec.4), as found suggested in Duan, hence deeming the combination predictable.
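For illustration only, the variable-rate conditioning summarized above — a trade-off parameter embedded and mapped to per-channel affine parameters that modulate the feature map — may be sketched as follows; all shapes, names, and weights are hypothetical and not drawn from Duan:

```python
import numpy as np

# Hypothetical lambda-conditioned affine modulation of a (C, H, W) feature
# map: the trade-off parameter is linearly embedded into per-channel
# (scale, shift) pairs applied to the first feature map.
rng = np.random.default_rng(1)
C = 4                                      # number of feature channels
W_embed = rng.standard_normal((1, 2 * C))  # lambda -> (scale, shift) params

def adaptive_affine(feature_map: np.ndarray, trade_off: float) -> np.ndarray:
    """Modulate a (C, H, W) feature map with lambda-conditioned affine params."""
    params = np.array([[trade_off]]) @ W_embed  # (1, 2C) embedding output
    scale = params[0, :C].reshape(C, 1, 1)
    shift = params[0, C:].reshape(C, 1, 1)
    return scale * feature_map + shift

fmap = rng.standard_normal((C, 2, 2))            # toy "first feature map"
updated = adaptive_affine(fmap, trade_off=0.5)   # toy "second feature map"
```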
Re Claim 2. (original) Duan and Camuffo disclose, the method of claim 1,
Camuffo teaches about, wherein the adaptive affine process further comprises scaling values of each respective channel of the first feature map by a scaling factor σ associated with the respective channel (applying scaling Sec.4.1.1, or scaling according to a grid resolution, i.e., a scaling factor, Sec.4.1.3).
Re Claim 3. (original) Duan and Camuffo disclose, the method of claim 1,
Camuffo teaches about, wherein the adaptive affine process further comprises shifting values of each respective channel of the first feature map by a scalar shift m associated with the respective channel (shifting kernels to fit the point geometry, Sec.5.4.2).
Re Claim 4. (original) Duan and Camuffo disclose, the method of claim 1,
Duan teaches about, further comprising rendering the point cloud in an immersive environment (applying the training hyperparameters to data augmentation, Table 2, Sec.5).
Camuffo teaches the applicability of the NN coding in Augmented and Virtual Reality, i.e., the immersive technology (Sec.1).
Re Claim 5. (original) Duan and Camuffo disclose, the method of claim 1, wherein updating the first feature map further comprises:
Duan teaches about, performing a computation using a neural network layer with the rate-distortion trade-off parameter as an input (using as input a rate-distortion function, Sec.4.2.3 and Sec.2.2 Eq.(3), by using a rate-accuracy constraint at the decoder on the R-D trade-off, Sec.3, Fig.2, or using a rate-distortion Lagrangian, Sec.4.1, and validating the rate-distortion-complexity trade-off, Sec.5.1, Fig.6b); and
performing a layer normalization process on the first feature map to generate a normalized version of the first feature map (decoding the data by normalizing at layer Normalization (LN) per Fig.5b, Sec.4.1, or 4.2.1), wherein performing the adaptive affine process is performed on the normalized version of the first feature map (and affine transform, Sec.4.2.3).
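For illustration only, the claim-5 ordering mapped above — layer-normalizing the first feature map and then applying the adaptive affine process to the normalized version — may be sketched as follows; the epsilon, shapes, and parameter values are illustrative choices, not taken from the record:

```python
import numpy as np

# Layer normalization followed by an affine (scale, shift) applied to the
# *normalized* version of the feature map, per the claim-5 ordering.
def layer_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Normalize each row to zero mean and unit variance."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def update_feature_map(x, scale, shift):
    """Adaptive affine applied to the normalized feature map."""
    return scale * layer_norm(x) + shift

x = np.arange(8.0).reshape(2, 4)   # toy "first feature map"
y = update_feature_map(x, scale=2.0, shift=1.0)
# Each normalized row has (numerically) zero mean, so each row of y has
# mean approximately equal to the shift.
```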
Re Claim 6. (original) Duan and Camuffo disclose, the method of claim 5,
Duan teaches about, wherein performing the computation using a neural network layer generates, for each channel in the normalized version of the first feature map, a scalar shift m and a scaling factor σ (applying scalable coding, Sec.6.1).
Camuffo teaches about applying scalability at each model (Sec.3.1, 3.4, or 3.5; Sec.4.1.1, 4.1.3, or 5.2.1, etc.).
Re Claim 7. (original) Duan and Camuffo disclose, the method of claim 1, further comprising:
Camuffo teaches about, performing a feature refinement process one or more times, wherein the feature refinement process comprises (Sec.6.2.3):
updating the first refinement feature map to obtain a second refinement feature map, wherein updating the first refinement feature map comprises performing an adaptive affine process on the first refinement feature map according to the rate-distortion trade-off parameter (refining to obtain NL at different quality levels, Sec.6.2.3); and
decoding a third refinement feature map from the second refinement feature map, wherein the first refinement feature map is the first feature map for a first pass through the feature refinement process, and setting the first feature map equal to the third refinement feature map after a last pass through the feature refinement process (depicting refinement by filtering passes at decoding, “VAE Decoder” and “AE Decoder”, setting the first feature map equal to the prior feature map at sequential filtering layers, per Fig.18, Sec.6.2.3, or Sec.2.3.1 for volumetric models of 3D CNN architecture).
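For illustration only, the claim-7 refinement loop mapped above may be sketched as follows; `adaptive_affine` and `decode_refined` are hypothetical stand-in functions, not operations from either reference:

```python
# Each pass updates the current refinement map via a (hypothetical) adaptive
# affine process, decodes a refined map, and feeds it into the next pass;
# after the last pass the result becomes the first feature map.
def refine(first_map, trade_off, passes, adaptive_affine, decode_refined):
    current = first_map                                # first refinement map
    for _ in range(passes):
        second = adaptive_affine(current, trade_off)   # update step
        third = decode_refined(second)                 # decode refined map
        current = third                                # feed into next pass
    return current                                     # final feature map

# Toy usage with trivial stand-in functions:
result = refine(1.0, 0.5, passes=3,
                adaptive_affine=lambda x, t: x * (1 + t),
                decode_refined=lambda x: x + 1.0)
```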
Re Claim 8. (original) Duan and Camuffo disclose, the method of claim 1,
Camuffo teaches about, wherein decoding the point cloud from the second feature map comprises performing a feature decoding process on the second feature map (it would have been obvious to the ordinary skilled to decode the point cloud data from its respective feature map, Sec.4.1.1).
Re Claim 9. (original) Duan and Camuffo disclose, the method of claim 1, further comprising:
Camuffo teaches about, concatenating a reference feature map with the first feature map to generate a concatenated feature map (concatenating with points in a 2D grid, Sec.6.1.3);
aggregating the concatenated feature map (using an aggregating function for the concatenated features, Sec.6.1); and
setting the first feature map to be equal to the aggregated feature map (e.g., per Sec.7.2, Fig.21, or Sec.7.5).
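For illustration only, the claim-9 steps mapped above — concatenate, aggregate, then set the first feature map to the aggregated result — may be sketched as follows; the shapes, values, and the choice of a mean as the aggregating function are hypothetical:

```python
import numpy as np

# Concatenate a reference feature map with the first feature map along the
# channel axis, aggregate over channels, and set the first feature map equal
# to the aggregated map.
first = np.ones((2, 3, 3))             # (C, H, W) toy first feature map
reference = np.full((2, 3, 3), 3.0)    # toy reference feature map

concatenated = np.concatenate([reference, first], axis=0)  # (4, 3, 3)
aggregated = concatenated.mean(axis=0)                     # (H, W) map
first = aggregated                     # first feature map := aggregated map
```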
Re Claim 10. (original) Duan and Camuffo disclose, the method of claim 9, further comprising:
Camuffo teaches about, obtaining a reference point cloud (reference point cloud, Sec.3.3);
performing a feature encoding on the reference point cloud to generate a preliminary reference feature map (encoding, Sec.6.1); and
performing an adaptive affine process on the preliminary reference feature map according to the rate-distortion trade-off parameter, wherein an output of the adaptive affine process is the reference feature map (encoding the feature maps by repeating the affine processing of the point cloud features, per Fig.4, Sec.4, specifically at Sec.4.1.1, as a direct process, performed in reverse in the decoding method of claim 1).
Re Claim 11. (original) Duan and Camuffo disclose, the method of claim 10,
Camuffo teaches about, wherein the reference adaptive affine process performed on the preliminary reference feature map is identical to the adaptive affine process performed on the first feature map (being the direct process, performed in reverse in the decoding method of claim 1, and as the affine and scaling are processed on the respective feature map associated with the respective channel, per dependent claim 2).
Re Claim 12. (original) This claim recites an apparatus implementing each and every limitation of method claim 1; hence, it is accordingly rejected mutatis mutandis.
Re Claim 13. (original) This claim recites the encoding method generating the feature bitstream to be decoded according to the same predictive mode as the decoding method of claim 1; hence, it is similarly rejected on the same evidence, mapped mutatis mutandis.
Re Claim 14. (original) This claim recites the encoding method generating the feature bitstream to be decoded according to the same predictive mode as the decoding method of claim 5 (where Camuffo teaches the MLP coding, Sec.5.4.1, or Sec.6.1); hence, it is similarly rejected on the same evidence, mapped mutatis mutandis.
Re Claim 15. (original) This claim recites the encoding method generating the feature bitstream to be decoded according to the same predictive mode as the decoding method of claim 7; hence, it is similarly rejected on the same evidence, mapped mutatis mutandis.
Re Claim 16. (original) Duan and Camuffo disclose, the method of claim 13,
Camuffo teaches, wherein extracting a first feature map from the point cloud comprises performing a feature encoding process on the point cloud (encoding the point cloud, Sec.6).
Re Claim 17. (currently amended) This claim recites the encoding method generating the feature bitstream to be decoded according to the same predictive mode as the decoding method of claim 9; hence, it is similarly rejected on the same evidence, mapped mutatis mutandis.
Re Claim 18. (original) This claim recites the encoding method generating the feature bitstream to be decoded according to the same predictive mode as the decoding method of claim 10; hence, it is similarly rejected on the same evidence, mapped mutatis mutandis.
Re Claim 19. (original) This claim recites the encoding method generating the feature bitstream to be decoded according to the same predictive mode as the decoding method of claim 11; hence, it is similarly rejected on the same evidence, mapped mutatis mutandis.
Re Claim 20. (original) Duan and Camuffo disclose, the method of claim 13, wherein applying the hyperprior encoder to the second feature map comprises:
Duan teaches this limitation, performing a hyperprior analysis process on the second feature map to generate a third feature map (the hyperprior structure at Sec.4.2);
Camuffo teaches about,
performing a hyperprior analysis process on the second feature map to generate a third feature map (performing the hyperprior coding to the second feature map, to improve the effect of entropy coding, at Sec.6.2.2, Sec.6.2.3 and Fig.17);
generating the hyperprior bitstream from the third feature map (according to the subsequent feature map, generating the hyperprior bitstream in Fig.17 Left block diagram);
performing a hyperprior synthesis process on the third feature map to generate one or more distribution parameters (performing synthesis transform Fig.17); and
arithmetically encoding the second feature map based on the one or more distribution parameters to generate the feature bitstream (encoding according to the distribution parameters of metadata, in Fig.17).
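For illustration only, the hyperprior pipeline mapped for claim 20 may be sketched at a high level as follows; the toy transforms stand in for learned networks, and the arithmetic coder is approximated by an ideal code length of -log2 p(symbol) under a Gaussian model, all of which are hypothetical choices:

```python
import numpy as np

# Hyperprior analysis produces a third feature map (side information);
# hyperprior synthesis recovers distribution parameters used to entropy-code
# the second feature map.
def hyper_analysis(y):
    """Second feature map -> third feature map (toy channel-mean summary)."""
    return y.mean(axis=-1, keepdims=True)

def hyper_synthesis(z):
    """Third feature map -> distribution parameters (mu, sigma)."""
    mu = np.broadcast_to(z, (z.shape[0], 4))
    sigma = np.full_like(mu, 1.0)
    return mu, sigma

def ideal_bits(y, mu, sigma):
    """Ideal entropy-coded length under a Gaussian probability model."""
    p = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    p = np.clip(p, 1e-9, None)
    return float(-np.log2(p).sum())

y = np.zeros((2, 4))            # toy "second feature map"
z = hyper_analysis(y)           # third feature map -> hyperprior bitstream
mu, sigma = hyper_synthesis(z)  # distribution parameters from synthesis
bits = ideal_bits(y, mu, sigma) # code length for the feature bitstream
```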
Conclusion
6. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVE J CZEKAJ. The examiner can normally be reached 8:00-6:00 Monday-Thursday and every other Friday.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Czekaj can be reached at (571)272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DRAMOS KALAPODAS/Primary Examiner, Art Unit 2487