Prosecution Insights
Last updated: April 19, 2026
Application No. 18/530,090

POINT CLOUD ATTRIBUTE INFORMATION ENCODING METHOD AND APPARATUS, POINT CLOUD ATTRIBUTE INFORMATION DECODING METHOD AND APPARATUS, AND RELATED DEVICE

Status: Final Rejection (§103)
Filed: Dec 05, 2023
Examiner: LIMA, FABIO S
Art Unit: 2486
Tech Center: 2400 (Computer Networks)
Assignee: Vivo Mobile Communication Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 1m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 77% (above average): 319 granted / 415 resolved, +18.9% vs TC average
Interview Lift: +14.8% (moderate, roughly +15%), measured over resolved cases with an interview
Average Prosecution: 2y 1m (fast prosecutor), with 32 applications currently pending
Total Applications: 447 across all art units (career history)

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§102: 19.1% (-20.9% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§112: 19.7% (-20.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 415 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claims 1, 2, 4, 6-12 and 14-24 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Cohen et al. ("Point Cloud Attribute Compression Using 3-D Intra Prediction and Shape-Adaptive Transforms," 2016 Data Compression Conference (DCC), Snowbird, UT, USA, 2016, pp. 141-150, doi: 10.1109/DCC.2016.67), hereinafter referred to as Cohen, in view of Zhu (US20230055026A1), hereinafter referred to as Zhu.
Regarding claim 1, Cohen discloses a point cloud attribute information encoding method (Cohen, Title: Point Cloud Attribute Compression …), comprising: performing discrete cosine transform (DCT) transform on K to-be-coded points to obtain a transform coefficient of the K to-be-coded points, wherein K is a positive integer (Cohen, Section 4.1, shape-adaptive DCT (SA-DCT) is performed on the coded points); and quantizing the transform coefficient of the K to-be-coded points and performing entropy coding based on a quantized transform coefficient, to generate a binary bit stream (Cohen, Sections 4/4.1 and 5, quantized transform coefficients are then signaled in a bitstream; Fig. 5, Entropy coder).

Cohen does not explicitly disclose wherein the method further comprises: performing sorting on points of a to-be-coded point cloud and obtaining K to-be-coded points in the sorted to-be-coded point cloud; wherein the performing sorting on points of a to-be-coded point cloud and obtaining K to-be-coded points in the sorted to-be-coded point cloud comprises: calculating a Hilbert code corresponding to each point in the to-be-coded point cloud, performing sorting on the points of the to-be-coded point cloud based on the Hilbert codes, and determining the K to-be-coded points based on the sorted to-be-coded point cloud; or calculating a Morton code corresponding to each point in the to-be-coded point cloud, performing sorting on the points of the to-be-coded point cloud based on the Morton codes, and determining the K to-be-coded points based on the sorted to-be-coded point cloud.
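For orientation, the claim 1 pipeline (transform, quantize, and the inverse path used for reconstruction in claim 2) can be sketched in a few lines. This is an illustrative 1-D DCT-II over K attribute values, not code from the application or from Cohen; all names are hypothetical.

```python
import math

def dct_ii(values):
    """Orthonormal DCT-II over K attribute values (the claimed transform)."""
    K = len(values)
    coeffs = []
    for u in range(K):
        s = sum(v * math.cos(math.pi * (2 * n + 1) * u / (2 * K))
                for n, v in enumerate(values))
        scale = math.sqrt(1 / K) if u == 0 else math.sqrt(2 / K)
        coeffs.append(scale * s)
    return coeffs

def quantize(coeffs, step):
    """Uniform quantization of the transform coefficients."""
    return [round(c / step) for c in coeffs]

def dequantize(levels, step):
    """Inverse quantization (decoder side / reconstruction loop)."""
    return [q * step for q in levels]

def idct_ii(coeffs):
    """Inverse DCT (DCT-III), used to obtain attribute reconstruction info."""
    K = len(coeffs)
    out = []
    for n in range(K):
        s = 0.0
        for u, c in enumerate(coeffs):
            scale = math.sqrt(1 / K) if u == 0 else math.sqrt(2 / K)
            s += scale * c * math.cos(math.pi * (2 * n + 1) * u / (2 * K))
        out.append(s)
    return out
```

With a small quantization step, the round trip `idct_ii(dequantize(quantize(dct_ii(x), step), step))` reproduces the input to within the quantization error; entropy coding of the quantized levels is omitted here.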
However, Zhu, from the same or similar endeavor of data compression, discloses wherein the method further comprises: performing sorting on points of a to-be-coded point cloud and obtaining K to-be-coded points in the sorted to-be-coded point cloud (¶ [0101], corresponding points in each layer of the octree are acquired according to a sorting result of the Morton codes); wherein the performing sorting on points of a to-be-coded point cloud and obtaining K to-be-coded points in the sorted to-be-coded point cloud comprises: calculating a Hilbert code corresponding to each point in the to-be-coded point cloud, performing sorting on the points of the to-be-coded point cloud based on the Hilbert codes, and determining the K to-be-coded points based on the sorted to-be-coded point cloud (¶ [0141], Hilbert curve); or calculating a Morton code corresponding to each point in the to-be-coded point cloud, performing sorting on the points of the to-be-coded point cloud based on the Morton codes, and determining the K to-be-coded points based on the sorted to-be-coded point cloud (¶ [0101], corresponding points in each layer of the octree are acquired according to a sorting result of the Morton codes).

It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings disclosed by Cohen to add the teachings of Zhu as above, in order to "improves efficiency of point cloud data encoding, and improves the user experience" (Zhu, [0092]).

Regarding claim 2, Cohen and Zhu disclose all the limitations of claim 1, which is analyzed as previously discussed with respect to that claim.
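The Morton-code sorting mapped to Zhu above can be illustrated concretely: a Morton (Z-order) code interleaves the bits of each point's (x, y, z) coordinates, and the point cloud is sorted on the resulting scalar key. This sketch is illustrative only; the function names are hypothetical and do not reproduce Zhu's implementation.

```python
def morton_code(x, y, z, bits=10):
    """Interleave the bits of non-negative integer coordinates (x, y, z)
    into a single Morton (Z-order) code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)       # x bits land at positions 0, 3, 6, ...
        code |= ((y >> i) & 1) << (3 * i + 1)   # y bits land at positions 1, 4, 7, ...
        code |= ((z >> i) & 1) << (3 * i + 2)   # z bits land at positions 2, 5, 8, ...
    return code

def first_k_by_morton(points, k):
    """Sort point positions by Morton code and take the first k entries,
    i.e. the 'K to-be-coded points' from the sorted point cloud."""
    return sorted(points, key=lambda p: morton_code(*p))[:k]
```

A Hilbert-code variant would differ only in the sort key; Hilbert curves preserve spatial locality somewhat better than Z-order but are more involved to compute.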
Furthermore, Cohen discloses the method according to claim 1, wherein before the performing DCT transform on K to-be-coded points to obtain a transform coefficient of the K to-be-coded points, the method further comprises: obtaining attribute residual information for the K to-be-coded points based on attribute prediction information for the K to-be-coded points (Cohen, Section 3, form prediction for the current block and obtain prediction error (residuals)); and the performing DCT transform on K to-be-coded points to obtain a transform coefficient of the K to-be-coded points comprises: performing DCT transform on the attribute residual information for the K to-be-coded points to obtain the transform coefficient corresponding to the K to-be-coded points (Cohen, Sections 4/4.1, perform SA-DCT on the attribute residual samples before coding).

Cohen further discloses wherein, after the quantizing the transform coefficient of the K to-be-coded points and performing entropy coding based on a quantized transform coefficient, the method further comprises: performing inverse quantization on the quantized transform coefficient, and performing inverse transform on an inverse transform coefficient obtained after inverse quantization, so as to obtain attribute reconstruction information for the K to-be-coded points (Cohen, FIG. 5, perform inverse DCT on inverse-quantized coefficients and combine with prediction to reconstruct the attributes).

Regarding claims 11 and 12, these claims are rejected based on the same art and evidentiary limitations applied to the encoding method of claims 1-3, since they claim analogous subject matter in the form of a decoding method for performing the same or equivalent functionality. The examiner notes that it is well-known in the art that video compression involves a complementary pair of systems: a compressor (encoder) and a decompressor (decoder).
The encoder converts the source data into a compressed form, occupying a reduced number of bits prior to transmission or storage, while the decoder converts the compressed form back into a representation of the original video data by performing a reciprocal process to that of the encoder, decoding the encoded video data from the bitstream.

Claims 4, 6-7 and 13-24 are rejected under 35 U.S.C. 103 as being unpatentable over Cohen, in view of Zhu, and further in view of Klein Gunnewiek (US20030058940A1), hereinafter referred to as Klein Gunnewiek.

Regarding claim 4, Cohen and Zhu disclose all the limitations of claim 2, which is analyzed as previously discussed with respect to that claim. Cohen does not explicitly disclose obtaining a high-frequency coefficient quantization step corresponding to the high-frequency coefficient, and obtaining a low-frequency coefficient quantization step corresponding to the low-frequency coefficient; and quantizing the high-frequency coefficient based on the high-frequency coefficient and the high-frequency coefficient quantization step, and quantizing the low-frequency coefficient based on the low-frequency coefficient and the low-frequency coefficient quantization step.
However, Klein Gunnewiek, from the same or similar endeavor of data compression, discloses obtaining a high-frequency coefficient quantization step corresponding to the high-frequency coefficient, and obtaining a low-frequency coefficient quantization step corresponding to the low-frequency coefficient; and quantizing the high-frequency coefficient based on the high-frequency coefficient and the high-frequency coefficient quantization step, and quantizing the low-frequency coefficient based on the low-frequency coefficient and the low-frequency coefficient quantization step (Klein Gunnewiek ¶¶ [0021]-[0027]).

It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings disclosed by Cohen and Zhu to add the teachings of Klein Gunnewiek as above, in order to obtain low bit-rates by attenuating higher-frequency transform coefficients, which is more advantageous than increasing a step-size for all transform coefficients (Klein Gunnewiek, [0004]).

Regarding claim 6, Cohen, Zhu and Klein Gunnewiek disclose all the limitations of claim 4, which is analyzed as previously discussed with respect to that claim.
Furthermore, Cohen discloses the method according to claim 5, wherein the obtaining a high-frequency coefficient quantization step corresponding to the high-frequency coefficient, and obtaining a low-frequency coefficient quantization step corresponding to the low-frequency coefficient comprises: obtaining the high-frequency coefficient quantization step corresponding to the high-frequency coefficient, and obtaining a low-frequency coefficient quantization step corresponding to the low-frequency coefficient, based on a distribution status of components corresponding to attribute information for the K to-be-coded points (Cohen, Sections 3-4; and FIG. 5).

Cohen does not explicitly disclose obtaining separate high-frequency and low-frequency quantization steps based on a distribution status of components corresponding to attribute information for the K to-be-coded points. However, Klein Gunnewiek, from the same or similar endeavor of data compression, discloses the obtaining of separate high-frequency and low-frequency quantization steps based on a distribution status of components corresponding to attribute information for the K to-be-coded points (Klein Gunnewiek ¶¶ [0021]-[0027]). The motivation for combining Cohen, Zhu and Klein Gunnewiek has been discussed in connection with claim 4, above.

Regarding claim 7, Cohen, Zhu and Klein Gunnewiek disclose all the limitations of claim 6, which is analyzed as previously discussed with respect to that claim.
Cohen does not explicitly disclose the method according to claim 6, wherein the obtaining the high-frequency coefficient quantization step corresponding to the high-frequency coefficient, and obtaining a low-frequency coefficient quantization step corresponding to the low-frequency coefficient, based on a distribution status of components corresponding to attribute information for the K to-be-coded points comprises:

in a case that distribution of the components corresponding to the attribute information for the K to-be-coded points is flat, the quantization step of the high-frequency transform coefficient is a sum of an original quantization step, a preset quantization step offset, and a high-frequency coefficient quantization step offset, and the quantization step of the low-frequency transform coefficient is a sum of an original quantization step, a preset quantization step offset, and a low-frequency coefficient quantization step offset; and

in a case that distribution of the components corresponding to the attribute information for the K to-be-coded points is not flat, the quantization step of the high-frequency coefficient is a sum of an original quantization step, a preset quantization step offset, and a low-frequency coefficient quantization step offset, and the quantization step of the low-frequency coefficient is equal to the quantization step of the high-frequency coefficient.
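The quantization-step arithmetic recited above reduces to two simple cases. The sketch below is one illustrative reading of that claim language (the function and parameter names are hypothetical, not from the record):

```python
def quant_steps(original_step, preset_offset, hf_offset, lf_offset, is_flat):
    """Claim 7's quantization-step arithmetic, as recited:
    flat distribution  -> separate HF/LF steps, each a sum of three terms;
    non-flat           -> the HF step uses the LF offset, and the LF step
                          equals the HF step."""
    if is_flat:
        hf_step = original_step + preset_offset + hf_offset
        lf_step = original_step + preset_offset + lf_offset
    else:
        hf_step = original_step + preset_offset + lf_offset
        lf_step = hf_step
    return hf_step, lf_step
```

For example, with an original step of 10, a preset offset of 2, an HF offset of 3 and an LF offset of 1, a flat distribution yields steps (15, 13), while a non-flat one yields (13, 13).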
However, Zhu or Klein Gunnewiek, from the same or similar endeavor of data compression, discloses the method according to claim 6, wherein the obtaining the high-frequency coefficient quantization step corresponding to the high-frequency coefficient, and obtaining a low-frequency coefficient quantization step corresponding to the low-frequency coefficient, based on a distribution status of components corresponding to attribute information for the K to-be-coded points comprises: in a case that distribution of the components corresponding to the attribute information for the K to-be-coded points is flat, the quantization step of the high-frequency transform coefficient is a sum of an original quantization step, a preset quantization step offset, and a high-frequency coefficient quantization step offset, and the quantization step of the low-frequency transform coefficient is a sum of an original quantization step, a preset quantization step offset, and a low-frequency coefficient quantization step offset; and in a case that distribution of the components corresponding to the attribute information for the K to-be-coded points is not flat, the quantization step of the high-frequency coefficient is a sum of an original quantization step, a preset quantization step offset, and a low-frequency coefficient quantization step offset, and the quantization step of the low-frequency coefficient is equal to the quantization step of the high-frequency coefficient (Klein Gunnewiek ¶¶ [0021]-[0027] and FIG. 3). The motivation for combining Cohen, Zhu and Klein Gunnewiek has been discussed in connection with claim 4, above.

Regarding claims 13-15, these claims are rejected based on the same art and evidentiary limitations applied to the encoding method of claims 4-7, since they claim analogous subject matter in the form of a decoding method for performing the same or equivalent functionality.
The examiner notes that it is well-known in the art that video compression involves a complementary pair of systems: a compressor (encoder) and a decompressor (decoder). The encoder converts the source data into a compressed form, occupying a reduced number of bits prior to transmission or storage, while the decoder converts the compressed form back into a representation of the original video data by performing a reciprocal process to that of the encoder, decoding the encoded video data from the bitstream. The motivation for combining Cohen, Zhu and Klein Gunnewiek has been discussed in connection with claim 4, above.

Regarding claims 16-20, these claims are rejected based on the same art and evidentiary limitations applied to the encoding method of claims 1-6, since they claim analogous subject matter in the form of a device for performing the same or equivalent functionality. Cohen does not explicitly disclose a terminal, comprising a processor, a memory, and a program or instructions stored in the memory and capable of running on the processor. However, Klein Gunnewiek, from the same or similar endeavor of data compression, discloses a terminal, comprising a processor, a memory, and a program or instructions stored in the memory and capable of running on the processor (Klein Gunnewiek ¶ [0039]). The examiner again notes that video compression involves a complementary pair of encoder and decoder systems, as discussed above.

Regarding claim 21, Cohen, Zhu and Klein Gunnewiek disclose all the limitations of claim 4, which is analyzed as previously discussed with respect to that claim. Cohen does not explicitly disclose the method according to claim 4, wherein the quantization step of the high-frequency transform coefficient is determined based on an original quantization step, a preset quantization step offset, and a high-frequency coefficient quantization step offset, and the quantization step of the low-frequency transform coefficient is determined based on an original quantization step, a preset quantization step offset, and a low-frequency coefficient quantization step offset.
However, Zhu or Klein Gunnewiek, from the same or similar endeavor of data compression, discloses the method according to claim 4, wherein the quantization step of the high-frequency transform coefficient is determined based on an original quantization step, a preset quantization step offset, and a high-frequency coefficient quantization step offset, and the quantization step of the low-frequency transform coefficient is determined based on an original quantization step, a preset quantization step offset, and a low-frequency coefficient quantization step offset (Klein Gunnewiek ¶¶ [0018], [0021]-[0024] and FIG. 3). The motivation for combining Cohen, Zhu and Klein Gunnewiek has been discussed in connection with claim 4, above.

Regarding claim 22, Cohen, Zhu and Klein Gunnewiek disclose all the limitations of claim 21, which is analyzed as previously discussed with respect to that claim. Cohen does not explicitly disclose the method according to claim 21, wherein the quantization step of the high-frequency transform coefficient is a sum of the original quantization step, the preset quantization step offset, and the high-frequency coefficient quantization step offset, and the quantization step of the low-frequency transform coefficient is a sum of the original quantization step, the preset quantization step offset, and the low-frequency coefficient quantization step offset. However, Zhu or Klein Gunnewiek, from the same or similar endeavor of data compression, discloses the method according to claim 21, wherein the quantization step of the high-frequency transform coefficient is a sum of the original quantization step, the preset quantization step offset, and the high-frequency coefficient quantization step offset, and the quantization step of the low-frequency transform coefficient is a sum of the original quantization step, the preset quantization step offset, and the low-frequency coefficient quantization step offset (Klein Gunnewiek ¶¶ [0018], [0021]-[0024] and FIG. 3). The motivation for combining Cohen, Zhu and Klein Gunnewiek has been discussed in connection with claim 4, above.

Regarding claims 23 and 24, these claims are rejected based on the same art and evidentiary limitations applied to the encoding method of claims 21 and 22, since they claim analogous subject matter in the form of a device for performing the same or equivalent functionality.

Claims 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Cohen and Zhu, in view of Thirumalai et al. (US20150296209A1), hereinafter referred to as Thirumalai.

Regarding claim 8, Cohen and Zhu disclose all the limitations of claim 1, which is analyzed as previously discussed with respect to that claim. Furthermore, Cohen discloses the method according to claim 1, wherein before the performing DCT transform on K to-be-coded points, the method further comprises (Cohen, Sections 4/4.1, and FIG. 5): obtaining first information (Cohen, Sections 3 and 4/4.1); and the first information comprises the K to-be-coded points and the second information comprises attribute prediction information for the K to-be-coded points (Cohen, Sections 4/4.1; and Fig. 5); or the first information comprises N coded points prior to the K to-be-coded points and the second information comprises attribute reconstruction information for the N coded points, and N is an integer greater than 1 (Cohen, Section 3).
Cohen does not explicitly disclose determining, based on second information associated with the first information, to perform DCT transform on the K to-be-coded points. However, Thirumalai, from the same or similar endeavor of data compression, discloses determining, based on second information associated with the first information, to perform DCT transform on the K to-be-coded points (Thirumalai, ¶¶ [0065]-[0072], [0085]-[0096] and [0107]-[0108]; and Figs. 3-5, uses reconstructed blocks to compute a flatness/complexity metric, then controls the coding operation based on the determination).

It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings disclosed by Cohen and Zhu to add the teachings of Thirumalai as above, in order to provide, among other things, picture quality that is visually lossless (i.e., good enough that users cannot tell the compression is active); the display link video compression technique should also provide a scheme that is easy and inexpensive to implement in real-time with conventional hardware (Thirumalai, [0005]).

Regarding claim 9, Cohen, Zhu and Thirumalai disclose all the limitations of claim 8, which is analyzed as previously discussed with respect to that claim. Furthermore, Cohen discloses the method according to claim 8, wherein the first information comprises the K to-be-coded points and the second information comprises the attribute prediction information for the K to-be-coded points (Cohen, Sections 3 and 4/4.1).
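The flatness-style gating that the rejection maps to Thirumalai, as recited for claims 9 and 10, compares the maximum and minimum attribute values (prediction values for the K points, or reconstruction values for the N coded points) against thresholds to decide whether to apply the DCT. A minimal illustrative sketch, with hypothetical names and thresholds:

```python
def should_apply_dct(values, diff_threshold, ratio_threshold):
    """Decide whether to DCT-transform a group of points based on how
    'flat' their attribute values are: transform when max - min is small,
    or when the max/min ratio is close to 1 (per the claimed alternatives)."""
    vmax, vmin = max(values), min(values)
    if abs(vmax - vmin) < diff_threshold:
        return True
    if vmin != 0 and abs(vmax / vmin) < ratio_threshold:
        return True
    return False
```

For example, a nearly constant group like [100, 101, 99] passes a difference threshold of 5 and would be transformed, while a widely spread group like [10, 200] fails both tests and would skip the transform.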
Cohen does not explicitly disclose the determining, based on second information associated with the first information, to perform DCT transform on the K to-be-coded points comprises: obtaining a maximum attribute prediction value and a minimum attribute prediction value in the attribute prediction information corresponding to the K to-be-coded points; in a case that an absolute difference between the maximum attribute prediction value and the minimum attribute prediction value is less than a first threshold, determining to perform DCT transform on the K to-be-coded points; or in a case that an absolute ratio of the maximum attribute prediction value to the minimum attribute prediction value is less than a second threshold, determining to perform DCT transform on the K to-be-coded points.

However, Thirumalai, from the same or similar endeavor of data compression, discloses the determining, based on second information associated with the first information, to perform DCT transform on the K to-be-coded points comprises: obtaining a maximum attribute prediction value and a minimum attribute prediction value in the attribute prediction information corresponding to the K to-be-coded points (Thirumalai, Figs. 3-4 and ¶¶ [0065]-[0072]); in a case that an absolute difference between the maximum attribute prediction value and the minimum attribute prediction value is less than a first threshold, determining to perform DCT transform on the K to-be-coded points (Thirumalai, Figs. 3-5 and ¶¶ [0067]-[0072], [0085]-[0096] and [0107]-[0108], thresholded maximum and minimum flatness and complexity test); or in a case that an absolute ratio of the maximum attribute prediction value to the minimum attribute prediction value is less than a second threshold, determining to perform DCT transform on the K to-be-coded points (Thirumalai, Figs. 3-5 and ¶¶ [0085]-[0096] and [0107]-[0108], ratio/deriving flatness indicators with thresholds leading to a coding decision). The motivation for combining Cohen, Zhu and Thirumalai has been discussed in connection with claim 8, above.

Regarding claim 10, Cohen, Zhu and Thirumalai disclose all the limitations of claim 8, which is analyzed as previously discussed with respect to that claim. Furthermore, Cohen discloses the method according to claim 8, wherein the first information comprises N coded points prior to the K to-be-coded points and the second information comprises the attribute reconstruction information for the N coded points (Cohen, Section 3).

Cohen does not explicitly disclose the determining, based on second information associated with the first information, to perform DCT transform on the K to-be-coded points comprises: obtaining a maximum attribute reconstruction value and a minimum attribute reconstruction value in the attribute reconstruction information corresponding to the N coded points. However, Thirumalai, from the same or similar endeavor of data compression, discloses the determining, based on second information associated with the first information, to perform DCT transform on the K to-be-coded points comprises: obtaining a maximum attribute reconstruction value and a minimum attribute reconstruction value in the attribute reconstruction information corresponding to the N coded points (Thirumalai, FIG. 3 and ¶¶ [0067]-[0072]); in a case that an absolute difference between the maximum attribute reconstruction value and the minimum attribute reconstruction value is less than a third threshold, determining to perform DCT transform on the K to-be-coded points (Thirumalai, Figs. 3-5 and ¶¶ [0067]-[0072], [0085]-[0096] and [0107]-[0108]); or in a case that an absolute ratio of the maximum attribute reconstruction value to the minimum attribute reconstruction value is less than a fourth threshold, determining to perform DCT transform on the K to-be-coded points (Thirumalai, Figs. 3-5 and ¶¶ [0085]-[0096] and [0107]-[0108]). The motivation for combining Cohen, Zhu and Thirumalai has been discussed in connection with claim 8, above.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FABIO S LIMA, whose telephone number is (571) 270-0625. The examiner can normally be reached Monday - Friday, 8 am - 4 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jamie Atala, can be reached at (571) 272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FABIO S LIMA/
Primary Examiner, Art Unit 2486

Prosecution Timeline

Dec 05, 2023 - Application Filed
Aug 16, 2025 - Non-Final Rejection (§103)
Nov 20, 2025 - Response Filed
Mar 04, 2026 - Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604015 - METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12593038 - TEMPORAL PREDICTION OF PARAMETERS IN NON-LINEAR ADAPTIVE LOOP FILTER (granted Mar 31, 2026; 2y 5m to grant)
Patent 12593045 - ENTROPY CODING-BASED FEATURE ENCODING/DECODING METHOD AND DEVICE, RECORDING MEDIUM HAVING BITSTREAM STORED THEREIN, AND METHOD FOR TRANSMITTING BITSTREAM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12581099 - INFORMATION PROCESSING DEVICE AND METHOD (granted Mar 17, 2026; 2y 5m to grant)
Patent 12581094 - IMAGE SIGNAL ENCODING/DECODING METHOD AND DEVICE THEREFOR (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 92% (+14.8% lift)
Median Time to Grant: 2y 1m
PTA Risk: Moderate
Based on 415 resolved cases by this examiner. Grant probability derived from career allow rate.
