Prosecution Insights
Last updated: April 19, 2026
Application No. 18/925,857

MULTIMEDIA DATA PROCESSING METHOD AND APPARATUS, DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT

Status: Final Rejection (§103)
Filed: Oct 24, 2024
Examiner: TARKO, ASMAMAW G
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: Tencent Technology (Shenzhen) Company Limited
OA Round: 2 (Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
Grant Probability With Interview: 81%

Examiner Intelligence

Career Allow Rate: 72% — above average (284 granted / 395 resolved; +13.9% vs TC avg)
Interview Lift: +9.3% — moderate (resolved cases with vs. without interview)
Typical Timeline: 3y 0m average prosecution; 24 currently pending
Career History: 419 total applications across all art units

Statute-Specific Performance

§101: 3.4% (-36.6% vs TC avg)
§103: 58.2% (+18.2% vs TC avg)
§102: 23.9% (-16.1% vs TC avg)
§112: 4.4% (-35.6% vs TC avg)
Tech Center average is an estimate • Based on career data from 395 resolved cases
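The per-statute deltas above pin down the Tech Center baseline implicitly (baseline = examiner share minus delta). A minimal Python sketch under that reading of the figures; the variable names and the interpretation of "vs TC avg" are assumptions, not the tool's actual method:

```python
# Recover the implied Tech Center baseline from the displayed deltas.
# Assumption: each delta is (examiner share - TC average), in points.
stats = {
    "§101": (3.4, -36.6),
    "§103": (58.2, +18.2),
    "§102": (23.9, -16.1),
    "§112": (4.4, -35.6),
}

for statute, (share, delta) in stats.items():
    tc_avg = share - delta  # implied Tech Center average for this statute
    print(f"{statute}: examiner {share}% vs TC avg {tc_avg:.1f}%")
```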

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Remarks

This communication is in response to Applicant’s Amendment filed on 01/12/2026. Claims 1-20 were pending. Claims 1, 11 and 20 are amended. The drawings objection is moot in view of Applicant’s submission of a replacement drawing for Fig. 13C.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 10/27/2025 was filed after the mailing date of the Non-Final Rejection on 10/01/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 9-11 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over NA et al. (US 20210021823 A1; Applicant Admitted Prior Art, AAPA; hereinafter “NA”) in view of Rahman et al. (US 20230385643 A1, hereinafter “Rahman”).

Regarding claim 1. (Currently Amended): NA discloses a multimedia data processing method executed by a computer device, comprising: determining a first associated image block associated with a target image block to be filtered in multimedia data (0183; Figure 14); acquiring target coding and decoding information associated with the target image block (0183; Figure 14); and filtering the target image block by inputting the target image block, the first associated image block and the target coding and decoding information to obtain a filtered image block corresponding to the target image block (0270-0275; Figure 23).

NA fails to disclose wherein the neural network is trained to minimize a loss function measuring a difference between a true value comprising an original image block and a predicted filtered image block output by the image filter based on: (i) an input target image block, (ii) an input associated image block, and (iii) input target coding and decoding information.
Rahman, however, in the same field of endeavor, shows that filtering the target image block comprises: filtering the target image block by inputting the target image block, the first associated image block and the target coding and decoding information into an image filter based on a neural network to obtain a filtered image block corresponding to the target image block, wherein the neural network is trained to minimize a loss function measuring a difference between a true value comprising an original image block and a predicted filtered image block output by the image filter based on: (i) an input target image block, (ii) an input associated image block, and (iii) input target coding and decoding information (0068 and 0075; Figures 8-10; Claims 9 and 21; “[0068] … FIG. 9 presents an example original CT image 901, a DL generated denoised version 902 of the original CT image (i.e., the output image) and a residual image 903 that represents the difference between the input and output images. …”, and “[0075] FIG. 10 presents an example feature extraction process 1000 for CT … the feature extraction process at 810 in process 800 can comprise one or more aspects of feature extraction process 1000. Process 1000 is exemplified in association with detecting vessel-like structures in the residual image, which are important features for retaining in NN corrected liver CT images. In accordance with feature extraction process 1000, image 1004 corresponds to the original input image processed by the DL NN, and image 1002 corresponds to the residual image that is the difference between the input image 1002 and the DL NN output image (not shown). …”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the filtering of the target image block based on a neural network as shown by Rahman with the method for applying an artificial neural network (ANN) to video encoding or decoding of NA in order to improve coding efficiency by using the trained neural network to minimize a loss function (see Rahman, [0009]).

Regarding claim 9. (Original): NA discloses the method according to claim 1, wherein the target coding and decoding information indicates a degree of influence of the first associated image block on the target image block (Figures 22 and 26), and wherein the target coding and decoding information comprises at least one of an influence factor of the first associated image block on the target image block, a reference direction of the first associated image block relative to the target image block, partitioned image block information of the first associated image block, a quantization parameter corresponding to the first associated image block, reconstructed image block information corresponding to the first associated image block, filtering intensity image block information corresponding to the first associated image block, or predicted image block information corresponding to the first associated image block (0262, 0274, 0296 and 0300; Figures 22 and 26), and wherein the influence factor is determined according to an image distance between the first associated image block and the target image block (0274 and 0300; Figures 22 and 26).

Regarding claim 10.
(Original): NA discloses the method according to claim 9, wherein the method further comprises: determining the image distance between the first associated image block and the target image block according to a picture order count (POC) corresponding to the first associated image block and a POC corresponding to the target image block based on a slice in which the target image block is located being a non-full intraframe coding slice (0262, 0274, 0296 and 0300; Figures 22 and 26); and determining a preset distance value as the image distance between the first associated image block and the target image block based on the slice in which the target image block is located being a full intraframe coding slice (0274 and 0300; Figures 22 and 26).

Regarding claim 11. (Currently Amended): Apparatus claim 11 is drawn to the apparatus corresponding to method claim 1. Therefore, apparatus claim 11, which corresponds to method claim 1, is rejected for the same reasons of obviousness as set forth above.

Regarding claim 19. (Original): Apparatus claim 19 is drawn to the apparatus corresponding to method claim 9. Therefore, apparatus claim 19, which corresponds to method claim 9, is rejected for the same reasons of obviousness as set forth above.

Regarding claim 20. (Currently Amended): Non-transitory computer-readable storage medium claim 20 is drawn to the non-transitory computer-readable storage medium corresponding to method claim 1. Therefore, claim 20 is rejected for the same reasons of obviousness as set forth above.

Claim Rejections - 35 USC § 103

Claims 2-3, 5, 12-13 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over NA in view of Rahman as applied to claims 1 and 11 above, and further in view of GAO et al. (US 20210409685 A1, hereinafter “GAO”).

Regarding claims 2-3.
(Original): NA in view of Rahman shows the method according to claim 1, but fails to show wherein the filtering the target image comprises: fusing the first associated image block with first coding and decoding information corresponding to the first associated image block by an information fusion layer of the image filter to obtain target fusion data corresponding to the first associated image block; and filtering the target image block according to the target fusion data and second coding and decoding information corresponding to the target image block by a filtering layer of the image filter to obtain the filtered image block.

GAO, however, in the same field of endeavor, shows that filtering the target image comprises: fusing the first associated image block with first coding and decoding information corresponding to the first associated image block by an information fusion layer of the image filter to obtain target fusion data corresponding to the first associated image block (0088, 0207 and 0254; Figures 10 and 12); and filtering the target image block according to the target fusion data and second coding and decoding information corresponding to the target image block by a filtering layer of the image filter to obtain the filtered image block (0082 and 0087).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the video coding method of GAO with the video coding of NA in view of Rahman in order to improve video coding, increasing its efficiency and reducing its cost.

Regarding claim 5. (Original): Claim 5 has similar limitations to those treated in the rejections above, is met by the references as discussed above, and is rejected for the same reasons of obviousness as used in the rejection of claims 2 and 3 above.

Regarding claims 12, 13 and 15.
(Original): Apparatus claims 12, 13 and 15 are drawn to the apparatus corresponding to method claims 2, 3 and 5. Therefore, apparatus claims 12, 13 and 15, which correspond to method claims 2, 3 and 5, are rejected for the same reasons of obviousness as set forth above.

Claim Rejections - 35 USC § 103

Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over NA in view of Rahman, further in view of GAO, as applied to claims 3 and 13 above, and further in view of Garbacea et al. (US 20130188689 A1, hereinafter “Garbacea”).

Regarding claim 7. (Original): NA in view of Rahman, further in view of GAO, shows the method according to claim 3, but fails to show wherein the second coding and decoding information comprises at least one of: a sequence level quantization parameter, a slice level quantization parameter, a slice level coding type, a block coding type, filtering intensity image block information corresponding to the target image block, predicted image block information corresponding to the target image block, or partitioned image block information of the target image block.

Garbacea, however, in the same field of endeavor, shows that the second coding and decoding information comprises at least one of: a sequence level quantization parameter, a slice level quantization parameter, a slice level coding type, a block coding type, filtering intensity image block information corresponding to the target image block, predicted image block information corresponding to the target image block, or partitioned image block information of the target image block (0019, 0034 and 0067-0069; Figure 4).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the video coding method of Garbacea with the video coding of NA in view of Rahman, further in view of GAO, in order to yield a predictable result, improving video coding, increasing its efficiency and reducing its cost.

Regarding claim 17. (Original): Apparatus claim 17 is drawn to the apparatus corresponding to method claim 7. Therefore, apparatus claim 17, which corresponds to method claim 7, is rejected for the same reasons of obviousness as set forth above.

Allowable Subject Matter

Claims 4, 6, 8, 14, 16 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Response to Arguments

Applicant’s arguments with respect to claims 1-20 have been considered but are moot based on the new ground of rejection.

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASMAMAW TARKO, whose telephone number is (571) 272-9205. The examiner can normally be reached Monday - Friday, 9:00 AM - 5:00 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Kelley, can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ASMAMAW G TARKO/
Patent Examiner, Art Unit 2482

Prosecution Timeline

Oct 24, 2024
Application Filed
Sep 26, 2025
Non-Final Rejection — §103
Oct 05, 2025
Interview Requested
Nov 05, 2025
Applicant Interview (Telephonic)
Nov 08, 2025
Examiner Interview Summary
Jan 12, 2026
Response Filed
Feb 12, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12529288
SYSTEMS AND METHODS FOR ESTIMATING RIG STATE USING COMPUTER VISION
2y 5m to grant • Granted Jan 20, 2026
Patent 12511768
METHOD AND APPARATUS FOR DEPTH IMAGE ENHANCEMENT
2y 5m to grant • Granted Dec 30, 2025
Patent 12506865
SYSTEMS AND METHODS FOR REDUCING A RECONSTRUCTION ERROR IN VIDEO CODING BASED ON A CROSS-COMPONENT CORRELATION
2y 5m to grant • Granted Dec 23, 2025
Patent 12498482
CAMERA APPARATUS
2y 5m to grant • Granted Dec 16, 2025
Patent 12469164
VEHICLE EXTERNAL DETECTION DEVICE
2y 5m to grant • Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 81% (+9.3%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 395 resolved cases by this examiner. Grant probability derived from career allow rate.
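The headline figures follow directly from the career data reported above. A minimal Python sketch of that arithmetic, assuming simple rounding to whole percentage points (the dashboard's exact method is not stated):

```python
# Reproduce the projection figures from the examiner's career data.
# Assumption: grant probability = career allow rate, rounded to a whole
# percent, and the interview figure simply adds the +9.3 point lift.
granted, resolved = 284, 395   # career grants / resolved cases
interview_lift = 9.3           # percentage-point lift with interview

base_rate = 100 * granted / resolved        # 71.9%, displayed as 72%
with_interview = base_rate + interview_lift # 81.2%, displayed as 81%

print(f"Grant probability: {base_rate:.0f}%")
print(f"With interview:    {with_interview:.0f}%")
```

This matches the 72% and 81% shown on the card, which suggests the "With Interview" figure is the career allow rate plus the interview lift rather than an independently measured rate.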
