Prosecution Insights
Last updated: April 19, 2026
Application No. 18/686,962

MODEL TRAINING METHOD, VIDEO QUALITY ASSESSMENT METHOD AND APPARATUS, AND DEVICE AND MEDIUM

Non-Final OA: §101, §102
Filed: Feb 27, 2024
Examiner: SETH, MANAV
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: ZTE CORPORATION
OA Round: 1 (Non-Final)

Grant Probability: 91% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability with Interview: 98%

Examiner Intelligence

Career Allow Rate: 91% (above average; 716 granted / 789 resolved; +28.7% vs TC avg)
Interview Lift: +7.8% (moderate), measured across resolved cases with an interview
Avg Prosecution: 2y 11m typical; 13 applications currently pending
Total Applications: 802 across all art units
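The headline figures above follow directly from the raw counts. A small sketch reproduces them; note that the additive interview-lift model is an assumption, since the tool's exact formula is not disclosed:

```python
# Reconstruct the examiner's headline statistics from the raw counts
# shown above. The additive interview-lift model is an assumption;
# the dashboard's exact method is not disclosed.

granted = 716    # career grants
resolved = 789   # resolved cases

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")   # ~90.7%, displayed as 91%

interview_lift = 7.8  # percentage points, per the card above
with_interview = allow_rate + interview_lift
print(f"With interview (additive assumption): {with_interview:.1f}%")
```

Under this simple additive assumption the with-interview figure lands near the 98% shown on the card, though the dashboard may apply rounding or a different adjustment.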

Statute-Specific Performance

§101: 19.5% (-20.5% vs TC avg)
§103: 29.0% (-11.0% vs TC avg)
§102: 21.5% (-18.5% vs TC avg)
§112: 15.0% (-25.0% vs TC avg)

Deltas are measured against an estimated Tech Center average. Based on career data from 789 resolved cases.
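Each delta is stated relative to the Tech Center average, so the implied baseline can be recovered by subtraction. A quick sketch (interpreting each delta as examiner rate minus TC average, which is an assumed reading of the "vs TC avg" figures):

```python
# Recover the implied Tech Center average for each statute from the
# examiner's rate and the stated delta. "rate - tc_avg = delta" is an
# assumed interpretation of the "vs TC avg" figures above.

stats = {           # statute: (examiner rate %, delta vs TC avg %)
    "101": (19.5, -20.5),
    "103": (29.0, -11.0),
    "102": (21.5, -18.5),
    "112": (15.0, -25.0),
}

tc_avg = {s: rate - delta for s, (rate, delta) in stats.items()}
for s, avg in tc_avg.items():
    print(f"§{s}: implied TC average = {avg:.1f}%")
```

Notably, all four statutes imply the same 40.0% baseline, which suggests the dashboard compares against a single Tech Center-wide average rather than per-statute averages.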

Office Action

Grounds of rejection: §101, §102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

1. The information disclosure statements (IDS) submitted on 04/26/2025, 01/24/2025, and 02/27/2024 have been considered by the examiner.

Claim Rejections - 35 USC § 101

2. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

3. Claims 1-4 and 9-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This judicial exception is not integrated into a practical application, and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The following framework is used to evaluate subject matter eligibility: (1) Are the claims directed to a process, machine, manufacture, or composition of matter? (2A) Prong One: Are the claims directed to a judicially recognized exception, i.e., a law of nature, a natural phenomenon, or an abstract idea? Prong Two: If the claims are directed to a judicial exception under Prong One, is that exception integrated into a practical application? (2B) If the claims are directed to a judicial exception and do not integrate it, do the claims provide an inventive concept?

With regard to (1), the answer is "yes": claim 1 recites a process, claim 11 recites a machine, and claim 15 recites a manufacture. With regard to (2A) Prong One, the answer is "yes".
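The examiner's eligibility framework (Step 1, Step 2A Prongs One and Two, Step 2B) is a fixed decision procedure, which can be sketched as a small function. This is an illustrative encoding of the 2019 PEG flow as recited above, not legal advice; the boolean inputs are hypothetical yes/no determinations:

```python
# Illustrative encoding of the 2019 PEG eligibility flow described in
# the Office Action. The yes/no inputs are hypothetical determinations.

def peg_eligibility(statutory_category: bool,
                    recites_judicial_exception: bool,
                    integrated_into_practical_application: bool,
                    provides_inventive_concept: bool) -> str:
    if not statutory_category:                    # Step 1
        return "ineligible: not a process/machine/manufacture/composition"
    if not recites_judicial_exception:            # Step 2A, Prong One
        return "eligible"
    if integrated_into_practical_application:     # Step 2A, Prong Two
        return "eligible"
    if provides_inventive_concept:                # Step 2B
        return "eligible"
    return "ineligible: abstract idea without significantly more"

# The Office Action's findings for claim 1: yes, yes, no, no
print(peg_eligibility(True, True, False, False))
```

With the examiner's four findings plugged in, the function reaches the same endpoint as the rejection: abstract idea without significantly more.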
Claim 1 recites "determining a mean opinion score (MOS) of each piece of training video data; and training a preset initial video quality assessment model according to the training video data and the MOS of the training video data, until a convergence condition is met, so as to obtain a final video quality assessment model." Under the broadest reasonable interpretation, this limitation recites an abstract idea directed to "mathematical algorithms," requiring a series of mathematical calculations (e.g., determining a mean opinion score and training a model). The step of determining a mean opinion score (MOS) can also be performed mentally, by a human assigning scores to the videos. The steps of determining a score (MOS) and training a model until convergence are standard, conventional, and routine steps for creating machine learning models. Simply applying a general machine learning algorithm to a specific field (video quality) does not remove the abstract nature of the training process, and the recitation of modules/steps in the system/device claims amounts to mere use of generic computer components. See MPEP 2106.04 and the 2019 PEG.

With regard to (2A) Prong Two, the answer is "no." Claim 1 recites the additional element of "acquiring training video data, wherein the training video data comprises reference video data and distorted video data"; this additional element represents mere data gathering that is necessary for use of the recited abstract idea. The limitation is therefore insignificant extra-solution activity, a generic operation. See MPEP 2106.05(g). The claim as a whole, looking at the additional elements individually and in combination, does not integrate the abstract idea into a practical application.
To be patent-eligible, the method must provide a non-conventional, specific, and practical application that solves a technical problem, such as significantly reducing computing resource usage or improving a specific hardware component. The description "training a preset initial video quality assessment model" indicates that the training process itself is standard.

With regard to (2B): the pending claims do not show more than what is routine in the art; i.e., the additional elements are nothing more than routine and well-known steps. The additional elements do not reflect an improvement to a technology or technical field, including the use of a particular machine or a particular transformation. The claim focuses on the result (a final video quality assessment model) rather than a specific improvement to the computing process.

Claims 2-4, 9-10, 12-14, and 16-20 are rejected for the same reasons as claims 1, 11, and 15. These dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. For example, claim 2 again recites data gathering of video sets and adjusting parameters of the model using mathematical calculations. All other claims are rejected for the same reasons, not repeated here.

Claim Rejections - 35 USC § 102

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

5. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

6. Claims 1, 4-5, and 10-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wang et al., Chinese Patent Publication CN 110751649 A (published February 04, 2020; citations are to a machine-generated translation) (as also cited by applicant).

Regarding claim 1, Wang discloses "A model training method for video quality assessment, comprising: acquiring training video data, wherein the training video data comprises reference video data and distorted video data" (step S101 – the reference video, which is undistorted, and the to-be-processed video, which is a distorted video; step S102 – a residual video is computed from the reference video and the to-be-processed video; page 10, last few paragraphs – neural network training); "determining a mean opinion score (MOS) of each piece of training video data" (page 11, equation 3 – cites S in the computation of the training loss, which represents the MOS); and "training a preset initial video quality assessment model according to the training video data and the MOS of the training video data, until a convergence condition is met, so as to obtain a final video quality assessment model" (page 11 of the translation – training the initial neural network on each training sample until the loss function converges: obtaining training samples, each comprising sample videos labeled with a sample label characterizing the quality of the sample videos, e.g., the average subjective score MOS; training an initial neural network model on each training sample until the loss function corresponding to the initial neural network model converges; and using the neural network model at the end of training as the video quality assessment model).

Regarding claim 4, Wang discloses "the method according to claim 1, wherein the initial video assessment model comprises a three-dimensional convolutional neural network for extracting motion information of image frames" (Figure 3 – Conv3D blocks; page 10, last few paragraphs – the video quality estimation model can be a two-dimensional, three-dimensional, or 2.5-dimensional neural network model; page 8, first paragraph – space-time characteristics show the change information of each video frame over time; and last paragraph – based on the associated video frames of the video frame to be processed, extracting the corresponding space-time characteristics, which characterize the image change information of the video frames over time; change in frames due to space-time characteristic variations implies motion).
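The claimed training step ("training ... until a convergence condition is met") is, at its core, an iterate-until-convergence loop over (video features, MOS) pairs. Below is a minimal stdlib-only sketch in which a one-parameter linear model stands in for the neural networks discussed in the claims and in Wang; the feature values, labels, and learning rate are invented for illustration:

```python
# Minimal sketch of "train until a convergence condition is met":
# fit a single weight w so that w * feature approximates the MOS label.
# The data, model, and learning rate are illustrative stand-ins for the
# neural-network training described in the claims and in Wang.

data = [(1.0, 4.2), (2.0, 8.1), (3.0, 12.3)]  # (feature, MOS) pairs, invented

w, lr, prev_loss = 0.0, 0.01, float("inf")
while True:
    # mean squared error between predicted and labeled MOS
    loss = sum((w * x - mos) ** 2 for x, mos in data) / len(data)
    if abs(prev_loss - loss) < 1e-9:   # convergence condition
        break
    # gradient of the loss with respect to w
    grad = sum(2 * (w * x - mos) * x for x, mos in data) / len(data)
    w -= lr * grad
    prev_loss = loss

print(f"converged weight: {w:.3f}")
```

The loop terminates when the loss stops changing, mirroring the "until a convergence condition is met" limitation; a real model would update many weights via backpropagation rather than one scalar.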
Regarding claim 5, Wang discloses "The method according to claim 4, wherein the initial video quality assessment model further comprises an attention model, a data fusion processing module, a global pooling module, and a fully-connected layer, wherein the attention model, the data fusion processing module, the three-dimensional convolutional neural network, the global pooling module, and the fully-connected layer are cascaded in sequence" (attention model – the residual video is regarded as a basic form of attention by indicating important areas of distortion, Figure 3, top right; data fusion processing module – fusion of input features after the Conv2D blocks in Figure 3; three-dimensional convolutional neural network – Conv3D blocks in Figure 3; global pooling module – the global pooling block after multiplication with residual frames in Figure 3; fully-connected layer – the final two blocks after global pooling in Figure 3; all of the above are cascaded in sequence).

Regarding claims 10-14, each claim has been analyzed and rejected per the citations made in the rejection of claim 1 with respect to a neural network for video quality assessment (as cited, the model is a video quality assessment model and that is what it is used for; see also page 3 – "In one possible implementation, determining the quality evaluation result of the video to be processed by the video quality estimation model."; and page 5, last paragraph, for the computer; a processor and memory are inherent components of a computer).
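Claim 5's architecture is a fixed cascade: attention model, then data fusion, then the 3-D CNN, then global pooling, then a fully-connected layer. A shape-free structural sketch, with plain Python callables standing in for the real layers (the stage names mirror the claim; the internals are placeholders, not an implementation of Wang's network):

```python
# Structural sketch of the cascade recited in claim 5. Each stage is a
# placeholder callable; real implementations would be neural-network
# modules (e.g., Conv3D blocks for the 3-D CNN, per Wang's Figure 3).

def attention(x):        return {"stage": "attention", "input": x}
def data_fusion(x):      return {"stage": "fusion", "input": x}
def conv3d_network(x):   return {"stage": "conv3d", "input": x}
def global_pooling(x):   return {"stage": "pool", "input": x}
def fully_connected(x):  return {"stage": "fc", "input": x}

# "cascaded in sequence" per the claim language
CASCADE = [attention, data_fusion, conv3d_network, global_pooling, fully_connected]

def assess(video):
    """Feed the input through every stage of the cascade, in order."""
    out = video
    for stage in CASCADE:
        out = stage(out)
    return out

result = assess("distorted_video")
print(result["stage"])  # the outermost (last-applied) stage is the FC layer
```

The nested dict that comes out records the application order, which is how the examiner maps "cascaded in sequence" onto the left-to-right block order of Wang's Figure 3.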
Regarding claims 15 and 16, each claim has been analyzed and rejected per the citations made in the rejection of claim 1 with respect to a neural network for video quality assessment (as cited, the model is a video quality assessment model and that is what it is used for; see also page 3 – "In one possible implementation, determining the quality evaluation result of the video to be processed by the video quality estimation model."; and page 5, last paragraph, for the computer; a processor and memory are inherent components of a computer).

7. Claims 6-8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The prior art of record does not teach the subject matter recited in claims 2-3, 6-9, and 17-20.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Manav Seth, whose telephone number is (571) 272-7456. The examiner can normally be reached Monday to Friday, 8:30 am to 5:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Manav Seth/
Primary Examiner, Art Unit 2672
February 26, 2026

Prosecution Timeline

Feb 27, 2024
Application Filed
Feb 26, 2026
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597243
INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12579633
PERIODIC-PATTERN BACKGROUND REMOVAL
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12567269
METHOD OF TRAINING IMAGE CAPTIONING MODEL AND COMPUTER-READABLE RECORDING MEDIUM
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12561969
Object Re-Identification Apparatus and Method Thereof
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12555368
Method for Temporal Correction of Multimodal Data
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 91%
With Interview: 98% (+7.8%)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 789 resolved cases by this examiner. Grant probability is derived from the career allow rate.
