DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Preliminary Amendment
Claims 1-20 are pending.
Claims 4, 6, 7, 8, 10, and 11 are currently amended.
Claims 12-20 are new.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 03/16/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) submitted on 01/19/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) submitted on 03/24/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) submitted on 06/30/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “acquiring module” configured to classify each video in a video set; and an “evaluation module” configured to input videos of different categories into different preset models, and acquire quality evaluation results of the videos by using the preset models in claim 9.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 2, 9, 10, and 11 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Li et al. (US 2018/0167619 A1, hereinafter "Li").
As per claim 1
Li teaches a video quality evaluation method (Paragraph [0008]: "present invention sets forth a computer-implemented method for predicting absolute video quality."), comprising classifying each video in a video set (Paragraph [0037]: "upon receiving any number of sources, the subjective metric subsystem 140 assigns each of the sources to one of the assigned source buckets based on the spatial resolution of the source." Videos are categorized/classified into buckets based on their spatial resolution), inputting videos of different categories into different preset models (Paragraph [0036]: "The metric engine 142(1) is associated with the assigned spatial resolution of 480p, the metric engine 142(2) is associated with the assigned spatial resolution of 720p, and the metric engine 142(3) is associated with the assigned spatial resolution of 1080p." The "sources" are the videos, and they are input into the metric engines based on their corresponding spatial resolutions. The data is then passed to the source models, as shown in paragraph [0068]: "The source model 155(1) is associated with the spatial resolution of 480p. The source model 155(2) is associated with the spatial resolution of 720p. The source model 155(3) is associated with the spatial resolution of 1080p."), and acquiring quality evaluation results of the videos by using the preset models (Paragraph [0039]: "The subjective source bucket 125(i) includes an absolute quality score 195 for each encode generated by the metric engine 142(i).").
As per claim 2
Li covers all claim limitations addressed in claim 1's 102 rejection; see claim 1's 102 rejection.
Li teaches that the videos comprise videos of a first category (Figure 1: the first category is held within metric engine 142(1); that first category is sources with 480p resolution) and that the preset models comprise a measurement mapping evaluation model (Figure 1's source models 155(1), 155(2), and 155(3) (the presets) are built from the information from the subjective metric subsystem 142(i), with the measurement being mapped being resolution quality; this is described in paragraphs [0036]-[0042]); before the inputting videos of different categories into different preset models, the method further comprises: acquiring transmission characteristic data of the videos on a video link (Figure 4, label 402); and the inputting videos of different categories into different preset models, and acquiring quality evaluation results of the videos by using the preset models, comprises: inputting transmission characteristic data of the videos of the first category into the measurement mapping evaluation model (in Figure 2, metric engine 142(2) takes video characteristic data, which is the spatial resolution), acquiring a first score of the videos of the first category by using the measurement mapping evaluation model (in Figure 1, an absolute quality score (label 195) is output downstream after the metric engine is used), and outputting the first score after being evaluated by the measurement mapping evaluation model according to the transmission characteristic data (Paragraph [0040]: "The subjective metric subsystem 140 also generates a base subjective bucket 142 that includes the absolute quality scores 195 derived from values for the absolute perceptual quality." That perceptual quality is the transmission characteristic of spatial resolution.).
As per claim 9
Claim 9 is the parallel apparatus claim of claim 1 and will be rejected under the same premise. See claim 1's 102 rejection.
Li states in paragraph [0122] that "Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure." The implementation represents a routine and conventional design choice for carrying out known method steps using processors or system components.
As per claim 10
Li covers the claim limitations previously addressed in claim 1's 102 rejection; see claim 1's 102 rejection. Claim 10 is simply the parallel electronic device claim: an electronic device including a processor and memory storing instructions that, when executed, cause the processor to perform the video quality evaluation method of claim 1.
Li teaches an electronic device, comprising: at least one processor; and a memory in communication connection with the at least one processor, wherein the memory stores an instruction able to be executed by the at least one processor, and the instruction is executed by the at least one processor to enable the at least one processor to implement the video quality evaluation method (Figure 1).
As per claim 11
Li covers the claim limitations previously addressed in claim 1's 102 rejection; see claim 1's 102 rejection. Claim 11 is simply the non-transitory computer-readable storage medium counterpart of claim 1 that implements the video quality evaluation method described in claim 1.
Li teaches a non-transitory computer readable storage medium, storing a computer program, the computer program, when executed by a processor, implementing the video quality evaluation method ("non-transitory computer-readable storage medium including instructions that, when executed by a processor, cause the processor to perform the steps…").
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 9, 10, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Ma et al. (US 2021/0174152 A1, hereinafter "Ma") in view of Cai et al. (CN 112634268 A, "Video Quality Evaluation Method, Device And Electronic Device," hereinafter "Cai").
As per claim 1
Ma teaches classifying each video in a video set (Paragraph [0054]: "As shown in FIG. 1, the video classification system may include a server 10 and one or more terminal devices 20. The server 10 obtains videos that need to be classified." Paragraph [0055]: "According to the prediction result 013, a type of the to-be-processed video can be determined, and the to-be-processed video can therefore be classified." Paragraph [0198]: "obtaining a classification prediction result corresponding to the target signal feature sequence by using the video classification prediction model, the classification prediction result being used for predicting a video type of the to-be-processed video.").
Ma is not solely relied upon for inputting videos of different categories into different preset models, and acquiring quality evaluation results of the videos by using the preset models.
Cai teaches acquiring quality evaluation results of the videos by using the preset models (Abstract: "inputting the target video into the quality evaluation model trained in advance, to obtain the output result output by the quality evaluation model"; specific implementation examples: "obtaining the video quality level corresponding to the target video according to the output result").
Cai alone does not teach inputting videos of different categories into different preset models.
The combined teaching of Ma and Cai teaches inputting videos of different categories into different preset models. Ma teaches classification by category/type (Paragraph [0012]) and shows category outputs (Paragraph [0072]: "It can be seen from the result of the formula that, the probability of the to-be-processed video belonging to the third category is the highest, followed by the probability of belonging to the first category. Therefore, the to-be-processed video can be displayed in a video list of the third category first."). Cai then showcases inputting a video into a trained quality evaluation model ("inputting the target video into the quality evaluation model trained in advance, to obtain the output result output by the quality evaluation model"). A person of ordinary skill in the art would have found it obvious to use different pre-trained quality evaluation models for different classified video categories so that category-specific content is evaluated by a model suited to that category. This is an obvious integration of the two teachings provided by the references.
Accordingly, a person of ordinary skill in the art at the time this invention was effectively filed would have found it obvious to modify the video processing method of Ma (which teaches obtaining classification prediction results used for predicting a video type of the to-be-processed video) to further include performing a video quality evaluation using a pre-trained model as taught by Cai (which discloses inputting the target video into a pre-trained quality evaluation model to obtain an output result, as well as obtaining the video quality grade corresponding to the target video from that output result). A person of ordinary skill in the art would see that Ma already determines a category/type for the video and Cai already evaluates video quality by model inference. Using the category output to select a corresponding model would have been a predictable and technically reasonable way to improve model appropriateness across differing video content. The video quality evaluation process would be more reliable and better suited to the content characteristics of the classified video, improving the modified system's overall quality, while still preserving Ma's classification functionality and Cai's model-based quality scoring functionality.
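The category-to-model routing underlying this combination can be sketched as follows. This is a hypothetical illustration only; the category labels, scoring formulas, and function names are placeholders, not taken from Ma or Cai.

```python
# Sketch of the proposed Ma/Cai combination: a video is first classified by
# category (Ma's classification step), and the resulting category label then
# selects one of several pre-trained quality evaluation models (Cai's
# model-based evaluation). All names and numbers are illustrative.

def classify_video(video: dict) -> str:
    """Stand-in for Ma's classification step: returns a category label.
    For this sketch the label is assumed to be precomputed."""
    return video["category"]

def make_evaluator(base_score: float):
    """Stand-in for one of Cai's pre-trained quality evaluation models.
    A real model would run inference on the video content."""
    def evaluate(video: dict) -> float:
        return base_score + 0.1 * video.get("bitrate_mbps", 0)
    return evaluate

# One preset model per category, as in the proposed combination.
PRESET_MODELS = {
    "sports": make_evaluator(3.0),
    "animation": make_evaluator(4.0),
}

def evaluate_by_category(video: dict) -> float:
    """Route the video to the preset model matching its category."""
    category = classify_video(video)
    model = PRESET_MODELS[category]  # category output selects the model
    return model(video)
```

The point of the routing step is that the category output of the classifier is the key that selects the evaluation model, which is the integration the rejection attributes to the combined references.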
As per claim 9
Claim 9 is the parallel apparatus claim of claim 1 and will be rejected under the same premise.
The combination and motivation of claim 1 render it obvious to perform the claimed functions within an apparatus. The implementation represents a routine and conventional design choice for carrying out known method steps using processors or system components.
As per claim 10
Ma and Cai cover the claim limitations previously rejected in claim 1’s 103 rejection. See claim 1’s 103 rejection.
Claim 10 is simply the electronic device including a processor and memory storing instructions that, when executed, cause the processor to perform the video quality evaluation method of claim 1. The modified/combined system of Ma and Cai described in claim 1 supports an "electronic device, comprising: a memory for storing the application program and the data generated by the operation of the application program; a processor, for executing the application program" and "processor 702, for executing the application program" as stated by Cai. Figure 7 of Cai further supports this.
Accordingly, it would have been obvious at the time this invention was effectively filed to implement claim 1's method in the form of an electronic device having a processor and memory storing executable instructions. Such processor and memory implementations are an ordinary and customary way of carrying out the disclosed video processing and quality evaluation operations.
As per claim 11
Ma and Cai cover the claim limitations previously rejected in claim 1’s 103 rejection. See claim 1’s 103 rejection.
Claim 11 is simply the non-transitory computer-readable storage medium counterpart of claim 1 that implements the video quality evaluation method described in claim 1. As described in claim 1's 103 rejection, Ma teaches the classification portion of the method and Cai teaches the model-based video quality evaluation portion. Cai also shows the standard processor/memory implementation of that functionality, including "a memory for storing an application program and data generated by the operation of the application program" and "a processor for executing the application program" to carry out the video quality evaluation operations.
Accordingly, it would have been obvious to one of ordinary skill in the art to embody the method rendered obvious by claim 1 in the form of a computer readable storage medium storing program instructions executable by a processor. Doing so is simply the conventional program product implementation of the same method steps.
Claims 2, 7, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Ma et al. (US 2021/0174152 A1, hereinafter "Ma") in view of Cai et al. (CN 112634268 A, "Video Quality Evaluation Method, Device And Electronic Device," hereinafter "Cai"), in further view of Garcia De Blas et al. (US 2014/0337871 A1, hereinafter "Garcia De Blas").
As per claim 2
Ma and Cai cover the claim limitations previously rejected in claim 1’s 103 rejection. See claim 1’s 103 rejection.
Ma's classification by type can be considered a video's first category: "classification prediction results being used for predicting a video type of the to-be-processed video" and "a type of the to-be-processed video can therefore be classified" affirm this basis for the claimed "first category."
Garcia De Blas teaches the preset models comprise a measurement mapping evaluation model (Paragraph [0024]: "calculating a Key Performance Indicator, or KPI, from measurable network parameters of said network for each video provided by said video service, assigning a Key Quality Indicator, or KQI, to each KPI by means of analytical models and calculating a global KQI function of a set of KQIs." This maps to the claimed "measurement mapping evaluation model" because Garcia De Blas shows analytical models mapping measured network parameters (transmissions) to quality indicators, i.e., a model that maps transmission-side measurements into a quality evaluation output.); before the inputting videos of different categories into different preset models, acquiring transmission characteristic data of the videos on a video link (Paragraph [0024]: "calculating a Key Performance Indicator, or KPI, from measurable network parameters of said network for each video provided by said video service." Garcia De Blas further says: "The basic concept of the invention relies in the use of a method that collects network measurements in just one point…From these measurements, the method follows an analytical model in order to calculate some Key Performance Indicators (KPIs) required to obtain the user's QoE." The acquired transmission characteristic data in Garcia De Blas is the measurable network parameters and collected network measurements/KPIs, acquired for each video provided by the video service over a network. Among Garcia De Blas's KPIs, "the KPI that has been chosen to reflect the smooth video playback is the interruption rate (number of interruptions or breaks during the video playback time)"; interruption rates are video characteristics.); inputting transmission characteristic data of the videos of the first category into the measurement mapping evaluation model (Paragraph [0024]: "assigning a Key Quality Indicator, or KQI, to each KPI by means of analytical models." Garcia De Blas takes the transmission-derived KPI values and processes them through an analytical model to produce the quality-side KQI. In essence, the transmission characteristic data is the KPI from measurable network parameters, and the measurement mapping evaluation model is the analytical model that assigns a KQI to each KPI.); and acquiring a first score of the videos of the first category by using the measurement mapping evaluation model, and outputting the first score after being evaluated by the measurement mapping evaluation model according to the transmission characteristic data (Paragraph [0024]: "assigning a Key Quality Indicator, or KQI, to each KPI by means of analytical models and calculating a global KQI function of a set of KQIs.").
Accordingly, a person of ordinary skill in the art at the time this invention was effectively filed would have found it obvious to modify the video classification method of the Ma/Cai system to further acquire transmission-related quality data and evaluate the classified videos using a measurement mapping model as taught by Garcia De Blas, which showcases calculating a KPI from measurable network parameters for each video provided by the video service, assigning a KQI to each KPI by means of analytical models, and calculating a global KQI function of a set of KQIs. A person of ordinary skill in the art would understand that the classification output of the modified Ma/Cai system provides organized category information for the input videos, while Garcia De Blas teaches that measurable transmission/network parameters for each video can be processed through analytical mapping models to generate a KQI. Since both references operate on video-specific inputs and both use model-based processing to derive high-level outputs, a person of ordinary skill in the art would have found it technically sound to incorporate the KPI-to-KQI evaluation into the classification pipeline of the Ma/Cai system so that videos identified as belonging to a given category could be evaluated using their characteristic data and the corresponding measurement mapping evaluation. This predictably allows the system not just to classify a video by its type but also to create a category-correlated quality score from transmission characteristics, improving the practicality of the system in environments where service quality depends on transmission behavior and content type. Once videos are classified into their categories, using the per-video transmission measurements to create a mapped quality result is a compatible downstream processing step. The resulting Ma/Cai/Garcia De Blas system enables classified videos to be evaluated using transmission-characteristic quality scoring.
This lets the modified system produce a score for each category of videos using measurable link parameters mapped through an analytical model, heightening efficiency and lowering processing by providing a ranking of results.
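The KPI-to-KQI mapping pattern attributed to Garcia De Blas above can be sketched as follows. The specific formulas, scales, and function names are assumptions for illustration only; the reference discloses the pattern (measurable parameters → KPI → analytical model → KQI → global KQI), not these numbers.

```python
# Illustrative sketch of the KPI-to-KQI mapping: measurable link parameters
# yield a KPI, an analytical model maps each KPI to a KQI, and the per-metric
# KQIs combine into a global score. All formulas here are hypothetical.

def interruption_rate_kpi(interruptions: int, playback_seconds: float) -> float:
    """KPI from measurable parameters: interruptions per second of playback
    (the interruption-rate KPI named in the reference)."""
    return interruptions / playback_seconds

def kqi_from_kpi(kpi: float) -> float:
    """Hypothetical analytical mapping: fewer interruptions map to a higher
    quality indicator on an assumed 0-5 scale."""
    return max(0.0, 5.0 - 50.0 * kpi)

def global_kqi(kqis: list) -> float:
    """Global KQI computed here as a simple average of per-metric KQIs
    (the reference only says a global KQI is a function of a set of KQIs)."""
    return sum(kqis) / len(kqis)
```

For example, 2 interruptions over 100 seconds of playback give a KPI of 0.02, which the assumed analytical model maps to a KQI of 4.0.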
As per claim 7
Ma and Cai cover all claim limitations previously rejected in claim 1's 103 rejection. See claim 1's 103 rejection.
Ma teaches video classification (Paragraph [0012]: "obtaining a classification prediction result corresponding to the target signal feature sequence, the classification prediction result being used for predicting a video type of the to-be-processed video." and Paragraph [0055]: "According to the prediction result 013, a type of the to-be-processed video can be determined, and the to-be-processed video can therefore be classified.") according to video length (Paragraph [0061]: "In this embodiment, for a to-be-processed video with a length of T seconds, the to-be-processed video can be inputted to a video classification prediction model…").
Garcia De Blas teaches classifying each video in a video set by network environment parameters (Paragraph [0024]: "calculating a Key Performance Indicator, or KPI, from measurable network parameters of said network for each video provided by said video service" and "assigning a Key Quality Indicator, or KQI, to each KPI by means of analytical models." This shows per-video processing of measurable network parameters, which corresponds to the claimed network environment parameters.).
Accordingly, a person of ordinary skill in the art at the time this invention was effectively filed would have found it obvious to modify Ma's video classification and processing method, which teaches acquisition of a "classification prediction result… used for predicting a video type of the to-be-processed video" such that the to-be-processed video can be classified, to further classify each video according to at least one additional parameter, including network environment parameters, as taught by Garcia De Blas. A person of ordinary skill in the art would have recognized that video classification used in a processing pipeline benefits from incorporating parameters that affect downstream handling and evaluation of the video. In particular, Ma provides a framework for classifying videos based on extracted features and contextual information, including video length characteristics, as part of the input to the classification model, while Garcia De Blas teaches that measurable network parameters associated with each video are meaningful indicators of how the video is delivered and experienced and are suitable inputs to an analytical model. A person of ordinary skill in the art would find it reasonable to incorporate network environment parameters and video length into the classification process so that videos are classified according to at least one parameter relevant to their context. Within the modified Ma/Cai/Garcia De Blas model, incorporating classification based on network environment parameters and/or video length would have improved the suitability of the classified video output for downstream processing in the model-based evaluation, since the classification would better reflect conditions relevant to the eventual quality evaluation. The Ma/Cai/Garcia De Blas system provides the predictable benefit of enabling classification of videos based on parameters relevant to both video characteristics and delivery conditions.
This gives the system increased efficiency and usefulness within the classification step.
As per claim 17
The Ma/Cai/Garcia De Blas system covers all claim limitations previously rejected in claim 2’s 103 rejection. See claim 2’s 103 rejection.
Claim 17 recites the same limitations as claim 7 and will be rejected under the same premise with the same motivation.
Claims 3 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Ma et al. (US 2021/0174152 A1, hereinafter "Ma") in view of Cai et al. (CN 112634268 A, "Video Quality Evaluation Method, Device And Electronic Device," hereinafter "Cai"), in further view of Garcia De Blas et al. (US 2014/0337871 A1, hereinafter "Garcia De Blas"), in further view of Wang et al. (US 2014/0256310 A1, hereinafter "Wang").
As per claim 3
The Ma/Cai/Garcia De Blas system covers all claim limitations previously rejected in claim 2’s 103 rejection. See claim 2’s 103 rejection.
Wang teaches backwards deducing and locating abnormal transmission characteristic data of a signal according to the measurement mapping evaluation model in a case that the first score is less than a first expected score (Paragraph [0007]: "according to a parsing result, whether a call of a subscriber is a key quality indicator KQI exception event" and Paragraph [0096]: "a locating unit, configured to locate a location and a cause of the KQI exception event." Identifying the KQI exception event is seen as the claimed "backwards deducing"; locating the location and cause of the exception event corresponds to "locating abnormal transmission characteristic data." Wang further says in Paragraph [0023]: "By monitoring the KQIs obtained through mapping, when the KQI indicators decrease to thresholds, the KQI indicators are mapped as KPIs for problem analysis." This shows a score being evaluated against a threshold, which corresponds to the claimed "first score is less than a first expected score.").
Wang is not relied upon for outputting video quality warning information according to the first score in a case that the first score is less than an expected score.
Garcia De Blas teaches the video link according to the measurement mapping evaluation model (Paragraph [0024]: "…calculating a Key Performance Indicator, or KPI, from measurable network parameters of said network for each video…" and "assigning a Key Quality Indicator, or KQI, to each KPI by means of analytical models." Claim 3's "transmission characteristics" correspond to Garcia De Blas's KPI, which is a measurable network parameter; the measurement mapping evaluation model is Garcia De Blas's analytical model mapping, which correlates the KPI to a KQI.).
The Ma/Cai/Garcia De Blas system teaches deriving a first score (KQI) from transmission network data (KPI) via an analytical mapping model. Integrating Wang into this system then shows that once a quality condition is abnormal (a KQI exception event), the system locates the cause of the problem, essentially tracing back to the underlying transmission issues.
Accordingly, it would have been obvious to one of ordinary skill in the art at the time this invention was effectively filed to modify the Ma/Cai/Garcia De Blas system with Wang’s concepts so that, when the first score produced from transmission characteristic data by the Garcia De Blas mapping model indicates an abnormal condition, the system performs Wang’s location and cause analysis of the exception event, thereby arriving at the backwards deducing and locating aspects of claim 3. This modification would allow the overall system not only to classify videos and evaluate their quality but also, when the quality result based on the transmission characteristics dips, to trace the abnormality back to the underlying transmission-side cause and location so that the issue can be analyzed and possibly remedied. This leads to user quality enhancement downstream as well as overall efficiency.
As per claim 18
The Ma/Cai/Garcia De Blas/Wang modified system covers all claim limitations addressed in claim 3’s 103 rejection. See claim 3’s 103 rejection.
Ma teaches video classification (Paragraph [0012] “obtaining a classification prediction result corresponding to the target signal feature sequence, the classification prediction result being used for predicting a video type of the to-be-processed video.” and paragraph [0055] “According to the prediction result 013, a type of the to-be-processed video can be determined, and the to-be-processed video can therefore be classified.”) according to video length (Paragraph [0061] “In this embodiment, for a to-be-processed video with a length of T seconds, the to-be-processed video can be inputted to a video classification prediction model…”)
Garcia De Blas teaches classifying each video in a video service by network environment parameters (Paragraph [0024] “calculating a Key Performance Indicator, or KPI, from measurable network parameters of said network for each video provided by said video service” and “assigning a Key Quality Indicator, or KQI, to each KPI by means of analytical models”. This shows measurable network parameters being processed per video, which corresponds to the claimed network environment parameters.)
Accordingly, it would have been obvious to a person of ordinary skill in the art at the time this invention was effectively filed to have further used concepts within the Ma/Cai/Garcia De Blas/Wang system and arrived at the claimed limitations of claim 18. Ma supplies the classification framework for organizing videos; Cai then supplies the preset-model quality evaluation pipeline into which those classified videos are processed. Garcia De Blas teaches that network-environment-related measurements for each video can be transformed into a quality indicator by analytical mapping, and Wang teaches using an abnormal quality condition as the basis for tracing back to the underlying cause. In the overall combined system, the classification aspects of claim 18 further inform the broader video quality evaluation workflow, while Garcia De Blas and Wang continue to supply the quality-score-to-abnormal-cause functionality. A person of ordinary skill in the art would recognize that classifying the videos in Ma’s portion using criteria relevant to later processing would improve how those videos are handled within Cai’s preset-model evaluation architecture, and even more so given that Garcia De Blas shows that network-related parameters are meaningful to quality assessment. Wang then shows that abnormal quality outcomes can be traced back to their root. This gives the advantage of providing multiple avenues for how quality itself is classified and filtered, so that differing abnormal attributes can be assessed. It makes the system more precise as to exactly why a video could be giving a user a lower-quality experience and potentially improves the next step of rectifying that issue.
Allowable Subject Matter
Claims 4, 5, 6, 8, 12, 13, 14, 15, 16, 19, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHANE WRENSFORD CODRINGTON whose telephone number is (571)272-8130. The examiner can normally be reached 8:00am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHANE WRENSFORD CODRINGTON/ Examiner, Art Unit 2667
/TOM Y LU/ Primary Examiner, Art Unit 2667