Prosecution Insights
Last updated: April 19, 2026
Application No. 17/964,305

PROCESSING FOR ENCODING SCREEN CONTENT VIDEO USING BIT ALLOCATION

Status: Final Rejection (§103)
Filed: Oct 12, 2022
Examiner: NAVAS JR, EDEMIO
Art Unit: 2483
Tech Center: 2400 — Computer Networks
Assignee: City University Of Hong Kong
OA Round: 4 (Final)

Predictions:
Grant Probability: 71% (Favorable)
Expected OA Rounds: 5-6
Expected Time to Grant: 2y 9m
Grant Probability With Interview: 96%

Examiner Intelligence

Career Allow Rate: 71% (above average): 384 granted / 540 resolved, +13.1% vs Tech Center average
Interview Lift: +24.7% (strong), comparing resolved cases with an interview against those without
Typical Timeline: 2y 9m average prosecution; 31 applications currently pending
Career History: 571 total applications across all art units

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 23.5% (-16.5% vs TC avg)
§112: 8.2% (-31.8% vs TC avg)

Based on career data from 540 resolved cases; the comparison baseline is a Tech Center average estimate.
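The headline figures above follow directly from the raw counts; a quick sanity check of the arithmetic (assuming the dashboard rounds the allow rate to the nearest percent):

```python
# Recompute the career allow rate from the raw counts quoted above.
granted, resolved = 384, 540
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 71.1%, shown on the dashboard as 71%

# The "+13.1% vs TC avg" comparison implies a Tech Center baseline of about 58%.
tc_avg = allow_rate - 0.131
print(f"{tc_avg:.1%}")  # 58.0%
```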

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claims 1-5, 7-21, 23 and 24 have been considered but are moot in view of the new ground(s) of rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 19, 20 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Vanam et al. ("Vanam") (U.S. Patent No. 11,778,224) in view of He et al. ("He") ("Adaptive Quantization Parameter Selection For H.265/HEVC by Employing Inter-Frame Dependency," IEEE Transactions on Circuits and Systems for Video Technology, Vol. 28, No. 12, December 2018) and Sharman et al. ("Sharman") (U.S. PG Publication No. 2022/0038690).

In regards to claim 1, Vanam teaches a method for processing a screen content video (See col. 1, li. 31-51), the screen content video comprising a plurality of frames each including a plurality of coding tree units (CTUs) and a plurality of coding units in each of the coding tree units (See col. 5, li. 7-19), the method comprising: performing a coding-tree-unit-based analysis operation on the screen content video to determine content information associated with the screen content video, the content information including content complexity information associated with the screen content video (See col. 3, li. 63 – col. 4, li. 17) and temporal importance information associated with the screen content video (See col. 3, li. 63 – col. 4, li. 17, wherein temporal importance information may be taught as motion vectors); and performing a rate control operation on the screen content video (See col. 3, li. 63 – col. 4, li. 17, wherein a rate control logic would determine appropriate quantization parameters or frame types [and associated intra- and inter-dependent properties such as motion vectors], as well as an analysis of frame complexity, in the overall encoding process).

Vanam, however, fails to teach modelling a rate-distortion relationship of the screen content video by incorporating the content complexity information into a rate and distortion model; wherein the rate control operation includes steps of performing bits allocation using a cost function incorporating the temporal importance information and the modelled rate-distortion relationship, and deriving coding parameters according to allocated bits resulting from the bits allocation step.

In a similar endeavor, He and Sharman teach modelling a rate-distortion relationship of the screen content video by incorporating the content complexity information into a rate and distortion model (the examiner notes that, in video compression, content complexity refers to how much detail and motion a portion of a video contains, which affects how it can be compressed, since more complex areas naturally require more bits; thus, as taught by He in Section I, Introduction, complexity [including spatial energy ratio, temporal motion activity, motion estimation/quantization/entropy coding, content textures, and inter-layer dependencies] must be taken into consideration for optimal coding performance in RDO (Rate-Distortion Optimization) schemes, and is thus already incorporated into RDO models as taught by He); wherein the rate control operation includes steps of performing bits allocation using a cost function incorporating the temporal importance information and the modelled rate-distortion relationship (See Formula 2 of He, which introduces the total rate-distortion cost function, with Section C, Inter-Frame Dependencies for the RA Coding Structure, of He further fleshing out cost function computations; this is given with the purpose of minimizing the total coding distortion at a given bit budget [bit allocation determination] and with motion consideration, as taught in Section I, Introduction, of He), wherein the temporal importance information represents a measure of distortion impact of a coding unit, a coding tree unit, or a frame on a total distortion of a group of pictures, derived recursively through propagation of distortion based on inter-frame prediction (See Section C of He, which provides formulas in which the distortion of individual frames impacts a GOP structure, including formulas derived recursively in inter-frame prediction) and intra block copy prediction (as described in Section C of He, an I-frame [intra-frame] is dependent only upon itself [which is why it is the first frame in a GOP], and thus its coding distortion depends only on itself; this is taken in view of Sharman's teachings in ¶0095 of an intra-block-copy frame prediction, which is meant to represent an I-frame that is intra-predicted, i.e., dependent upon itself in order to prevent erroneous data from propagating into subsequent images as taught by Sharman, and thus may naturally be incorporated into He's teachings with regards to I-frame considerations for propagation of distortion), and weights distortion in the cost function to influence bit allocation (See Section B, Estimation of the Distortion Dependency μ, of He, wherein prediction weights are taken into consideration in the RA cost functions that influence bit allocation); and deriving coding parameters according to allocated bits resulting from the bits allocation step (See Section I, Introduction, which describes adaptive frame-level QP selection based on inter-frame dependency [including motion], an inter-frame distortion model [including motion], and the energy of prediction residuals [a type of complexity measurement] within the RDO framework [which itself takes bit allocation and determination into account as described above]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teachings of He and Sharman into Vanam because it allows for the reduction, on average, of the BD-rate with negligible increase in encoding time as described in the Abstract of He, and it allows for a periodic refresh through the use of intra-block processing in a GOP, thus preventing additional erroneous data propagation into subsequent images as taught in ¶0095 of Sharman.

In regards to claim 2, Vanam teaches the method of claim 1, wherein the content complexity information associated with the screen content video comprises content complexity measures for each of the coding units (See col. 7, li. 9-32, wherein complexity may be determined for each of the blocks in an image frame); and wherein the temporal importance information comprises temporal importance measures for each of the coding units (See col. 2, li. 11-34).
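The recursive "propagation of distortion" reasoning the examiner maps to He's Section C can be made concrete with a toy sketch. This is an illustration only, not He's actual model: the linear reference chain and the dependency factor `mu` are assumptions introduced here. Each frame's temporal importance counts its own distortion plus the fraction it propagates into later frames of the GOP, accumulated backwards from the last frame.

```python
def temporal_importance(num_frames, mu):
    """Toy recursive distortion propagation along a linear reference chain.

    mu[i] is a hypothetical dependency factor: the fraction of frame i's
    distortion that carries into frame i+1 through inter-frame prediction.
    """
    importance = [1.0] * num_frames  # each frame's own distortion counts once
    # Walk backwards: a frame's importance includes what it propagates forward.
    for i in range(num_frames - 2, -1, -1):
        importance[i] += mu[i] * importance[i + 1]
    return importance

# Earlier frames matter more when later frames depend on them:
print(temporal_importance(3, [0.5, 0.5]))  # [1.75, 1.5, 1.0]
```

In a bit-allocation cost function of the general J = D + λR form, weighting each frame's distortion term by such a measure is what steers bits toward frames whose errors would otherwise propagate.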
In regards to claim 19, the claim is rejected on the same basis as claim 1 by Vanam in view of He and Sharman, wherein the processor and memory are taught as seen in col. 3, li. 34-62.

In regards to claim 20, the claim is rejected on the same basis as claim 1 by Vanam in view of He and Sharman, wherein the computer-readable medium is taught as seen in col. 3, li. 34-62.

In regards to claim 23, Vanam teaches the method of claim 1, comprising a further step of incorporating screen content coding tools in the coding-tree-unit-based analysis operation (See, for example, col. 5, li. 7-34 and col. 7, li. 45 – col. 8, li. 8).

Claims 3, 4 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Vanam in view of He and Sharman, in further view of Rosewarne ("Rose") (U.S. PG Publication No. 2022/0150509).

In regards to claim 3, Vanam fails to teach the method of claim 2, wherein the coding-tree-unit-based analysis operation comprises: processing the screen content video to perform inter prediction, intra prediction, and intra block copy prediction. In a similar endeavor, Rose teaches processing the screen content video to perform inter prediction (See ¶0009 and 0082), intra prediction (See ¶0009 and 0081), and intra block copy prediction (See ¶0159 and 0176). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Rose into Vanam because it allows for a reduction of bandwidth usage through the use of less necessary picture information with regards to image data dependent upon referencing images, as seen in at least ¶0180.

In regards to claim 4, Vanam fails to teach the method of claim 3, wherein the coding-tree-unit-based analysis operation comprises: determining the content complexity measures based on a Hadamard transform of residuals of the intra prediction, the inter prediction, and/or the intra block copy prediction. In a similar endeavor, Rose teaches this limitation (See ¶0072). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Rose into Vanam because it allows for a reduction of bandwidth usage through the use of less necessary picture information with regards to image data dependent upon referencing images, as seen in at least ¶0180.

In regards to claim 24, Vanam fails to teach the method of claim 23, wherein the screen content coding tools are selected from any one of intra block copy (IBC), palette mode, adaptive color transform (ACT), transform skip with residual coding (TSRC), block-based differential pulse-coded modulation (BDPCM), or a combination thereof. In a similar endeavor, Rose teaches this limitation (See ¶0159 and 0176). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Rose into Vanam because it allows for a reduction of bandwidth usage through the use of less necessary picture information with regards to image data dependent upon referencing images, as seen in at least ¶0180.
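The Hadamard-based complexity measure recited in claim 4 (and given as a formula in claim 5) can be sketched as follows. This is a minimal illustration under stated assumptions: a plain Sylvester-type 2-D Walsh-Hadamard transform and the C = Σₖ|HADₖ| / (W·H) form described in the action; the exact transform variant and normalization of the claims are not reproduced here.

```python
def hadamard_matrix(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = [[1]]
    while len(H) < n:
        m = len(H)
        # Block structure [[H, H], [H, -H]]
        H = [[H[i % m][j % m] * (-1 if (i >= m and j >= m) else 1)
              for j in range(2 * m)] for i in range(2 * m)]
    return H

def complexity_measure(residual):
    """C = sum_k |HAD_k| / (W * H), where HAD is the 2-D Hadamard transform
    of the prediction residual block (per the formula described for claim 5)."""
    h, w = len(residual), len(residual[0])
    Hh, Hw = hadamard_matrix(h), hadamard_matrix(w)
    # 2-D transform: HAD = Hh @ residual @ Hw
    tmp = [[sum(Hh[i][k] * residual[k][j] for k in range(h)) for j in range(w)]
           for i in range(h)]
    had = [[sum(tmp[i][k] * Hw[k][j] for k in range(w)) for j in range(w)]
           for i in range(h)]
    return sum(abs(v) for row in had for v in row) / (w * h)

# A flat (all-equal) residual concentrates all energy into the DC coefficient:
print(complexity_measure([[1] * 4 for _ in range(4)]))  # 1.0
```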
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Vanam in view of He and Sharman, in further view of Rose and Chuang et al. ("Chuang") (U.S. Patent No. 11,736,704).

In regards to claim 5, Vanam fails to teach the method of claim 4, wherein the content complexity measures are based on:

C = (Σₖ |HADₖ|) / (W · H)

where C denotes a content complexity measure, HADₖ denotes a sample of the Hadamard-transformed prediction residual at position k within a coding unit, and W and H are the width and height of a corresponding one of the frames. In a similar endeavor, Chuang teaches content complexity measures of this form (See col. 5, li. 16 – col. 6, li. 19, wherein a version of the equation is provided such that a summation of values produced by a Hadamard transform is divided by the product of the width and the height). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Chuang into Vanam because it allows for a good tradeoff between coding efficiency and computational complexity as described in at least col. 5, li. 16-50.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Vanam in view of He and Sharman, in further view of Rose and Leontaris et al. ("Leon") (U.S. PG Publication No. 2009/0086816).

In regards to claim 7, Vanam fails to teach the method of claim 4, wherein the coding-tree-unit-based analysis comprises: determining the temporal importance measures based on a recursive propagation process that takes into account the content complexity measures associated with the coding units. In a similar endeavor, Leon teaches wherein the determining of the temporal importance measures based on the recursive propagation process takes into account the content complexity measures associated with the coding units (See ¶0094 and 0167 in view of ¶0170-0171). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Leon into Vanam because it allows for complexity consideration and determination as described in ¶0167 with regards to rate allocation as described in ¶0170-0171.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Vanam in view of He and Sharman, in further view of Vetro et al. ("Vetro") (U.S. PG Publication No. 2005/0175090).

In regards to claim 8, Vanam fails to teach the method of claim 1, wherein the rate and distortion models comprise one or more rate models and one or more distortion models.
In a similar endeavor, Vetro teaches wherein the rate and distortion models comprise one or more rate models and one or more distortion models (See ¶0001, wherein it is understood that there are various rate and distortion models for allocation of bits used to code the video source and bits that are applied for error resilience). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Vetro into Vanam because a proper balance must be achieved in the encoding process: as described in ¶0002, though a lower bit rate may be achieved or desired, a noisy channel over which the encoded video stream is transmitted may easily corrupt the quality of the video, and thus a more resilient stream with an overall larger number of bits would also be required.

Claims 9, 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Vanam in view of He and Sharman, in further view of Vetro and Zhao et al. ("Zhao") (U.S. PG Publication No. 2020/0275104).

In regards to claim 9, Vanam fails to teach the method of claim 8, wherein each of the one or more rate models is modelled based on R = α·C^β·QS^γ, where R is rate, C is content complexity measure, QS is quantization stepsize, and α, β, γ are model parameters; and wherein each of the one or more distortion models is modelled based on D = μ·C^η·QS^ε, where D is distortion, C is content complexity measure, QS is quantization stepsize, and μ, η, ε are model parameters.

In a similar endeavor, Zhao teaches wherein each of the one or more rate models is modelled based on R = α·C^β·QS^γ (See ¶0084, wherein the rate control model may be based on coding parameters of QP [quantization parameter step sizes] and complexity of macroblock image data; thus one of ordinary skill in the art understands that such an equation as claimed may be covered as such, and additional parameters may be seen in at least the following equation in ¶0085); and wherein each of the one or more distortion models is modelled based on D = μ·C^η·QS^ε (See ¶0049-0050 with regards to the distortion models). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Zhao into Vanam because it allows for the consideration of image complexity and quantization parameters, along with other parameters, as described in ¶0084-0085, for a form of bit allocation in rate control models.

In regards to claim 17, Vanam teaches the method of claim 9, further comprising: encoding each of the frames and/or each of the coding tree units of the screen content video based on the rate control operation to facilitate generation of a bitstream of the screen content video (See col. 7, li. 9-32 and col. 3, li. 63 – col. 4, li. 17).

In regards to claim 18, Vanam fails to teach the method of claim 17, further comprising: updating the model parameters in the rate and distortion models after encoding of each of the frames and/or each of the coding tree units. In a similar endeavor, Zhao teaches updating the model parameters in the rate and distortion models after encoding of each of the frames and/or each of the coding tree units (See ¶0056-0057). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Zhao into Vanam because it allows for the consideration of image complexity and quantization parameters, along with other parameters, as described in ¶0084-0085, for a form of bit allocation in rate control models.

Claims 10-14 are rejected under 35 U.S.C. 103 as being unpatentable over Vanam in view of He and Sharman, in further view of Vetro, Zhao and Zhou et al. ("Zhou") (U.S. PG Publication No. 2020/0029093).

In regards to claim 10, Vanam fails to teach the method of claim 8, wherein the one or more rate models comprise a frame-level rate model and a coding-tree-unit-level rate model; and wherein the one or more distortion models comprise a frame-level distortion model and a coding-tree-unit-level distortion model. In a similar endeavor, Zhao and Zhou together teach the frame-level and coding-tree-unit-level rate and distortion models (See ¶0012, 0022, 0056, 0058, 0072 and 0084 of Zhao with regards to frame-level model operations, while Zhou teaches in the Abstract the use of CTU-level rate-distortion models).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teachings of Zhao and Zhou into Vanam because it allows for the consideration of image complexity and quantization parameters, along with other parameters, as described in ¶0084-0085 of Zhao, for a form of bit allocation in rate control models, while Zhou provides CTU-level model operations.

In regards to claim 11, Vanam fails to teach the method of claim 10, wherein the frame-level rate model and the coding-tree-unit-level rate model are each modelled based on R = α·C^β·QS^γ, where R is rate, C is content complexity measure, QS is quantization stepsize, and α, β, γ are model parameters; and wherein the frame-level distortion model and the coding-tree-unit-level distortion model are each modelled based on D = μ·C^η·QS^ε, where D is distortion, C is content complexity measure, QS is quantization stepsize, and μ, η, ε are model parameters.

In a similar endeavor, Zhao teaches wherein the frame-level rate model and the coding-tree-unit-level rate model are each modelled based on R = α·C^β·QS^γ (See ¶0084, wherein the rate control model may be based on coding parameters of QP [quantization parameter step sizes] and complexity of macroblock image data; thus one of ordinary skill in the art understands that such an equation as claimed may be covered as such, and additional parameters may be seen in at least the following equation in ¶0085); and wherein the frame-level distortion model and the coding-tree-unit-level distortion model are each modelled based on D = μ·C^η·QS^ε (See ¶0049-0050 with regards to the distortion models). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Zhao into Vanam because it allows for the consideration of image complexity and quantization parameters, along with other parameters, as described in ¶0084-0085, for a form of bit allocation in rate control models.
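The rate and distortion models recited in claims 9 and 11 are mutually consistent with the Lagrangian-multiplier expression recited later in claims 15 and 16: taking λ as the negative slope of the distortion-rate curve recovers the claimed x, y, z. A derivation sketch, using only the symbols as defined in the claims:

```latex
\[
R = \alpha\,C^{\beta}\,QS^{\gamma}, \qquad D = \mu\,C^{\eta}\,QS^{\varepsilon}
\]
\[
\lambda = -\frac{\partial D}{\partial R}
        = -\frac{\partial D/\partial QS}{\partial R/\partial QS}
        = -\frac{\mu\,\varepsilon\,C^{\eta}\,QS^{\varepsilon-1}}
                {\alpha\,\gamma\,C^{\beta}\,QS^{\gamma-1}}
        = x\,C^{y}\,QS^{z}
\]
\[
x = -\frac{\mu\,\varepsilon}{\alpha\,\gamma}, \qquad
y = \eta - \beta, \qquad
z = \varepsilon - \gamma
\]
```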
In regards to claim 12, Vanam fails to teach the method of claim 9, wherein the bits allocation step comprises: group-of-pictures-level bit allocation; frame-level bit allocation; and coding-tree-unit-level bit allocation. In a similar endeavor, Zhao and Zhou together teach these levels of bit allocation (See ¶0012, 0022, 0056, 0058, 0072 and 0084 of Zhao with regards to frame-level model operations, while Zhou teaches in the Abstract the use of CTU-level rate-distortion models). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teachings of Zhao and Zhou into Vanam because it allows for the consideration of image complexity and quantization parameters, along with other parameters, as described in ¶0084-0085 of Zhao, for a form of bit allocation in rate control models, while Zhou provides CTU-level model operations.

In regards to claim 13, Vanam fails to teach the method of claim 12, wherein the rate control operation further comprises: determining the coding parameters associated with each of the frames based on the allocated bits obtained in the frame-level bit allocation and the rate and distortion models; and determining the coding parameters associated with each of the coding tree units based on the allocated bits obtained in the coding-tree-unit-level bit allocation and the rate and distortion models. In a similar endeavor, Zhao teaches these determinations (See ¶0006, 0049-0051 and 0084-0087, wherein they may be performed at the frame level and block levels, especially since the block level ultimately affects the frame level). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Zhao into Vanam because it allows for the consideration of image complexity and quantization parameters, along with other parameters, as described in ¶0084-0085, for a form of bit allocation in rate control models.

In regards to claim 14, Vanam fails to teach the method of claim 13, wherein the coding parameters associated with each of the frames comprise quantization parameters and Lagrangian multipliers λ associated with each of the frames; and wherein the coding parameters associated with each of the coding tree units comprise quantization parameters and Lagrangian multipliers λ associated with each of the coding tree units. In a similar endeavor, Zhao teaches these limitations (See ¶0036, 0049 and 0053, wherein Lagrangian multipliers are used for quantization parameter determinations and may be used at a block or frame level as described). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Zhao into Vanam because it allows for the consideration of image complexity and quantization parameters, along with other parameters, as described in ¶0084-0085, for a form of bit allocation in rate control models.

Claims 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Vanam in view of He and Sharman, in further view of Vetro, Zhao, Zhou and Yuen et al. ("Yuen") (U.S. Patent No. 11,025,914).

In regards to claim 15, Vanam fails to teach the method of claim 14, wherein the Lagrangian multipliers λ associated with each of the frames are determined based on λ = x·C^y·QS^z, where x = -(μ·ε)/(α·γ), y = η - β, z = ε - γ.
In a similar endeavor, Yuen teaches wherein the Lagrangian multipliers λ associated with each of the frames are determined based on λ = x·C^y·QS^z, where x = -(μ·ε)/(α·γ), y = η - β, z = ε - γ (See col. 6, li. 54 – col. 7, li. 3). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Yuen into Vanam because it allows for a shorter, rewritten variation of the equation for Lagrange multipliers for each CTU as described in col. 6, li. 54 – col. 7, li. 3. Although not used in the rejection, the examiner also points to Li et al. ["λ Domain Rate Control Algorithm for High Efficiency Video Coding", IEEE Transactions on Image Processing, Vol. 23, No. 9, Sept. 2014], which shows in Equation 9 on page 5 a very similar variation on these equations.

In regards to claim 16, Vanam fails to teach the method of claim 14, wherein the Lagrangian multipliers λ associated with each of the coding tree units are determined based on λ = x·C^y·QS^z, where x = -(μ·ε)/(α·γ), y = η - β, z = ε - γ. In a similar endeavor, Yuen teaches this limitation (See col. 6, li. 54 – col. 7, li. 3). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Yuen into Vanam because it allows for a shorter, rewritten variation of the equation for Lagrange multipliers for each CTU as described in col. 6, li. 54 – col. 7, li. 3.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Vanam in view of He and Sharman, and Liu (U.S. PG Publication No. 2021/0044805).

In regards to claim 21, Vanam fails to teach the method of claim 2, wherein each of the content complexity measures is determined by a sum of absolute transformed difference (SATD). In a similar endeavor, Liu teaches wherein each of the content complexity measures is determined by a sum of absolute transformed difference (SATD) (See ¶0052). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Liu into Vanam because it allows for calculation of video frame complexity through prediction residuals as described in ¶0052, thus allowing for a determination of frame complexity in a simplified manner, i.e., through a simple representation of difference values.

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Vanam in view of He and Sharman, in further view of Chen et al. (U.S. PG Publication No. 2023/0370608).

In regards to claim 22, Vanam fails to teach the method of claim 2, wherein each of the temporal importance measures is represented by a scaling factor, wherein the scaling factor indicates similarity with future frames and coding tree units. In a similar endeavor, Chen teaches this limitation (See ¶0056).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Chen into Vanam because it allows for difference determination between frames, as described in at least ¶0056, wherein this may play a role in determining scene switches by the comparison of such parameters [video frame similarity].

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDEMIO NAVAS JR, whose telephone number is (571) 270-1067. The examiner can normally be reached M-F, ~9 AM - 6 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph Ustaris, can be reached at (571) 272-7383.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDEMIO NAVAS JR/
Primary Examiner, Art Unit 2483
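For readers unfamiliar with the λ = x·C^y·Q_S^z form recited in the Lagrangian-multiplier claims above, the following sketch shows one standard way such a power-law expression arises. The power-law distortion and rate models below are an assumption drawn from conventional λ-domain rate-control analysis, not something stated in the Office Action; only the exponent differences y = η − β and z = ε − γ match the claim language.

```latex
% Assumed power-law models in content complexity C and quantization step Q_S
% (coefficients and exponents borrow the claim's symbols):
D(Q_S) = \mu\, C^{\eta}\, Q_S^{\varepsilon}, \qquad
R(Q_S) = c\, C^{\beta}\, Q_S^{\gamma}.

% The Lagrange multiplier is the negative slope of the R--D curve; dividing
% the two Q_S-derivatives cancels the shared powers:
\lambda = -\frac{\partial D}{\partial R}
        = -\frac{\partial D / \partial Q_S}{\partial R / \partial Q_S}
        \;\propto\; C^{\eta-\beta}\, Q_S^{\varepsilon-\gamma}
        = C^{y}\, Q_S^{z},
\qquad y = \eta-\beta,\quad z = \varepsilon-\gamma,

% with the constant x collecting the remaining model coefficients.
```

Under these assumptions the claimed equation is simply the closed form of the rate-distortion slope, which is why the examiner characterizes it as a rewritten variation of the usual Lagrange-multiplier expression.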
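The claim 21 rejection turns on SATD (sum of absolute transformed differences), a standard encoder-side complexity and cost measure: apply a small Hadamard transform to the prediction residual and sum the absolute coefficients. A minimal sketch follows; the 4x4 block size, the unnormalized Hadamard matrix, and the NumPy helper are illustrative choices, not details taken from the cited references.

```python
import numpy as np

# Unnormalized 4x4 Hadamard matrix, as commonly used for SATD-style
# cost estimation in HEVC/H.264 encoders.
H4 = np.array([
    [1,  1,  1,  1],
    [1, -1,  1, -1],
    [1,  1, -1, -1],
    [1, -1, -1,  1],
])

def satd_4x4(orig: np.ndarray, pred: np.ndarray) -> int:
    """SATD of one 4x4 block: Hadamard-transform the residual, sum |coeffs|."""
    residual = orig.astype(np.int32) - pred.astype(np.int32)
    transformed = H4 @ residual @ H4.T  # separable 2-D Hadamard transform
    return int(np.abs(transformed).sum())
```

A uniform residual of 1 concentrates in the DC coefficient, giving SATD = 16 for a 4x4 block, which illustrates why SATD tracks residual energy more faithfully than a plain sum of absolute differences.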

Prosecution Timeline

Oct 12, 2022
Application Filed
Apr 16, 2024
Non-Final Rejection — §103
Jul 17, 2024
Response Filed
Jul 31, 2024
Final Rejection — §103
Nov 05, 2024
Request for Continued Examination
Nov 07, 2024
Response after Non-Final Action
Apr 28, 2025
Non-Final Rejection — §103
Jul 28, 2025
Examiner Interview Summary
Jul 28, 2025
Applicant Interview (Telephonic)
Oct 31, 2025
Response Filed
Nov 28, 2025
Final Rejection — §103
Apr 02, 2026
Request for Continued Examination
Apr 08, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598398
Terminal Detection Platform
2y 5m to grant Granted Apr 07, 2026
Patent 12598283
METHOD AND DISPLAY APPARATUS FOR CORRECTING DISTORTION CAUSED BY LENTICULAR LENS
2y 5m to grant Granted Apr 07, 2026
Patent 12593141
INFORMATION MANAGEMENT DEVICE, INFORMATION MANAGEMENT METHOD, AND STORAGE MEDIUM FOR MANAGING INFORMATION PROVIDED TO A MOBILE OBJECT AND DEVICE USED BY A USER IN LOCATION DIFFERENT FROM THE MOBILE OBJECT
2y 5m to grant Granted Mar 31, 2026
Patent 12587686
SIGNALING FOR GENERAL CONSTRAINT INFORMATION IN VIDEO CODING
2y 5m to grant Granted Mar 24, 2026
Patent 12587643
IMAGE ENCODING/DECODING METHOD AND DEVICE, AND RECORDING MEDIUM IN WHICH BITSTREAM IS STORED FOR BLOCK DIVISION AT PICTURE BOUNDARY
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
71%
Grant Probability
96%
With Interview (+24.7%)
2y 9m
Median Time to Grant
High
PTA Risk
Based on 540 resolved cases by this examiner. Grant probability derived from career allow rate.
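The projection figures above can be reproduced from the raw counts reported earlier on the page (384 granted of 540 resolved, +24.7-point interview lift). Assuming the with-interview figure is simply the base allow rate plus the additive lift (an assumption about this dashboard's method, which it does not spell out), a quick check:

```python
# Reproduce the headline projections from the examiner's reported counts.
# Assumption: "with interview" = career allow rate + additive lift in points.
granted, resolved = 384, 540        # career totals for this examiner
interview_lift = 24.7               # percentage-point lift with an interview

base_rate = 100 * granted / resolved
with_interview = base_rate + interview_lift

print(round(base_rate))             # -> 71
print(round(with_interview))        # -> 96
```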
