Prosecution Insights
Last updated: April 19, 2026
Application No. 17/473,725

VIDEO PROCESSING METHOD AND DEVICE

Final Rejection (§103)

Filed: Sep 13, 2021
Examiner: LEE, JIMMY S
Art Unit: 2483
Tech Center: 2400 — Computer Networks
Assignee: Sz DJI Technology Co. Ltd.
OA Round: 8 (Final)

Grant Probability: 56% (Moderate)
Projected OA Rounds: 9-10
Projected Time to Grant: 3y 7m
Grant Probability with Interview: 84%

Examiner Intelligence

Career Allow Rate: 56% (grants 56% of resolved cases; 170 granted / 302 resolved; -1.7% vs TC avg)
Interview Lift: +28.1% (strong; allowance of resolved cases with interview vs without)
Typical Timeline: 3y 7m avg prosecution; 33 applications currently pending
Career History: 335 total applications across all art units
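The examiner-analytics figures above are simple ratios over resolved cases; a minimal sketch of how such metrics are typically derived (function names are our own, and only the raw counts come from the card above):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: fraction of resolved applications that granted."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Difference in allowance rate between resolved cases with and
    without an examiner interview (the card's '+28.1% interview lift')."""
    return rate_with - rate_without

career = allow_rate(170, 302)   # 170/302 ≈ 0.563, i.e. the 56% shown above
```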

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 71.5% (+31.5% vs TC avg)
§102: 8.8% (-31.2% vs TC avg)
§112: 12.8% (-27.2% vs TC avg)

Tech Center averages are estimates • Based on career data from 302 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments with respect to claim(s) 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Objections

Claims 1 and 20-21 are objected to because of the following informalities: the claims recite “selecting motion information for the sub-image-block from the same motion information candidate list for the image block the same motion information candidate list being configured to generate the prediction result;” which appears to be a run-on phrase and is replete with grammatical errors that require attention. For the purposes of examination, the limitation above will be interpreted as “selecting motion information for the sub-image-block from the same motion information candidate list for the image block, the same motion information candidate list being configured to generate the prediction result;”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1, 20-21, 23-24 rejected under 35 U.S.C. 103 as being unpatentable over Seregin; Vadim et al. (US 20140294078 A1) in view of Zhou; Minhua (US 20130272415 A1) in view of GUILLEMOT; CHRISTINE et al. (US 20140098878 A1).

Regarding claim 1, Seregin teaches, A video processing method, (¶26, “video coder may derive the motion vector and/or other motion information for a current video block”) comprising: constructing a motion information candidate list (¶26, 88, and 91, video coder may “construct a motion information candidate list” that includes “motion information of spatial and temporal neighboring blocks as candidate motion information for a current video coding block”) for an image block (¶26, 88, and 91, motion information candidate list as “candidate motion information for coding a current video block”) of a current frame, (¶26, 88, and 101, constructing a motion information candidate list for “a current video block” where motion information is to indicate displacement of “PU of a video block within a current video frame or picture”) dividing the image block into a plurality of sub-image blocks; (¶98, “partitioned, e.g., according to a quadtree structure of LCUs and CUs” which divides the larger unit into “multiple video blocks”) performing prediction on each sub-image-block (¶98-100 and fig.
3, “prediction processing unit 141” performs “intra-predictive coding” or “inter-predictive coding” of the “current video block”) of the plurality of sub-image-blocks (¶98-100, “video data” partitioned into “video blocks”) to obtain a prediction result, (¶98-100, prediction processing unit 141 provides “resulting intra- or inter-coded block”) including, for each sub-image-block (¶98 and Fig. 3, partitioned data into “video blocks”) of the plurality of sub-image-blocks, (¶98, “multiple video blocks”) selecting motion information (¶91, “selecting a candidate” for use with “the current block”) for the sub-image-block (¶91, “the current block”) from the same motion information candidate list for the image block (¶91, “selecting a candidate from the candidate list” for coding the current block according to the merge mode or AMVP mode) and encoding or decoding (¶91 and 98-99, “video encoder 20” signal to the video decoder “an index of the selected candidate” and “video decoder 30” identifying “the candidate selected by the video encoder 20”) the image block (¶71,91, and 98-99, index used to identify which candidate in the candidate list is used “in coding the current block”) according to the prediction result. 
(¶91 and 98-99, video encoder 20 that encodes video blocks to “reconstruct the encoded block”) But does not explicitly teach, the motion information candidate list including at least one piece of dual motion-information each including two pieces of single motion-information belonging to a first list and a second list; selecting motion information for the sub-image-block from the same motion information candidate list for the image block the same motion information candidate list being configured to generate the prediction result; However, Zhou teaches additionally, the motion information candidate list (¶62, “merge candidate list”) including at least one piece of dual motion-information (¶62-63, merge candidates in the “merge candidate list” may be from “bi-predictive merging candidates” that accommodate “bi-directionally predicted PU”) each including two pieces of single motion-information (¶62-63, merge candidates accommodate indicating “bi-directional” prediction direction with “two prediction list utilization flags used to indicate reference picture lists”) belonging to a first list and a second list; (¶63, indicated two reference picture lists being “a forward reference picture list and a backward reference picture list”) selecting motion information (¶58,66-70, and Fig. 5, “selecting a best inter-prediction mode for a PU” as depicted in Fig. 5 constructing “list0 MVP candidate list” for the PU) for the sub-image-block (¶58,66-70 and Fig. 5, “best inter-prediction mode for a PU” where size of the “PU is a restricted size” and the “PU is not a P-slice” indicating then it is a “B-slice” as depicted in fig. 5) from the motion information candidate list (¶58,61, 66-70, and fig. 
5, “selecting inter-prediction mode for a PU” starting with “merge candidate list is constructed” for the PU) for the image block, (¶61, “merge candidate list is constructed” for the PU) including: Zhou teaches a technique that selects a prediction mode for a prediction unit from a merge candidate list which includes bi-predictive prediction information that can refer to both forward and backward prediction directions. This bi-predictive candidate also indicates reference pictures from one of two lists: a forward reference picture list and a backward reference picture list. It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine video coding of Seregin with the motion compensation of Zhou which can convert a bi-predictive candidate into a uni-predictive candidate. This helps in reducing the motion compensation memory bandwidth in the encoding and decoding process. But does not explicitly teach, selecting motion information for the sub-image-block from the same motion information candidate list for the image block the same motion information candidate list being configured to generate the prediction result; However, Guillemot teaches additionally, selecting motion information for the sub-image-block (¶60, 78, and 71, module 42 “select a best candidate” for “the current block to be encoded” and module 62 “selecting a motion information predictor” for “the current block”) from the same motion information candidate list (¶60, 78, and 71, “list of candidate motion information predictors” where “decoder can apply the same process as the encoder to determine a list of candidate motion information predictors”) for the image block (¶60, 78, and 71, “current block”) the same motion information candidate list being configured to generate the prediction result; (¶59-60 and 78, module 42 “select a best candidate” motion information associated with a motion vector from the “list of motion information prediction candidates”
and module 62 “selecting a motion information predictor for the current block in the list of candidate motion information predictors”) It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine video coding of Seregin with the motion compensation of Zhou with the motion information prediction of Guillemot which selects a best candidate motion information from a list that corresponds to both the encoding and decoding of a multi-view video. This allows for listing that is capable of improving compression. Regarding claim 20, it is the device claim of method claim 1. Seregin teaches additionally, A video processing device, (¶25, “a video block may itself be predicted using motion information prediction techniques”) comprising: a memory storing program codes; (¶128, “Data storage media” that is a “tangible computer-readable storage media which is non-transitory”) and a processor configured to execute the program codes (¶128, “Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions” for implementation of the techniques) to: Refer to teaching of claim 1 to teach the limitations of claim 20. Regarding claim 21, it is the generation method claim of device claim 20. Refer to rejection of claim 20 to teach the additional limitations of claim 21. Regarding claim 23, Seregin with Zhou with Guillemot teach the limitations of claim 1, Seregin teaches additionally, motion information candidate list (¶90-91, “motion information candidate list” for coding current video block) includes a set of candidate motion information (¶88 and 90, “motion information candidate list” considers “set of candidate blocks” when constructing a motion information candidate lists) of the image block. 
(¶90-91, motion information candidate list for “coding current video block”) Regarding claim 24, Seregin with Zhou with Guillemot teach the limitations of claim 23, Seregin teaches additionally, for each sub-image-block, (¶98 and Fig. 3, partitioned data into “video blocks”) selecting motion information (¶91, “selecting a candidate” for use with “the current block”) for the sub-image-block (¶91, “the current block”) from the same motion information candidate list for the image block (¶91, “selecting a candidate from the candidate list” for coding the current block) further includes: selecting the motion information (¶91, “selecting a candidate”) for the sub-image-block directly from the same motion information candidate list (¶91, “selecting a candidate from the candidate list”) for the image block (¶91, list of motion information candidates” for use in coding current block) according to an index of the candidate motion information corresponding to the sub-image-block. (¶91, signal “an index of the selected candidate” after “selecting a candidate from the candidate list” from list of motion information candidates used for “coding current block”) Claim(s) 2 rejected under 35 U.S.C. 103 as being unpatentable over Seregin; Vadim et al. (US 20140294078 A1) in view of Zhou; Minhua (US 20130272415 A1) in view of GUILLEMOT; CHRISTINE et al. (US 20140098878 A1) in view of Chien; Wei-Jung et al. (US 20180091816 A1) Regarding claim 2, Seregin with Zhou with Guillemot teaches the limitations of claim 1, But does not explicitly disclose the additional limitation of claim 2, However, Chien teaches additionally, motion vector values (¶119, “motion vector candidate”) corresponding to at least part of motion information included in the motion information candidate list (¶119, “motion vector candidate” included in a “merge candidate list for the first block”) have undergone precision conversion. 
(¶119, “candidate list for the first block that includes at least one fractional precision motion vector candidate”) It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine video coding of Seregin with the motion compensation of Zhou with the motion information prediction of Guillemot with the precision motion vectors of Chien that includes fractional precision motion vectors. This allows for overall improvement of video coding quality. Claim(s) 3,7-8,19 rejected under 35 U.S.C. 103 as being unpatentable over Seregin; Vadim et al. (US 20140294078 A1) in view of Zhou; Minhua (US 20130272415 A1) in view of GUILLEMOT; CHRISTINE et al. (US 20140098878 A1) in view of CHEN; Huanbang et al. (US 20170374379 A1) Regarding claim 3, Seregin with Zhou with Guillemot teaches the limitations of claim 1, Zhou teaches additionally, wherein the preset rule is a first preset rule; (¶67 and fig. 5, “bi-predictive merging candidate is converted to either a list0 or list1 merging candidate based on the values of the reference picture indices in the bi-predictive merging candidate”) the method further comprising: (¶61-70 and fig. 5, “merging candidate list is constructed 500 for the PU” depicted in Fig. 5) a preset rule (¶61-70 and fig.
5, determine 514 “the best inter-prediction mode for the PU”) But does not explicitly teach the additional limitations of claim 3, However, Chen teaches additionally, in response to the selected motion information being first dual motion-information selected (¶262 and 271, “merge motion information unit set including two motion information units” determined from “N candidate merged motion information unit sets, the merged motion information unit set i including the two motion information units” where “N is a positive integer”) from the at least one piece of dual motion-information, (¶271, “N candidate merged motion information unit sets, the merged motion information unit set i including the two motion information units” where “N is a positive integer”) processing the first dual motion-information (¶305 and 271, “performing scaling processing on the merged motion information unit set i” determined from the “N candidate merged motion information unit sets, the merged motion information unit set i including the two motion information units” where “N is a positive integer”) according to a preset rule (¶305, “performing scaling processing” so that the “merged motion information unit set i is scaled down to a reference frame of the current picture block”) before using the selected motion information for encoding or decoding the image block. (¶305, performing scaling processing occurs in preparation for “predicting the pixel value of the current picture block by using the affine motion model and a scaled merged motion information unit set I”) It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine video coding of Seregin with the motion compensation of Zhou with the motion information prediction of Guillemot with the prediction method of Chen which determines a merged motion information set i. 
This merged unit set determination can help improve coding efficiency and reduce computational complexity of picture prediction. Regarding claim 7, Seregin with Zhou with Guillemot with Chen teaches the limitations of claim 3, Chen teaches additionally, the preset rule (¶343 and 305, “scaling processing performed”) includes: adjusting the first dual motion-information (¶343 and 305, “scaling processing performed” on the “candidate merged motion information unit set”) to be one piece of single motion-information as the selected motion information of the corresponding sub-image-block. (¶343 and 305, scaling processing performed on the “candidate merged motion information unit set” including “deletion” of a “motion vector in one or more motion information units in the candidate merged motion information unit set”) It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine video coding of Seregin with the motion compensation of Zhou with the motion information prediction of Guillemot with the prediction method of Chen which determines a merged motion information set i. This merged unit set determination can help improve coding efficiency and reduce computational complexity of picture prediction. 
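Claim 7's limitation, as mapped above, amounts to reducing a dual (bi-predictive) candidate to one piece of single motion-information. A minimal sketch of that operation under a simplified data model of our own (the class and field names are illustrative, not taken from the application or any cited reference):

```python
from dataclasses import dataclass

@dataclass
class SingleMotionInfo:
    """One piece of uni-directional motion information."""
    mv: tuple[int, int]   # motion vector (x, y), here in quarter-pel units
    ref_idx: int          # index into the owning reference picture list
    ref_list: int         # 0 = first list (forward), 1 = second list (backward)

@dataclass
class DualMotionInfo:
    """Dual motion-information: one single piece per reference list."""
    l0: SingleMotionInfo
    l1: SingleMotionInfo

def to_single(candidate: DualMotionInfo, keep_list: int = 0) -> SingleMotionInfo:
    """Adjust dual motion-information to a single piece by keeping the entry
    from one list and discarding the other (cf. the 'deletion' of a motion
    vector described in Chen's scaling processing)."""
    return candidate.l0 if keep_list == 0 else candidate.l1

dual = DualMotionInfo(
    l0=SingleMotionInfo(mv=(4, -8), ref_idx=0, ref_list=0),
    l1=SingleMotionInfo(mv=(-2, 6), ref_idx=1, ref_list=1),
)
single = to_single(dual)  # keeps the first-list (forward) motion information
```

Which of the two pieces survives, and whether it is first scaled or otherwise modified, is exactly what the preset-rule claims (3-8) go on to narrow.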
Regarding claim 8, Seregin with Zhou with Guillemot with Chen teaches the limitations of claim 7, Chen teaches additionally, adjusting the first dual motion- information (¶343 and 305, “scaling processing performed” on the “candidate merged motion information unit set”) to be the one piece of single motion-information (¶343 and 305, scaling processing performed on the “candidate merged motion information unit set” including “deletion” of a “motion vector in one or more motion information units in the candidate merged motion information unit set”) includes: selecting one piece of motion information (¶343 and 271, “scaling processing” of one of the “motion vector” that is “modified” of the “candidate merged motion information unit set” where the unit set is the “merged motion information unit set i including the two motion information units”) from the first dual motion-information (¶343, “candidate merged motion information unit set”) as the one piece of single motion-information. (¶343 and 271, scaling processing performs “deletion” of a first “motion vector” and “modification” of a second “motion vector” in the “merged motion information unit set i including the two motion information units”) It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine video coding of Seregin with the motion compensation of Zhou with the motion information prediction of Guillemot with the prediction method of Chen which determines a merged motion information set i. This merged unit set determination can help improve coding efficiency and reduce computational complexity of picture prediction. 
Regarding claim 19, Seregin with Zhou with Guillemot teaches the limitations of claim 1, But does not explicitly teach the limitations of claim 19, However, Chen teaches additionally, constructing the motion information candidate list (¶274-275 and 271, “merged motion information unit set list” of identifiers “of the merged motion information unit set I” which is a list of “N candidate merged motion information unit sets” where the “N is positive integer”) includes: adjusting a piece of candidate dual motion-information (¶305 and 343, “performing scaling processing on the merged motion information unit set i”) to be one piece of single motion- information (¶343 and 274, “scaling processing” performed relating to “deletion” of a “motion vector” in “the candidate merged motion information unit set” which are “merged motion information unit set I including the two motion information units”) for constructing the motion information candidate list. (¶305 and 343, “scaling processing” performed relating to “deletion” of a “motion vector in one or more motion information units in the candidate merged motion information unit set”) It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine video coding of Seregin with the motion compensation of Zhou with the motion information prediction of Guillemot with the prediction method of Chen which determines a merged motion information set i. This merged unit set determination can help improve coding efficiency and reduce computational complexity of picture prediction. Claim(s) 4-6 rejected under 35 U.S.C. 103 as being unpatentable over Seregin; Vadim et al. (US 20140294078 A1) in view of Zhou; Minhua (US 20130272415 A1) in view of GUILLEMOT; CHRISTINE et al. (US 20140098878 A1) in view of CHEN; Huanbang et al. (US 20170374379 A1) in view of Chien; Wei-Jung et al. 
(US 20180091816 A1) Regarding claim 4, Seregin with Zhou with Guillemot with Chen teaches the limitations of claim 3, Chen teaches additionally, in response to the first dual motion-information being first target dual motion- information, (¶262 and 271, “merge motion information unit set including two motion information units” determined from “N candidate merged motion information unit sets, the merged motion information unit set i including the two motion information units” where “N is a positive integer”) and in response to the dual motion-information (¶286 and 271, “N candidate merged motion information unit sets” where “N is a positive integer”) being second target dual motion-information (¶286 and 271, determining “the merged motion information unit set” which meets “a second condition” from the “N candidate merged motion information unit sets” where “N is a positive integer”) different from the first target dual motion-information, (¶286 and 271, determining “the merged motion information unit set” which does not “met” at least one of “a second condition” from the “N candidate merged motion information unit sets” where “N is a positive integer”) the preset rule includes adjusting the first dual motion-information (¶343 and 305, “scaling processing performed” on the “candidate merged motion information unit set”) to be one piece of single motion-information (¶343 and 305, scaling processing performed on the “candidate merged motion information unit set” including “deletion” of a “motion vector in one or more motion information units in the candidate merged motion information unit set”) as the selected motion information of the one sub-image-block. 
(¶343 and 271, “scaling processing” of one of the “motion vector” that is “modified” of the “candidate merged motion information unit set” where the unit set is the “merged motion information unit set i including the two motion information units”) While Chen does not explicitly disclose the exact combination of processing one piece of motion information based on the condition of another piece of motion information, the prior art leaves open the possibility of components and units being combined or integrated. This is indicated when Chen ¶684 discloses that the implementation's components may be combined or integrated. With this in mind, Chen teaches the option of implementing a motion information unit set meeting a condition and also teaches implementing scaling processing on candidate merged motion information unit sets. It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine video coding of Seregin with the motion compensation of Zhou with the motion information prediction of Guillemot with the prediction method of Chen that determines merged motion information sets. This merged unit set determination can help improve coding efficiency and reduce computational complexity of picture prediction.
But does not explicitly disclose, the preset rule includes converting precision of a motion vector value included in the first dual motion-information; However, Chien teaches additionally, being first target dual motion-information, (¶119, “for merge/skip mode”) the preset rule (¶119, “performing motion compensation”) includes converting precision of a motion vector value included in the first dual motion-information; (¶119, “round a motion vector to an integer precision only when performing motion compensation” such that coder may “construct a merge candidate list for the first block that includes at least one fractional precision motion vector candidate”) It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine video coding of Seregin with the motion compensation of Zhou with the motion information prediction of Guillemot with the prediction method of Chen with the precision motion vectors of Chien that includes fractional precision motion vectors. This allows for overall improvement of video coding quality. Regarding claim 5, Seregin with Zhou with Guillemot with Chen teaches the limitations of claim 3, But does not explicitly disclose the additional limitation of claim 5, However, Chien teaches additionally, converting precision of a motion vector value (¶119, “round a motion vector to an integer precision”) included in the first dual motion-information. (¶119, “candidate list for the first block that includes at least one fractional precision motion vector candidate”) It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine video coding of Seregin with the motion compensation of Zhou with the motion information prediction of Guillemot with the prediction method of Chen with the precision motion vectors of Chien that includes fractional precision motion vectors. This allows for overall improvement of video coding quality.
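Claims 4-5, as mapped to Chien, involve converting the precision of a motion vector value, i.e. rounding a fractional-precision motion vector to integer precision. A hedged sketch, assuming quarter-pel storage and one common rounding convention (neither is stated in the record):

```python
def to_integer_precision(mv: tuple[int, int], shift: int = 2) -> tuple[int, int]:
    """Round a motion vector stored in fractional units (assumed here to be
    quarter-pel, hence shift=2) to integer-pel precision, keeping the result
    expressed in the same fractional units."""
    def round_component(v: int) -> int:
        offset = 1 << (shift - 1)            # rounding offset: half of one pel
        return ((v + offset) >> shift) << shift
    return (round_component(mv[0]), round_component(mv[1]))

# (5, -3) quarter-pel = (1.25, -0.75) pel -> (1.0, -1.0) pel = (4, -4)
rounded = to_integer_precision((5, -3))
```

Doing the rounding only at motion compensation time, while the candidate list keeps fractional-precision candidates, is the distinction the quoted ¶119 of Chien draws.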
Regarding claim 6, Seregin with Zhou with Guillemot with Chen in view of Chien teaches the limitations of claim 5, Chien teaches additionally, converting the precision of the motion vector value to integer pixel precision. (¶119, “round a motion vector to an integer precision”) It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine video coding of Seregin with the motion compensation of Zhou with the motion information prediction of Guillemot with the prediction method of Chen with the precision motion vectors of Chien that includes fractional precision motion vectors. This allows for overall improvement of video coding quality. Claim(s) 10 rejected under 35 U.S.C. 103 as being unpatentable over Seregin; Vadim et al. (US 20140294078 A1) in view of Zhou; Minhua (US 20130272415 A1) in view of GUILLEMOT; CHRISTINE et al. (US 20140098878 A1) in view of CHEN; Huanbang et al. (US 20170374379 A1) in view of NAM; Junghak et al. (US 20160165259 A1) Regarding claim 10, Seregin with Zhou with Guillemot with Chen teaches the limitations of claim 8, But does not explicitly disclose the additional limitation of claim 10, However, Nam teaches additionally, a selection method for selecting the one piece of motion information from the first dual motion-information is updatable. (¶103, “a method for selecting one of the updated inter-view motion vectors, indices can be allocated to the inter-view motion vector in order of updating the inter-view motion vectors and an inter-view motion vector indicated by an inter-view motion vector selection index can be used”) It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine video coding of Seregin with the motion compensation of Zhou with the motion information prediction of Guillemot with the prediction method of Chen with the motion vector selection of Nam which uses updated motion vectors.
This allows for the use of updated motion vectors in decoding or warping. Claim(s) 11-12,17 rejected under 35 U.S.C. 103 as being unpatentable over Seregin; Vadim et al. (US 20140294078 A1) in view of Zhou; Minhua (US 20130272415 A1) in view of GUILLEMOT; CHRISTINE et al. (US 20140098878 A1) in view of CHEN; Huanbang et al. (US 20170374379 A1) in view of Nakamura; Hiroya et al. (US 20140153647 A1) Regarding claim 11, Seregin with Zhou with Guillemot with Chen teaches the limitations of claim 8, But does not explicitly teach the limitations of claim 11, However, Nakamura teaches additionally, writing an identifier of the one piece of single motion-information into a code stream. (¶217 and 222, “motion vector predictor selection unit 124 outputs the index i in the MVP list corresponding to the selected motion vector predictor mvp as an MVP index mvp_idx” defined in the “second syntax pattern”) It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine video coding of Seregin with the motion compensation of Zhou with the motion information prediction of Guillemot with the prediction method of Chen with the scanning of Nakamura which selects the motion vector by loop scanning. This helps reduce the code size. Regarding claim 12, Seregin with Zhou with Guillemot with Chen teaches the limitations of claim 8, But does not explicitly teach the limitations of claim 12, However, Nakamura teaches additionally, obtaining an identifier of the one piece of single motion-information from a code stream for selecting the one piece of single motion-information from the first dual motion-information.
(¶232, “the syntax element mvp_idx_lX[x0][y0], which denotes an index in an MVP list (a list of motion vector predictor candidates referred to), is decoded” such that “mvp_idx_lX[x0][y0] is an MVP index in a list LX for the prediction block”) It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine video coding of Seregin with the motion compensation of Zhou with the motion information prediction of Guillemot with the prediction method of Chen with the scanning of Nakamura which selects the motion vector by loop scanning. This helps reduce the code size. Regarding claim 17, Seregin with Zhou with Guillemot with Chen teaches the limitations of claim 7, But does not explicitly teach the limitations of claim 17, However, Nakamura teaches additionally, selecting a target adjustment method from a plurality of adjustment methods; (¶142,217,156, Fig. 17 and 12, one of the “methods is selected and defined for use” such that “motion vector predictor selection unit 124 supplies the selected motion vector predictor mvp” that uses “the same reference list as the target block and that does not require scaling”) and adjusting the first dual motion-information to be the one piece of single motion-information based on the target adjustment method. (¶246 and 247, “derives a motion vector difference mvd by subtracting the selected motion vector predictor mvp from the motion vector mv and outputs the motion vector difference mvd” using the supplied “selected motion vector predictor mvp”) It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine video coding of Seregin with the motion compensation of Zhou with the motion information prediction of Guillemot with the prediction method of Chen with the scanning of Nakamura which selects the motion vector by loop scanning. This helps reduce the code size.
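Claims 11-12 pair an encoder that writes an identifier of the selected single motion-information into the code stream with a decoder that reads it back. A toy sketch of that round trip (the truncated-unary scheme below is our own simplification; real codecs entropy-code syntax elements such as the quoted mvp_idx):

```python
def write_index(bitstream: list[int], idx: int, num_candidates: int) -> None:
    """Encoder side (claim 11): append the selected candidate's identifier
    as a truncated-unary code: idx ones, then a terminating zero (omitted
    when idx is the last possible value)."""
    bitstream.extend([1] * idx)
    if idx < num_candidates - 1:
        bitstream.append(0)

def read_index(bitstream: list[int], num_candidates: int) -> int:
    """Decoder side (claim 12): consume bits until a zero or the maximum
    index is reached, recovering the same identifier."""
    idx = 0
    while idx < num_candidates - 1 and bitstream.pop(0) == 1:
        idx += 1
    return idx

bits: list[int] = []
write_index(bits, 2, 4)          # appends 1, 1, 0
assert read_index(bits, 4) == 2  # decoder recovers the same index
```

Because both sides construct the same candidate list, the index alone identifies the chosen motion information, which is why signaling it is cheap.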
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Seregin; Vadim et al. (US 20140294078 A1) in view of Zhou; Minhua (US 20130272415 A1), GUILLEMOT; CHRISTINE et al. (US 20140098878 A1), CHEN; Huanbang et al. (US 20170374379 A1), and LEE; Bae Keun (US 20200112738 A1).

Regarding claim 13, Seregin in view of Zhou, Guillemot, and Chen teaches the limitations of claim 7 but does not explicitly disclose the additional limitation of claim 13. However, Lee additionally teaches performing a merging process (¶185, “an average merge candidate may be derived”) on the first dual motion-information (¶185, based on a calculation of the “motion vector of a first merge candidate” and “scaled motion vector of the second merge candidate”) to obtain the one piece of single motion-information (¶185, “an average merge candidate”). It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine the video coding of Seregin with the motion compensation of Zhou, the motion information prediction of Guillemot, the prediction method of Chen, and the merge candidate generation of Lee, which can merge two motion vectors into one motion vector. This process can help improve performance.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Seregin; Vadim et al. (US 20140294078 A1) in view of Zhou; Minhua (US 20130272415 A1), GUILLEMOT; CHRISTINE et al. (US 20140098878 A1), CHEN; Huanbang et al. (US 20170374379 A1), LEE; Bae Keun (US 20200112738 A1), and LEE; Jin-Ho et al. (US 20200322628 A1) (Lee2).

Regarding claim 16, Seregin in view of Zhou, Guillemot, Chen, and Lee teaches the limitations of claim 13. Seregin additionally teaches determining a weight of each piece of motion information in the first dual motion-information (¶83, “apply weighted bi-prediction to the two predictors with respective weights w0 and w1 for the first predictor and the second predictor”), wherein the merging process (¶77 and 83, “converted bi-directional MVs” using “merge mode”) includes a weighting process (¶83, “apply weighted bi-prediction” to restrict the use of bi-prediction mode) using the weight of each piece of motion information (¶83, “apply weighted bi-prediction to the two predictors with respective weights w0 and w1”) in the first dual motion-information (¶83, apply weighted bi-prediction “with respective weights w0 and w1 for the first predictor and the second predictor that is a copy of the first predictor”), but does not explicitly teach the additional limitation of claim 16. However, Lee2 additionally teaches that influence factors of the weight (¶654, “weighted combination based on”) of each piece of motion information in the first dual motion-information include at least one of a size of the image block (¶654, “weighted combination based on block size”), a pixel value of the image block, a pixel value of an adjacent area of the image block, size and/or quantity of motion information belonging to a first list among motion information added to the motion information candidate list, or size and/or quantity of motion information belonging to a second list among motion information added to the motion information candidate list (¶654-658, “motion vector of the combined inter-prediction information may be the result of a weighted combination of neighbor motion vectors based on a block size (i.e. weighted combination based on block size),” where the motion vector of the combination is based on two different motion vectors from different neighbor motion vectors). It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine the video coding of Seregin with the motion compensation of Zhou, the motion information prediction of Guillemot, the prediction method of Chen, the merge candidate generation of Lee, and the weighting of Lee2 applied to a weighted combination of motion vectors. This allows higher weights to be given to more highly correlated blocks.

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Seregin; Vadim et al. (US 20140294078 A1) in view of Zhou; Minhua (US 20130272415 A1), GUILLEMOT; CHRISTINE et al. (US 20140098878 A1), and Grois; Dan et al. (US 20200213595 A1).

Regarding claim 22, Seregin in view of Zhou and Guillemot teaches the limitations of claim 1 but does not explicitly teach the additional limitations of claim 22. However, Grois additionally teaches dividing the image block (¶56 and Fig. 5A, “CU 500 that is split using triangular prediction regions/units”) into a plurality of triangular sub-image-blocks (¶56 and Fig. 5A, “CU 500 that is split using triangular prediction regions/units,” split into “triangular prediction region/unit 501 and triangular prediction region/unit 502” as depicted in Fig. 5A). It would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to combine the video coding of Seregin with the motion compensation of Zhou, the motion information prediction of Guillemot, and the VVC coding of Grois, which encodes triangular prediction units. This enables more accurate separation of pixels, which can enable more accurate prediction.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL.
See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIMMY S LEE, whose telephone number is (571) 270-7322. The examiner can normally be reached Monday through Friday, 10 AM-8 PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph G. Ustaris, can be reached at (571) 272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDEMIO NAVAS JR/
Primary Examiner, Art Unit 2483

/JIMMY S LEE/
Examiner, Art Unit 2483
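For orientation on the claim 13 ground above, the “average merge candidate” the action cites from Lee (¶185) folds the motion vectors of two merge candidates into a single new candidate. A minimal sketch, assuming equal weights and hypothetical names (Lee's actual derivation scales the second candidate's motion vector before combining, which is represented here by taking a pre-scaled input):

```python
# Illustrative sketch of an "average merge candidate" (as characterized in
# the claim 13 ground): two candidates' motion vectors become one.
# Equal-weight integer averaging is a simplifying assumption; real codecs
# specify exact rounding behavior.

def average_merge_candidate(mv_first, mv_second_scaled):
    """Combine the first candidate's MV with the scaled second candidate's MV."""
    return ((mv_first[0] + mv_second_scaled[0]) // 2,
            (mv_first[1] + mv_second_scaled[1]) // 2)

mv1 = (8, 4)          # motion vector of the first merge candidate
mv2_scaled = (4, -2)  # scaled motion vector of the second merge candidate
avg = average_merge_candidate(mv1, mv2_scaled)  # (6, 1)
```

The combined candidate gives the codec one extra prediction option at almost no signaling cost, which is the sense in which merging "two motion vectors into one" can improve performance.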

Prosecution Timeline

Sep 13, 2021
Application Filed
Sep 09, 2023
Non-Final Rejection — §103
Dec 14, 2023
Response Filed
Mar 16, 2024
Final Rejection — §103
May 21, 2024
Response after Non-Final Action
Jun 25, 2024
Response after Non-Final Action
Jun 26, 2024
Applicant Interview (Telephonic)
Jul 01, 2024
Request for Continued Examination
Jul 10, 2024
Response after Non-Final Action
Jul 27, 2024
Non-Final Rejection — §103
Oct 23, 2024
Response Filed
Nov 27, 2024
Final Rejection — §103
Dec 10, 2024
Response after Non-Final Action
Dec 20, 2024
Response after Non-Final Action
Jan 08, 2025
Request for Continued Examination
Jan 18, 2025
Response after Non-Final Action
Feb 04, 2025
Non-Final Rejection — §103
Apr 22, 2025
Response Filed
Apr 28, 2025
Final Rejection — §103
Jun 24, 2025
Interview Requested
Jul 01, 2025
Applicant Interview (Telephonic)
Jul 01, 2025
Examiner Interview Summary
Jul 02, 2025
Response after Non-Final Action
Jul 30, 2025
Request for Continued Examination
Aug 03, 2025
Response after Non-Final Action
Aug 18, 2025
Non-Final Rejection — §103
Nov 14, 2025
Response Filed
Feb 07, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604034
METHOD FOR PARTITIONING BLOCK AND DECODING DEVICE
2y 5m to grant Granted Apr 14, 2026
Patent 12596190
MILLIMETER WAVE DISPLAY ARRANGEMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12581086
MERGE WITH MVD BASED ON GEOMETRY PARTITION
2y 5m to grant Granted Mar 17, 2026
Patent 12563112
SPATIALLY UNEQUAL STREAMING
2y 5m to grant Granted Feb 24, 2026
Patent 12554017
EBS/TOF/RGB CAMERA FOR SMART SURVEILLANCE AND INTRUDER DETECTION
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

9-10
Expected OA Rounds
56%
Grant Probability
84%
With Interview (+28.1%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 302 resolved cases by this examiner. Grant probability derived from career allow rate.
