Prosecution Insights
Last updated: April 19, 2026
Application No. 16/952,431

APPARATUS FOR DECODING MOTION INFORMATION IN MERGE MODE

Non-Final OA · §103 · §DP
Filed
Nov 19, 2020
Examiner
RALIS, STEPHEN J
Art Unit
3992
Tech Center
3900
Assignee
Ibex Pt Holdings Co. Ltd.
OA Round
3 (Non-Final)
Grant Probability: 33% (At Risk)
OA Rounds: 3-4
To Grant: 4y 4m
With Interview: 78%

Examiner Intelligence

Career Allow Rate: 33% (grants only 33% of cases; 64 granted / 194 resolved; -27.0% vs TC avg)
Interview Lift: +45.0% (strong lift among resolved cases with interview)
Avg Prosecution: 4y 4m (typical timeline; 19 currently pending)
Total Applications: 213 (career history, across all art units)

Statute-Specific Performance

§101: 2.4% (-37.6% vs TC avg)
§103: 33.4% (-6.6% vs TC avg)
§102: 16.0% (-24.0% vs TC avg)
§112: 33.5% (-6.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 194 resolved cases

Office Action

§103 · §DP
NON-FINAL ACTION (REISSUE OF U.S. PATENT 8,654,855)

TABLE OF CONTENTS
I. ACKNOWLEDGEMENTS
II. REISSUE PROCEDURAL REMINDERS
III. OTHER PROCEEDINGS
IV. STATUS OF CLAIMS
V. AIA STATUS
VI. CLAIM INTERPRETATION – PHRASES INVOKING 35 U.S.C. § 112, SIXTH PARAGRAPH
VII. PRIOR ART CITED HEREIN
VIII. RESPONSE TO ARGUMENTS
IX. CLAIM REJECTIONS – 35 USC § 103 (OBVIOUSNESS)
X. CLAIM REJECTIONS – 35 USC § 251 (DEFECTIVE REISSUE DECLARATION)
XI. NON-STATUTORY DOUBLE PATENTING
XII. CONCLUSION

I. ACKNOWLEDGEMENTS

This non-final Office action addresses U.S. reissue application No. 16/952,431 (“Instant Application”). Based upon a review of the instant application, the actual filing date is November 19, 2020 (“Actual Filing Date”). The Instant Application is a reissue application of U.S. Patent No. 8,654,855 (“Patent Under Reissue” or “’855 Patent”), titled “APPARATUS FOR DECODING MOTION INFORMATION IN MERGE MODE.” The Patent Under Reissue was filed on January 18, 2013 (“Non-Provisional Filing Date”), was assigned by the Office non-provisional U.S. patent application control number 13/745,288 (“Non-Provisional Application”), and issued on February 18, 2014, with claims 1-5 (“Originally Patented Claims”).

On August 1, 2023, a non-final Office action was issued (“Aug 2023 Non-Final Action”). On January 23, 2024, Applicant submitted a response to the Aug 2023 Non-Final Action (“Jan 2024 Response”). On April 22, 2024, a non-final Office action was issued (“Apr 2024 Non-Final Action”). On June 6, 2024, Applicant submitted a response to the Apr 2024 Non-Final Action (“Jun 2024 Response”). This non-final action addresses the Jun 2024 Response.

II. REISSUE PROCEDURAL REMINDERS

Disclosure of other proceedings. Applicant is reminded of the continuing obligation under 37 CFR 1.178(b) to timely apprise the Office of any prior or concurrent proceeding in which the Patent Under Reissue is or was involved.
These proceedings would include interferences, reissues, reexaminations, and litigation.

Disclosure of material information. Applicant is further reminded of the continuing obligation under 37 CFR 1.56 to timely apprise the Office of any information which is material to patentability of the claims under consideration in this reissue application. These disclosure obligations rest with each individual associated with the filing and prosecution of this application for reissue. See also MPEP §§ 1404, 1442.01 and 1442.04.

Manner of making amendments. Applicant is reminded that changes to the Instant Application must comply with 37 C.F.R. § 1.173, such that all amendments are made with respect to the Patent Under Reissue as opposed to any prior changes entered in the Instant Application. All added material must be underlined, and all omitted material must be enclosed in brackets, in accordance with Rule 173. Applicant may submit an appendix to any response in which claims are marked up to show changes with respect to a previous set of claims; however, such claims should be clearly denoted as “not for entry.”

III. OTHER PROCEEDINGS

Based upon Applicant’s statements as set forth in the Instant Application and the Examiner’s independent review of the Patent Under Reissue itself and its prosecution history, the Examiner cannot locate any concurrent proceedings before the Office, ongoing litigation, previous reexaminations (ex parte or inter partes), supplemental examinations, or certificates of correction regarding the Patent Under Reissue. The following PTAB proceedings involving the Patent Under Reissue have been found: Inter Partes Review case IPR2018-00011; Inter Partes Review case IPR2018-00012; Inter Partes Review case IPR2017-00101; and Inter Partes Review case IPR2017-00102.

IV. STATUS OF CLAIMS

Claims 6-16 are currently pending (“Pending Claims”). Claims 6-16 are currently examined (“Examined Claims”). Claims 1-5 are canceled.
Regarding the Examined Claims and as a result of this Office action: Claims 6-16 are rejected under 35 U.S.C. § 103; Claims 6-16 are rejected under 35 U.S.C. § 251; and Claims 6-16 are rejected on the basis of non-statutory double patenting.

V. AIA STATUS

Because the Instant Application does not contain a claim having an effective date on or after March 16, 2013, the America Invents Act First Inventor to File (“AIA-FITF”) provisions do not apply. Instead, the pre-AIA “First to Invent” provisions will govern this proceeding. See 35 U.S.C. § 100 (note). In the event the determination of the status of the application as subject to pre-AIA 35 U.S.C. 102 and 103 is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

VI. CLAIM INTERPRETATION – PHRASES INVOKING 35 U.S.C. § 112, SIXTH PARAGRAPH

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

Functional Phrase #1 (claim 6) – a merge mode motion information decoding unit configured to decode motion information using available spatial and temporal merge candidates when a motion information encoding mode of a current block indicates a merge mode.

Functional Phrase #2 (claim 6) – a prediction block generation unit configured to generate a prediction block of the current block using the decoded motion information.

Functional Phrase #3 (claim 6) – a residual block decoding unit configured to generate a two-dimensional quantization block by inversely scanning residual signals, inversely quantize the two-dimensional quantization block using a quantization parameter, and generate a residual block by inversely transforming the inverse-quantized block.
Functional Phrase #4 (claim 6) – a merge predictor index decoding unit configured to reconstruct a merge predictor index of a current block using a received merge codeword.

Functional Phrase #5 (claim 6) – a merge candidate generation unit configured to generate one or more merge candidates when the number of valid merge candidates of the current block is smaller than a predetermined number.

Functional Phrase #6 (claim 6) – a merge predictor selection unit configured to generate a merge candidate list using the spatial merge candidates, the temporal merge candidates, and one or more merge candidates generated by the merge candidate generation unit and to select a merge predictor based on the merge predictor index.

Functional Phrase #7 (claim 6) – a motion vector derivation unit configured to determine a temporal merge candidate picture and determine a temporal merge candidate block within the temporal merge candidate picture in order to generate a motion vector of the temporal merge candidate, wherein the temporal merge candidate picture is determined differently depending on a slice type, and the motion vector derivation unit determines, on the basis of the slice type, whether the temporal merge candidate picture is set in a reference picture list indicated by a flag indicative of a list for the temporal merge candidate picture in a slice header, wherein the motion vector derivation unit is configured to set a reference picture index of the temporal merge candidate as 0, and wherein a motion vector of the temporal merge candidate is selected among motion vectors of a first merge candidate block and a second merge candidate block based on a position of the current block within a slice or a largest coding unit, and the motion vector of the second merge candidate block is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit.
Functional Phrase #8 (claim 7) – a reference picture index derivation unit configured to set a reference picture index of one of blocks neighboring the current block or 0 as a reference picture index of the temporal merge candidate.

Because these claim limitations are being interpreted under pre-AIA 35 U.S.C. § 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

For computer-implemented means-plus-function limitations, a general purpose computer is only sufficient as the corresponding structure for performing a general computing function. When there is a specific function to be performed, it is required that an algorithm for performing the function be disclosed, and the corresponding structure becomes a general purpose computer transformed into a special purpose computer by programming the computer to perform the disclosed algorithm. The specification must explicitly disclose the algorithm for performing the claimed function, and simply reciting the claimed function in the specification will not be a sufficient disclosure for an algorithm which, by definition, must contain a sequence of steps. See MPEP § 2181(II)(B):

An algorithm is defined, for example, as “a finite sequence of steps for solving a logical or mathematical problem or performing a task.” Microsoft Computer Dictionary, Microsoft Press, 5th edition, 2002. Applicant may express the algorithm in any understandable terms including as a mathematical formula, in prose, in a flow chart, or in any other manner that provides sufficient structure. [Citations and select quotations omitted.]
Based upon a review of the Patent Under Reissue, the Examiner concludes that the corresponding structure for the above-identified Functional Phrases is disclosed in the Patent Under Reissue as follows:

Functional Phrase #1 (claim 6) – a merge mode motion information decoding unit configured to decode motion information using available spatial and temporal merge candidates when a motion information encoding mode of a current block indicates a merge mode. – corresponds to merge mode motion information decoding unit 230 of FIG. 4, which is shown in more detail in FIG. 7. The Patent Under Reissue does not explicitly disclose the structure of the claimed unit; however, a person of ordinary skill would have understood that the written description of the Patent Under Reissue discloses coding and decoding processes to be implemented via a processor or computer. The corresponding structure for the merge mode motion information decoding unit is a processor programmed to implement the processes performed by blocks 431-438 of FIG. 7.

Functional Phrase #2 (claim 6) – a prediction block generation unit configured to generate a prediction block of the current block using the decoded motion information. – corresponds to prediction block generation unit 250 of FIG. 4. The Patent Under Reissue does not explicitly disclose the structure of the claimed unit; however, a person of ordinary skill would have understood that the written description of the Patent Under Reissue discloses coding and decoding processes to be implemented via a processor or computer. The corresponding structure for the prediction block generation unit is a processor programmed to implement the algorithm disclosed in the specification for performing the claimed functions.
The algorithm for performing the claimed function of the prediction block generation unit is disclosed at column 9:1-17 and involves generating the prediction block of the current block using motion information reconstructed by the merge mode motion information decoding unit 230 or the AMVP mode motion information deciding unit 240.

Functional Phrase #3 (claim 6) – a residual block decoding unit configured to generate a two-dimensional quantization block by inversely scanning residual signals, inversely quantize the two-dimensional quantization block using a quantization parameter, and generate a residual block by inversely transforming the inverse-quantized block. – corresponds to residual block decoding unit 260 of FIG. 4. The Patent Under Reissue does not explicitly disclose the structure of the claimed unit; however, a person of ordinary skill would have understood that the written description of the Patent Under Reissue discloses coding and decoding processes to be implemented via a processor or computer. The corresponding structure for the residual block decoding unit is a processor programmed to implement the algorithm disclosed in the specification for performing the claimed functions. The algorithm for performing the claimed function of the residual block decoding unit is disclosed at column 9:18-37 and 63-65 and involves executing three functions: (i) generating a 2-D quantized coefficient block by inversely scanning entropy-decoded coefficients (column 9:19-22); (ii) quantizing a generated coefficient block using an inverse quantization matrix (column 9:35-37); and (iii) reconstructing a residual block by inversely transforming the inversely quantized coefficient block (column 9:63-65).

Functional Phrase #4 (claim 6) – a merge predictor index decoding unit configured to reconstruct a merge predictor index of a current block using a received merge codeword – corresponds to merge predictor index decoding unit 431 of FIG. 7.
The Patent Under Reissue does not explicitly disclose the structure of the claimed unit; however, a person of ordinary skill would have understood that the written description of the Patent Under Reissue discloses coding and decoding processes to be implemented via a processor or computer. The corresponding structure for the merge predictor index decoding unit (as well as each of the other recited “unit” limitations) is a processor programmed to implement the algorithm disclosed in the specification for performing the claimed functions. The algorithm for performing the claimed function of the merge predictor index decoding unit 431 (which is the same for unit 331 of FIG. 6) is disclosed at column 13:9-12 and involves reconstructing a merge predictor index, corresponding to a received merge predictor codeword, using a predetermined table corresponding to the number of merge candidates.

Functional Phrase #5 (claim 6) – a merge candidate generation unit configured to generate one or more merge candidates when the number of valid merge candidates of the current block is smaller than a predetermined number; – corresponds to merge candidate generation unit 437 of FIG. 7 (which is labelled in the figure as “merge candidate index decoding unit 437”). The algorithm for performing the claimed function of the merge candidate generation unit 437 is disclosed at column 14:13-31 and involves generating a merge candidate when the number of merge candidates is smaller than a predetermined number using one or more of the optional processes described at column 14:13-20, column 14:20-27, column 14:27-30, and column 14:30-31.
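For illustration only (this sketch is not part of the record), the merge candidate generation behavior described above — generating candidates when the count falls below a predetermined number — can be modeled in Python. The function name, the (mv_x, mv_y, ref_idx) tuple model of motion information, and the zero-motion generation rule are assumptions of the sketch, standing in for the optional processes described at column 14:13-31:

```python
def pad_merge_candidates(valid_candidates, max_num):
    """Append generated merge candidates until the list reaches the
    predetermined number (max_num). Candidates are modeled as
    (mv_x, mv_y, ref_idx) tuples; zero-motion candidates with
    increasing reference indices stand in for the optional
    generation processes."""
    candidates = list(valid_candidates)
    ref_idx = 0
    while len(candidates) < max_num:
        candidates.append((0, 0, ref_idx))  # generated zero-motion candidate
        ref_idx += 1
    return candidates
```

When the number of valid candidates already meets the predetermined number, the list is returned unchanged.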
Functional Phrase #6 (claim 6) – a merge predictor selection unit configured to generate a merge candidate list using the spatial merge candidates, the temporal merge candidates, and one or more merge candidates generated by the merge candidate generation unit and to select a merge predictor based on the merge predictor index – corresponds to merge predictor selection unit 436 of FIG. 7. The algorithm for performing the claimed function of the merge predictor selection unit 436 is disclosed at column 14:32-51 and involves: obtaining a list of merge candidates using a spatial merge candidate derived by the spatial merge candidate derivation unit 432, a temporal merge candidate generated by the temporal merge candidate configuration unit 435, and merge candidates generated by the merge candidate generation unit 437; deleting from the list, if a plurality of merge candidates has the same motion information, the merge candidate having a lower order of priority; and selecting a merge candidate as the merge predictor of a current block.
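The list construction and selection steps just described can be sketched in Python (for illustration only; the function names and the tuple model of motion information are assumptions of the sketch, not part of the record):

```python
def build_merge_candidate_list(spatial, temporal, generated):
    """Concatenate spatial, temporal, and generated merge candidates in
    priority order; when several candidates carry identical motion
    information, the lower-priority duplicate (seen later) is dropped."""
    merge_list, seen = [], set()
    for cand in (*spatial, *temporal, *generated):
        if cand not in seen:
            seen.add(cand)
            merge_list.append(cand)
    return merge_list

def select_merge_predictor(merge_list, merge_predictor_index):
    """Select the merge predictor that the reconstructed merge
    predictor index points at in the candidate list."""
    return merge_list[merge_predictor_index]
```

Keeping the first occurrence of duplicated motion information is what "deleting the candidate having a lower order of priority" amounts to when the list is assembled in priority order.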
Functional Phrase #7 (claim 6) – a motion vector derivation unit configured to determine a temporal merge candidate picture and determine a temporal merge candidate block within the temporal merge candidate picture in order to generate a motion vector of the temporal merge candidate … wherein the temporal merge candidate picture is determined differently depending on a slice type, and the motion vector derivation unit determines, on the basis of the slice type, whether the temporal merge candidate picture is set in a reference picture list indicated by a flag indicative of a list for the temporal merge candidate picture in a slice header, wherein the motion vector derivation unit is configured to set a reference picture index of the temporal merge candidate as 0 … wherein a motion vector of the temporal merge candidate is selected among motion vectors of a first merge candidate block and a second merge candidate block based on a position of the current block within a slice or a largest coding unit, and the motion vector of the second merge candidate block is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit. – corresponds to motion vector derivation unit for temporal merge candidate 434 of FIG. 7.

The algorithm for performing the claimed function of the motion vector derivation unit 434 (which is the same for unit 234 of FIG. 5) is disclosed at column 11:46 – 12:30 and involves determining a picture to which the temporal merge candidate belongs, and setting the temporal merge candidate picture as a picture having a reference picture index of 0. The temporal merge candidate picture is set either (i) as the picture having index 0 in the case of the slice type being P, or (ii) as the first picture of a reference picture list indicated by a flag indicative of a temporal merge candidate list in a slice header in the case of the slice type being B.
The algorithm involves executing the second “determining” function by setting the temporal merge candidate picture as the first picture included in a list 0 when the slice type is P and by setting the temporal merge candidate picture as the first picture of a reference picture list indicated by a flag that denotes a temporal merge candidate list in a slice header when the slice type is B, as described at column 11:46-58. The algorithm also involves obtaining a temporal merge candidate block within the temporal merge candidate picture by assigning an order of priority to the plurality of corresponding blocks and identifying the first valid block as the temporal merge candidate block, as described at columns 11:59 – 12:22. The algorithm also involves setting the motion vector of a temporal merge candidate as the motion vector of the temporal merge candidate prediction block, as described at column 12:23-26.

Functional Phrase #8 (claim 7) – a reference picture index derivation unit configured to set a reference picture index of one of blocks neighboring the current block or 0 as a reference picture index of the temporal merge candidate – corresponds to reference picture index derivation unit 433 of FIG. 7. The algorithm for performing the claimed function of the reference picture index derivation unit 433 (which is the same for unit 233 of FIG. 5) is disclosed at column 11:6-45 and involves obtaining the reference picture index of the temporal merge candidates of a current block by setting it as (i) the reference picture index of one or more of the valid blocks (i.e., prediction units) that spatially neighbor a current block, as described at column 11:13-44, or (ii) 0 if there is no valid reference picture index, as described at column 11:44-45.
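For illustration only, the slice-type-dependent selection of the temporal merge candidate picture described at column 11:46-58 can be sketched in Python (the function name and list representation are assumptions of the sketch, not part of the record):

```python
def temporal_merge_candidate_picture(slice_type, list_flag,
                                     ref_pic_list0, ref_pic_list1):
    """Return the temporal merge candidate picture: for a P slice, the
    first picture of list 0; for a B slice, the first picture of the
    reference picture list indicated by the slice-header flag (list 0
    when the flag is 1, list 1 when the flag is 0)."""
    if slice_type == "P":
        return ref_pic_list0[0]
    if slice_type == "B":
        return ref_pic_list0[0] if list_flag == 1 else ref_pic_list1[0]
    raise ValueError("temporal merge candidate applies to P and B slices")
```

The same branching structure appears in both the ’855 Patent description and the WD5 colPic derivation discussed later in this action.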
If Applicant wishes to provide further explanation or dispute the Examiner’s interpretation of the corresponding structure, Applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action.

VII. PRIOR ART CITED HEREIN

The following prior art patents and printed publications are cited herein:

“WD5: Working Draft 5 of High-Efficiency Video Coding,” Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, November 2011 (“WD5”);
U.S. Patent Application Publication 2012/0236942 (“Lin”);
U.S. Patent 9,161,043 (“’043 Patent”);
U.S. Patent 9,036,709 (“’709 Patent”); and
U.S. Patent 9,025,669 (“’669 Patent”).

VIII. RESPONSE TO ARGUMENTS

Applicant’s submission of a terminal disclaimer with respect to co-pending reissue application no. 17/101,457 and Reissued Patent RE49,907 has been received and is sufficient to overcome the previous obviousness-type double patenting rejections. Accordingly, those rejections are withdrawn. However, in light of a review of other patents in the same family as the Instant Application, the Examiner determines that additional obviousness-type double patenting rejections are implicated. Those rejections are advanced below in Section XI.

In addition, the reissue declaration submitted January 23, 2024, was previously considered sufficient to overcome the § 251 rejections tendered in the Aug 2023 Non-Final Action; however, it has been determined that the corrected declaration is also defective because it inaccurately characterizes the underlying patent as invalid or inoperative because the patentee claimed less than the patentee was allowed to claim. Accordingly, the reissue declaration is found defective, as explained below in Section X.
In the Apr 2024 Non-Final Action, claims 6-16 were indicated as containing allowable subject matter because WD5 was identified as not teaching the corresponding algorithm associated with Functional Phrase #7. In particular, Functional Phrase #7 recites, inter alia, “a motion vector derivation unit ... determines, on the basis of the slice type, whether the temporal merge candidate picture is set in a reference picture list indicated by a flag indicative of a list for the temporal merge candidate picture in a slice header.” This phrase invokes 35 U.S.C. § 112, sixth paragraph, and the corresponding algorithm for the above-quoted function is disclosed in the ’855 Patent at column 11:46-58, which teaches that the “determining” function is executed by setting the temporal merge candidate picture as the first picture included in a list 0 when the slice type is P and by setting the temporal merge candidate picture as the first picture of a reference picture list indicated by a flag that denotes a temporal merge candidate list in a slice header when the slice type is B. For example, the temporal merge candidate picture can be set as a picture in a list 0 when the flag is 1 and as a picture in a list 1 when the flag is 0.

In the Apr 2024 Non-Final Action, it was determined that claim 6 was allowable over the combination of WD5 and Lin because the algorithm disclosed in the ’855 Patent for determining whether the temporal merge candidate picture is set in a reference picture list indicated by a flag indicative of a list for the temporal merge candidate picture in a slice header was not taught by WD5. See Apr 2024 Non-Final Action at pp. 27-28. However, upon further reconsideration, WD5 does appear to teach substantially the same algorithm. In particular, WD5 discloses a similar algorithm for executing this determining function on p.
111:

“Depending on the values of slice_type and collocated_from_l0_flag, the variable colPic, specifying the picture that contains the co-located partition, is derived as follows.
– If slice_type is equal to B and collocated_from_l0_flag is equal to 0, the variable colPic specifies the picture that contains the co-located partition as specified by RefPicList1[ 0 ].
– Otherwise (slice_type is equal to B and collocated_from_l0_flag is equal to 1 or slice_type is equal to P), the variable colPic specifies the picture that contains the co-located partition as specified by RefPicList0[ 0 ].”

In the above algorithm, the “colPic” variable corresponds to the claimed “temporal merge candidate picture,” the “collocated_from_l0_flag” corresponds to the claimed flag, and the two lists “RefPicList0[ 0 ]” and “RefPicList1[ 0 ]” each correspond to “a reference picture list.” The algorithm corresponding to the claimed functional phrase requires the first picture to be included in a list 0 when the slice type is P. Likewise, in WD5, when the slice type is P, the variable colPic is included in a list 0, i.e., RefPicList0[ 0 ]. The algorithm corresponding to the claimed functional phrase also requires setting the temporal merge candidate picture as the first picture of a reference picture list indicated by a flag that denotes a temporal merge candidate list in a slice header when the slice type is B. For example, the temporal merge candidate picture can be set as a picture in a list 0 when the flag is 1 and as a picture in a list 1 when the flag is 0. Likewise, in WD5, the temporal merge candidate picture “colPic” is set as the first picture of a reference picture list on the basis of a flag value when the slice type is B. The value of collocated_from_l0_flag determines whether colPic should be included in list 0 or list 1. That is, when the slice type is B and collocated_from_l0_flag is 0, then colPic is included in list 1, i.e., RefPicList1[ 0 ].
Otherwise, when the slice type is B and collocated_from_l0_flag is 1, then colPic is included in list 0, i.e., RefPicList0[ 0 ]. The functional phrase also indicates the “flag that denotes a temporal merge candidate list” is “in a slice header.” Likewise, WD5 discloses that the collocated_from_l0_flag is contained in a slice header at the top of p. 31, which indicates that the collocated_from_l0_flag is a part of the “Slice header syntax.”

For these reasons, WD5 is determined to teach the algorithm associated with Functional Phrase #7, in which the motion vector derivation unit ... “determines, on the basis of the slice type, whether the temporal merge candidate picture is set in a reference picture list indicated by a flag indicative of a list for the temporal merge candidate picture in a slice header,” as claimed. New grounds of rejection based on this determination appear below in Section IX.

IX. CLAIM REJECTIONS – 35 USC § 103 (OBVIOUSNESS)

The following is a quotation of pre-AIA 35 U.S.C. 103(a), which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

Claims 6-16 are rejected under 35 U.S.C. § 103(a) as being unpatentable over WD5 and Lin.
Regarding claim 6, WD5 discloses an apparatus for decoding motion information in merge mode, comprising:

a merge mode motion information decoding unit configured to decode motion information using available spatial and temporal merge candidates when a motion information encoding mode of a current block indicates a merge mode (i.e., WD5 discloses processes that are performed by at least blocks 431-438 of FIG. 7 in the ’855 Patent –
– Merge predictor index decoding unit 431 (see explanation below for “merge predictor index decoding unit”);
– Spatial merge candidate derivation unit 432 (i.e., WD5 defines five spatial merge candidate blocks in FIG. 8-3 (A1, B1, B0, A0, and B2) in the same positions as the five spatial merge candidates disclosed in FIG. 3 of the ’855 Patent; the motion information for the B2 block is set as a spatial merge candidate when motion information of at least one of the other four blocks is not available (i.e., not valid) – see WD5 at §§ 8.4.2.1.1, 8.4.2.1.2, and 8.4.2.1.8);
– Reference picture index derivation unit 433 (i.e., WD5 sets the reference picture index of the temporal merge candidate to 0 – see WD5 at § 8.4.2.1.1, step 2);
– Motion vector derivation unit 434 (see explanation below for motion vector derivation unit);
– Temporal merge candidate configuration unit 435 (i.e., WD5 determines an obtained reference picture index (e.g., refIdxLX of left block A1) and obtained motion vector (mvLXCol) as the reference picture index and motion vector of the temporal merging candidate (Col) – see WD5 at § 8.4.2.1.1, step 2);
– Merge predictor selection unit 436 (see explanation below for merge predictor selection unit);
– Merge candidate index decoding unit 437 (see explanation below for merge candidate generation unit); and
– Motion information generation unit 438 (i.e., WD5 determines motion information (motion vector and reference picture index) of the selected merge predictor as the motion information of the current block – see WD5 at §§ 8.4.2.1 and
8.4.2.1.1). a prediction block generation unit configured to generate a prediction block of the current block using the decoded motion information (i.e., WD5 discloses an algorithm that includes the following steps that are performed by the prediction block generation unit 250 – if a motion vector (mvLX in WD5) has an integer pixel unit (i.e., the fractional part is (0,0)), generate the prediction block (predSamplesLXL) of the current block by copying samples Ai,j from the reference picture corresponding to a position that is indicated by a motion vector within a picture (refPicLX) indicated by a reference picture index (refIdxLX); and if a motion vector does not have an integer pixel unit (i.e., at least one component of the fractional part is not equal to 0), generate pixels of a prediction block (samples ai,j through ri,j) from integer pixels within a picture indicated by a reference picture index. See WD5 at §§ 8.4.2.2 and 8.4.2.2.2. a residual block decoding unit configured to generate a two-dimensional quantization block by inversely scanning residual signals, inversely quantize the two-dimensional quantization block using a quantization parameter, and generate a residual block by inversely transforming the inverse-quantized block (i.e., WD5 discloses an algorithm that includes the following steps that are performed by the residual block decoding unit 260 – generate a 2-D quantized coefficient block (i.e., 2-D (nW)x(nH) array of quantized transform coefficients ci,j) by inversely scanning entropy-decoded coefficients (transCoeffLevel) using a diagonal raster inverse-scan method (scanIdx set to 3) – see WD5 at §§ 8.4.2, 8.4.3.1, 8.4.3.2, and 8.5.1–3); inversely quantize the 2-D (nW)x(nH) array ci,j using a quantization parameter (qP), resulting in a 2-D array of inverse quantized coefficients di,j – see WD5 at §§ 8.5.1 and 8.5.3; and inversely transform the 2-D array of coefficients di,j into a 2-D array of residual samples r – see WD5 at §§ 8.5.1 and 8.5.3.
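For illustration only, the inverse-scanning step described above (returning entropy-decoded coefficient levels to a two-dimensional block) can be sketched as a simplified up-right diagonal inverse scan (Python; WD5's actual scan tables in § 8.5.2 and its scanIdx handling are more involved, so this is a stand-in, not the specification's procedure):

```python
def diagonal_inverse_scan(levels, nW, nH):
    """Map a 1-D list of decoded coefficient levels into an nH x nW block
    along up-right diagonals (simplified stand-in for the diagonal
    inverse-scan order discussed above)."""
    block = [[0] * nW for _ in range(nH)]
    # Visit positions diagonal by diagonal (x + y constant), bottom-up
    # within each diagonal.
    order = sorted(((x, y) for y in range(nH) for x in range(nW)),
                   key=lambda p: (p[0] + p[1], -p[1]))
    for level, (x, y) in zip(levels, order):
        block[y][x] = level
    return block
```
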
wherein the merge mode motion information decoding unit comprises: a merge predictor index decoding unit configured to reconstruct a merge predictor index of a current block using a received merge codeword (i.e., WD5 reconstructs a merging candidate index “merge_idx,” corresponding to a received bin string, using a predetermined table (unary binarization table 9-31) corresponding to the maximum number of merging candidates “MaxNumMergeCand” – see definition of “merge_idx” on p. 69 of WD5; see also § 7.3.7 on p. 39 in which the descriptor for merge_idx is given as “ae(v)”, which is defined in § 7.2 on p. 23, and the parsing process for the ae(v) descriptor is given in § 9.2 on pp. 153-154; the decoding process flow of the parsing process for reconstructing the merging candidate index merge_idx is given in § 9.2.3 on p. 172, and merge_idx maps to a binary codeword table as shown in table 9-30 at pp. 163-168); a merge candidate generation unit configured to generate one or more merge candidates when the number of valid merge candidates of the current block is smaller than a predetermined number (i.e., WD5 generates an additional merge candidate “combCandk”, with k=0, when the number of valid spatial and temporal merge candidates “numOrigMergeCand” is less than a predetermined number “MaxNumMergeCand” – see § 8.4.2.1.1, steps 6-7 on p. 100; see also § 8.4.2.1.3 on pp. 
103-105); a merge predictor selection unit configured to generate a merge candidate list using the spatial merge candidates, the temporal merge candidates, and one or more merge candidates generated by the merge candidate generation unit and to select a merge predictor based on the merge predictor index (i.e., WD5 obtains a list of spatial merge candidates “A1, B1, B0, A0, and B2”, temporal merge candidate “Col”, and any additional merge candidates “combCandk”, removes candidates with a lower priority from the list if they have the same motion information as other merge candidates, and selects a merge candidate “N” corresponding to the reconstructed merge predictor index “merge_idx” – see § 8.4.2.1.1 on pp. 100-101 of WD5); and a motion vector derivation unit configured to determine a temporal merge candidate picture and determine a temporal merge candidate block within the temporal merge candidate picture in order to generate a motion vector of the temporal merge candidate (i.e., WD5 determines a temporal merge candidate picture “colPic” and a temporal merge candidate block “colPu” on p. 111, and then generates a motion vector “mvLXCol” of the temporal merge candidate block using the determined colPic and colPu on p. 112), wherein the temporal merge candidate picture is determined differently depending on a slice type (i.e., WD5 specifies the variable “colPic” differently based on whether the “slice_type” is equal to B or not – see p. 111), and wherein a motion vector of the temporal merge candidate is selected among motion vectors of a first merge candidate block and a second merge candidate block based on a position of the current block within a slice or a largest coding unit, and the motion vector of the second merge candidate block is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit (i.e., § 8.4.2.1.8 on pp. 
111-112 describes the derivation process for the temporal merge candidate motion vector). WD5’s disclosure does not appear to expressly disclose that each of the claimed units is implemented via a computer or a processor that is programmed to execute the algorithms corresponding to the claimed functions. However, it would have been apparent to those skilled in the art that the processes disclosed by WD5 were intended to be implemented via a processing device based on the state of the prior art at the time the invention was made. For instance, Lin relates to HEVC development and discloses deriving temporal motion vector prediction candidates in a merge mode. See Lin at paragraphs [0004], [0024], [0025], [0030], and [0038]–[0041]. Lin discloses that the decoding processes described therein “may be implemented in various hardware, software codes, or a combination of both,” and more specifically may be “program codes to be executed on a Digital Signal Processor (DSP)” or may involve “functions to be performed by a computer processor, a digital signal processor, a microprocessor, or field programmable gate array (FPGA).” Based on Lin’s teaching that HEVC decoding processes are conventionally implemented via hardware, software, or a combination of both, it would have been an obvious expedient to implement WD5’s HEVC decoding processes in the same manner. Regarding claim 7, the combination of WD5 and Lin teaches the apparatus of claim 6, further comprising: a reference picture index derivation unit configured to set a reference picture index of one of blocks neighboring the current block or 0 as a reference picture index of the temporal merge candidate (i.e., WD5 sets the reference picture index of the temporal merge candidate to 0 – see WD5 at § 8.4.2.1.1, step 2).
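For illustration only, the merge predictor index reconstruction and candidate-list handling attributed to WD5 in the claim 6 analysis above can be sketched as follows (Python; the truncated-unary decode mirrors a unary binarization with cMax = MaxNumMergeCand − 1, and the list logic mirrors the prune-and-select steps of § 8.4.2.1.1, but the functions themselves are illustrative constructs, not the specification's procedures):

```python
def decode_merge_idx(bins, max_num_merge_cand):
    """Truncated-unary decode of merge_idx: count leading 1 bins until a
    0 bin or until cMax = MaxNumMergeCand - 1 is reached."""
    c_max = max_num_merge_cand - 1
    idx = 0
    for b in bins:
        if idx == c_max or b == 0:
            break
        idx += 1
    return idx

def select_merge_predictor(spatial, temporal, generated, max_num, merge_idx):
    """Build the merge candidate list (skipping unavailable entries and
    duplicates, with lower-priority duplicates removed), truncate it to
    the maximum size, and return the candidate named by merge_idx."""
    merge_list = []
    for cand in spatial + temporal + generated:
        if cand is not None and cand not in merge_list:
            merge_list.append(cand)
    return merge_list[:max_num][merge_idx]
```
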
Regarding claim 8, the combination of WD5 and Lin teaches the apparatus of claim 6, wherein the temporal merge candidate picture has a reference picture index of 0 (i.e., WD5 sets the reference picture index “refIdxLX” of the temporal merge candidate picture as 0 at step 2 on p. 100). Regarding claim 9, the combination of WD5 and Lin teaches the apparatus of claim 6, wherein: the temporal merge candidate block is a second candidate block, and the second candidate block is a block comprising a lower right pixel at a central position of the block corresponding to the current prediction unit within the temporal merge candidate picture. WD5 discloses that to derive the motion vector for the temporal merge candidate, a right-bottom candidate block and a center block are used and a motion vector of one of the two candidate blocks is selected based on a position of the current block within a largest coding unit (e.g., if (yP>>Log2MaxCuSize) is equal to (yPRb>>Log2MaxCuSize)), and the motion vector of the second merge candidate block (center block) is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit (e.g., if (yP>>Log2MaxCuSize) is not equal to (yPRb>>Log2MaxCuSize)). See WD5 at pp. 111-112. A candidate block is a prediction unit PU (colPu) in a co-located reference picture (colPic). Two candidate blocks colPu (right-bottom and center) are considered in the derivation. The first candidate block is the PU located at the right-bottom position of the current PU (xPRb, yPRb)=(xP+nPSW, yP+nPSH) but inside the co-located reference picture (colPic), where nPSW and nPSH are the width and the height of the current prediction unit. See WD5 at p. 111 (section 8.4.2.1.8, step 1).
This right-bottom candidate is chosen if the co-located right-bottom PU is located inside the same LCU line as the current PU, i.e., its y component yPRb divided by the LCU size (Log2MaxCuSize) is equal to the y component of the current PU yP divided by the LCU size (Log2MaxCuSize): (yP>>Log2MaxCuSize) is equal to (yPRb>>Log2MaxCuSize). (Id.) The second candidate block is the PU located at the center position of the current PU (xPCtr, yPCtr)=(xP+(nPSW>>1), yP+(nPSH>>1)), but inside the co-located reference picture. (Id. at 111 (section 8.4.2.1.8, step 2).) This second candidate is chosen if the condition in step 1, (yP>>Log2MaxCuSize) is equal to (yPRb>>Log2MaxCuSize), is false – i.e., the co-located right-bottom PU is not inside the current LCU line and thus the current block is adjacent to a lower boundary of the largest coding unit. In particular, when the condition in step 1 is false, “colPu is marked as unavailable.” (Id. at 111, step 1.) At step 2, “[i]f . . . colPu is unavailable,” the prediction unit PU (colPu) is then defined as the prediction unit covering this center position (xPCtr, yPCtr) (in other words, the second candidate block is selected). (Id. at 111, step 2.) Accordingly, WD5 teaches that the temporal merge candidate block can be a second candidate block, which is a block comprising a lower-right pixel at a central position of the block corresponding to the current prediction unit within the temporal merge candidate picture. Regarding claim 10, the combination of WD5 and Lin teaches the apparatus of claim 9, wherein the temporal merge candidate block is a valid block retrieved when the first candidate block and the second candidate block are searched for in due order or only the second candidate block is searched for depending on a position of the current block (see WD5, steps 1 and 2 of § 8.4.2.1.8 at p.
111 – only the second candidate is chosen if the condition in step 1, (yP>>Log2MaxCuSize) is equal to (yPRb>>Log2MaxCuSize), is false – i.e., the co-located right-bottom PU is not inside the current LCU line and thus the current block is adjacent to a lower boundary of the largest coding unit. In particular, when the condition in step 1 is false, “colPu is marked as unavailable.” (Id. at 111, step 1.) At step 2, “[i]f . . . colPu is unavailable,” the prediction unit PU (colPu) is then defined as the prediction unit covering this center position (xPCtr, yPCtr) (in other words, the second candidate block is selected)). Regarding claim 11, the combination of WD5 and Lin teaches the apparatus of claim 6, wherein the merge candidate generation unit generates the merge candidate by combining pieces of motion information on valid merge candidates or generates the merge candidate having a motion vector of 0 and a reference picture index of 0 (see § 8.4.2.1.3 at pp. 103-104 of WD5 – additional merge candidate “combCandk” is added at the end of the merge candidate list “mergeCandList” while the number of the merge candidate “numMergeCand” is equal to or less than the predetermined number “MaxNumMergeCand”). Regarding claim 12, the combination of WD5 and Lin teaches the apparatus of claim 11, wherein the merge candidate generation unit generates the merge candidate whose number is equal to or smaller than the predetermined number if the merge candidate is generated by combining pieces of motion information on predetermined valid merge candidates (see § 8.4.2.1.3 at pp. 103-104 of WD5 – additional merge candidate “combCandk” is added at the end of the merge candidate list “mergeCandList” while the number of the merge candidate “numMergeCand” is equal to or less than the predetermined number “MaxNumMergeCand”). 
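For illustration only, the step-1/step-2 selection between the right-bottom and center co-located blocks, discussed above for claims 9 and 10, can be sketched as follows (Python; the coordinate arithmetic and variable names follow WD5 § 8.4.2.1.8, but the function itself is an illustrative construct, not the specification's procedure):

```python
def temporal_candidate_position(xP, yP, nPSW, nPSH, log2_max_cu_size):
    """Return the position used to locate colPu: the right-bottom block
    when it stays inside the current LCU row (step 1); otherwise the
    block covering the center of the current PU (step 2)."""
    xPRb, yPRb = xP + nPSW, yP + nPSH          # right-bottom position
    if (yP >> log2_max_cu_size) == (yPRb >> log2_max_cu_size):
        return ("right-bottom", xPRb, yPRb)    # same LCU row: step 1
    # Current block is adjacent to the lower LCU boundary, so the
    # right-bottom colPu would be unavailable; use the center block.
    xPCtr, yPCtr = xP + (nPSW >> 1), yP + (nPSH >> 1)
    return ("center", xPCtr, yPCtr)
```
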
Regarding claim 13, the combination of WD5 and Lin teaches the apparatus of claim 6, wherein, if a current coding unit is equal to or larger than a reference unit having a first reference size, a same quantization parameter is used for all residual blocks included in the current coding unit (see § 7.3.5 at p. 37 of WD5 – “if (cu_qp_delta_enabled_flag && log2CUSize >= log2MinCUDQPSize) IsCuQpDeltaCoded = 0” – this indicates that if the size of a current coding unit “log2CUSize” is equal to or larger than a reference unit size “log2MinCUDQPSize” then the current coding unit is not subjected to a variable quantization parameter, i.e., the same quantization parameter is used for all the residual blocks in the current coding unit; see also p. 56 – the variable log2MinCUDQPSize specifies the minimum coding unit size that can further modify the value of QPY; see also § 7.4.9 – “If a coding unit with the split_coding_unit_flag[ x0 ][ y0 ] equal to 0 and the log2CUSize is greater than or equal to log2MinCUDQPSize, the quantization group includes this coding unit only”). Regarding claim 14, the combination of WD5 and Lin teaches the apparatus of claim 13, wherein if plural coding units are included in the first reference unit, only one quantization parameter is reconstructed for the plural coding units (i.e., in WD5, smaller coding units resulting from splitting the current coding unit are inside the same quantization group and therefore share the same quantization parameter QPY – see § 7.4.9).
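For illustration only, the reading of WD5's condition for claims 13 and 14 above reduces to the following sketch (Python; only the comparison is drawn from the § 7.3.5 syntax quoted above, and the function name and arguments are illustrative constructs):

```python
def cu_forms_own_qp_group(log2_cu_size, log2_min_cu_dqp_size,
                          cu_qp_delta_enabled_flag):
    """Per the reading of WD5 Secs. 7.3.5 and 7.4.9 above: when CU-level
    QP deltas are disabled, or the coding unit is at least the minimum
    dQP size, the quantization group includes this coding unit only, so
    a single quantization parameter applies to all of its residual
    blocks."""
    if not cu_qp_delta_enabled_flag:
        return True   # no per-CU QP deltas are signaled at all
    return log2_cu_size >= log2_min_cu_dqp_size
```
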
Regarding claim 15, the combination of WD5 and Lin teaches the apparatus of claim 13, wherein the first reference size is predetermined per picture or per slice (i.e., log2MinCUDQPSize in WD5 is determined based on the LCU size Log2MaxCUSize and the syntax element max_cu_qp_delta_depth, which is signaled per picture in the picture parameter set (PPS) – see § 7.2.2.2, equation 7-7; because the PPS contains all the parameters associated with a coded picture and can change for each picture, and the reference size log2MinCUDQPSize depends on a syntax element that is part of the PPS, the reference size is predetermined per picture). Regarding claim 16, the combination of WD5 and Lin teaches the apparatus of claim 6, wherein a reference picture index of the temporal merge candidate is different from a reference picture index used for indicating a temporal merge candidate picture including the temporal merge candidate block (i.e., in WD5 the reference picture index used for indicating a temporal merge candidate picture (colPic) including the temporal merge candidate block (colPu) is always the reference index 0, -- either the first picture from reference picture list 0 (RefPicList0[0]) or reference picture 1 (RefPicList1[0]) – see WD5 at § 8.4.2.1.8; the reference picture index refIdxLX of the temporal merge candidate Col in WD5 is generally derived from the prediction unit (PU) located to the left of the current PU – see §8.4.2.1.1, step 2; unless the PU located to the left of the current PU is unavailable or coded with intra-picture prediction mode, the refIdxLX of the temporal merge candidate Col is not explicitly set to 0, but instead is set to the reference picture index of the left PU – see id.; therefore, a reference picture index (refIdxLX) of the temporal merge candidate (Col) may be 1, and thus different from a reference picture index used for indicating a temporal merge candidate picture (colPic) including the temporal merge candidate block (colPu), which is 
always equal to 0). X. CLAIM REJECTIONS – 35 USC § 251 (DEFECTIVE REISSUE DECLARATION) For reissue applications filed on or after September 16, 2012, all references to 35 U.S.C. 251 and 37 CFR 1.172, 1.175, and 3.73 are to the current provisions. Claims 6-16 are rejected as being based upon a defective reissue declaration under 35 U.S.C. § 251. The reissue declaration filed January 23, 2024, is defective because although it properly indicates that this proceeding is a narrowing reissue, it incorrectly characterizes the underlying patent as being wholly or partially invalid or inoperative for patentee claiming less than patentee was allowed to claim. This characterization indicates that the scope of the claims is being expanded (i.e., broadened); however, by narrowing the claims, patentee is correcting an error based upon claiming more than patentee was allowed to claim, in terms of the scope of the claimed invention. Accordingly, the declaration should be corrected to reflect that the underlying patent is wholly or partially invalid or inoperative by reason of patentee claiming more than patentee was allowed to claim. XI. NON-STATUTORY DOUBLE PATENTING The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir.
1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969). A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens.
An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer. Claims 6 and 13-15 are rejected on the ground of nonstatutory double patenting as being not patentably distinct from claims 1-4 of U.S. Patent 9,036,709 (“’709 Patent”) in view of WD5. As shown in the chart below, claim 6 claims substantially all of the same limitations as claim 1 of the ‘709 Patent, except claim 6 of the Instant Application does not include: (1) wherein a reference picture index of the temporal merge candidate is set to 0, and (2) a diagonal raster inverse scan is used during the inverse-scanning process, as recited in claim 1 of the ‘709 Patent. 16/952,431 U.S. 9,036,709 (limitations re-ordered) 6. An apparatus for decoding motion information in merge mode, comprising: 1. An apparatus for decoding motion information in merge mode, comprising: a merge mode motion information decoding unit configured to decode motion information using available spatial and temporal merge candidates when a motion information encoding mode of a current block indicates a merge mode; a merge mode motion vector decoding unit configured to generate motion information using available spatial and temporal merge candidates when a motion information encoding mode of a current block indicates a merge mode; a prediction block generation unit configured to generate a prediction block of the current block using the decoded motion information; a prediction block generating unit configured to generate a prediction block of the current block using motion information; and a residual block decoding unit configured to generate a two-dimensional quantization block by inversely scanning residual signals, inversely quantize the two-dimensional quantization block using a quantization parameter, and generate a residual block by inversely transforming the
inverse-quantized block; a residual block generating unit configured to perform an entropy-decoding process and an inverse-scanning process on residual signals to generate a quantized block, and to perform an inverse-transforming process on the quantized block to generate a residual block, wherein the merge mode motion information decoding unit comprises: a merge predictor index decoding unit configured to reconstruct a merge predictor index of a current block using a received merge codeword; a merge candidate generation unit configured to generate one or more merge candidates when a number of valid merge candidates of the current block is smaller than a predetermined number; a merge predictor selection unit configured to generate a merge candidate list using the spatial merge candidates, the temporal merge candidates, and one or more merge candidates generated by the merge candidate generation unit and to select a merge predictor based on the merge predictor index; a motion vector derivation unit configured to determine a temporal merge candidate picture and determine a temporal merge candidate block within the temporal merge candidate picture in order to generate a motion vector of the temporal merge candidate, wherein the temporal merge candidate picture is determined differently depending on a slice type, and the motion vector derivation unit determines, on the basis of the slice type, whether the temporal merge candidate picture is set in a reference picture list indicated by a flag indicative of a list for the temporal merge candidate picture in a slice header, and wherein a motion vector of the temporal merge candidate is selected among motion vectors of a first merge candidate block and a second merge candidate block based on a position of the current block within a slice or a largest coding unit, and the motion vector of the second merge candidate block is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a 
lower boundary of the largest coding unit a motion vector of the temporal merge candidate is selected among a first merge candidate block and a second merge candidate block based on a position of the current block within a slice or a largest coding unit, and the motion vector of the second merge candidate block is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit, and wherein a reference picture index of the temporal merge candidate is set to 0, a diagonal raster inverse scan is used during the inverse-scanning process. Claim 6, when combined with the teachings of WD5, renders claim 1 of the ‘709 Patent obvious because the combination amounts to the application of a known technique to a known device ready for improvement to yield predictable results (KSR Rationale D), on the basis of the following factors: (1) a finding that the prior art contained a “base” device (method, or product) upon which the claimed invention can be seen as an “improvement;” (2) a finding that the prior art contained a known technique that is applicable to the base device (method, or product); (3) a finding that one of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in an improved system. - See MPEP § 2143(I)(C). For factor (1), the “base” device corresponds to the apparatus recited in claim 6. For factor (2), the prior art (i.e., WD5) teaches the known techniques of claim 6 of the ‘709 Patent that are missing in claim 6 of the Instant Application. Notably, WD5 teaches: a reference picture index of the temporal merge candidate is set to 0 (i.e., WD5 sets the reference picture index “refIdxLX” of the temporal merge candidate picture at step 2 on p. 100), and a diagonal raster inverse scan is used during the inverse-scanning process (see § 8.5.2 “Inverse scanning process for transform coefficients” on p. 
125, which teaches the inverse scanning of coefficients can be diagonal, horizontal, or vertical). For factor (3), those skilled in the art would have recognized that applying these techniques taught by WD5 to the apparatus of claim 6 would have yielded predictable results and resulted in an improved system because the disclosure of WD5 corresponds to the development of a standardized merge-mode protocol for high efficiency video coding and decoding for use in a wide variety of applications (see WD5 at pp. 1-2 and § 8 “Decoding Process” at pp. 74-150), such that the inclusion of setting a reference picture index of a temporal merge candidate to 0 and inverse-scanning in a diagonal manner would have been obvious expedients in light of the fact that setting the reference picture index to 0 would have facilitated the derivation process for motion vector components and reference indices, as taught at pp. 98-100 of WD5, and inverse-scanning transform coefficients in a diagonal manner was a preferred method of scanning transform coefficients. Claims 13-15 recite additional limitations that are identical to the additional limitations of claims 2-4 of the ‘709 Patent. Claim 6 is rejected on the ground of nonstatutory double patenting as being not patentably distinct from claim 1 of U.S. Patent 9,161,043 (“’043 Patent”) in view of WD5. As shown in the chart below, claim 6 claims substantially all of the same limitations as claim 1 of the ‘043 Patent, except claim 6 of the Instant Application does not include: (1) a spatial merge candidate derivation unit configured to derive spatial merge candidates of the current block; (2) a temporal merge candidate configuration unit configured to generate a temporal merge candidate of the current block; and (3) wherein a reference picture index of the temporal merge candidate is set to 0, as recited in claim 1 of the ‘043 Patent. 16/952,431 U.S. 9,161,043 (limitations re-ordered) 6.
An apparatus for decoding motion information in merge mode, comprising: 1. An apparatus for decoding motion information in merge mode, comprising: a merge mode motion information decoding unit configured to decode motion information using available spatial and temporal merge candidates when a motion information encoding mode of a current block indicates a merge mode; a prediction block generation unit configured to generate a prediction block of the current block using the decoded motion information; a prediction block generating unit configured to generate a prediction block of the current block using motion information of the merge predictor; a residual block decoding unit configured to generate a two-dimensional quantization block by inversely scanning residual signals, inversely quantize the two-dimensional quantization block using a quantization parameter, and generate a residual block by inversely transforming the inverse-quantized block; wherein the merge mode motion information decoding unit comprises: a merge predictor index decoding unit configured to reconstruct a merge predictor index of a current block using a received merge codeword; a merge predictor index decoding unit configured to reconstruct a merge predictor index of a current block using a received merge codeword; a merge candidate generation unit configured to generate one or more merge candidates when a number of valid merge candidates of the current block is smaller than a predetermined number; a merge candidate generation unit configured to generate one or more merge candidates when a number of valid merge candidates of the current block is smaller than a predetermined number; a merge predictor selection unit configured to generate a merge candidate list using the spatial merge candidates, the temporal merge candidates, and one or more merge candidates generated by the merge candidate generation unit and to select a merge predictor based on the merge predictor index; a merge predictor
selection unit configured to generate a merge candidate list using the merge candidates and to select a merge predictor based on the merge predictor index; and a motion vector derivation unit configured to determine a temporal merge candidate picture and determine a temporal merge candidate block within the temporal merge candidate picture in order to generate a motion vector of the temporal merge candidate, wherein the temporal merge candidate picture is determined differently depending on a slice type, and the motion vector derivation unit determines, on the basis of the slice type, whether the temporal merge candidate picture is set in a reference picture list indicated by a flag indicative of a list for the temporal merge candidate picture in a slice header, and wherein a motion vector of the temporal merge candidate is selected among motion vectors of a first merge candidate block and a second merge candidate block based on a position of the current block within a slice or a largest coding unit, and the motion vector of the second merge candidate block is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit wherein a motion vector of the temporal merge candidate is selected among a first merge candidate block and a second merge candidate block based on a position of the current block within a largest coding unit, and the motion vector of the second merge candidate block is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit. 
wherein the temporal merge candidate configuration unit is configured to set a reference picture index of the temporal merge candidate as 0, and a spatial merge candidate derivation unit configured to derive spatial merge candidates of the current block; a temporal merge candidate configuration unit configured to derive a temporal merge candidate of the current block; Claim 6, when combined with the teachings of WD5, renders claim 1 of the ‘043 Patent obvious because the combination amounts to the application of a known technique to a known device ready for improvement to yield predictable results (KSR Rationale D), on the basis of the following factors: (1) a finding that the prior art contained a “base” device (method, or product) upon which the claimed invention can be seen as an “improvement;” (2) a finding that the prior art contained a known technique that is applicable to the base device (method, or product); (3) a finding that one of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in an improved system. - See MPEP § 2143(I)(C). For factor (1), the “base” device corresponds to the apparatus recited in claim 6. For factor (2), the prior art (i.e., WD5) teaches the known techniques of claim 6 of the ‘043 Patent that are missing in claim 6 of the Instant Application. Notably, WD5 teaches: a spatial merge candidate derivation unit configured to derive spatial merge candidates of the current block (i.e., WD5 defines five spatial candidate blocks in Fig. 8-3 on p. 109; the motion information for block B2 is set as a spatial merge candidate when motion information of at least one of the other four blocks is not available – see § 8.4.2.1.2 “Derivation process for spatial merging candidates” on pp. 101-102 of WD5). 
a temporal merge candidate configuration unit configured to generate a temporal merge candidate of the current block (i.e., WD5 determines an obtained reference picture index “refIdxLX” and a motion vector “mvLXCol” as the reference picture index and the motion vector of the temporal merge candidate “Col” – see § 8.4.2.1.1, steps 2-3 on p. 100 and § 8.4.2.1.8, equation 8-144 on p. 112 of WD5);

the temporal merge candidate configuration unit is configured to set a reference picture index of the temporal merge candidate as 0 (i.e., WD5 sets the reference picture index “refIdxLX” of the temporal merge candidate as 0 at step 2 on p. 100).

For factor (3), those skilled in the art would have recognized that applying these techniques taught by WD5 to the apparatus of claim 6 would have yielded predictable results and resulted in an improved system because the disclosure of WD5 corresponds to the development of a standardized merge-mode protocol for high efficiency video coding and decoding for use in a wide variety of applications (see WD5 at pp. 1-2 and § 8 “Decoding Process” at pp. 74-150). The inclusion of units that generate spatial and temporal merge candidates, and the setting of a reference picture index of a temporal merge candidate to 0, would have been obvious expedients because claim 6 recites a merge mode motion information decoding unit that utilizes available spatial and temporal merge candidates, and because setting the reference picture index to 0 would have facilitated the derivation process for motion vector components and reference indices, as taught at pp. 98-100 of WD5.

Claims 6 and 13-15 are rejected on the ground of nonstatutory double patenting as being not patentably distinct from claims 1-4 of U.S. Patent 9,025,669 (“’669 Patent”) in view of WD5.
As shown in the chart below, claim 6 claims substantially all of the same limitations as claim 1 of the ’669 Patent, except claim 6 of the Instant Application does not include: wherein a reference picture index of the temporal merge candidate is set to 0, as recited in claim 1 of the ’669 Patent.

16/952,431 (claim 6) compared with U.S. 9,025,669 (claim 1, limitations re-ordered):

16/952,431: 6. An apparatus for decoding motion information in merge mode, comprising:
’669 Patent: 1. An apparatus for decoding motion information in merge mode, comprising:

16/952,431: a merge mode motion information decoding unit configured to decode motion information using available spatial and temporal merge candidates when a motion information encoding mode of a current block indicates a merge mode;
’669 Patent: a merge mode motion vector decoding unit configured to generate motion information using available spatial and temporal merge candidates when a motion information encoding mode of a current block indicates a merge mode;

16/952,431: a prediction block generation unit configured to generate a prediction block of the current block using the decoded motion information;
’669 Patent: a prediction block generating unit configured to generate a prediction block of the current block using motion information; and

16/952,431: a residual block decoding unit configured to generate a two-dimensional quantization block by inversely scanning residual signals, inversely quantize the two-dimensional quantization block using a quantization parameter, and generate a residual block by inversely transforming the inverse-quantized block;
’669 Patent: a residual block generating unit configured to perform an entropy-decoding process and an inverse-scanning process on residual signals to generate a quantized block, and to perform an inverse-transforming process on the quantized block to generate a residual block,

16/952,431: wherein the merge mode motion information decoding unit comprises: a merge predictor index decoding unit configured to reconstruct a merge predictor index of a current block using a received merge codeword; a merge candidate generation unit configured to generate one or more merge candidates when a number of valid merge candidates of the current block is smaller than a predetermined number; a merge predictor selection unit configured to generate a merge candidate list using the spatial merge candidates, the temporal merge candidates, and one or more merge candidates generated by the merge candidate generation unit and to select a merge predictor based on the merge predictor index; a motion vector derivation unit configured to determine a temporal merge candidate picture and determine a temporal merge candidate block within the temporal merge candidate picture in order to generate a motion vector of the temporal merge candidate, wherein the temporal merge candidate picture is determined differently depending on a slice type, and the motion vector derivation unit determines, on the basis of the slice type, whether the temporal merge candidate picture is set in a reference picture list indicated by a flag indicative of a list for the temporal merge candidate picture in a slice header, and wherein a motion vector of the temporal merge candidate is selected among motion vectors of a first merge candidate block and a second merge candidate block based on a position of the current block within a slice or a largest coding unit, and the motion vector of the second merge candidate block is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit
’669 Patent: a motion vector of the temporal merge candidate is selected among a first merge candidate block and a second merge candidate block based on a position of the current block within a slice or a largest coding unit, and the motion vector of the second merge candidate block is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit.
’669 Patent: wherein a reference picture index of the temporal merge candidate is set to 0, and

Claim 6, when combined with the teachings of WD5, renders claim 1 of the ’669 Patent obvious because the combination amounts to the application of a known technique to a known device ready for improvement to yield predictable results (KSR Rationale D), on the basis of the following factors:

(1) a finding that the prior art contained a “base” device (method, or product) upon which the claimed invention can be seen as an “improvement;”
(2) a finding that the prior art contained a known technique that is applicable to the base device (method, or product); and
(3) a finding that one of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in an improved system.

See MPEP § 2143(I)(C).

For factor (1), the “base” device corresponds to the apparatus recited in claim 6.

For factor (2), the prior art (i.e., WD5) teaches the known technique of claim 1 of the ’669 Patent that is missing in claim 6 of the Instant Application. Notably, WD5 teaches: a reference picture index of the temporal merge candidate is set to 0 (i.e., WD5 sets the reference picture index “refIdxLX” of the temporal merge candidate as 0 at step 2 on p. 100).

For factor (3), those skilled in the art would have recognized that applying this technique taught by WD5 to the apparatus of claim 6 would have yielded predictable results and resulted in an improved system because the disclosure of WD5 corresponds to the development of a standardized merge-mode protocol for high efficiency video coding and decoding for use in a wide variety of applications (see WD5 at pp. 1-2 and § 8 “Decoding Process” at pp. 74-150). Setting a reference picture index of a temporal merge candidate to 0 would have been an obvious expedient in light of the fact that setting the reference picture index as 0 would have facilitated the derivation process for motion vector components and reference indices, as taught at pp. 98-100 of WD5.

Claims 13-15 recite additional limitations that are identical to the additional limitations of claims 2-4 of the ’669 Patent.

XII. CONCLUSION

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to Colin LaRose, whose telephone number is 571-272-7423. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, Hetul Patel, can be reached at 571-272-4184. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of this proceeding may be obtained from the USPTO’s Patent Center. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000. General inquiries may also be directed to the Central Reexamination Unit customer service line at (571) 272-7705.

/COLIN M LAROSE/
Primary Examiner, Art Unit 3992

Conferees:
/YUZHEN GE/
Primary Examiner, Art Unit 3992

/H.B.P/
Hetul Patel, Supervisory Patent Examiner, Art Unit 3992
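For readers less familiar with the merge-mode mechanics the rejections turn on, the temporal merge candidate selection recited in the claims charted above can be illustrated with a short sketch. All names here are hypothetical; this is not the ’855 Patent's implementation or the WD5 reference decoder, only a paraphrase of the claim language in code form:

```python
# Illustrative sketch of the claimed temporal merge candidate selection.
# Hypothetical names; not the patented implementation or WD5 reference code.

def temporal_merge_candidate_mv(current_block_bottom, lcu_bottom,
                                first_block_mv, second_block_mv):
    """Pick the temporal merge candidate's motion vector.

    Per the claim language: the motion vector is selected between a first
    and a second merge candidate block based on the current block's
    position within the largest coding unit (LCU); when the current block
    is adjacent to the LCU's lower boundary, the second block's motion
    vector is used instead of the first's.
    """
    if current_block_bottom == lcu_bottom:  # adjacent to lower LCU boundary
        return second_block_mv
    return first_block_mv

# The claims (and WD5 § 8.4.2.1.1, step 2) also recite that the temporal
# merge candidate's reference picture index is simply set to 0.
TEMPORAL_MERGE_REF_IDX = 0

print(temporal_merge_candidate_mv(64, 64, (3, -1), (0, 2)))  # -> (0, 2)
print(temporal_merge_candidate_mv(32, 64, (3, -1), (0, 2)))  # -> (3, -1)
```

The point of contention in the double-patenting rejections is narrow: the fixed reference index of 0 and the candidate-derivation units appear in the patented claims but not in reissue claim 6, and the Examiner relies on WD5 to supply them.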

Prosecution Timeline

Nov 19, 2020
Application Filed
Nov 19, 2020
Response after Non-Final Action
Aug 16, 2021
Response after Non-Final Action
Jul 25, 2023
Non-Final Rejection — §103, §DP
Jan 23, 2024
Response Filed
Apr 12, 2024
Non-Final Rejection — §103, §DP
Jun 06, 2024
Response Filed
Aug 01, 2024
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent RE50734
APPARATUS FOR MANAGING DISAGGREGATED MEMORY AND METHOD THEREOF
2y 5m to grant Granted Jan 06, 2026
Patent RE50619
EMERGENCY POWER SOURCE
2y 5m to grant Granted Oct 07, 2025
Patent RE50625
CHARGER PLUG WITH IMPROVED PACKAGE
2y 5m to grant Granted Oct 07, 2025
Patent RE49409
REFRIGERATOR AND MANUFACTURING METHOD OF THE SAME
2y 5m to grant Granted Feb 07, 2023
Patent RE48961
Vehicle with Multiple Light Detection and Ranging Devices (LIDARs)
2y 5m to grant Granted Mar 08, 2022
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
33%
Grant Probability
78%
With Interview (+45.0%)
4y 4m
Median Time to Grant
High
PTA Risk
Based on 194 resolved cases by this examiner. Grant probability derived from career allow rate.
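The headline numbers above are consistent with a simple lift calculation over the examiner's resolved cases. A minimal sketch, assuming the with-interview figure is the career allow rate plus the stated interview lift (the report's actual model is not published):

```python
# Assumed methodology for the projection figures above: the 33% grant
# probability is the career allow rate (64 granted / 194 resolved), and
# the 78% with-interview figure adds the +45.0-point interview lift.

granted, resolved = 64, 194          # examiner career totals from the report
career_allow_rate = granted / resolved
interview_lift = 0.45                # +45.0 points, resolved cases with interview

print(f"{career_allow_rate:.0%}")                    # -> 33%
print(f"{career_allow_rate + interview_lift:.0%}")   # -> 78%
```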
