Prosecution Insights
Last updated: April 19, 2026
Application No. 17/101,457

APPARATUS FOR DECODING MOTION INFORMATION IN MERGE MODE

Non-Final OA §103 §DP
Filed
Nov 23, 2020
Examiner
RALIS, STEPHEN J
Art Unit
3992
Tech Center
3900
Assignee
Ibex Pt Holdings Co. Ltd.
OA Round
3 (Non-Final)
33%
Grant Probability
At Risk
3-4
OA Rounds
4y 4m
To Grant
78%
With Interview

Examiner Intelligence

Grants only 33% of cases
33%
Career Allow Rate
64 granted / 194 resolved
-27.0% vs TC avg
Strong +45% interview lift
Without
With
+45.0%
Interview Lift
resolved cases with interview
Typical timeline
4y 4m
Avg Prosecution
19 currently pending
Career history
213
Total Applications
across all art units

Statute-Specific Performance

§101
2.4%
-37.6% vs TC avg
§103
33.4%
-6.6% vs TC avg
§102
16.0%
-24.0% vs TC avg
§112
33.5%
-6.5% vs TC avg
Black line = Tech Center average estimate • Based on career data from 194 resolved cases

Office Action

§103 §DP
NON-FINAL ACTION (REISSUE OF U.S. PATENT 8,774,279)

TABLE OF CONTENTS
I. ACKNOWLEDGEMENTS
II. REISSUE PROCEDURAL REMINDERS
III. OTHER PROCEEDINGS
IV. STATUS OF CLAIMS
V. AIA STATUS
VI. CLAIM INTERPRETATION – PHRASES INVOKING 35 U.S.C. § 112, SIXTH PARAGRAPH
VII. PRIOR ART CITED HEREIN
VIII. RESPONSE TO ARGUMENTS
IX. CLAIM REJECTIONS – 35 USC § 103 (OBVIOUSNESS)
X. CLAIM REJECTIONS – 35 USC § 251 (DEFECTIVE REISSUE DECLARATION)
XI. NON-STATUTORY DOUBLE PATENTING
XII. CONCLUSION

I. ACKNOWLEDGEMENTS

This non-final Office action addresses U.S. reissue application No. 17/101,457 (“Instant Application”). Based upon a review of the instant application, the actual filing date is November 23, 2020 (“Actual Filing Date”). The Instant Application is a reissue application of U.S. Patent No. 8,774,279 (“Patent Under Reissue” or “’279 Patent”), titled “APPARATUS FOR DECODING MOTION INFORMATION IN MERGE MODE.” The Patent Under Reissue was filed on January 16, 2013 (“Non-Provisional Filing Date”), was assigned non-provisional U.S. patent application control number 13/743,086 by the Office (“Non-Provisional Application”), and issued on July 8, 2014, with claims 1-5 (“Originally Patented Claims”).

On August 1, 2023, a non-final Office action was issued (“Aug 2023 Non-Final Action”). On January 30, 2024, Applicant submitted a response to the Aug 2023 Non-Final Action (“Jan 2024 Response”). On April 22, 2024, a non-final Office action was issued (“Apr 2024 Non-Final Action”). On June 6, 2024, Applicant submitted a response to the Apr 2024 Non-Final Action (“Jun 2024 Response”). This non-final action addresses the Jun 2024 Response.

II. REISSUE PROCEDURAL REMINDERS

Disclosure of other proceedings. Applicant is reminded of the continuing obligation under 37 CFR 1.178(b) to timely apprise the Office of any prior or concurrent proceeding in which the Patent Under Reissue is or was involved. These proceedings would include interferences, reissues, reexaminations, and litigation.

Disclosure of material information. Applicant is further reminded of the continuing obligation under 37 CFR 1.56 to timely apprise the Office of any information which is material to patentability of the claims under consideration in this reissue application. These disclosure obligations rest with each individual associated with the filing and prosecution of this application for reissue. See also MPEP §§ 1404, 1442.01 and 1442.04.

Manner of making amendments. Applicant is reminded that changes to the Instant Application must comply with 37 C.F.R. § 1.173, such that all amendments are made with respect to the Patent Under Reissue as opposed to any prior changes entered in the Instant Application. All added material must be underlined, and all omitted material must be enclosed in brackets, in accordance with Rule 173. Applicant may submit an appendix to any response in which claims are marked up to show changes with respect to a previous set of claims; however, such claims should be clearly denoted as “not for entry.”

III. OTHER PROCEEDINGS

Based upon Applicant’s statements as set forth in the Instant Application and the Examiner’s independent review of the Patent Under Reissue itself and its prosecution history, the Examiner cannot locate any concurrent proceedings before the Office, ongoing litigation, previous reexaminations (ex parte or inter partes), supplemental examinations, or certificates of correction regarding the Patent Under Reissue. The following PTAB proceedings involving the Patent Under Reissue have been found: Inter Partes Review case IPR2018-00093; and Inter Partes Review case IPR2018-00095.

IV. STATUS OF CLAIMS

Claims 6-16 are currently pending (“Pending Claims”). Claims 6-16 are currently examined (“Examined Claims”). Claims 1-5 are canceled. Regarding the Examined Claims and as a result of this Office action: Claims 6-16 are rejected under 35 U.S.C. §103.
Claims 6-16 are rejected under 35 U.S.C. §251. Claims 6-16 are rejected on the basis of non-statutory double patenting.

V. AIA STATUS

Because the Instant Application does not contain a claim having an effective date on or after March 16, 2013, the America Invents Act First Inventor to File (“AIA-FITF”) provisions do not apply. Instead, the pre-AIA “First to Invent” provisions will govern this proceeding. See 35 U.S.C. § 100 (note). In the event the determination of the status of the application as subject to pre-AIA 35 U.S.C. 102 and 103 is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

VI. CLAIM INTERPRETATION – PHRASES INVOKING 35 U.S.C. § 112, SIXTH PARAGRAPH

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

Functional Phrase #1 (claim 6) – a merge predictor index decoding unit configured to reconstruct a merge predictor index of a current block using a received merge codeword.

Functional Phrase #2 (claim 6) – a spatial merge candidate derivation unit configured to derive spatial merge candidates of the current block.

Functional Phrase #3 (claim 6) – a temporal merge candidate configuration unit configured to generate a temporal merge candidate of the current block … wherein the temporal merge candidate configuration unit is configured to set a reference picture index of the temporal merge candidate as 0.

Functional Phrase #4 (claim 6) – a merge candidate generation unit configured to generate one or more merge candidates when the number of valid merge candidates of the current block is smaller than a predetermined number.

Functional Phrase #5 (claim 6) – a merge predictor selection unit configured to generate a list of merge candidates using the spatial merge candidates derived by the spatial merge candidate derivation unit, the temporal merge candidate generated by the temporal merge candidate configuration unit, and the one or more merge candidates generated by the merge candidate generation unit and to select a merge predictor based on the merge predictor index.

Functional Phrase #6 (claim 6) – a prediction block generation unit configured to generate a prediction block of the current block using motion information of the merge predictor.

Functional Phrase #7 (claim 6) – a residual block generating unit configured to perform an entropy-decoding process and an inverse-scanning process on residual signals to generate a quantized block, and to perform an inverse-quantizing process and an inverse-transforming process on the quantized block to generate a residual block.

Functional Phrase #8 (claim 6) – a motion vector derivation unit configured to determine a temporal merge candidate picture and determine a temporal merge candidate block within the temporal merge candidate picture in order to generate a motion vector of the temporal merge candidate … wherein the temporal merge candidate picture is determined differently depending on a slice type, and the motion vector derivation unit determines, on the basis of the slice type, whether the temporal merge candidate picture is set in a reference picture list indicated by a flag indicative of a list for the temporal merge candidate picture in a slice header … wherein the motion vector derivation unit is configured to set a reference picture index of the temporal merge candidate as 0 … wherein a motion vector of the temporal merge candidate is selected among motion vectors of a first merge candidate block and a second merge candidate block based on a position of the current block within a slice or a largest coding unit, and the motion vector of the second merge candidate block is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit.

Functional Phrase #9 (claim 7) – a reference picture index derivation unit configured to set a reference picture index of one of blocks neighboring the current block or 0 as a reference picture index of the temporal merge candidate.

Functional Phrase #10 (claim 11) – the merge candidate generation unit generates the merge candidate by combining pieces of motion information on valid merge candidates or generates the merge candidate having a motion vector of 0 and a reference picture index of 0.

Functional Phrase #11 (claim 12) – the merge candidate generation unit generates the merge candidate whose number is equal to or smaller than the predetermined number if the merge candidate is generated by combining pieces of motion information on predetermined valid merge candidates.

Functional Phrase #12 (claim 13) – the temporal merge candidate configuration unit determines a temporal merge candidate picture and determines a temporal merge candidate block within the temporal merge candidate picture, in order to generate the motion vector of the temporal merge candidate.

Functional Phrase #13 (claim 15) – the prediction block generating unit generates a prediction pixel of luminance component using an 8-tap interpolation filter and generates a prediction pixel of chrominance component using a 4-tap interpolation filter.

Because these claim limitations are being interpreted under pre-AIA 35 U.S.C. § 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. For computer-implemented means-plus-function limitations, a general purpose computer is only sufficient as the corresponding structure for performing a general computing function. When there is a specific function to be performed, an algorithm for performing the function must be disclosed, and the corresponding structure becomes a general purpose computer transformed into a special purpose computer by programming the computer to perform the disclosed algorithm.
The specification must explicitly disclose the algorithm for performing the claimed function; simply reciting the claimed function in the specification is not a sufficient disclosure of an algorithm which, by definition, must contain a sequence of steps. See MPEP § 2181(II)(B): An algorithm is defined, for example, as “a finite sequence of steps for solving a logical or mathematical problem or performing a task.” Microsoft Computer Dictionary, Microsoft Press, 5th edition, 2002. Applicant may express the algorithm in any understandable terms including as a mathematical formula, in prose, in a flow chart, or in any other manner that provides sufficient structure. [Citations and select quotations omitted.]

Based upon a review of the Patent Under Reissue, the Examiner concludes that the corresponding structure for the above-identified Functional Phrases is disclosed in the Patent Under Reissue as follows:

Functional Phrase #1 (claim 6) – a merge predictor index decoding unit configured to reconstruct a merge predictor index of a current block using a received merge codeword. – corresponds to merge predictor index decoding unit 431 of FIG. 7. The Patent Under Reissue does not explicitly disclose the structure of the claimed unit; however, a person of ordinary skill would have understood that the written description of the Patent Under Reissue discloses coding and decoding processes to be implemented via a processor or computer. The corresponding structure for the merge predictor index decoding unit (as well as each of the other recited “unit” limitations) is a processor programmed to implement the algorithm disclosed in the specification for performing the claimed functions. The algorithm for performing the claimed function of the merge predictor index decoding unit 431 (which is the same for unit 331 of FIG. 6) is disclosed at column 13:20-23 and involves reconstructing a merge predictor index, corresponding to a received merge predictor codeword, using a predetermined table corresponding to the number of merge candidates.

Functional Phrase #2 (claim 6) – a spatial merge candidate derivation unit configured to derive spatial merge candidates of the current block. – corresponds to spatial merge candidate derivation unit 432 of FIG. 7. The algorithm for performing the claimed function of the spatial merge candidate derivation unit 432 (which is the same for unit 232 of FIG. 5) is disclosed at column 10:38 – 11:15 and involves setting valid motion information of a block that is adjacent to a current block as a spatial merge candidate; the algorithm can be carried out via one or more of the optional processes described at column 10:38-48, column 10:49-56, column 10:57-63, and column 11:1-15.

Functional Phrase #3 (claim 6) – a temporal merge candidate configuration unit configured to generate a temporal merge candidate of the current block … wherein the temporal merge candidate configuration unit is configured to set a reference picture index of the temporal merge candidate as 0. – corresponds to temporal merge candidate configuration unit 435 of FIG. 7. The algorithm for performing the claimed function of the temporal merge candidate configuration unit 435 (which is the same for unit 235 of FIG. 5) is disclosed at column 12:41-46 and involves determining a reference picture index obtained by the reference picture index derivation unit 433 and a motion vector obtained by the motion vector derivation unit 434 as the reference picture index and the motion vector of a temporal merge candidate, respectively.

Functional Phrase #4 (claim 6) – a merge candidate generation unit configured to generate one or more merge candidates when the number of valid merge candidates of the current block is smaller than a predetermined number. – corresponds to merge candidate generation unit 437 of FIG. 7 (which is labelled in the figure as “merge candidate index decoding unit 437”). The algorithm for performing the claimed function of the merge candidate generation unit 437 is disclosed at column 14:24-42 and involves generating a merge candidate when the number of merge candidates is smaller than a predetermined number using one or more of the optional processes described at column 14:24-31, column 14:31-38, column 14:38-41, and column 14:41-42.

Functional Phrase #5 (claim 6) – a merge predictor selection unit configured to generate a list of merge candidates using the spatial merge candidates derived by the spatial merge candidate derivation unit, the temporal merge candidate generated by the temporal merge candidate configuration unit, and the one or more merge candidates generated by the merge candidate generation unit and to select a merge predictor based on the merge predictor index. – corresponds to merge predictor selection unit 436 of FIG. 7. The algorithm for performing the claimed function of the merge predictor selection unit 436 is disclosed at column 14:43-62 and involves obtaining a list of merge candidates using a spatial merge candidate derived by the spatial merge candidate derivation unit 432, a temporal merge candidate generated by the temporal merge candidate configuration unit 435, and merge candidates generated by the merge candidate generation unit 437; if a plurality of merge candidates has the same motion information, a merge candidate having a lower order of priority is deleted from the list, and a merge candidate is selected as the merge predictor of a current block.

Functional Phrase #6 (claim 6) – a prediction block generation unit configured to generate a prediction block of the current block using motion information of the merge predictor. – corresponds to prediction block generation unit 250 of FIG. 4.
The Patent Under Reissue does not explicitly disclose the structure of the claimed unit; however, a person of ordinary skill would have understood that the written description of the Patent Under Reissue discloses coding and decoding processes to be implemented via a processor or computer. The corresponding structure for the prediction block generation unit is a processor programmed to implement the algorithm disclosed in the specification for performing the claimed functions. The algorithm for performing the claimed function of the prediction block generation unit is disclosed at column 9:7-24 and involves generating the prediction block of the current block using motion information reconstructed by the merge mode motion information decoding unit 230 or the AMVP mode motion information decoding unit 240.

Functional Phrase #7 (claim 6) – a residual block generating unit configured to perform an entropy-decoding process and an inverse-scanning process on residual signals to generate a quantized block, and to perform an inverse-quantizing process and an inverse-transforming process on the quantized block to generate a residual block. – corresponds to residual block decoding unit 260 of FIG. 4. The Patent Under Reissue does not explicitly disclose the structure of the claimed unit; however, a person of ordinary skill would have understood that the written description of the Patent Under Reissue discloses coding and decoding processes to be implemented via a processor or computer. The corresponding structure for the residual block decoding unit is a processor programmed to implement the algorithm disclosed in the specification for performing the claimed functions. The algorithm for performing the claimed function of the residual block decoding unit is disclosed at columns 9:25-42 and 10:4-6 and involves executing three functions: (i) performing entropy decoding on a residual signal and generating a 2-D quantized coefficient block by inversely scanning entropy-decoded coefficients (column 9:25-29); (ii) inversely quantizing the generated coefficient block using an inverse quantization matrix (column 9:42-44); and (iii) reconstructing a residual block by inversely transforming the inversely quantized coefficient block (column 10:4-6).

Functional Phrase #8 (claim 6) – a motion vector derivation unit configured to determine a temporal merge candidate picture and determine a temporal merge candidate block within the temporal merge candidate picture in order to generate a motion vector of the temporal merge candidate … wherein the temporal merge candidate picture is determined differently depending on a slice type, and the motion vector derivation unit determines, on the basis of the slice type, whether the temporal merge candidate picture is set in a reference picture list indicated by a flag indicative of a list for the temporal merge candidate picture in a slice header … wherein the motion vector derivation unit is configured to set a reference picture index of the temporal merge candidate as 0 … wherein a motion vector of the temporal merge candidate is selected among motion vectors of a first merge candidate block and a second merge candidate block based on a position of the current block within a slice or a largest coding unit, and the motion vector of the second merge candidate block is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit. – corresponds to motion vector derivation unit for temporal merge candidate 434 of FIG. 7.
The algorithm for performing the claimed function of the motion vector derivation unit 434 (which is the same for unit 234 of FIG. 5) is disclosed at column 11:55 – 12:40 and involves determining a picture to which the temporal merge candidate belongs, and setting the temporal merge candidate picture as a picture having a reference picture index of 0. The temporal merge candidate picture is set either (i) as the picture having index 0 in the case of the slice type being P, or (ii) as the first picture of a reference picture list indicated by a flag indicative of a temporal merge candidate list in a slice header in the case of the slice type being B. The algorithm involves the second “determining” function to be executed by setting the temporal merge candidate picture as the first picture included in a list 0 when the slice type is P and by setting the temporal merge candidate picture as the first picture of a reference picture list indicated by a flag that denotes a temporal merge candidate list in a slice header when the slice type is B, as described at columns 11:44–56. The algorithm also involves obtaining a temporal merge candidate block within the temporal merge candidate picture by assigning an order of priority to the plurality of corresponding blocks and identifying the first valid block as the temporal merge candidate block, as described at column 12:1-32. The algorithm also involves setting the motion vector of a temporal merge candidate as the motion vector of the temporal merge candidate prediction block, as described at column 12:33-36.

Functional Phrase #9 (claim 7) – a reference picture index derivation unit configured to set a reference picture index of one of blocks neighboring the current block or 0 as a reference picture index of the temporal merge candidate. – corresponds to reference picture index derivation unit 433 of FIG. 7.
The algorithm for performing the claimed function of the reference picture index derivation unit 433 (which is the same for unit 233 of FIG. 5) is disclosed at column 11:6-45 and involves obtaining the reference picture index of the temporal merge candidate of a current block by setting it as (i) the reference picture index of one or more of the valid blocks (i.e., prediction units) that spatially neighbor a current block, as described at column 11:13-44, or (ii) 0 if there is no valid reference picture index, as described at column 11:44-45.

Functional Phrase #10 (claim 11) – the merge candidate generation unit generates the merge candidate by combining pieces of motion information on valid merge candidates or generates the merge candidate having a motion vector of 0 and a reference picture index of 0. – corresponds to merge candidate generation unit 437 of FIG. 7 (which is labelled in the figure as “merge candidate index decoding unit 437”). The algorithm for performing the claimed function of the merge candidate generation unit 437 is disclosed at column 14:13-31 and involves (i) generating a merge candidate when the number of merge candidates is smaller than a predetermined number by combining motion information of two valid merge candidates, such as combining the reference picture index of a temporal merge candidate and the valid spatial motion vector of a spatial merge candidate, as described at column 14:13-20, or (ii) if the number of merge candidates to be generated is insufficient, generating a merge candidate having a motion vector of 0 and a reference picture index of 0, as described at column 14:27-31.

Functional Phrase #11 (claim 12) – the merge candidate generation unit generates the merge candidate whose number is equal to or smaller than the predetermined number if the merge candidate is generated by combining pieces of motion information on predetermined valid merge candidates. – corresponds to merge candidate generation unit 437 of FIG. 7 (which is labelled in the figure as “merge candidate index decoding unit 437”). The algorithm for performing the claimed function of the merge candidate generation unit 437 is disclosed at column 14:13-31 and involves generating a merge candidate when the number of merge candidates is smaller than a predetermined number by combining motion information of two valid merge candidates, such as combining the reference picture index of a temporal merge candidate and the valid spatial motion vector of a spatial merge candidate, as described at column 14:13-20.

Functional Phrase #12 (claim 13) – the temporal merge candidate configuration unit determines a temporal merge candidate picture and determines a temporal merge candidate block within the temporal merge candidate picture, in order to generate the motion vector of the temporal merge candidate. – corresponds to temporal merge candidate configuration unit 435 of FIG. 7. The algorithm for performing the claimed function of the temporal merge candidate configuration unit 435 (which is the same for unit 235 of FIG. 5) is disclosed at column 12:41-46, which does not include a teaching of how to perform the claimed function. Instead, it appears that the motion vector derivation unit 434 executes the claimed function, and the algorithm for such is disclosed at column 11:55 – 12:36 and involves:

(i) Determining a temporal merge candidate picture to which the temporal merge candidate block belongs. The temporal merge candidate picture can be set as a picture having a reference picture index of 0. If a slice type is P, the temporal merge candidate picture is set as the first picture included in a list 0 (i.e., a picture having an index of 0). If a slice type is B, the temporal merge candidate picture is set as the first picture of a reference picture list indicated by a flag, the flag being indicative of a reference picture list in a slice header. See column 11:58-65.
(ii) Obtaining a temporal merge candidate block within the temporal merge candidate picture. One of a plurality of corresponding blocks corresponding to a current block within the temporal merge candidate picture can be selected as the temporal merge candidate block. An order of priority can be assigned to the plurality of corresponding blocks, and a first valid corresponding block can be selected as the temporal merge candidate block based on the order of priority. See column 12:1-9.

(iii) Setting the motion vector of the temporal merge candidate as the motion vector of the selected temporal merge candidate prediction block. See column 12:33-36.

Functional Phrase #13 (claim 15) – the prediction block generating unit generates a prediction pixel of luminance component using an 8-tap interpolation filter and generates a prediction pixel of chrominance component using a 4-tap interpolation filter. – corresponds to prediction block generation unit 250 of FIG. 4. The Patent Under Reissue does not explicitly disclose the structure of the claimed unit; however, a person of ordinary skill would have understood that the written description of the Patent Under Reissue discloses coding and decoding processes to be implemented via a processor or computer. The corresponding structure for the prediction block generation unit is a processor programmed to implement the algorithm disclosed in the specification for performing the claimed functions. The algorithm for performing the claimed function of the prediction block generation unit is disclosed at column 9:7-24 and involves generating the prediction block of the current block using motion information reconstructed by the merge mode motion information decoding unit 230 or the AMVP mode motion information decoding unit 240.
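The three-step derivation identified for Functional Phrase #12 (determine the temporal merge candidate picture from the slice type and the slice-header flag, select the first valid corresponding block in priority order, and adopt that block's motion vector) can be summarized in a short sketch. This is an illustrative Python sketch only, not language from the record: the function name, the dict keys `valid` and `mv`, and the assumption that the corresponding blocks are pre-sorted by priority are all hypothetical.

```python
def temporal_merge_candidate(slice_type, collocated_flag,
                             ref_list0, ref_list1, corresponding_blocks):
    """Sketch of the three-step temporal merge candidate derivation.

    corresponding_blocks is assumed pre-sorted by priority; each entry
    is a dict with hypothetical keys 'valid' and 'mv'.
    """
    # (i) determine the temporal merge candidate picture
    if slice_type == "P":
        col_pic = ref_list0[0]  # first picture of list 0
    else:  # slice type B: list indicated by the slice-header flag
        # per the '279 description, flag 1 -> list 0, flag 0 -> list 1
        col_pic = (ref_list0 if collocated_flag == 1 else ref_list1)[0]

    # (ii) first valid corresponding block in priority order
    block = next((b for b in corresponding_blocks if b["valid"]), None)

    # (iii) the candidate's motion vector is that block's motion vector
    mv = block["mv"] if block is not None else None
    return col_pic, mv
```

The sketch makes explicit that step (i) depends only on the slice type and the flag, while steps (ii) and (iii) depend only on the priority-ordered corresponding blocks.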
If Applicant wishes to provide further explanation or dispute the Examiner’s interpretation of the corresponding structure, Applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action.

VII. PRIOR ART CITED HEREIN

The following prior art patents and printed publications are cited herein: “WD5: Working Draft 5 of High-Efficiency Video Coding,” Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, November 2011 (“WD5”); U.S. Patent Application Publication 2012/0236942 (“Lin”); U.S. Patent 9,161,043 (“’043 Patent”); U.S. Patent 9,036,709 (“’709 Patent”); and U.S. Patent 9,025,669 (“’669 Patent”).

VIII. RESPONSE TO ARGUMENTS

Applicant’s submission of a terminal disclaimer with respect to co-pending reissue application No. 16/952,431 and Reissued Patent RE49,907 has been received and is sufficient to overcome the previous obviousness-type double patenting rejections. Accordingly, those rejections are withdrawn. However, in light of a review of other patents in the same family as the Instant Application, the Examiner determines that additional obviousness-type double patenting rejections are implicated. Those rejections are advanced below in Section XI.

In addition, the reissue declaration submitted January 30, 2024, was previously considered sufficient to overcome the § 251 rejections tendered in the Aug 2023 Non-Final Action; however, it has been determined that the corrected declaration is also defective because it inaccurately characterizes the underlying patent as invalid or inoperative because the patentee claimed less than the patentee was allowed to claim. Accordingly, the reissue declaration is found defective, as explained below in Section X.
In the Apr 2024 Non-Final Action, claims 6-16 were indicated as containing allowable subject matter because WD5 was identified as not teaching the corresponding algorithm associated with Functional Phrase #8. In particular, Functional Phrase #8 recites, inter alia, “a motion vector derivation unit ... determines, on the basis of the slice type, whether the temporal merge candidate picture is set in a reference picture list indicated by a flag indicative of a list for the temporal merge candidate picture in a slice header.” This phrase invokes 35 U.S.C. § 112, sixth paragraph, and the corresponding algorithm for the above-quoted function is disclosed in the ’279 Patent at column 11:55-67, which teaches that the “determining” function is executed by setting the temporal merge candidate picture as the first picture included in a list 0 when the slice type is P and by setting the temporal merge candidate picture as the first picture of a reference picture list indicated by a flag that denotes a temporal merge candidate list in a slice header when the slice type is B. For example, the temporal merge candidate picture can be set as a picture in a list 0 when the flag is 1 and as a picture in a list 1 when the flag is 0. In the Apr 2024 Non-Final Action, it was determined that claim 6 was allowable over the combination of WD5 and Lin because the algorithm disclosed in the ’279 Patent for determining whether the temporal merge candidate picture is set in a reference picture list indicated by a flag indicative of a list for the temporal merge candidate picture in a slice header was not taught by WD5. See Apr 2024 Non-Final Action at pp. 31-32. However, upon further reconsideration, WD5 does appear to teach substantially the same algorithm. In particular, WD5 discloses a similar algorithm for executing this determining function on p.
111 – “Depending on the values of slice_type and collocated_from_l0_flag, the variable colPic, specifying the picture that contains the co-located partition, is derived as follows. – If slice_type is equal to B and collocated_from_l0_flag is equal to 0, the variable colPic specifies the picture that contains the co-located partition as specified by RefPicList1[ 0 ]. – Otherwise (slice_type is equal to B and collocated_from_l0_flag is equal to 1 or slice_type is equal to P), the variable colPic specifies the picture that contains the co-located partition as specified by RefPicList0[ 0 ].” In the above algorithm, the “colPic” variable corresponds to the claimed “temporal merge candidate picture,” the “collocated_from_l0_flag” corresponds to the claimed flag, and the two lists “RefPicList0[ 0 ]” and “RefPicList1[ 0 ]” each correspond to “a reference picture list.” The algorithm corresponding to the claimed functional phrase requires the first picture to be included in a list 0 when the slice type is P. Likewise, in WD5, when the slice type is P, then the variable colPic is included in a list 0, i.e., RefPicList0[ 0 ]. The algorithm corresponding to the claimed functional phrase also requires setting the temporal merge candidate picture as the first picture of a reference picture list indicated by a flag that denotes a temporal merge candidate list in a slice header when the slice type is B. For example, the temporal merge candidate picture can be set as a picture in a list 0 when the flag is 1 and as a picture in a list 1 when the flag is 0. Likewise, in WD5, the temporal merge candidate picture “colPic” is set as the first picture of a reference picture list on the basis of a flag value when the slice type is B. The value of collocated_from_l0_flag determines whether colPic should be included in list 0 or list 1. That is, when the slice type is B and collocated_from_l0_flag is 0, then colPic is included in list 1, i.e., RefPicList1[ 0 ].
Otherwise, when the slice type is B and collocated_from_l0_flag is 1, then colPic is included in list 0, i.e., RefPicList0[ 0 ]. The functional phrase also indicates the “flag that denotes a temporal merge candidate list” is “in a slice header.” Likewise, WD5 discloses that the collocated_from_l0_flag is contained in a slice header at the top of p. 31, which indicates that the collocated_from_l0_flag is a part of the “Slice header syntax.” For these reasons, WD5 is determined to teach the algorithm associated with Functional Phrase #8, in which “the motion vector derivation unit ... determines, on the basis of the slice type, whether the temporal merge candidate picture is set in a reference picture list indicated by a flag indicative of a list for the temporal merge candidate picture in a slice header,” as claimed. New grounds of rejection based on this determination appear below in Section IX. IX. CLAIM REJECTIONS – 35 USC § 103 (OBVIOUSNESS) The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action: (a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made. Claims 6-16 are rejected under 35 U.S.C. § 103(a) as being unpatentable over WD5 and Lin.
Regarding claim 6, WD5 discloses an apparatus for decoding motion information in merge mode, comprising: a merge predictor index decoding unit configured to reconstruct a merge predictor index of a current block using a received merge codeword (i.e., WD5 reconstructs a merging candidate index “merge_idx,” corresponding to a received bin string, using a predetermined table (unary binarization table 9-31) corresponding to the maximum number of merging candidates “MaxNumMergeCand” – see definition of “merge_idx” on p. 69 of WD5; see also § 7.3.7 on p. 39 in which the descriptor for merge_idx is given as “ae(v)”, which is defined in § 7.2 on p. 23, and the parsing process for the ae(v) descriptor is given in § 9.2 on pp. 153-154; the decoding process flow of the parsing process for reconstructing the merging candidate index merge_idx is given in § 9.2.3 on p. 172, and merge_idx maps to a binary codeword table as shown in table 9-30 at pp. 163-168); a spatial merge candidate derivation unit configured to derive spatial merge candidates of the current block (i.e., WD5 defines five spatial candidate blocks in Fig. 8-3 on p. 109; the motion information for block B2 is set as a spatial merge candidate when motion information of at least one of the other four blocks is not available – see § 8.4.2.1.2 “Derivation process for spatial merging candidates” on pp. 101-102 of WD5); a temporal merge candidate configuration unit configured to generate a temporal merge candidate of the current block (i.e., WD5 determines an obtained reference picture index “refIdxLX” and a motion vector “mvLXCol” as the reference picture index and the motion vector of the temporal merge candidate “Col” – see § 8.4.2.1.1, steps 2-3 on p. 100 and § 8.4.2.1.8, equation 8-144 on p.
112 of WD5); a merge candidate generation unit configured to generate one or more merge candidates when the number of valid merge candidates of the current block is smaller than a predetermined number (i.e., WD5 generates an additional merge candidate “combCandk”, with k=0, when the number of valid spatial and temporal merge candidates “numOrigMergeCand” is less than a predetermined number “MaxNumMergeCand” – see § 8.4.2.1.1, steps 6-7 on p. 100; see also § 8.4.2.1.3 on pp. 103-105); a merge predictor selection unit configured to generate a merge candidate list using the spatial merge candidates derived by the spatial merge candidate derivation unit, the temporal merge candidate generated by the temporal merge candidate configuration unit, and the one or more merge candidates generated by the merge candidate generation unit and to select a merge predictor based on the merge predictor index (i.e., WD5 obtains a list of spatial merge candidates “A1, B1, B0, A0, and B2”, temporal merge candidate “Col”, and any additional merge candidates “combCandk”, removes candidates with a lower priority from the list if they have the same motion information as other merge candidates, and selects a merge candidate “N” corresponding to the reconstructed merge predictor index “merge_idx” – see § 8.4.2.1.1 on pp. 
100-101 of WD5); a prediction block generation unit configured to generate a prediction block of the current block using motion information of the merge predictor (i.e., WD5 discloses an algorithm that includes the following steps that are performed by the prediction block generation unit 250 – if a motion vector (mvLX in WD5) has an integer pixel unit (i.e., the fractional part is (0,0)), generate the prediction block (predSamplesLXL) of the current block by copying (samples Ai,j from the reference picture) corresponding to a position that is indicated by a motion vector within a picture (refPicLX) indicated by a reference picture index (refIdxLX); and if a motion vector does not have an integer pixel unit (i.e., at least one component of the fractional part is not equal to 0), generate pixels of a prediction block (samples ai,j – ri,j) from integer pixels within a picture indicated by a reference picture index – see WD5 at §§ 8.4.2.2 and 8.4.2.2.2); a residual block generating unit configured to perform an entropy-decoding process and an inverse-scanning process on residual signals to generate a quantized block, and to perform an inverse-quantizing process and an inverse-transforming process on the quantized block to generate a residual block (i.e., WD5 discloses an algorithm that includes the following steps that are performed by the residual block decoding unit 260 – perform entropy decoding on a residual signal and generate a 2-D quantized coefficient block (i.e., 2-D (nW)x(nH) array of quantized transform coefficients cij) by inversely scanning entropy-decoded coefficients (transCoeffLevel) using a diagonal raster inverse-scan method (scanIdx set to 3) – see WD5 at §§ 8.4.2, 8.4.3.1, 8.4.3.2, and 8.5.1–3); inversely quantize the 2-D (nW)x(nH) array ci,j using a quantization parameter (qP), resulting in a 2-D array of inverse quantized coefficients dij – see WD5 at §§ 8.5.1 and 8.5.3; and inversely transform the 2-D array of coefficients dij into a 2-D array of
residual samples r – see WD5 at §§ 8.5.1 and 8.5.3; a motion vector derivation unit configured to determine a temporal merge candidate picture and determine a temporal merge candidate block within the temporal merge candidate picture in order to generate a motion vector of the temporal merge candidate (i.e., WD5 determines a temporal merge candidate picture “colPic” and a temporal merge candidate block “colPu” on p. 111, and then generates a motion vector “mvLXCol” of the temporal merge candidate block using the determined colPic and colPu on p. 112), wherein the temporal merge candidate picture is determined differently depending on a slice type (i.e., WD5 specifies the variable “colPic” differently based on whether the “slice_type” is equal to B or not – see p. 111), and the motion vector derivation unit determines, on the basis of the slice type, whether the temporal merge candidate picture is set in a reference picture list indicated by a flag indicative of a list for the temporal merge candidate picture in a slice header (see p. 111 of WD5, which discloses substantially the same algorithm associated with this functional phrase – “Depending on the values of slice_type and collocated_from_l0_flag, the variable colPic, specifying the picture that contains the co-located partition, is derived as follows: – If slice_type is equal to B and collocated_from_l0_flag is equal to 0, the variable colPic specifies the picture that contains the co-located partition as specified by RefPicList1[ 0 ]; – Otherwise (slice_type is equal to B and collocated_from_l0_flag is equal to 1 or slice_type is equal to P), the variable colPic specifies the picture that contains the co-located partition as specified by RefPicList0[ 0 ]”), wherein the motion vector derivation unit is configured to set a reference picture index of the temporal merge candidate as 0 (i.e., WD5 sets the reference picture index “refIdxLX” of the temporal merge candidate as 0 at step 2 on p.
100), and wherein a motion vector of the temporal merge candidate is selected among motion vectors of a first merge candidate block and a second merge candidate block based on a position of the current block within a slice or a largest coding unit, and the motion vector of the second merge candidate block is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit (i.e., § 8.4.2.1.8 on pp. 111-112 describes the derivation process for the temporal merge candidate motion vector). WD5’s disclosure does not appear to expressly disclose that each of the claimed units is implemented via a computer or a processor that is programmed to execute the algorithms corresponding to the claimed functions. However, it would have been apparent to those skilled in the art that the processes disclosed by WD5 were intended to be implemented via a processing device based on the state of the prior art at the time the invention was made. For instance, Lin relates to HEVC development and discloses deriving temporal motion vector prediction candidates in a merge mode. See Lin at paragraphs [0004], [0024], [0025], [0030], and [0038]–[0041]. Lin discloses that the decoding processes described therein “may be implemented in various hardware, software codes, or a combination of both,” and more specifically may be “program codes to be executed on a Digital Signal Processor (DSP)” or may involve “functions to be performed by a computer processor, a digital signal processor, a microprocessor, or field programmable gate array (FPGA).” Based on Lin’s teaching that HEVC decoding processes are conventionally implemented via hardware, software, or a combination of both, it would have been an obvious expedient to implement WD5’s HEVC decoding processes in the same manner.
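For illustration only, the WD5 colPic derivation quoted in the claim 6 analysis reduces to a two-branch selection. The following Python sketch is not part of the record; the argument names paraphrase WD5's variables, and the picture values are hypothetical.

```python
def derive_col_pic(slice_type, collocated_from_l0_flag,
                   ref_pic_list0, ref_pic_list1):
    """Sketch of the WD5 p. 111 derivation: for a B slice with
    collocated_from_l0_flag equal to 0, colPic is RefPicList1[0];
    otherwise (a B slice with the flag equal to 1, or a P slice),
    colPic is RefPicList0[0]."""
    if slice_type == "B" and collocated_from_l0_flag == 0:
        return ref_pic_list1[0]
    return ref_pic_list0[0]

# Hypothetical reference picture lists for the example.
l0, l1 = ["pic_l0_first"], ["pic_l1_first"]
assert derive_col_pic("B", 0, l0, l1) == "pic_l1_first"
assert derive_col_pic("B", 1, l0, l1) == "pic_l0_first"
assert derive_col_pic("P", 0, l0, l1) == "pic_l0_first"  # flag has no effect for P
```

The three assertions track the three cases of the quoted WD5 passage: the flag selects between list 0 and list 1 only for B slices, while a P slice always takes the first picture of list 0.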
Regarding claim 7, the combination of WD5 and Lin teaches the apparatus of claim 6, further comprising: a reference picture index derivation unit configured to set a reference picture index of one of blocks neighboring the current block or 0 as a reference picture index of the temporal merge candidate (see § 8.4.2.1.1, step 2 on p. 100 of WD5, in which the reference picture index “refIdxLX” is derived). Regarding claim 8, the combination of WD5 and Lin teaches the apparatus of claim 6, wherein the temporal merge candidate picture has a reference picture index of 0 (i.e., WD5 sets the reference picture index “refIdxLX” of the temporal merge candidate picture as 0 at step 2 on p. 100). Regarding claim 9, the combination of WD5 and Lin teaches the apparatus of claim 6, wherein: the temporal merge candidate block is a second candidate block; and the second candidate block is a block comprising a lower right pixel at a central position of the block corresponding to the current prediction unit within the temporal merge candidate picture. WD5 discloses that to derive the motion vector for the temporal merge candidate, a right-bottom candidate block and a center block are used and a motion vector of one of the two candidate blocks is selected based on a position of the current block within a largest coding unit (e.g., if (yP>>Log2MaxCuSize) is equal to (yPRb>>Log2MaxCuSize)), and the motion vector of the second merge candidate block (center block) is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit (e.g., if (yP>>Log2MaxCuSize) is not equal to (yPRb>>Log2MaxCuSize)). See WD5 at pp. 111-112. A candidate block is a prediction unit PU (colPu) in a co-located reference picture (colPic). Two candidate blocks colPu (right-bottom and center) are considered in the derivation.
The first candidate block is the PU located at the right-bottom position of the current PU (xPRb, yPRb)=(xP+nPSW, yP+nPSH) but inside the co-located reference picture (colPic), where nPSW and nPSH are the width and the height of the current prediction unit. See WD5 at p. 111 (section 8.4.2.1.8, step 1). This right-bottom candidate is chosen if the co-located right-bottom PU is located inside the same LCU line as the current PU, i.e., its y component yPRb divided by the LCU size (Log2MaxCuSize) is equal to the y component of the current PU yP divided by the LCU size (Log2MaxCuSize): (yP>>Log2MaxCuSize) is equal to (yPRb>>Log2MaxCuSize). (Id.) The second candidate block is the PU located at the center position of the current PU (xPCtr, yPCtr)=(xP+(nPSW>>1), yP+(nPSH>>1)), but inside the co-located reference picture. (Id. at 111 (section 8.4.2.1.8, step 2).) This second candidate is chosen if the condition in step 1, (yP>>Log2MaxCuSize) is equal to (yPRb>>Log2MaxCuSize), is false – i.e., the co-located right-bottom PU is not inside the current LCU line and thus the current block is adjacent to a lower boundary of the largest coding unit. In particular, when the condition in step 1 is false, “colPu is marked as unavailable.” (Id. at 111, step 1.) At step 2, “[i]f . . . colPu is unavailable,” the prediction unit PU (colPu) is then defined as the prediction unit covering this center position (xPCtr, yPCtr) (in other words, the second candidate block is selected). (Id. at 111, step 2.) Accordingly, WD5 teaches that the temporal merge candidate block can be a second candidate block, which is a block comprising a lower-right pixel at a central position of the block corresponding to the current prediction unit within the temporal merge candidate picture.
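For illustration only, the right-bottom/center selection discussed for claim 9 can be restated as the following Python sketch. It is not part of the record and is simplified: the picture-boundary and intra-coded availability checks of WD5 § 8.4.2.1.8 are omitted, and only the LCU-line test is modeled.

```python
def select_col_pu(xP, yP, nPSW, nPSH, log2_max_cu_size):
    """Simplified sketch of WD5 § 8.4.2.1.8 steps 1-2: prefer the
    right-bottom co-located candidate at (xP+nPSW, yP+nPSH); if that
    position falls outside the current LCU row (i.e., the current block
    is adjacent to the lower boundary of the largest coding unit),
    fall back to the center candidate."""
    yPRb = yP + nPSH
    if (yP >> log2_max_cu_size) == (yPRb >> log2_max_cu_size):
        return ("right_bottom", (xP + nPSW, yP + nPSH))
    # Right-bottom colPu marked unavailable: use the center position.
    return ("center", (xP + (nPSW >> 1), yP + (nPSH >> 1)))

# Hypothetical 64x64 LCU (log2 size 6): a 16x16 PU at y=32 keeps the
# right-bottom candidate, while one at y=48 abuts the lower LCU boundary
# and falls back to the center candidate.
assert select_col_pu(0, 32, 16, 16, 6)[0] == "right_bottom"
assert select_col_pu(0, 48, 16, 16, 6)[0] == "center"
```

The two assertions mirror the two branches quoted from WD5: the LCU-line comparison (yP>>Log2MaxCuSize) versus (yPRb>>Log2MaxCuSize) decides which candidate block supplies the temporal motion vector.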
Regarding claim 10, the combination of WD5 and Lin teaches the apparatus of claim 9, wherein the temporal merge candidate block is a valid block retrieved when the first candidate block and the second candidate block are searched for in due order or only the second candidate block is searched for depending on a position of the current block (see WD5, steps 1 and 2 of § 8.4.2.1.8 at p. 111 – only the second candidate is chosen if the condition in step 1, (yP>>Log2MaxCuSize) is equal to (yPRb>>Log2MaxCuSize), is false – i.e., the co-located right-bottom PU is not inside the current LCU line and thus the current block is adjacent to a lower boundary of the largest coding unit. In particular, when the condition in step 1 is false, “colPu is marked as unavailable.” (Id. at 111, step 1.) At step 2, “[i]f . . . colPu is unavailable,” the prediction unit PU (colPu) is then defined as the prediction unit covering this center position (xPCtr, yPCtr) (in other words, the second candidate block is selected)). Regarding claim 11, the combination of WD5 and Lin teaches the apparatus of claim 6, wherein the merge candidate generation unit generates the merge candidate by combining pieces of motion information on valid merge candidates or generates the merge candidate having a motion vector of 0 and a reference picture index of 0 (see § 8.4.2.1.3 at pp. 103-104 of WD5 – additional merge candidate “combCandk” is generated by combining list 0 “L0” motion information of the valid merge candidate “l0Cand” at the first position “l0CandIdx”=0 with list 1 “L1” motion information of the valid merge candidate “l1Cand” at the second list position “l1CandIdx”=1). 
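For illustration only, the padding behavior addressed in the claim 11 analysis can be sketched as follows. This Python fragment is not part of the record and is deliberately simplified: it models only the zero-motion fallback (motion vector (0,0), reference index 0), omitting the combined bi-predictive candidates of WD5 § 8.4.2.1.3 for brevity.

```python
def pad_merge_list(merge_cands, max_num):
    """Sketch of padding a merge candidate list: when fewer than max_num
    valid spatial/temporal candidates exist, append generated candidates
    until the list reaches the predetermined number. Only the zero-motion,
    reference-index-0 fallback is modeled here."""
    cands = list(merge_cands)
    while len(cands) < max_num:
        cands.append({"mv": (0, 0), "ref_idx": 0})
    return cands

# One valid candidate, predetermined number of five: four generated
# candidates are appended.
padded = pad_merge_list([{"mv": (3, -1), "ref_idx": 2}], max_num=5)
assert len(padded) == 5
assert padded[-1] == {"mv": (0, 0), "ref_idx": 0}
```

The loop condition corresponds to the comparison of the number of valid candidates against the predetermined number ("MaxNumMergeCand" in WD5) recited in claims 11 and 12.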
Regarding claim 12, the combination of WD5 and Lin teaches the apparatus of claim 11, wherein the merge candidate generation unit generates the merge candidate whose number is equal to or smaller than the predetermined number if the merge candidate is generated by combining pieces of motion information on predetermined valid merge candidates (see § 8.4.2.1.3 at pp. 103-104 of WD5 – additional merge candidate “combCandk” is added at the end of the merge candidate list “mergeCandList” while the number of the merge candidate “numMergeCand” is equal to or less than the predetermined number “MaxNumMergeCand”). Regarding claim 13, the combination of WD5 and Lin teaches the apparatus of claim 6, wherein the temporal merge candidate configuration unit determines a temporal merge candidate picture and determines a temporal merge candidate block within the temporal merge candidate picture, in order to generate a motion vector of the temporal merge candidate (i.e., WD5 discloses the three processes associated with the algorithm for this functional limitation – (i) determining a temporal merge candidate picture (co-located reference picture colPic in WD5) to which the temporal merge candidate block (co-located prediction unit colPu in WD5) belongs; (ii) obtaining the temporal merge candidate block (colPu) within the temporal merge candidate picture (colPic); and (iii) setting the motion vector (mvCol) of a temporal merge candidate (Col) as the motion vector (mvLXCol) of the selected temporal merge candidate prediction block (colPu). See WD5 at § 8.4.2.1.1, step 3 at p. 100, and § 8.4.2.1.8 at pp. 111-112.)
Regarding claim 14, the combination of WD5 and Lin teaches the apparatus of claim 6, wherein the temporal merge candidate block is a valid block retrieved when the first candidate block and the second candidate block are searched for in due order, depending on a position of the current block (i.e., WD5 discloses that the right-bottom temporal merge candidate is checked first and the center temporal merge candidate is checked afterwards if the first candidate “is coded in an intra prediction mode” or is “unavailable” (i.e., they are searched in “due order”), and the temporal merge candidate selected depends on the position of the current block. See WD5 at § 8.4.2.1.8, steps 1–3 at p. 111.) Regarding claim 15, the combination of WD5 and Lin teaches the apparatus of claim 6, wherein the prediction block generating unit generates a prediction pixel of luminance component using an 8-tap interpolation filter and generates a prediction pixel of chrominance component using a 4-tap interpolation filter (i.e., WD5 discloses that a prediction pixel of luminance component is assigned a sample value generated using an 8-tap interpolation filter depending on the fractional sample location – see WD5 at § 8.4.2.2.2.1 and Table 8-12; WD5 discloses that a prediction pixel of chrominance component is assigned a sample value generated using a 4-tap interpolation filter depending on the fractional sample location – see WD5 at § 8.4.2.2.2.2 and Table 8-13). Regarding claim 16, the combination of WD5 and Lin teaches the apparatus of claim 6, wherein a diagonal raster inverse scan is used during the inverse-scanning process (i.e., WD5 discloses generating a 2-D quantized coefficient block by inversely scanning entropy-decoded coefficients to produce a 1-D list of quantized transform coefficients transCoeffLevel (see § 8.4.3 and § 8.5.1); the mapping is done using an inverse scan defined by ScanOrder with scanIdx set to 3, which WD5 specifies as a diagonal raster scan (see § 6.6 and § 3.84)). X.
CLAIM REJECTIONS – 35 USC § 251 (DEFECTIVE REISSUE DECLARATION) For reissue applications filed on or after September 16, 2012, all references to 35 U.S.C. 251 and 37 CFR 1.172, 1.175, and 3.73 are to the current provisions. Claims 6-16 are rejected as being based upon a defective reissue declaration under 35 U.S.C. § 251. The reissue declaration filed January 23, 2024, is defective because although it properly indicates that this proceeding is a narrowing reissue, it incorrectly characterizes the underlying patent as being wholly or partially invalid or inoperative for patentee claiming less than patentee was allowed to claim. This indicates that the scope of the claims is being expanded (i.e., broadened); however, by narrowing the claims, patentee is correcting an error based upon claiming more than patentee was allowed to claim, in terms of the scope of the claimed invention. Accordingly, the declaration should be corrected to reflect that the underlying patent is wholly or partially invalid or inoperative for patentee claiming more than patentee was allowed to claim. XI. NON-STATUTORY DOUBLE PATENTING The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir.
1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969). A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens.
An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer. Claim 6 is rejected on the ground of nonstatutory double patenting as being not patentably distinct from claim 1 of U.S. Patent No. 9,025,669 (“’669 Patent”) in view of WD5. As shown in the chart below, claim 6 claims substantially all of the same limitations as claim 1 of the ‘669 Patent, except claim 6 of the Instant Application does not include: a merge mode motion vector decoding unit configured to generate motion information using available spatial and temporal merge candidates when a motion information encoding mode of a current block indicates a merge mode, as recited in claim 1 of the ‘669 Patent. 17/101,457 U.S. 9,025,669 (limitations re-ordered) 6. An apparatus for decoding motion information in merge mode, comprising: 1. An apparatus for decoding motion information in merge mode, comprising: a merge predictor index decoding unit configured to reconstruct a merge predictor index of a current block using a received merge codeword; a spatial merge candidate derivation unit configured to derive spatial merge candidates of the current block; a temporal merge candidate configuration unit configured to generate a temporal merge candidate of the current block; a merge candidate generation unit configured to generate one or more merge candidates when a number of valid merge candidates of the current block is smaller than a predetermined number; a merge predictor selection unit configured to generate a merge candidate list using the spatial merge candidates derived by the spatial merge candidate derivation unit, the temporal merge candidate generated by the temporal merge candidate configuration unit, and the one or more merge candidates generated by the merge candidate generation unit and to select a merge predictor based on the
merge predictor index a prediction block generation unit configured to generate a prediction block of the current block using motion information of the merge predictor; a prediction bock generating unit configured to generate a prediction block of the current block using motion information; and a residual block generating unit configured to perform an entropy-decoding process and an inverse-scanning process on residual signals to generate a quantized block, and to perform an inverse-quantizing process and an inverse-transforming process on the quantized block to generate a residual block a residual block generating unit configured to perform an entropy-decoding process and an inverse-scanning process on residual signals to generate a quantized block, and to perform an inverse-transforming process on the quantized block to generate a residual block, a motion vector derivation unit configured to determine a temporal merge candidate picture and determine a temporal merge candidate block within the temporal merge candidate picture in order to generate a motion vector of the temporal merge candidate wherein the temporal merge candidate picture is determined differently depending on a slice type, and the motion vector derivation unit determines, on the basis of the slice type, whether the temporal merge candidate picture is set in a reference picture list indicated by a flag indicative of a list for the temporal merge candidate picture in a slice header, wherein the motion vector derivation unit is configured to set a reference picture index of the temporal merge candidate as 0, and wherein a reference picture index of the temporal merge candidate is set to 0, and wherein a motion vector of the temporal merge candidate is selected among motion vectors of a first merge candidate block and a second merge candidate block based on a position of the current block within a slice or a largest coding unit, and the motion vector of the second merge candidate block is selected as 
the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit a motion vector of the temporal merge candidate is selected among a first merge candidate block and a second merge candidate block based on a position of the current block within a slice or a largest coding unit, and the motion vector of the second merge candidate block is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit. a merge mode motion vector decoding unit configured to generate motion information using available spatial and temporal merge candidates when a motion information encoding mode of a current block indicates a merge mode Claim 6, when combined with the teachings of WD5, renders claim 1 of the ‘669 Patent obvious because the combination amounts to the application of a known technique to a known device ready for improvement to yield predictable results (KSR Rationale D), on the basis of the following factors: (1) a finding that the prior art contained a “base” device (method, or product) upon which the claimed invention can be seen as an “improvement;” (2) a finding that the prior art contained a known technique that is applicable to the base device (method, or product); (3) a finding that one of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in an improved system. See MPEP § 2143(I)(C). For factor (1), the “base” device corresponds to the apparatus recited in claim 6. For factor (2), the prior art (i.e., WD5) teaches the known techniques of claim 1 of the ‘669 Patent that are missing in claim 6.
Notably, WD5 teaches: a merge mode motion vector decoding unit configured to generate motion information using available spatial and temporal merge candidates when a motion information encoding mode of a current block indicates a merge mode (see § 8.4 of WD5, which provides details of a decoding unit that utilizes spatial and temporal merge candidates to generate motion information when operating in a merge mode – see §§ 8.4.2.1.1 – 8.4.2.1.8).

For factor (3), those skilled in the art would have recognized that applying these techniques taught by WD5 to the apparatus of claim 6 would have yielded predictable results and resulted in an improved system because the disclosure of WD5 corresponds to the development of a standardized merge-mode protocol for high efficiency video coding and decoding for use in a wide variety of applications (see WD5 at pp. 1-2 and § 8 “Decoding Process” at pp. 74-150), such that the inclusion of a merge mode motion vector decoding unit configured to generate motion information using available spatial and temporal merge candidates when a motion information encoding mode of a current block indicates a merge mode would have facilitated the decoding process for inter-predicted coding units, as taught at § 8.4 of WD5.

Claim 6 is rejected on the ground of nonstatutory double patenting as being not patentably distinct from claim 1 of U.S. Patent No. 9,036,709 (“’709 Patent”) in view of WD5.
As shown in the chart below, claim 6 claims substantially all of the same limitations as claim 1 of the ‘709 Patent, except claim 6 of the Instant Application does not include: a merge mode motion vector decoding unit configured to generate motion information using available spatial and temporal merge candidates when a motion information encoding mode of a current block indicates a merge mode, a reference picture index of the temporal merge candidate is set to 0, and a diagonal raster inverse scan is used during the inverse-scanning process, as recited in claim 1 of the ‘709 Patent.

17/101,457 U.S. 9,036,709 (limitations re-ordered)

6. An apparatus for decoding motion information in merge mode, comprising: 1. An apparatus for decoding motion information in merge mode, comprising: a merge predictor index decoding unit configured to reconstruct a merge predictor index of a current block using a received merge codeword; a spatial merge candidate derivation unit configured to derive spatial merge candidates of the current block; a temporal merge candidate configuration unit configured to generate a temporal merge candidate of the current block; a merge candidate generation unit configured to generate one or more merge candidates when a number of valid merge candidates of the current block is smaller than a predetermined number; a merge predictor selection unit configured to generate a merge candidate list using the spatial merge candidates derived by the spatial merge candidate derivation unit, the temporal merge candidate generated by the temporal merge candidate configuration unit, and the one or more merge candidates generated by the merge candidate generation unit and to select a merge predictor based on the merge predictor index a prediction block generation unit configured to generate a prediction block of the current block using motion information of the merge predictor; a prediction block generating unit configured to generate a prediction block of the current block using motion information; and a residual block generating unit configured to perform an entropy-decoding process and an inverse-scanning process on residual signals to generate a quantized block, and to perform an inverse-quantizing process and an inverse-transforming process on the quantized block to generate a residual block a residual block generating unit configured to perform an entropy-decoding process and an inverse-scanning process on residual signals to generate a quantized block, and to perform an inverse-transforming process on the quantized block to generate a residual block, a motion vector derivation unit configured to determine a temporal merge candidate picture and determine a temporal merge candidate block within the temporal merge candidate picture in order to generate a motion vector of the temporal merge candidate wherein the temporal merge candidate picture is determined differently depending on a slice type, and the motion vector derivation unit determines, on the basis of the slice type, whether the temporal merge candidate picture is set in a reference picture list indicated by a flag indicative of a list for the temporal merge candidate picture in a slice header, wherein the motion vector derivation unit is configured to set a reference picture index of the temporal merge candidate as 0, and wherein a motion vector of the temporal merge candidate is selected among motion vectors of a first merge candidate block and a second merge candidate block based on a position of the current block within a slice or a largest coding unit, and the motion vector of the second merge candidate block is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit a motion vector of the temporal merge candidate is selected among a first merge candidate block and a second merge candidate block based on a position of the current block within a slice or a largest coding unit, and the motion vector of the second merge candidate block is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit, a merge mode motion vector decoding unit configured to generate motion information using available spatial and temporal merge candidates when a motion information encoding mode of a current block indicates a merge mode; wherein a reference picture index of the temporal merge candidate is set to 0 and a diagonal raster inverse scan is used during the inverse-scanning process.

Claim 6, when combined with the teachings of WD5, renders claim 1 of the ‘709 Patent obvious because the combination amounts to the application of a known technique to a known device ready for improvement to yield predictable results (KSR Rationale D), on the basis of the following factors:
(1) a finding that the prior art contained a “base” device (method, or product) upon which the claimed invention can be seen as an “improvement;”
(2) a finding that the prior art contained a known technique that is applicable to the base device (method, or product); and
(3) a finding that one of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in an improved system. See MPEP § 2143(I)(C).

For factor (1), the “base” device corresponds to the apparatus recited in claim 6. For factor (2), the prior art (i.e., WD5) teaches the known techniques of claim 1 of the ‘709 Patent that are missing in claim 6.
Notably, WD5 teaches: a merge mode motion vector decoding unit configured to generate motion information using available spatial and temporal merge candidates when a motion information encoding mode of a current block indicates a merge mode (see § 8.4 of WD5, which provides details of a decoding unit that utilizes spatial and temporal merge candidates to generate motion information when operating in a merge mode – see §§ 8.4.2.1.1 – 8.4.2.1.8); the reference picture index of the temporal merge candidate as 0 (i.e., WD5 sets the reference picture index “refIdxLX” of the temporal merge candidate as 0 at step 2 on p. 100); and a diagonal raster inverse scan is used during the inverse-scanning process (see § 8.5.2 “Inverse scanning process for transform coefficients” on p. 125, which teaches the inverse scanning of coefficients can be diagonal, horizontal, or vertical).

For factor (3), those skilled in the art would have recognized that applying these techniques taught by WD5 to the apparatus of claim 6 would have yielded predictable results and resulted in an improved system because the disclosure of WD5 corresponds to the development of a standardized merge-mode protocol for high efficiency video coding and decoding for use in a wide variety of applications (see WD5 at pp. 1-2 and § 8 “Decoding Process” at pp. 74-150). Specifically, the inclusion of a merge mode motion vector decoding unit configured to generate motion information using available spatial and temporal merge candidates when a motion information encoding mode of a current block indicates a merge mode would have facilitated the decoding process for inter-predicted coding units, as taught at § 8.4 of WD5; the setting of the reference picture index as 0 would have facilitated the derivation process for motion vector components and reference indices, as taught at pp. 98-100 of WD5; and inverse-scanning transform coefficients in a diagonal manner was a preferred method of scanning transform coefficients.
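The diagonal inverse-scanning step cited above can be illustrated with a short sketch: a 1-D list of decoded coefficients is mapped back into a 2-D block along anti-diagonals. This is a generic up-right diagonal ordering, not a transcription of the exact process in WD5 § 8.5.2, and all identifiers are hypothetical:

```python
def diagonal_scan_order(n):
    """Positions (x, y) of an n x n block in an up-right diagonal order.

    Illustrative only: each anti-diagonal (constant x + y) is walked from
    its bottom-left element toward its top-right element.
    """
    order = []
    for s in range(2 * n - 1):  # s = x + y for each anti-diagonal
        for y in range(min(s, n - 1), max(0, s - n + 1) - 1, -1):
            order.append((s - y, y))
    return order

def inverse_diagonal_scan(coeffs, n):
    """Map a 1-D coefficient list back into an n x n block (list of rows)."""
    block = [[0] * n for _ in range(n)]
    for value, (x, y) in zip(coeffs, diagonal_scan_order(n)):
        block[y][x] = value
    return block
```

For a 2 x 2 block, `inverse_diagonal_scan([1, 2, 3, 4], 2)` places the coefficients at (0,0), (0,1), (1,0), (1,1) in that order, yielding `[[1, 3], [2, 4]]`.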
Claim 6 is rejected on the ground of nonstatutory double patenting as being not patentably distinct from claim 1 of U.S. Patent No. 9,161,043 (“’043 Patent”) in view of WD5.

As shown in the chart below, claim 6 claims substantially all of the same limitations as claim 1 of the ‘043 Patent, except claim 6 of the Instant Application does not include: the temporal merge candidate configuration unit is configured to set a reference picture index of the temporal merge candidate as 0, as recited in claim 1 of the ‘043 Patent.

17/101,457 U.S. 9,161,043 (limitations re-ordered)

6. An apparatus for decoding motion information in merge mode, comprising: 1. An apparatus for decoding motion information in merge mode, comprising: a merge predictor index decoding unit configured to reconstruct a merge predictor index of a current block using a received merge codeword; a merge predictor index decoding unit configured to reconstruct a merge predictor index of a current block using a received merge codeword; a spatial merge candidate derivation unit configured to derive spatial merge candidates of the current block; a spatial merge candidate derivation unit configured to derive spatial merge candidates of the current block; a temporal merge candidate configuration unit configured to generate a temporal merge candidate of the current block; a temporal merge candidate configuration unit configured to derive a temporal merge candidate of the current block; a merge candidate generation unit configured to generate one or more merge candidates when a number of valid merge candidates of the current block is smaller than a predetermined number; a merge candidate generation unit configured to generate one or more merge candidates when a number of valid merge candidates of the current block is smaller than a predetermined number; a merge predictor selection unit configured to generate a merge candidate list using the spatial merge candidates derived by the spatial merge candidate derivation unit, the temporal merge candidate generated by the temporal merge candidate configuration unit, and the one or more merge candidates generated by the merge candidate generation unit and to select a merge predictor based on the merge predictor index a merge predictor selection unit configured to generate a merge candidate list using the merge candidates and to select a merge predictor based on the merge predictor index; and a prediction block generation unit configured to generate a prediction block of the current block using motion information of the merge predictor; a prediction block generating unit configured to generate a prediction block of the current block using motion information of the merge predictor; a residual block generating unit configured to perform an entropy-decoding process and an inverse-scanning process on residual signals to generate a quantized block, and to perform an inverse-quantizing process and an inverse-transforming process on the quantized block to generate a residual block a motion vector derivation unit configured to determine a temporal merge candidate picture and determine a temporal merge candidate block within the temporal merge candidate picture in order to generate a motion vector of the temporal merge candidate wherein the temporal merge candidate picture is determined differently depending on a slice type, and the motion vector derivation unit determines, on the basis of the slice type, whether the temporal merge candidate picture is set in a reference picture list indicated by a flag indicative of a list for the temporal merge candidate picture in a slice header, wherein the motion vector derivation unit is configured to set a reference picture index of the temporal merge candidate as 0, and wherein a motion vector of the temporal merge candidate is selected among motion vectors of a first merge candidate block and a second merge candidate block based on a position of the current block within a slice or a largest coding unit, and the motion vector of the second merge candidate block is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit wherein a motion vector of the temporal merge candidate is selected among a first merge candidate block and a second merge candidate block based on a position of the current block within a largest coding unit, and the motion vector of the second merge candidate block is selected as the motion vector of the temporal merge candidate if the current block is adjacent to a lower boundary of the largest coding unit. wherein the temporal merge candidate configuration unit is configured to set a reference picture index of the temporal merge candidate as 0

Claim 6, when combined with the teachings of WD5, renders claim 1 of the ‘043 Patent obvious because the combination amounts to the application of a known technique to a known device ready for improvement to yield predictable results (KSR Rationale D), on the basis of the following factors:
(1) a finding that the prior art contained a “base” device (method, or product) upon which the claimed invention can be seen as an “improvement;”
(2) a finding that the prior art contained a known technique that is applicable to the base device (method, or product); and
(3) a finding that one of ordinary skill in the art would have recognized that applying the known technique would have yielded predictable results and resulted in an improved system. See MPEP § 2143(I)(C).

For factor (1), the “base” device corresponds to the apparatus recited in claim 6. For factor (2), the prior art (i.e., WD5) teaches the known techniques of claim 1 of the ‘043 Patent that are missing in claim 6.
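As context for the merge-candidate limitations charted above (a candidate list built from spatial and temporal candidates, padded when too few are valid, with a predictor chosen by the decoded merge index), the process can be sketched roughly as follows. This is an illustrative simplification; the function names and the padding choice are assumptions, not claim language or the WD5 procedure:

```python
def build_merge_candidate_list(spatial, temporal, fill, target_count):
    """Assemble a merge candidate list (illustrative only).

    Available (non-None) spatial candidates come first, then the temporal
    candidate, and generated candidates pad the list when the number of
    valid candidates is smaller than the predetermined count.
    """
    candidates = [c for c in spatial if c is not None]
    if temporal is not None:
        candidates.append(temporal)
    while len(candidates) < target_count:
        candidates.append(fill)  # e.g. a zero motion vector (assumption)
    return candidates[:target_count]

def select_merge_predictor(candidates, merge_index):
    """Pick the merge predictor signalled by the decoded merge index."""
    return candidates[merge_index]
```

With two valid spatial candidates, one temporal candidate, and a target count of five, the list is padded with two generated candidates, and a merge index of 2 selects the temporal candidate.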
Notably, WD5 teaches: the temporal merge candidate configuration unit is configured to set a reference picture index of the temporal merge candidate as 0 (i.e., WD5 sets the reference picture index “refIdxLX” of the temporal merge candidate as 0 at step 2 on p. 100).

For factor (3), those skilled in the art would have recognized that applying these techniques taught by WD5 to the apparatus of claim 6 would have yielded predictable results and resulted in an improved system because the disclosure of WD5 corresponds to the development of a standardized merge-mode protocol for high efficiency video coding and decoding for use in a wide variety of applications (see WD5 at pp. 1-2 and § 8 “Decoding Process” at pp. 74-150), such that the setting of the reference picture index as 0 would have facilitated the derivation process for motion vector components and reference indices, as taught at pp. 98-100 of WD5.

XII. CONCLUSION

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to Colin LaRose, whose telephone number is 571-272-7423. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, Hetul Patel, can be reached at 571-272-4184. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of this proceeding may be obtained from the USPTO’s Patent Center. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000. General inquiries may also be directed to the Central Reexamination Unit customer service line at (571) 272-7705.

/COLIN M LAROSE/
Primary Examiner, Art Unit 3992

Conferees:
/YUZHEN GE/
Primary Examiner, Art Unit 3992
/H.B.P/ Hetul Patel
Supervisory Patent Examiner, Art Unit 3992

Prosecution Timeline

Nov 23, 2020
Application Filed
Nov 23, 2020
Response after Non-Final Action
Aug 16, 2021
Response after Non-Final Action
Jul 26, 2023
Non-Final Rejection — §103, §DP
Jan 30, 2024
Response Filed
Apr 12, 2024
Non-Final Rejection — §103, §DP
Jun 06, 2024
Response Filed
Aug 01, 2024
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent RE50734
APPARATUS FOR MANAGING DISAGGREGATED MEMORY AND METHOD THEREOF
2y 5m to grant Granted Jan 06, 2026
Patent RE50619
EMERGENCY POWER SOURCE
2y 5m to grant Granted Oct 07, 2025
Patent RE50625
CHARGER PLUG WITH IMPROVED PACKAGE
2y 5m to grant Granted Oct 07, 2025
Patent RE49409
REFRIGERATOR AND MANUFACTURING METHOD OF THE SAME
2y 5m to grant Granted Feb 07, 2023
Patent RE48961
Vehicle with Multiple Light Detection and Ranging Devices (LIDARs)
2y 5m to grant Granted Mar 08, 2022
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
33%
Grant Probability
78%
With Interview (+45.0%)
4y 4m
Median Time to Grant
High
PTA Risk
Based on 194 resolved cases by this examiner. Grant probability derived from career allow rate.
