Prosecution Insights
Last updated: April 19, 2026
Application No. 19/019,722

MOVING PICTURE CODING METHOD, MOVING PICTURE CODING APPARATUS, MOVING PICTURE DECODING METHOD, MOVING PICTURE DECODING APPARATUS AND MOVING PICTURE CODING AND DECODING APPARATUS

Non-Final OA (§103, §DP)

Filed: Jan 14, 2025
Examiner: WONG, ALLEN C
Art Unit: 2488
Tech Center: 2400 — Computer Networks
Assignee: Sun Patent Trust
OA Round: 1 (Non-Final)

Outlook: Favorable
Grant probability: 83%
Expected OA rounds: 1-2
Estimated time to grant: 2y 11m
Grant probability with interview: 95%

Examiner Intelligence

Career allow rate: 83% — above average (669 granted / 805 resolved; +25.1% vs Tech Center average)
Interview lift: +11.8% (moderate) — difference in allowance rate among resolved cases with an interview vs without
Typical timeline: 2y 11m average prosecution; 27 applications currently pending
Career history: 832 total applications across all art units
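The headline figures above follow directly from the raw counts; the short sketch below recomputes them (the arithmetic is our own reconstruction, not part of the report):

```python
# Recompute the examiner statistics from the raw counts reported above.
granted = 669
resolved = 805

career_allow_rate = granted / resolved  # fraction of resolved cases allowed
print(f"Career allow rate: {career_allow_rate:.1%}")  # → 83.1%, shown as 83%

# The "+25.1% vs TC avg" delta implies a Tech Center average near 58%.
implied_tc_avg = career_allow_rate * 100 - 25.1
print(f"Implied TC average: {implied_tc_avg:.1f}%")
```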

Statute-Specific Performance

Statute   Rate     vs TC avg
§101      12.4%    -27.6%
§103      41.6%    +1.6%
§102      16.5%    -23.5%
§112       9.8%    -30.2%

Based on career data from 805 resolved cases; Tech Center averages are estimates.

Office Action (§103, §DP)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application is being examined under the pre-AIA first-to-invent provisions.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 1/14/25 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of pre-AIA 35 U.S.C. 103(a), which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2 and 4-6 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Tsai (US 2012/0008688) and Cieplinski (US 2009/0220004) in view of Tsai (US 2011/0176613, now referred to as Tsai ‘613).
Regarding claim 1, Tsai discloses a moving picture coding apparatus that codes a current block (paragraph [36], Tsai discloses an apparatus for coding a current block in a picture of a group of moving pictures with motion vector prediction, and paragraph [187], Tsai discloses implementing a circuit device for video compression), the moving picture coding apparatus comprising: a processor (paragraph [187], Tsai discloses computer processor that executes machine readable software code that comprises executable instructions for performing the task of processing video compression applications); and the processor performing processes (paragraph [187], Tsai discloses computer processor that executes machine readable software code that comprises executable instructions for performing the task of processing video compression applications), wherein the processes including: deriving a first motion vector candidate (paragraph [36], fig.3, Tsai discloses motion vectors mvL0 and mvL1, along with motion vector candidates mvL0l, mvL1l, mvL0j and mvL1j, wherein there are motion vectors for coding a first block, and motion vector candidates for coding a second block that is different from motion vector candidates for coding the first block; paragraph [37], fig.4, Tsai discloses four motion vector candidates of fig.3, in that mvL0l, mvL1l, mvL0j and mvL1j), the first motion vector candidate being a first motion vector that has been used to code a first block (paragraph [36], fig.3, Tsai discloses motion vectors mvL0 and mvL1, along with motion vector candidates mvL0l, mvL1l, mvL0j and mvL1j, wherein there are motion vectors for coding a first block, and motion vector candidates for coding a second block that is different from motion vector candidates for coding the first block; paragraph [37], fig.4, Tsai discloses four motion vector candidates of fig.3, in that mvL0l, mvL1l, mvL0j and mvL1j, thus Tsai discloses first motion vector candidate, second motion vector candidate and two 
new motion vector candidates that are different from the first and second motion vector candidates); deriving a second motion vector candidate (paragraph [36], fig.3, Tsai discloses motion vectors mvL0 and mvL1, along with motion vector candidates mvL0l, mvL1l, mvL0j and mvL1j, wherein there are motion vectors for coding a first block, and motion vector candidates for coding a second block that is different from motion vector candidates for coding the first block; paragraph [37], fig.4, Tsai discloses four motion vector candidates of fig.3, in that mvL0l, mvL1l, mvL0j and mvL1j), the second motion vector candidate being a second motion vector that has been used to code a second block that is different from the first block (paragraph [36], fig.3, Tsai discloses motion vectors mvL0 and mvL1, along with motion vector candidates mvL0l, mvL1l, mvL0j and mvL1j, wherein there are motion vectors for coding a first block, and motion vector candidates for coding a second block that is different from motion vector candidates for coding the first block; paragraph [37], fig.4, Tsai discloses four motion vector candidates of fig.3, in that mvL0l, mvL1l, mvL0j and mvL1j, thus Tsai discloses first motion vector candidate, second motion vector candidate and two new motion vector candidates that are different from the first and second motion vector candidates); generating a new candidate (paragraph [37], fig.4, Tsai discloses four motion vector candidates of fig.3, in that mvL0l, mvL1l, mvL0j and mvL1j, thus Tsai discloses first motion vector candidate, second motion vector candidate and two new motion vector candidates that are different from the first and second motion vector candidates); selecting one candidate from the first motion vector candidate, the second motion vector candidate, and the new candidate (paragraph [36], fig.3, Tsai discloses that a motion vector predictor candidate can be selected according to a priority order, wherein motion vectors mvL0 and mvL1, along with 
motion vector candidates mvL0l, mvL1l, mvL0j and mvL1j can be one of the selected motion vector candidate from the list of motion vector candidates; paragraph [37], fig.4, Tsai discloses four motion vector candidates of fig.3, in that mvL0l, mvL1l, mvL0j and mvL1j, thus Tsai discloses first motion vector candidate, second motion vector candidate and two new motion vector candidates that are different from the first and second motion vector candidates). Tsai does not disclose “a non-transitory storage, the processor performing, using the non-transitory storage” and “generating a new candidate, the new candidate being a combination of the first motion vector candidate and the second motion vector candidate”. However, Cieplinski teaches a non-transitory storage (paragraph [98], Cieplinski discloses a computer based system with a processor that executes the computer software stored in data storage means or memory), the processor (paragraph [98], Cieplinski discloses a computer based system with a processor) performing, using the non-transitory storage (paragraph [98], Cieplinski discloses a computer based system with a processor that executes the computer software stored in data storage means or memory), and generating a new candidate (paragraph [51], Cieplinski discloses obtaining a new motion vector by utilizing the concept of combining motion vectors by implementation of an averaging process wherein motion vector candidate V0 is obtained by taking the average of candidate motion vectors VE1 – VE6 & VB0, and in paragraph [54], Cieplinski discloses the concept of combining motion vectors to obtain new motion vector, wherein the candidate motion vector is obtained by combining the selected MV (motion vector) in the current layer with the MV in the base layer by taking the average of the selected MV in the current layer with the MV in the base layer), the new candidate being a combination of the first motion vector candidate and the second motion vector candidate 
(paragraph [51], Cieplinski discloses obtaining a new motion vector by utilizing the concept of combining motion vectors by implementation of an averaging process wherein motion vector candidate V0 is obtained by taking the average of candidate motion vectors VE1 – VE6 & VB0, and in paragraph [54], Cieplinski discloses the concept of combining motion vectors to obtain a new motion vector, wherein the candidate motion vector is obtained by combining the selected MV (motion vector) in the current layer with the MV in the base layer by taking the average of the selected MV in the current layer with the MV in the base layer).

Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Tsai and Cieplinski together as a whole to accurately predict motion vectors so as to produce high-quality images for display.

Tsai and Cieplinski do not disclose coding an index to identify the selected candidate; and coding the current block by using the selected candidate.
However, Tsai ‘613 teaches coding an index to identify the selected candidate (paragraph [34], lines 26-29, Tsai discloses encoder generates an index of final motion vector predictor selected from candidate set for the "current unit", wherein paragraph [35], fig.3, Tsai discloses video encoder 300 comprises entropy coding module 310 to encode the video bitstream that includes video information along with prediction information that includes data involving motion vectors such as index of selected candidate); and coding the current block by using the selected candidate (paragraph [34], lines 26-29, Tsai discloses encoder generates an index of final motion vector predictor selected from candidate set for the "current unit", wherein paragraph [35], fig.3, Tsai discloses video encoder 300 comprises entropy coding module 310 to encode the video bitstream that includes video information along with prediction information that includes data involving motion vectors such as index of selected candidate, and paragraph [34], lines 26-29, Tsai discloses encoder generates an index of final motion vector predictor selected from candidate set for the "current unit", wherein paragraph [33], Tsai discloses that current unit belongs to a current frame, and paragraph [9], Tsai discloses that current unit belongs to a block 112 of a current frame 102). Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Tsai, Cieplinski and Tsai ‘613 together as a whole for accurately compressing video data in an efficient manner during data transmission. 
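For readers unfamiliar with merge-candidate construction, the "combination" mechanism the rejection attributes to Cieplinski (averaging two candidate motion vectors to form a new candidate) can be sketched as follows. This is an illustrative reconstruction; the function and variable names are our own, not drawn from any cited reference:

```python
# Illustrative sketch (not code from any cited reference) of generating a
# new motion vector candidate by averaging two existing candidates, the
# mechanism the rejection maps to Cieplinski's paragraphs [51] and [54].
from typing import List, Tuple

MotionVector = Tuple[int, int]  # (dx, dy), e.g. in quarter-pel units

def combine_candidates(mv_a: MotionVector, mv_b: MotionVector) -> MotionVector:
    """Form a new candidate as the component-wise average of two candidates."""
    return ((mv_a[0] + mv_b[0]) // 2, (mv_a[1] + mv_b[1]) // 2)

def build_candidate_list(first: MotionVector, second: MotionVector) -> List[MotionVector]:
    """Candidate list: the first and second candidates plus the new combined one."""
    candidates = [first, second]
    new_candidate = combine_candidates(first, second)
    if new_candidate not in candidates:  # avoid duplicate entries
        candidates.append(new_candidate)
    return candidates

print(build_candidate_list((4, -2), (8, 6)))  # → [(4, -2), (8, 6), (6, 2)]
```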
Regarding claim 2, Tsai discloses wherein the first motion vector candidate is included in a first candidate list (paragraph [36], fig.3, Tsai discloses motion vectors mvL0 and mvL1, along with motion vector candidates mvL0l, mvL1l, mvL0j and mvL1j , wherein there are motion vector candidates mvL0l and mvL0j belong to list 0 and motion vector candidates mvL1l and mvL1j belong to list 1, thus list 0 is the first list and list 1 is the second list), and the second motion vector candidate is included in a second candidate list (paragraph [36], fig.3, Tsai discloses motion vectors mvL0 and mvL1, along with motion vector candidates mvL0l, mvL1l, mvL0j and mvL1j , wherein there are motion vector candidates mvL0l and mvL0j belong to list 0 and motion vector candidates mvL1l and mvL1j belong to list 1, thus list 0 is the first list and list 1 is the second list). Regarding claim 4, Tsai discloses wherein the first block is adjacent to the current block (paragraph [36], fig.3, Tsai discloses that block 320 is adjacent or next to current block 310 of current frame in that block 320 is co-located in adjacent frame j), and the second block is adjacent to the current block (paragraph [36], fig.3, Tsai discloses that block 330 is adjacent or next to current block 310 of current frame in that block 330 is co-located in adjacent frame l). 
Regarding claim 5, Tsai discloses a moving picture coding method for coding a current block (paragraph [36], Tsai discloses an apparatus for processing a method of coding a current block in a picture of a group of moving pictures with motion vector prediction, and paragraph [187], Tsai discloses implementing a circuit device for video compression), the moving picture coding method comprising: deriving a first motion vector candidate (paragraph [36], fig.3, Tsai discloses motion vectors mvL0 and mvL1, along with motion vector candidates mvL0l, mvL1l, mvL0j and mvL1j, wherein there are motion vectors for coding a first block, and motion vector candidates for coding a second block that is different from motion vector candidates for coding the first block; paragraph [37], fig.4, Tsai discloses four motion vector candidates of fig.3, in that mvL0l, mvL1l, mvL0j and mvL1j), the first motion vector candidate being a first motion vector that has been used to code a first block (paragraph [36], fig.3, Tsai discloses motion vectors mvL0 and mvL1, along with motion vector candidates mvL0l, mvL1l, mvL0j and mvL1j, wherein there are motion vectors for coding a first block, and motion vector candidates for coding a second block that is different from motion vector candidates for coding the first block; paragraph [37], fig.4, Tsai discloses four motion vector candidates of fig.3, in that mvL0l, mvL1l, mvL0j and mvL1j, thus Tsai discloses first motion vector candidate, second motion vector candidate and two new motion vector candidates that are different from the first and second motion vector candidates); deriving a second motion vector candidate (paragraph [36], fig.3, Tsai discloses motion vectors mvL0 and mvL1, along with motion vector candidates mvL0l, mvL1l, mvL0j and mvL1j, wherein there are motion vectors for coding a first block, and motion vector candidates for coding a second block that is different from motion vector candidates for coding the first block; paragraph 
[37], fig.4, Tsai discloses four motion vector candidates of fig.3, in that mvL0l, mvL1l, mvL0j and mvL1j), the second motion vector candidate being a second motion vector that has been used to code a second block that is different from the first block (paragraph [36], fig.3, Tsai discloses motion vectors mvL0 and mvL1, along with motion vector candidates mvL0l, mvL1l, mvL0j and mvL1j, wherein there are motion vectors for coding a first block, and motion vector candidates for coding a second block that is different from motion vector candidates for coding the first block; paragraph [37], fig.4, Tsai discloses four motion vector candidates of fig.3, in that mvL0l, mvL1l, mvL0j and mvL1j, thus Tsai discloses first motion vector candidate, second motion vector candidate and two new motion vector candidates that are different from the first and second motion vector candidates); generating a new candidate (paragraph [37], fig.4, Tsai discloses four motion vector candidates of fig.3, in that mvL0l, mvL1l, mvL0j and mvL1j, thus Tsai discloses first motion vector candidate, second motion vector candidate and two new motion vector candidates that are different from the first and second motion vector candidates); selecting one candidate from the first motion vector candidate, the second motion vector candidate, and the new candidate (paragraph [36], fig.3, Tsai discloses that a motion vector predictor candidate can be selected according to a priority order, wherein motion vectors mvL0 and mvL1, along with motion vector candidates mvL0l, mvL1l, mvL0j and mvL1j can be one of the selected motion vector candidate from the list of motion vector candidates; paragraph [37], fig.4, Tsai discloses four motion vector candidates of fig.3, in that mvL0l, mvL1l, mvL0j and mvL1j, thus Tsai discloses first motion vector candidate, second motion vector candidate and two new motion vector candidates that are different from the first and second motion vector candidates). 
Tsai does not disclose generating a new candidate, the new candidate being a combination of the first motion vector candidate and the second motion vector candidate.

However, Cieplinski teaches generating a new candidate (paragraph [51], Cieplinski discloses obtaining a new motion vector by utilizing the concept of combining motion vectors by implementation of an averaging process wherein motion vector candidate V0 is obtained by taking the average of candidate motion vectors VE1 – VE6 & VB0, and in paragraph [54], Cieplinski discloses the concept of combining motion vectors to obtain a new motion vector, wherein the candidate motion vector is obtained by combining the selected MV (motion vector) in the current layer with the MV in the base layer by taking the average of the selected MV in the current layer with the MV in the base layer), the new candidate being a combination of the first motion vector candidate and the second motion vector candidate (paragraph [51], Cieplinski discloses obtaining a new motion vector by utilizing the concept of combining motion vectors by implementation of an averaging process wherein motion vector candidate V0 is obtained by taking the average of candidate motion vectors VE1 – VE6 & VB0, and in paragraph [54], Cieplinski discloses the concept of combining motion vectors to obtain a new motion vector, wherein the candidate motion vector is obtained by combining the selected MV (motion vector) in the current layer with the MV in the base layer by taking the average of the selected MV in the current layer with the MV in the base layer).

Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Tsai and Cieplinski together as a whole to accurately predict motion vectors so as to produce high-quality images for display.

Tsai and Cieplinski do not disclose coding an index to identify the selected candidate; and coding the current block by using the selected candidate.
However, Tsai ‘613 teaches coding an index to identify the selected candidate (paragraph [34], lines 26-29, Tsai discloses encoder generates an index of final motion vector predictor selected from candidate set for the "current unit", wherein paragraph [35], fig.3, Tsai discloses video encoder 300 comprises entropy coding module 310 to encode the video bitstream that includes video information along with prediction information that includes data involving motion vectors such as index of selected candidate); and coding the current block by using the selected candidate (paragraph [34], lines 26-29, Tsai discloses encoder generates an index of final motion vector predictor selected from candidate set for the "current unit", wherein paragraph [35], fig.3, Tsai discloses video encoder 300 comprises entropy coding module 310 to encode the video bitstream that includes video information along with prediction information that includes data involving motion vectors such as index of selected candidate, and paragraph [34], lines 26-29, Tsai discloses encoder generates an index of final motion vector predictor selected from candidate set for the "current unit", wherein paragraph [33], Tsai discloses that current unit belongs to a current frame, and paragraph [9], Tsai discloses that current unit belongs to a block 112 of a current frame 102). Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Tsai, Cieplinski and Tsai ‘613 together as a whole for accurately compressing video data in an efficient manner during data transmission. 
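The index-signaling teaching the rejection draws from Tsai ‘613 — selecting one predictor from a shared candidate set and coding only its index (plus a residual) — can be sketched generically. This is our own illustration of the general technique, not code from the reference:

```python
# Generic sketch of index-based motion vector predictor signaling: the
# encoder picks the candidate closest to the actual motion vector and
# codes only its index and a residual; the decoder, which builds the same
# candidate list, uses the index to recover the predictor.
def select_and_code(candidates, target):
    """Return (index, residual) for the cheapest candidate (L1 distance)."""
    def cost(mv):
        return abs(mv[0] - target[0]) + abs(mv[1] - target[1])
    index = min(range(len(candidates)), key=lambda i: cost(candidates[i]))
    pred = candidates[index]
    residual = (target[0] - pred[0], target[1] - pred[1])
    return index, residual

def decode(candidates, index, residual):
    """Decoder side: the index identifies the predictor in the shared list."""
    pred = candidates[index]
    return (pred[0] + residual[0], pred[1] + residual[1])

cands = [(4, -2), (8, 6), (6, 2)]
idx, res = select_and_code(cands, (7, 3))
assert decode(cands, idx, res) == (7, 3)  # round-trips to the target vector
print(idx, res)
```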
Regarding claim 6, Tsai discloses a computer with a processor that executes machine readable executable instructions (paragraph [187], Tsai discloses computer processor that executes machine readable software code that comprises executable instructions for performing the task of processing video compression applications), which when executed (paragraph [187], Tsai discloses computer processor that executes machine readable software code that comprises executable instructions for performing the task of processing video compression applications), cause a processor to perform a moving picture coding method (paragraph [36], Tsai discloses an apparatus for coding a current block in a picture of a group of moving pictures with motion vector prediction, and paragraph [187], Tsai discloses implementing a circuit device for video compression) including: deriving a first motion vector candidate (paragraph [36], fig.3, Tsai discloses motion vectors mvL0 and mvL1, along with motion vector candidates mvL0l, mvL1l, mvL0j and mvL1j, wherein there are motion vectors for coding a first block, and motion vector candidates for coding a second block that is different from motion vector candidates for coding the first block; paragraph [37], fig.4, Tsai discloses four motion vector candidates of fig.3, in that mvL0l, mvL1l, mvL0j and mvL1j), the first motion vector candidate being a first motion vector that has been used to code a first block (paragraph [36], fig.3, Tsai discloses motion vectors mvL0 and mvL1, along with motion vector candidates mvL0l, mvL1l, mvL0j and mvL1j, wherein there are motion vectors for coding a first block, and motion vector candidates for coding a second block that is different from motion vector candidates for coding the first block; paragraph [37], fig.4, Tsai discloses four motion vector candidates of fig.3, in that mvL0l, mvL1l, mvL0j and mvL1j, thus Tsai discloses first motion vector candidate, second motion vector candidate and two new motion vector 
candidates that are different from the first and second motion vector candidates); deriving a second motion vector candidate (paragraph [36], fig.3, Tsai discloses motion vectors mvL0 and mvL1, along with motion vector candidates mvL0l, mvL1l, mvL0j and mvL1j, wherein there are motion vectors for coding a first block, and motion vector candidates for coding a second block that is different from motion vector candidates for coding the first block; paragraph [37], fig.4, Tsai discloses four motion vector candidates of fig.3, in that mvL0l, mvL1l, mvL0j and mvL1j), the second motion vector candidate being a second motion vector that has been used to code a second block that is different from the first block (paragraph [36], fig.3, Tsai discloses motion vectors mvL0 and mvL1, along with motion vector candidates mvL0l, mvL1l, mvL0j and mvL1j, wherein there are motion vectors for coding a first block, and motion vector candidates for coding a second block that is different from motion vector candidates for coding the first block; paragraph [37], fig.4, Tsai discloses four motion vector candidates of fig.3, in that mvL0l, mvL1l, mvL0j and mvL1j, thus Tsai discloses first motion vector candidate, second motion vector candidate and two new motion vector candidates that are different from the first and second motion vector candidates); generating a new candidate (paragraph [37], fig.4, Tsai discloses four motion vector candidates of fig.3, in that mvL0l, mvL1l, mvL0j and mvL1j, thus Tsai discloses first motion vector candidate, second motion vector candidate and two new motion vector candidates that are different from the first and second motion vector candidates); selecting one candidate from the first motion vector candidate, the second motion vector candidate, and the new candidate (paragraph [36], fig.3, Tsai discloses that a motion vector predictor candidate can be selected according to a priority order, wherein motion vectors mvL0 and mvL1, along with motion vector 
candidates mvL0l, mvL1l, mvL0j and mvL1j can be one of the selected motion vector candidate from the list of motion vector candidates; paragraph [37], fig.4, Tsai discloses four motion vector candidates of fig.3, in that mvL0l, mvL1l, mvL0j and mvL1j, thus Tsai discloses first motion vector candidate, second motion vector candidate and two new motion vector candidates that are different from the first and second motion vector candidates). Tsai does not disclose “a non-transitory computer readable recording medium having stored thereon executable instructions”, and “generating a new candidate, the new candidate being a combination of the first motion vector candidate and the second motion vector candidate”. However, Cieplinski teaches a non-transitory computer readable recording medium having stored thereon executable instructions (paragraph [98], Cieplinski discloses a computer based system with a processor that executes the computer software stored in data storage means or memory), and generating a new candidate (paragraph [51], Cieplinski discloses obtaining a new motion vector by utilizing the concept of combining motion vectors by implementation of an averaging process wherein motion vector candidate V0 is obtained by taking the average of candidate motion vectors VE1 – VE6 & VB0, and in paragraph [54], Cieplinski discloses the concept of combining motion vectors to obtain new motion vector, wherein the candidate motion vector is obtained by combining the selected MV (motion vector) in the current layer with the MV in the base layer by taking the average of the selected MV in the current layer with the MV in the base layer), the new candidate being a combination of the first motion vector candidate and the second motion vector candidate (paragraph [51], Cieplinski discloses obtaining a new motion vector by utilizing the concept of combining motion vectors by implementation of an averaging process wherein motion vector candidate V0 is obtained by taking the 
average of candidate motion vectors VE1 – VE6 & VB0, and in paragraph [54], Cieplinski discloses the concept of combining motion vectors to obtain a new motion vector, wherein the candidate motion vector is obtained by combining the selected MV (motion vector) in the current layer with the MV in the base layer by taking the average of the selected MV in the current layer with the MV in the base layer).

Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Tsai and Cieplinski together as a whole to accurately predict motion vectors so as to produce high-quality images for display.

Tsai and Cieplinski do not disclose coding an index to identify the selected candidate; and coding the current block by using the selected candidate.

However, Tsai ‘613 teaches coding an index to identify the selected candidate (paragraph [34], lines 26-29, Tsai discloses encoder generates an index of final motion vector predictor selected from candidate set for the "current unit", wherein paragraph [35], fig.3, Tsai discloses video encoder 300 comprises entropy coding module 310 to encode the video bitstream that includes video information along with prediction information that includes data involving motion vectors such as index of selected candidate); and coding the current block by using the selected candidate (paragraph [34], lines 26-29, Tsai discloses encoder generates an index of final motion vector predictor selected from candidate set for the "current unit", wherein paragraph [35], fig.3, Tsai discloses video encoder 300 comprises entropy coding module 310 to encode the video bitstream that includes video information along with prediction information that includes data involving motion vectors such as index of selected candidate, and paragraph [34], lines 26-29, Tsai discloses encoder generates an index of final motion vector predictor selected from candidate set for the "current unit", wherein paragraph [33], Tsai discloses that
current unit belongs to a current frame, and paragraph [9], Tsai discloses that current unit belongs to a block 112 of a current frame 102).

Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Tsai, Cieplinski and Tsai ‘613 together as a whole for accurately compressing video data in an efficient manner during data transmission.

Claim 3 is rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Tsai (US 2012/0008688), Cieplinski (US 2009/0220004) and Tsai (US 2011/0176613, now referred to as Tsai ‘613) in view of Heng (US 2009/0180032).

Regarding claim 3, Tsai, Cieplinski and Tsai ‘613 do not disclose wherein the first motion vector candidate and the second motion vector candidate refer to a same reference frame. However, Heng teaches wherein the first motion vector candidate and the second motion vector candidate refer to a same reference frame (paragraph [46], fig.4, Heng discloses both first motion vector candidate 412a and second motion vector candidate 412b refer to the same reference frame 402a and same current frame 402b). Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Tsai, Cieplinski, Tsai ‘613 and Heng together as a whole for providing greater sub-pixel accuracy in order to estimate motion in a plurality of motion image frames during video compression (Heng’s paragraph [16]).

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first-inventor-to-file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first-inventor-to-file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c).
A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-6 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-6 of U.S. Patent No. 12,238,326 in view of Tsai (US 2011/0176613, now referred to as Tsai ‘613).

Regarding claim 1, claim 1 of present Application ‘722 is similar to claim 1 of Patent ‘326 in that claim 1 of Patent ‘326 discloses most of the limitations of claim 1 of present Application ‘722. Peruse the table below. Claim 1 of Patent ‘326 does not disclose coding an index to identify the selected candidate. However, Tsai ‘613 teaches coding an index to identify the selected candidate (paragraph [34], lines 26-29, Tsai discloses encoder generates an index of final motion vector predictor selected from candidate set for the "current unit", wherein paragraph [35], fig.3, Tsai discloses video encoder 300 comprises entropy coding module 310 to encode the video bitstream that includes video information along with prediction information that includes data involving motion vectors such as index of selected candidate).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 1 of Patent ‘326 and Tsai ‘613 together as a whole for accurately compressing video data in an efficient manner during data transmission. Claim 2 of present Application ‘722 is similar to claim 2 of Patent ‘326. Thus, claim 2 of present Application ‘722 is anticipated by claim 2 of Patent ‘326. Claim 3 of present Application ‘722 is similar to claim 3 of Patent ‘326. Thus, claim 3 of present Application ‘722 is anticipated by claim 3 of Patent ‘326. Claim 4 of present Application ‘722 is similar to claim 4 of Patent ‘326. Thus, claim 4 of present Application ‘722 is anticipated by claim 4 of Patent ‘326. Regarding claim 5, claim 5 of present Application ‘722 is similar to claim 5 of Patent ‘326 in that claim 5 of Patent ‘326 discloses most of the limitations of claim 5 of present Application ‘722. See the table below. Claim 5 of Patent ‘326 does not disclose coding an index to identify the selected candidate. However, Tsai ‘613 teaches coding an index to identify the selected candidate (paragraph [34], lines 26-29, Tsai discloses encoder generates an index of final motion vector predictor selected from candidate set for the "current unit", wherein paragraph [35], Fig. 3, Tsai discloses video encoder 300 comprises entropy coding module 310 to encode the video bitstream that includes video information along with prediction information that includes data involving motion vectors such as index of selected candidate). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 5 of Patent ‘326 and Tsai ‘613 together as a whole for accurately compressing video data in an efficient manner during data transmission. 
Regarding claim 6, claim 6 of present Application ‘722 is similar to claim 6 of Patent ‘326 in that claim 6 of Patent ‘326 discloses most of the limitations of claim 6 of present Application ‘722. See the table below. Claim 6 of Patent ‘326 does not disclose coding an index to identify the selected candidate. However, Tsai ‘613 teaches coding an index to identify the selected candidate (paragraph [34], lines 26-29, Tsai discloses encoder generates an index of final motion vector predictor selected from candidate set for the "current unit", wherein paragraph [35], Fig. 3, Tsai discloses video encoder 300 comprises entropy coding module 310 to encode the video bitstream that includes video information along with prediction information that includes data involving motion vectors such as index of selected candidate). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 6 of Patent ‘326 and Tsai ‘613 together as a whole for accurately compressing video data in an efficient manner during data transmission. Claims 1-6 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-6 of U.S. Patent No. 11,917,186 in view of Tsai (US 2011/0176613, now referred to as Tsai ‘613). Regarding claim 1, claim 1 of present Application ‘722 is similar to claim 1 of Patent ‘186 in that claim 1 of Patent ‘186 discloses most of the limitations of claim 1 of present Application ‘722. See the table below. Claim 1 of Patent ‘186 does not disclose selecting one candidate from the first motion vector candidate, the second motion vector candidate and the new candidate, and coding an index to identify the selected candidate. 
However, Tsai ‘613 teaches selecting one candidate from the candidate set (paragraph [34], lines 8-9, Tsai discloses selecting final candidate from the candidate set), coding an index to identify the selected candidate (paragraph [34], lines 26-29, Tsai discloses encoder generates an index of final motion vector predictor selected from candidate set for the "current unit", wherein paragraph [35], Fig. 3, Tsai discloses video encoder 300 comprises entropy coding module 310 to encode the video bitstream that includes video information along with prediction information that includes data involving motion vectors such as index of selected candidate). Claim 1 of Patent ‘186 discloses “deriving a first motion vector candidate”, “deriving a second motion vector candidate”, “generating a new candidate” and “coding the current block by using a candidate from among a plurality of candidates including the new candidate”, and Tsai discloses “selecting one candidate from the candidate set” and “coding an index to identify the selected candidate” in that the selection process of obtaining the motion vector has already taken place prior to encoding the selected candidate. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 1 of Patent ‘186 and Tsai ‘613 together as a whole for ascertaining the limitation “…selecting one candidate from the first motion vector candidate, the second motion vector candidate and the new candidate”, by substitution, in order to accurately compress video data in an efficient manner during data transmission. Claim 2 of present Application ‘722 is similar to claim 2 of Patent ‘186. Thus, claim 2 of present Application ‘722 is anticipated by claim 2 of Patent ‘186. Claim 3 of present Application ‘722 is similar to claim 3 of Patent ‘186. Thus, claim 3 of present Application ‘722 is anticipated by claim 3 of Patent ‘186. 
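For readers less versed in motion-vector prediction, the index-signaling scheme the Examiner draws from Tsai ‘613 can be sketched roughly as follows. This is a hedged illustration, not code from the reference: the function names, the cost measure, and the truncated-unary binarization standing in for the entropy-coding stage (module 310 of Tsai's Fig. 3) are all assumptions.

```python
# Illustrative sketch (not Tsai '613's actual implementation): pick the
# motion-vector predictor (MVP) closest to the block's actual motion
# vector, then signal which candidate was picked by coding its index.

def select_mvp(candidates, actual_mv):
    """Return (index, candidate) of the cheapest predictor, where cost is
    the size of the motion-vector difference left to transmit."""
    def cost(mv):
        return abs(mv[0] - actual_mv[0]) + abs(mv[1] - actual_mv[1])
    index = min(range(len(candidates)), key=lambda i: cost(candidates[i]))
    return index, candidates[index]

def code_index(index, num_candidates):
    """Truncated-unary binarization of the selected index, a common
    stand-in for the entropy-coded index in the bitstream."""
    bits = "1" * index
    if index < num_candidates - 1:
        bits += "0"  # the last possible index needs no terminating bit
    return bits

# Hypothetical candidate set drawn from neighboring coded blocks.
candidates = [(4, -2), (3, 0), (6, -1)]
idx, mvp = select_mvp(candidates, actual_mv=(5, -2))
bits = code_index(idx, len(candidates))  # index 0 codes as "0"
```

Under this sketch, a decoder that builds the same candidate set can parse the index and recover the predictor without receiving the vector itself, which is the compression benefit the rejections' motivation statements point to.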
Claim 4 of present Application ‘722 is similar to claim 4 of Patent ‘186. Thus, claim 4 of present Application ‘722 is anticipated by claim 4 of Patent ‘186. Regarding claim 5, claim 5 of present Application ‘722 is similar to claim 5 of Patent ‘186 in that claim 5 of Patent ‘186 discloses most of the limitations of claim 5 of present Application ‘722. See the table below. Claim 5 of Patent ‘186 does not disclose selecting one candidate from the first motion vector candidate, the second motion vector candidate and the new candidate, and coding an index to identify the selected candidate. However, Tsai ‘613 teaches selecting one candidate from the candidate set (paragraph [34], lines 8-9, Tsai discloses selecting final candidate from the candidate set), coding an index to identify the selected candidate (paragraph [34], lines 26-29, Tsai discloses encoder generates an index of final motion vector predictor selected from candidate set for the "current unit", wherein paragraph [35], Fig. 3, Tsai discloses video encoder 300 comprises entropy coding module 310 to encode the video bitstream that includes video information along with prediction information that includes data involving motion vectors such as index of selected candidate). 
Claim 5 of Patent ‘186 discloses “deriving a first motion vector candidate”, “deriving a second motion vector candidate”, “generating a new candidate” and “coding the current block by using a candidate from among a plurality of candidates including the new candidate”, and Tsai discloses “selecting one candidate from the candidate set” and “coding an index to identify the selected candidate” in that the selection process of obtaining the motion vector has already taken place prior to encoding the selected candidate. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 5 of Patent ‘186 and Tsai ‘613 together as a whole for ascertaining the limitation “…selecting one candidate from the first motion vector candidate, the second motion vector candidate and the new candidate”, by substitution, in order to accurately compress video data in an efficient manner during data transmission. Regarding claim 6, claim 6 of present Application ‘722 is similar to claim 6 of Patent ‘186 in that claim 6 of Patent ‘186 discloses most of the limitations of claim 6 of present Application ‘722. See the table below. Claim 6 of Patent ‘186 does not disclose selecting one candidate from the first motion vector candidate, the second motion vector candidate and the new candidate, and coding an index to identify the selected candidate. 
However, Tsai ‘613 teaches selecting one candidate from the candidate set (paragraph [34], lines 8-9, Tsai discloses selecting final candidate from the candidate set), coding an index to identify the selected candidate (paragraph [34], lines 26-29, Tsai discloses encoder generates an index of final motion vector predictor selected from candidate set for the "current unit", wherein paragraph [35], Fig. 3, Tsai discloses video encoder 300 comprises entropy coding module 310 to encode the video bitstream that includes video information along with prediction information that includes data involving motion vectors such as index of selected candidate). Claim 6 of Patent ‘186 discloses “deriving a first motion vector candidate”, “deriving a second motion vector candidate”, “generating a new candidate” and “coding the current block by using a candidate from among a plurality of candidates including the new candidate”, and Tsai discloses “selecting one candidate from the candidate set” and “coding an index to identify the selected candidate” in that the selection process of obtaining the motion vector has already taken place prior to encoding the selected candidate. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 6 of Patent ‘186 and Tsai ‘613 together as a whole for ascertaining the limitation “…selecting one candidate from the first motion vector candidate, the second motion vector candidate and the new candidate”, by substitution, in order to accurately compress video data in an efficient manner during data transmission. Claims 1, 5 and 6 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3 of U.S. Patent No. 11,356,694 in view of Tsai (US 2011/0176613, now referred to as Tsai ‘613). 
Regarding claim 1, claim 1 of present Application ‘722 is similar to claim 1 of Patent ‘694 in that claim 1 of Patent ‘694 discloses most of the limitations of claim 1 of present Application ‘722. See the table below. Claim 1 of Patent ‘694 does not disclose coding an index to identify the selected candidate. However, Tsai ‘613 teaches coding an index to identify the selected candidate (paragraph [34], lines 26-29, Tsai discloses encoder generates an index of final motion vector predictor selected from candidate set for the "current unit", wherein paragraph [35], Fig. 3, Tsai discloses video encoder 300 comprises entropy coding module 310 to encode the video bitstream that includes video information along with prediction information that includes data involving motion vectors such as index of selected candidate). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 1 of Patent ‘694 and Tsai ‘613 together as a whole for accurately compressing video data in an efficient manner during data transmission. Regarding claim 5, claim 5 of present Application ‘722 is similar to claim 2 of Patent ‘694 in that claim 2 of Patent ‘694 discloses most of the limitations of claim 5 of present Application ‘722. See the table below. Claim 2 of Patent ‘694 does not disclose coding an index to identify the selected candidate. However, Tsai ‘613 teaches coding an index to identify the selected candidate (paragraph [34], lines 26-29, Tsai discloses encoder generates an index of final motion vector predictor selected from candidate set for the "current unit", wherein paragraph [35], Fig. 3, Tsai discloses video encoder 300 comprises entropy coding module 310 to encode the video bitstream that includes video information along with prediction information that includes data involving motion vectors such as index of selected candidate). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 2 of Patent ‘694 and Tsai ‘613 together as a whole for accurately compressing video data in an efficient manner during data transmission. Regarding claim 6, claim 6 of present Application ‘722 is similar to claim 3 of Patent ‘694 in that claim 3 of Patent ‘694 discloses most of the limitations of claim 6 of present Application ‘722. See the table below. Claim 3 of Patent ‘694 does not disclose coding an index to identify the selected candidate. However, Tsai ‘613 teaches coding an index to identify the selected candidate (paragraph [34], lines 26-29, Tsai discloses encoder generates an index of final motion vector predictor selected from candidate set for the "current unit", wherein paragraph [35], Fig. 3, Tsai discloses video encoder 300 comprises entropy coding module 310 to encode the video bitstream that includes video information along with prediction information that includes data involving motion vectors such as index of selected candidate). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 3 of Patent ‘694 and Tsai ‘613 together as a whole for accurately compressing video data in an efficient manner during data transmission. The claim comparisons follow.

Claim Comparison: Present Application 19/019,722 vs. U.S. Patent No. 12,238,326

Application ‘722, Claim 1: A moving picture coding apparatus that codes a current block, the moving picture coding apparatus comprising: a processor; and a non-transitory storage, the processor performing, using the non-transitory storage, processes including: deriving a first motion vector candidate, the first motion vector candidate being a first motion vector that has been used to code a first block; deriving a second motion vector candidate, the second motion vector candidate being a second motion vector that has been used to code a second block that is different from the first block; generating a new candidate, the new candidate being a combination of the first motion vector candidate and the second motion vector candidate; selecting one candidate from the first motion vector candidate, the second motion vector candidate, and the new candidate; coding an index to identify the selected candidate; and coding the current block by using the selected candidate.

Patent ‘326, Claim 1: A moving picture coding apparatus that codes a current block, the moving picture coding apparatus comprising: a processor; and a non-transitory storage, the processor performing, using the non-transitory storage, processes including: deriving a first motion vector candidate, the first motion vector candidate being a first motion vector that has been used to code a first block; deriving a second motion vector candidate, the second motion vector candidate being a second motion vector that has been used to code a second block that is different from the first block; generating a new candidate, the new candidate being a combination of the first motion vector candidate and the second motion vector candidate; selecting one candidate from the first motion vector candidate, the second motion vector candidate, and the new candidate; and coding the current block by using the selected candidate.

Application ‘722, Claim 2: The moving picture coding apparatus of Claim 1, wherein the first motion vector candidate is included in a first candidate list, and the second motion vector candidate is included in a second candidate list.

Patent ‘326, Claim 2: The moving picture coding apparatus of claim 1, wherein the first motion vector candidate is included in a first candidate list, and the second motion vector candidate is included in a second candidate list.

Application ‘722, Claim 3: The moving picture coding apparatus of Claim 1, wherein the first motion vector candidate and the second motion vector candidate refer to a same reference frame.

Patent ‘326, Claim 3: The moving picture coding apparatus of claim 1, wherein the first motion vector candidate and the second motion vector candidate refer to a same reference frame.

Application ‘722, Claim 4: The moving picture coding apparatus of Claim 1, wherein the first block is adjacent to the current block, and the second block is adjacent to the current block.

Patent ‘326, Claim 4: The moving picture coding apparatus of claim 1, wherein the first block is adjacent to the current block, and the second block is adjacent to the current block.

Application ‘722, Claim 5: A moving picture coding method for coding a current block, the moving picture coding method comprising: deriving a first motion vector candidate, the first motion vector candidate being a first motion vector that has been used to code a first block; deriving a second motion vector candidate, the second motion vector candidate being a second motion vector that has been used to code a second block that is different from the first block; generating a new candidate, the new candidate being a combination of the first motion vector candidate and the second motion vector candidate; selecting one candidate from the first motion vector candidate, the second motion vector candidate, and the new candidate; coding an index to identify the selected candidate; and coding the current block by using the selected candidate.

Patent ‘326, Claim 5: A moving picture coding method for coding a current block, the moving picture coding method comprising: deriving a first motion vector candidate, the first motion vector candidate being a first motion vector that has been used to code a first block; deriving a second motion vector candidate, the second motion vector candidate being a second motion vector that has been used to code a second block that is different from the first block; generating a new candidate, the new candidate being a combination of the first motion vector candidate and the second motion vector candidate; selecting one candidate from the first motion vector candidate, the second motion vector candidate, and the new candidate; and coding the current block by using the selected candidate.

Application ‘722, Claim 6: A non-transitory computer readable recording medium having stored thereon executable instructions, which when executed, cause a processor to perform a moving picture coding method including: deriving a first motion vector candidate, the first motion vector candidate being a first motion vector that has been used to code a first block; deriving a second motion vector candidate, the second motion vector candidate being a second motion vector that has been used to code a second block that is different from the first block; generating a new candidate, the new candidate being a combination of the first motion vector candidate and the second motion vector candidate; selecting one candidate from the first motion vector candidate, the second motion vector candidate, and the new candidate; coding an index to identify the selected candidate; and coding the current block by using the selected candidate.

Patent ‘326, Claim 6: A non-transitory computer readable recording medium having stored thereon executable instructions, which when executed, cause a processor to perform a moving picture coding method including: deriving a first motion vector candidate, the first motion vector candidate being a first motion vector that has been used to code a first block; deriving a second motion vector candidate, the second motion vector candidate being a second motion vector that has been used to code a second block that is different from the first block; generating a new candidate, the new candidate being a combination of the first motion vector candidate and the second motion vector candidate; selecting one candidate from the first motion vector candidate, the second motion vector candidate, and the new candidate; and coding the current block by using the selected candidate.

Claim Comparison: Present Application 19/019,722 vs. U.S. Patent No. 11,917,186 and U.S. Patent No. 11,356,694

Application ‘722, Claim 1: A moving picture coding apparatus that codes a current block, the moving picture coding apparatus comprising: a processor; and a non-transitory storage, the processor performing, using the non-transitory storage, processes including: deriving a first motion vector candidate, the first motion vector candidate being a first motion vector that has been used to code a first block; deriving a second motion vector candidate, the second motion vector candidate being a second motion vector that has been used to code a second block that is different from the first block; generating a new candidate, the new candidate being a combination of the first motion vector candidate and the second motion vector candidate; selecting one candidate from the first motion vector candidate, the second motion vector candidate, and the new candidate; coding an index to identify the selected candidate; and coding the current block by using the selected candidate.

Patent ‘186, Claim 1: A moving picture coding apparatus that codes a current block, the moving picture coding apparatus comprising: a processor; and a non-transitory storage, the processor performing, using the non-transitory storage, processes including: deriving a first motion vector candidate, the first motion vector candidate being a first motion vector that has been used to code a first block; deriving a second motion vector candidate, the second motion vector candidate being a second motion vector that has been used to code a second block that is different from the first block; generating a new candidate, the new candidate being a combination of the first motion vector candidate and the second motion vector candidate; and coding the current block by using a candidate from among a plurality of candidates including the new candidate.

Patent ‘694, Claim 1: A moving picture coding apparatus that codes a current block, the moving picture coding apparatus comprising: a processor; and a non-transitory storage, the processor performing, using the non-transitory storage, processes including: deriving one or more first motion vector candidates from a first block that has been coded, the one or more first motion vector candidates being used for the coding of the first block, and one of the one or more first motion vector candidates corresponding to a first prediction direction; deriving one or more second motion vector candidates from a second block that is different from the first block and has been coded, the one or more second motion vector candidates being used for the coding of the second block, and one of the one or more second motion vector candidates corresponding to a second prediction direction that is different from the first prediction direction; and coding the current block that is different from the first block and the second block, by using a combination, which is selected from (1) the one or more first motion vector candidates, (2) the one or more second motion vector candidates, and (3) a combination of the one of the one or more first motion vector candidates corresponding to the first prediction direction and the one of the one or more second motion vector candidates corresponding to the second prediction direction.

Application ‘722, Claim 2: The moving picture coding apparatus of Claim 1, wherein the first motion vector candidate is included in a first candidate list, and the second motion vector candidate is included in a second candidate list.

Patent ‘186, Claim 2: The moving picture coding apparatus of claim 1, wherein the first motion vector candidate is included in a first candidate list, and the second motion vector candidate is included in a second candidate list.

Application ‘722, Claim 3: The moving picture coding apparatus of Claim 1, wherein the first motion vector candidate and the second motion vector candidate refer to a same reference frame.

Patent ‘186, Claim 3: The moving picture coding apparatus of claim 1, wherein the first motion vector candidate and the second motion vector candidate refer to a same reference frame.

Application ‘722, Claim 4: The moving picture coding apparatus of Claim 1, wherein the first block is adjacent to the current block, and the second block is adjacent to the current block.

Patent ‘186, Claim 4: The moving picture coding apparatus of claim 1, wherein the first block is adjacent to the current block, and the second block is adjacent to the current block.

Application ‘722, Claim 5: A moving picture coding method for coding a current block, the moving picture coding method comprising: deriving a first motion vector candidate, the first motion vector candidate being a first motion vector that has been used to code a first block; deriving a second motion vector candidate, the second motion vector candidate being a second motion vector that has been used to code a second block that is different from the first block; generating a new candidate, the new candidate being a combination of the first motion vector candidate and the second motion vector candidate; selecting one candidate from the first motion vector candidate, the second motion vector candidate, and the new candidate; coding an index to identify the selected candidate; and coding the current block by using the selected candidate.

Patent ‘186, Claim 5: A moving picture coding method for coding a current block, the moving picture coding method comprising: deriving a first motion vector candidate, the first motion vector candidate being a first motion vector that has been used to code a first block; deriving a second motion vector candidate, the second motion vector candidate being a second motion vector that has been used to code a second block that is different from the first block; generating a new candidate, the new candidate being a combination of the first motion vector candidate and the second motion vector candidate; and coding the current block by using a candidate from among a plurality of candidates including the new candidate.

Patent ‘694, Claim 2: A moving picture coding method for coding a current block, the moving picture coding method comprising: deriving one or more first motion vector candidates from a first block that has been coded, the one or more first motion vector candidates being used for the coding of the first block, and one of the one or more first motion vector candidates corresponding to a first prediction direction; deriving one or more second motion vector candidates from a second block that is different from the first block and has been coded, the one or more second motion vector candidates being used for the coding of the second block, and one of the one or more second motion vector candidates corresponding to a second prediction direction that is different from the first prediction direction; and coding the current block that is different from the first block and the second block, by using a combination, which is selected from (1) the one or more first motion vector candidates, (2) the one or more second motion vector candidates, and (3) a combination of the one of the one or more first motion vector candidates corresponding to the first prediction direction and the one of the one or more second motion vector candidates corresponding to the second prediction direction.

Application ‘722, Claim 6: A non-transitory computer readable recording medium having stored thereon executable instructions, which when executed, cause a processor to perform a moving picture coding method including: deriving a first motion vector candidate, the first motion vector candidate being a first motion vector that has been used to code a first block; deriving a second motion vector candidate, the second motion vector candidate being a second motion vector that has been used to code a second block that is different from the first block; generating a new candidate, the new candidate being a combination of the first motion vector candidate and the second motion vector candidate; selecting one candidate from the first motion vector candidate, the second motion vector candidate, and the new candidate; coding an index to identify the selected candidate; and coding the current block by using the selected candidate.

Patent ‘186, Claim 6: A non-transitory computer readable recording medium having stored thereon executable instructions, which when executed, cause a processor to perform a moving picture coding method including: deriving a first motion vector candidate, the first motion vector candidate being a first motion vector that has been used to code a first block; deriving a second motion vector candidate, the second motion vector candidate being a second motion vector that has been used to code a second block that is different from the first block; generating a new candidate, the new candidate being a combination of the first motion vector candidate and the second motion vector candidate; and coding the current block by using a candidate from among a plurality of candidates including the new candidate.

Patent ‘694, Claim 3: A non-transitory computer readable recording medium having stored thereon executable instructions, which when executed, cause a processor to perform a moving picture coding method including: deriving one or more first motion vector candidates from a first block that has been coded, the one or more first motion vector candidates being used for the coding of the first block, and one of the one or more first motion vector candidates corresponding to a first prediction direction; deriving one or more second motion vector candidates from a second block that is different from the first block and has been coded, the one or more second motion vector candidates being used for the coding of the second block, and one of the one or more second motion vector candidates corresponding to a second prediction direction that is different from the first prediction direction; and coding the current block that is different from the first block and the second block, by using a combination, which is selected from (1) the one or more first motion vector candidates, (2) the one or more second motion vector candidates, and (3) a combination of the one of the one or more first motion vector candidates corresponding to the first prediction direction and the one of the one or more second motion vector candidates corresponding to the second prediction direction.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALLEN C WONG, whose telephone number is (571) 272-7341. The examiner can normally be reached on Flex Monday-Thursday 9:30am-7:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sath V Perungavoor, can be reached at 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ALLEN C WONG/ Primary Examiner, Art Unit 2488
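Stepping back from the claim-chart mechanics, the subject matter common to these rejections (deriving two motion-vector candidates from previously coded blocks, generating a combined new candidate, selecting one, and coding an index identifying it) can be sketched as follows. This is a simplified illustration under assumed names, loosely modeled on bi-predictive merge-candidate construction in HEVC-style codecs, not the applicant's or any cited patent's actual implementation.

```python
# Hedged sketch of the claimed flow; all names and the combination rule
# are illustrative assumptions, not taken from the application.
from typing import NamedTuple, Optional

class MV(NamedTuple):
    x: int
    y: int

class Candidate(NamedTuple):
    l0: Optional[MV]  # motion for the first prediction direction (list 0)
    l1: Optional[MV]  # motion for the second prediction direction (list 1)

def combine(first: Candidate, second: Candidate) -> Candidate:
    # "Generating a new candidate": take list-0 motion from the first
    # candidate and list-1 motion from the second (a bi-predictive mix).
    return Candidate(l0=first.l0, l1=second.l1)

def select_and_code(candidates, cost):
    # "Selecting one candidate" and identifying it by index; in a real
    # encoder the index would then be entropy-coded into the bitstream.
    idx = min(range(len(candidates)), key=lambda i: cost(candidates[i]))
    return idx, candidates[idx]

first = Candidate(l0=MV(2, 1), l1=None)    # derived from one coded block
second = Candidate(l0=None, l1=MV(-1, 3))  # derived from a different block
new = combine(first, second)               # the combined "new candidate"

# Toy cost: prefer bi-predictive candidates (both directions populated).
def bi_pred_cost(c):
    return 0 if (c.l0 is not None and c.l1 is not None) else 1

idx, chosen = select_and_code([first, second, new], bi_pred_cost)
```

The distinction the Examiner leans on is visible in this sketch: claim 1 of Patent ‘186 stops at coding with "a candidate from among a plurality of candidates including the new candidate," while the ‘722 claims additionally recite the explicit selection step and the coded index that Tsai ‘613 is cited to supply.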

Prosecution Timeline

Jan 14, 2025
Application Filed
Mar 05, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by the same examiner in similar technology

Patent 12604009
IMAGE ENCODING/DECODING METHOD AND APPARATUS
2y 5m to grant Granted Apr 14, 2026
Patent 12598321
ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12587671
VIDEO ENCODING APPARATUS AND A VIDEO DECODING APPARATUS
2y 5m to grant Granted Mar 24, 2026
Patent 12581134
FEATURE ENCODING/DECODING METHOD AND DEVICE, AND RECORDING MEDIUM STORING BITSTREAM
2y 5m to grant Granted Mar 17, 2026
Patent 12581091
METHODS AND APPARATUS OF ENCODING/DECODING VIDEO PICTURE DATA
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
95%
With Interview (+11.8%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 805 resolved cases by this examiner. Grant probability derived from career allow rate.
