DETAILED ACTION
1. This Office action is in response to U.S. Patent Application No. 18/920,509, filed on 10/18/2024, with an effective filing date of 9/12/2018. Claims 1-3 are pending.
Double Patenting
2. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
3. Claims 1-3 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3 of U.S. Patent No. 12,143,623, claims 1-3 of U.S. Patent No. 11,716,485, and claims 1-3 of U.S. Patent No. 11,212,547. Although the claims at issue are not identical, they are not patentably distinct from each other.
Current Application, claim 1:
1. A method of decoding motion information, the method comprising: obtaining, from a bitstream, information about a precision of a pixel unit in a predetermined mode; obtaining, from the bitstream, information indicating a disparity distance for a current block; obtaining, from the bitstream, information indicating a disparity direction for the current block; if the precision of the pixel unit is a predetermined precision, determining a first disparity distance based on the obtained information indicating the disparity distance; if the precision of the pixel unit is not the predetermined precision, determining a second disparity distance based on the obtained information indicating the disparity distance; if the current block is bi-predicted: determining a first motion vector for a first list of the current block by changing a first base motion vector for the first list by a first offset, wherein the first offset has a size corresponding to the first disparity distance or the second disparity distance, and a sign of the first offset corresponds to the information indicating the disparity direction; determining a second offset by scaling the first offset based on a picture order count (POC) of a current picture, a POC of a first reference picture in the first list, and a POC of a second reference picture in a second list; and determining a second motion vector for the second list of the current block by changing a second base motion vector for the second list by the second offset, wherein the second disparity distance is larger than the first disparity distance,
and wherein the information indicating the disparity direction indicates one of a x-axis direction and a y-axis direction, and one of (+) sign and (−) sign.

U.S. Patent No. 12,143,623, claim 1:
1. A method of decoding motion information, the method comprising: obtaining, from a bitstream, information about a precision of a pixel unit in a predetermined mode; obtaining, from the bitstream, information indicating a disparity distance for a current block; obtaining, from the bitstream, information indicating a disparity direction for the current block; if the precision of the pixel unit is a predetermined precision, determining a first disparity distance based on the obtained information indicating the disparity distance; if the precision of the pixel unit is not the predetermined precision, determining a second disparity distance based on the obtained information indicating the disparity distance; if the current block is bi-predicted: determining a first motion vector for a first list of the current block by changing a first base motion vector for the first list by a first offset, wherein the first offset has a size corresponding to the first disparity distance or the second disparity distance, and a sign of the first offset corresponds to the information indicating the disparity direction; determining a second offset by scaling the first offset based on a picture order count (POC) of a current picture, a POC of a first reference picture in the first list, and a POC of a second reference picture in a second list; and determining a second motion vector for the second list of the current block by changing a second base motion vector for the second list by the second offset, wherein the second disparity distance is larger than the first disparity distance,
and wherein at least one of the first disparity distance and the second disparity distance has a value of a power of 2.

Current Application, claim 2:
A method of encoding motion information, the method comprising: determining a precision of a pixel unit in a predetermined mode; if a current block is bi-predicted, determining a first base motion vector for a first list and a second base motion vector for a second list; determining a first offset corresponding to a difference between the first base motion vector and a first motion vector for the first list of the current block; generating a bitstream comprising information about the precision of the pixel unit, information indicating a disparity distance corresponding to a size of the first offset and information indicating a disparity direction corresponding to a sign of the first offset, wherein if the precision of the pixel unit is a predetermined precision, the information indicating the disparity distance indicates a first disparity distance, and if the precision of the pixel unit is not the predetermined precision, the information indicating the disparity distance indicates a second disparity distance, wherein a second offset corresponding to a difference between the second base motion vector and a second motion vector for the second list of the current block is determined by scaling the first offset based on a picture order count (POC) of a current picture, a POC of a first reference picture in the first list, and a POC of a second reference picture in the second list, wherein the second disparity distance is larger than the first disparity distance,
and wherein the information indicating the disparity direction indicates one of a x-axis direction and a y-axis direction, and one of (+) sign and (−) sign.

U.S. Patent No. 12,143,623, claim 2:
2. A method of encoding motion information, the method comprising: determining a precision of a pixel unit in a predetermined mode; if a current block is bi-predicted, determining a first base motion vector for a first list and a second base motion vector for a second list; determining a first offset corresponding to a difference between the first base motion vector and a first motion vector for the first list of the current block; generating a bitstream comprising information about the precision of the pixel unit, information indicating a disparity distance corresponding to a size of the first offset and information indicating a disparity direction corresponding to a sign of the first offset, wherein if the precision of the pixel unit is a predetermined precision, the information indicating the disparity distance indicates a first disparity distance, and if the precision of the pixel unit is not the predetermined precision, the information indicating the disparity distance indicates a second disparity distance, wherein a second offset corresponding to a difference between the second base motion vector and a second motion vector for the second list of the current block is determined by scaling the first offset based on a picture order count (POC) of a current picture, a POC of a first reference picture in the first list, and a POC of a second reference picture in the second list, wherein the second disparity distance is larger than the first disparity distance,
and wherein at least one of the first disparity distance and the second disparity distance has a value of a power of 2.
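For orientation only, the bi-prediction offset derivation recited in the claims above can be sketched in Python. Every name below is hypothetical, and the mirroring formula is an assumption; the claims recite only that the second offset is determined by scaling the first offset based on the three POC values.

```python
def derive_motion_vectors(base_mv_l0, base_mv_l1,
                          disparity_distance, direction_idx,
                          poc_current, poc_ref_l0, poc_ref_l1):
    """Sketch of deriving both motion vectors of a bi-predicted block.

    direction_idx selects one of four offsets (+x, -x, +y, -y), matching
    "one of a x-axis direction and a y-axis direction, and one of (+) sign
    and (-) sign" in the claims. The four-entry table is an assumption.
    """
    # Axis/sign table: (dx, dy) unit vectors for the four coded directions.
    directions = {0: (+1, 0), 1: (-1, 0), 2: (0, +1), 3: (0, -1)}
    dx, dy = directions[direction_idx]

    # First offset: size from the disparity distance, axis and sign from
    # the disparity-direction information.
    offset_l0 = (dx * disparity_distance, dy * disparity_distance)

    # Second offset: scale the first offset by the ratio of POC differences
    # (one common choice; the claims say only "scaling ... based on" POCs).
    td0 = poc_current - poc_ref_l0
    td1 = poc_current - poc_ref_l1
    scale = td1 / td0
    offset_l1 = (offset_l0[0] * scale, offset_l0[1] * scale)

    mv_l0 = (base_mv_l0[0] + offset_l0[0], base_mv_l0[1] + offset_l0[1])
    mv_l1 = (base_mv_l1[0] + offset_l1[0], base_mv_l1[1] + offset_l1[1])
    return mv_l0, mv_l1
```

For example, with a current POC of 8 and reference POCs of 4 (list 0) and 12 (list 1), the POC differences have equal magnitude and opposite sign, so the list-1 offset is simply the mirrored list-0 offset.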
Allowable Subject Matter
4. After analyzing the current application, the examiner concluded that its novelty lies in the manner in which motion information is acquired. A predicted motion vector of a current block is determined, together with information indicating a disparity distance. An offset corresponding to that information is obtained based on a comparison of a base pixel unit and a minimum pixel unit, and the distance is indicated at the scale of the motion vector of the current block. The motion vector of the current block is determined by using the predicted motion vector: a predicted motion vector candidate is determined, and the base motion vector of the current block is changed by the scaled offset selected from among the candidates.
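The precision-dependent distance selection described above might be sketched as follows. The tables and the quarter-pel threshold are purely illustrative assumptions; the claims require only that the second disparity distance is larger than the first and that at least one has a value of a power of 2.

```python
QUARTER_PEL = 0.25  # hypothetical "predetermined precision"

def disparity_distance(index, pixel_unit_precision):
    """Map a coded disparity-distance index to an actual distance.

    The first table applies at the predetermined precision; the second
    holds larger distances ("the second disparity distance is larger than
    the first disparity distance"), all powers of 2 as in the claims.
    """
    first_table = [1, 2, 4, 8]
    second_table = [4, 8, 16, 32]
    if pixel_unit_precision == QUARTER_PEL:
        return first_table[index]
    return second_table[index]
```

Under these assumed tables, the same coded index yields a distance four times larger when the pixel-unit precision is not the predetermined precision.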
The prior art of record, in particular Chen (US 2015/0195572 A1) in view of Jeong et al. (US 2017/0339425 A1), does not disclose, with respect to claim 1: and if the precision of the pixel unit is not the predetermined precision, the information indicating the disparity distance indicates a second disparity distance, wherein a second offset corresponding to a difference between the second base motion vector and a second motion vector for the second list of the current block is determined by scaling the first offset based on a picture order count (POC) of a current picture, a POC of a first reference picture in the first list, and a POC of a second reference picture in the second list, wherein the second disparity distance is larger than the first disparity distance,
and wherein the information indicating the disparity direction indicates one of a x-axis direction and a y-axis direction, and one of (+) sign and (−) sign as claimed.
Rather, Chen discloses a method that involves determining that a current texture layer of video data is dependent on a depth layer of the video data based on direct dependent layers signaled in a video parameter set. The current texture layer is processed using the depth layer. A block of the current texture layer is predicted using a depth-oriented NBDV process or a backward-warping view synthesis prediction process using information obtained from the depth layer.
Similarly, Jeong et al. discloses a method that involves determining a motion vector of a current block and an index pointing to a prediction candidate from a bitstream. A prediction candidate list is determined based on the prediction mode information. The motion vector indicated by the index is selected from the list when the prediction mode information indicates a predetermined prediction mode. A predicted motion vector of the current block is determined based on one of the pieces of motion estimation information associated with the motion vector. The motion vector of the current block is determined based on the predicted motion vector.
The same reasoning applies to claim 2.
Conclusion
5. Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRFAN HABIB whose telephone number is (571) 270-7325. The examiner can normally be reached Mon-Thu 9AM-7PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jay Patel, can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Irfan Habib/Examiner, Art Unit 2485