DETAILED ACTION
1. This communication is in response to the submission having a mailing date of 10/04/2024. A three (3) month Shortened Statutory Period for Response has been set.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Acknowledgements
3. Upon entry, claims 1-20 appear pending for examination, of which claims 1, 18, 19, and 20 are the four (4) parallel independent claims of record.
Information Disclosure Statement
4. The Information Disclosure Statement (IDS) submitted on 10/04/2024 is in compliance with the provisions of 37 CFR 1.97 and has been considered by the Examiner.
Specification
5. The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Drawings
6. The drawings submitted on 10/04/2024 have been accepted and considered under 37 CFR 1.121(d).
Claim Interpretation
7.3. Further, to be given patentable weight, the recording medium and the bitstream (i.e. descriptive material) in the claims must be in a functional relationship. A functional relationship can be found where the descriptive material performs some function with respect to the recording medium with which it is associated. See MPEP 2111.05(I)(A). When a claimed “computer-readable medium merely serves as a support for information or data, no functional relationship exists”. MPEP §2111.05(III). The recording medium storing the claimed bitstream in claim 20 merely serves as a support for the storage of the bitstream and provides no functional relationship between the stored bitstream and the storage medium. Therefore the claimed bitstream, whose scope is implied by the method steps, is non-functional descriptive material and is given no patentable weight. MPEP §2111.05(III). Thus, the claim scope is just a storage medium storing data.
7.4. Examiner suggests that Applicant rewrite above-cited claim 20 in a correct independent form, in accordance with MPEP 2111.05.
Double Patenting
5. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g. In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
5.1. A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
5.2. Individuals associated with the filing and prosecution of the instant patent application have a duty to disclose information within their knowledge as to other copending United States applications which are "material to patentability" of the application in question. See MPEP §2001.06(b) for more details.
5.3. Claims 1, 18, 19, and 20 of the instant Application 18/907,218, directed to a method, an apparatus, and computer-readable media of the same, are provisionally rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over the analogous claims of copending Application 18/906,098. Although the conflicting claims are not identical, they are not patentably distinct from each other, because the claims are of similar scope and/or recite similar variations of the same claim language.
Instant Application 18/907,218:
Claim 1. A method for video processing, comprising: generating, for a conversion between a current video block of a video and a bitstream of the video, a motion candidate list for the current video block, an adjusting process being applied on a plurality of samples of the current video block; and performing the conversion based on the motion candidate list.
Claim 18. An apparatus for video processing comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to perform acts comprising: generating, for a conversion between a current video block of a video and a bitstream of the video, a motion candidate list for the current video block, an adjusting process being applied on a plurality of samples of the current video block; and performing the conversion based on the motion candidate list.
Claim 19. A non-transitory computer-readable storage medium storing instructions that cause a processor to perform acts comprising: generating, for a conversion between a current video block of a video and a bitstream of the video, a motion candidate list for the current video block, an adjusting process being applied on a plurality of samples of the current video block; and performing the conversion based on the motion candidate list.
Claim 20. A non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by an apparatus for video processing, wherein the method comprises: generating a motion candidate list for a current video block of the video, an adjusting process being applied on a plurality of samples of the current video block; and generating the bitstream based on the motion candidate list.
Reference Application 18/906,098:
Claim 1. A method for video processing, comprising: performing a conversion between a current video block of a video and a bitstream of the video, wherein a first syntax element is comprised in the bitstream and indicates whether an adjusting process is applied on a plurality of samples of the current video block.
Claim 18. An apparatus for video processing comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to perform acts comprising: performing a conversion between a current video block of a video and a bitstream of the video, wherein a first syntax element is comprised in the bitstream and indicates whether an adjusting process is applied on a plurality of samples of the current video block.
Claim 19. A non-transitory computer-readable storage medium storing instructions that cause a processor to perform acts comprising: performing a conversion between a current video block of a video and a bitstream of the video, wherein a first syntax element is comprised in the bitstream and indicates whether an adjusting process is applied on a plurality of samples of the current video block.
Claim 20. A non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by an apparatus for video processing, wherein the method comprises: performing a conversion between a current video block of the video and the bitstream, wherein a first syntax element is comprised in the bitstream and indicates whether an adjusting process is applied on a plurality of samples of the current video block.
5.4. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the claims of the instant Application 18/907,218 with those of the above reference Application 18/906,098, because although the conflicting claims are not identical, they are not patentably distinct from each other: the claim language is of similar scope and/or recites similar variations of the same claim language.
35 USC 102
8. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
8.1. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
8.2. Claim 20 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by the standard “Versatile Video Coding” (ITU-T H.266, edition 1.0; hereafter “VVC”).
Claim 20. VVC discloses - A non-transitory computer-readable recording medium (e.g. see transcoding and “storage media” methodology of the same, in accordance with the VVC codec format; [7.4.2.1]) storing a bitstream of a video which is generated by a method performed by an apparatus for video processing, wherein the method comprises: (e.g. see encoder and decoder methodology, in accordance with the VVC codec format; [Summary])
generating a motion candidate list for a current video block of the video, an adjusting process being applied on a plurality of samples of the current video block; (e.g. see target block processing (i.e. adjusting based on information of a video block) in accordance with the “Syntax semantics”; [Chap. 7.3; 7.4]);
and generating the bitstream based on the motion candidate list; (e.g. see bitstream including prediction data (CU, PU) list construction in at least [Chap. 7.4. and 8.5.]).
35 USC 103
8.3. Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over “Versatile Video Coding” (ITU-T H.266, edition 1.0; hereafter “VVC”), in view of Chen et al., “Intra Block Copy Mirror Mode for Screen Content Coding in VVC” (hereafter “Chen”).
Claim 1. VVC discloses the invention substantially as claimed - A method for video processing, comprising: (e.g. see details for encoder [VVC; 7.4.2.3; A.1] and decoder [VVC; C.2 and C.5] techniques of the same.)
generating, for a conversion between a current video block of a video and a bitstream of the video, (e.g. see encoder and decoder methodology, in accordance with the VVC codec format; [Summary]) a motion candidate list for the current video block, (e.g. see prediction (PU) and candidate list construction in at least [Chap. 7.4. and 8.5.]).
an adjusting process being applied on a plurality of samples of the current video block; (e.g. see target block processing (i.e. adjusting based on information of a video block of the video, including information of a coding unit (CU), prediction unit (PU), and/or transform unit (TU) of the video) in accordance with the “Syntax semantics”; [Chap. 7.3; 7.4]); and performing the conversion based on the motion candidate list; (e.g. see candidate list construction in at least [Chap. 7.4. and 8.5.]).
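For illustration of the mapped limitations only, the following minimal sketch (in Python, with hypothetical names and values that are not drawn from VVC, Chen, or the instant claims) shows one simple way a motion candidate list may be built for a block and an adjusting process (here, a horizontal flip) applied to the block's samples:

def build_motion_candidate_list(spatial_neighbors, history_buffer, max_candidates=6):
    # Collect candidate motion vectors from spatial neighbors and a history buffer,
    # skipping empty entries and duplicates (hypothetical, simplified pruning).
    candidates = []
    for mv in list(spatial_neighbors) + list(history_buffer):
        if mv is not None and mv not in candidates:
            candidates.append(mv)
        if len(candidates) == max_candidates:
            break
    return candidates

def adjust_samples(block):
    # Example adjusting process: flip each row of samples horizontally.
    return [row[::-1] for row in block]

# Hypothetical usage:
neighbors = [(1, 0), (0, -2), None]
history = [(1, 0), (3, 4)]
candidate_list = build_motion_candidate_list(neighbors, history)  # [(1, 0), (0, -2), (3, 4)]
current_block = [[10, 20], [30, 40]]
adjusted_block = adjust_samples(current_block)                    # [[20, 10], [40, 30]]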
Claim 2. VVC discloses - The method of claim 1, wherein (e.g. see details for encoder [VVC; 7.4.2.3; A.1] and decoder [VVC; C.5] techniques of the same.)
the motion candidate list is different from an intra block copy (IBC) merge motion candidate list (e.g. see construction of candidate lists, including prediction tools “IBC mode”, “IBC merge mode” [Chap. 7.4. and 8.5.]).
However, this version of VVC does not disclose the “IBC-AMVP” prediction tool. For additional clarification, and in the same field of endeavor, Chen teaches in detail the IBC, IBC merge, and IBC-AMVP tools for the VVC codec, as described in at least [Chen; page 1; sect. 5].
Chen specifically teaches - and an IBC advanced motion vector prediction (AMVP) motion candidate list for the current video block; (e.g. see the IBC AMVP mode, as a new addition to the legacy standard tools; [Chen; page 1, section III].)
Therefore, it would have been obvious to one skilled in the art, before the effective filing date of the claimed invention, to modify the papers of VVC with the IBC AMVP mode of Chen, in order to lower complexity and improve coding efficiency; [Chen; page 1].
Claim 3. VVC/Chen discloses - The method of claim 1, wherein a motion candidate in the motion candidate list is selected from a plurality of motion candidates, (e.g. candidate list constructed using a plurality of prediction tools, “IBC mode”, “IBC merge mode”; [VVC; Chap. 7.4. and 8.5.]);
samples of each of the plurality of motion candidates are adjusted in the same way as the plurality of samples of the current video block, (e.g. adjustment based on information of a video block of the video, based on syntax; [VVC; Chap. 7.3; 7.4]);
or wherein a motion candidate in the motion candidate list is selected from a plurality of motion candidates, (e.g. candidate list constructed using a plurality of prediction tools, “IBC mode”, “IBC merge mode”; [VVC; Chap. 7.4. and 8.5.]);
the adjusting process is applied on samples of each of the plurality of motion candidates, or wherein the motion candidate list is generated independently from the adjusting process; (e.g. adjustment based on information of a video block of the video, based on syntax; [VVC; Chap. 7.3; 7.4]).
Claim 4. VVC/Chen discloses - The method of claim 1, wherein the motion candidate list comprises a first motion candidate non-adjacent to the current video block. (The same rationale and motivation apply as given to claims 1-2 above. See also construction of candidate lists, including a plurality of prediction tools, “IBC mode”, “IBC merge mode”, etc.; [VVC; Chap. 7.4.; 8.5.].)
Claim 5. VVC/Chen discloses - The method of claim 4, wherein the adjusting process is applied on the first motion candidate, or wherein samples of the first motion candidate are adjusted in the same way as the plurality of samples of the current video block, (e.g. adjustment based on information of a video block of the video, based on syntax; [VVC; Chap. 7.3; 7.4]); or wherein the first motion candidate is added into the motion candidate list independently from whether the adjusting process is applied on samples of the first motion candidate; (The same rationale and motivation apply as given to claims 1-2 above. See also construction of candidate lists, including at least one of the prediction tools “IBC mode”, “IBC merge mode”, etc.; [VVC; Chap. 7.4. and 8.5.].)
Claim 6. VVC/Chen discloses - The method of claim 1, wherein the motion candidate list comprises a motion candidate generated in accordance with a rule based on an averaging process, a clipping process or a scaling process, (e.g. see scaling factor derivation; [VVC; Chap. 8.7.4]); or wherein the motion candidate list is an IBC merge motion candidate list or an IBC AMVP motion candidate list for the current video block. (The same rationale and motivation apply as given to claims 1-2 above. See also scaling steps of the same, in accordance with the prediction mode implemented; [VVC; Chap. 8.7.4; page 324].)
Claim 7. VVC/Chen discloses - The method of claim 1, wherein a further motion candidate list is generated for a further video block of the video (e.g. candidate list constructed using a plurality of prediction tools, “IBC mode”, “IBC merge mode”; [VVC; Chap. 7.4. and 8.5.]); based on information regarding how to adjust samples of a video block in the adjusting process, and the further video block is different from the current video block. (e.g. adjustment based on information of a video block of the video, based on syntax; [VVC; Chap. 7.3; 7.4].)
Claim 8. VCC/Chen discloses - The method of claim 7, wherein the further motion candidate list comprises the information associated with a motion candidate in the further motion candidate list,
or wherein if the adjusting process is applied on samples of the further video block, samples of each motion candidate in the further motion candidate list are adjusted in the same way as the samples of the further video block,
or wherein if the adjusting process is applied on samples of the further video block, the adjusting process is applied (e.g. adjustment based on information of a video block of the video, based on syntax; [VVC; Chap. 7.3; 7.4]); on samples of each motion candidate in the further motion candidate list, (e.g. candidate list constructed using a plurality of prediction tools, “IBC mode”, “IBC merge mode”; [VVC; Chap. 7.4. and 8.5.]);
or wherein if the adjusting process is not applied on samples of the further video block, the adjusting process is not applied on samples of each motion candidate in the further motion candidate list, (e.g. adjustment based on video block syntax; [VVC; Chap. 7.3; 7.4]) associated with the candidate list constructed using a plurality of prediction tools, “IBC mode”, “IBC merge mode”; [VVC; Chap. 7.4. and 8.5.];
or wherein an adaptive reordering of merge candidates (ARMC) of the further video block is dependent on the information, or wherein an adaptive reordering of merge candidates (ARMC) is applied on the further video block independently from the information; (e.g. see adaptive ordering of the syntax [e.g. Table 50; page 389], which by definition comprises information of a coding unit (CU), prediction unit (PU), and/or transform unit (TU) of the target block, as described in section “Syntax semantics”; [VVC; Chap. 7.3; 7.4].)
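For illustration of the adaptive reordering concept only, the following minimal sketch (in Python, with hypothetical names and toy cost values that are not drawn from VVC, Chen, or the instant claims) re-sorts merge candidates by an estimated cost, lowest first:

def reorder_merge_candidates(candidates, cost_fn):
    # Return the candidate list sorted by ascending estimated cost
    # (e.g. a template-matching error computed elsewhere).
    return sorted(candidates, key=cost_fn)

# Hypothetical usage with a toy cost table:
candidates = [(0, -2), (1, 0), (3, 4)]
toy_cost = {(0, -2): 7.0, (1, 0): 2.5, (3, 4): 4.0}
reordered = reorder_merge_candidates(candidates, lambda mv: toy_cost[mv])
# reordered == [(1, 0), (3, 4), (0, -2)]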
Claim 9. VCC/Chen discloses - The method of claim 8, wherein the adjusting process is applied on samples of the further video block,
or wherein samples of each motion candidate in the further motion candidate list are adjusted in the same way as the samples of the further video block, or wherein samples of a motion candidate in the further motion candidate list are adjusted in a way different from the samples of the further video block; (e.g. adjustment based on information of a video block of the video, based on syntax; [VVC; Chap. 7.3; 7.4]), applied on samples of each motion candidate in the further motion candidate list, (e.g. candidate list constructed using a plurality of prediction tools, “IBC mode”, “IBC merge mode”; [VVC; Chap. 7.4; 8.5.]).
Claim 10. VVC/Chen discloses - The method of claim 1, wherein a further motion candidate list is generated for a further video block of the video independently from information regarding how to adjust samples of a video block in the adjusting process, and the further video block is different from the current video block; (e.g. see candidate list independently constructed using a plurality of prediction tools, “IBC mode”, “IBC merge mode”; [VVC; Chap. 7.4. and 8.5.]).
Claim 11. VVC/Chen discloses - The method of claim 1, further comprising: determining, based on coded information of the video, cost information associated with an adjusting process in which samples of a video block are adjusted; (e.g. information associated with cost during adjustment may be “CU/TU reducing search spaces/partitions”; use of the best single and/or combined “prediction tool, PU”; use of statistical analysis such as SAD/SATD for RDO (rate-distortion optimization) mode selection, etc.; [VVC; Chap. 7.4. and 8.5.]);
determining target information regarding the adjusting process for the current video block based on the cost information; and performing the conversion based on the target information; (e.g. adjustment based on information of a video block of the video, based on syntax; [VVC; Chap. 7.3; 7.4]).
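For illustration of a cost-based selection only, the following minimal sketch (in Python, with hypothetical names and values that are not drawn from VVC, Chen, or the instant claims) computes a SAD cost for a block with and without an adjusting process and keeps the cheaper option:

def sad(block_a, block_b):
    # Sum of absolute sample differences between two equally sized blocks.
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def select_adjusting_mode(current, prediction):
    # Compare the cost of the block as-is against a flipped (adjusted) version.
    flipped = [row[::-1] for row in current]
    cost_plain = sad(current, prediction)
    cost_flipped = sad(flipped, prediction)
    return ("flip", cost_flipped) if cost_flipped < cost_plain else ("none", cost_plain)

# Hypothetical usage:
cur = [[10, 20], [30, 40]]
pred = [[19, 11], [41, 29]]
print(select_adjusting_mode(cur, pred))  # ('flip', 4)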
Claim 12. VVC/Chen discloses - The method of claim 11, wherein the target information comprises how to adjust a plurality of samples of the current video block, or wherein the target information comprises whether to flip a plurality of samples of the current video block horizontally or vertically. (The same rationale and motivation apply (at least one feature mapped) as given to claims 1-2 above. See also the “flipping” technique in Figs. 7 and 8; [Chen].)
Claim 13. VVC/Chen discloses - The method of claim 1, wherein the plurality of samples comprises one of the following: reconstruction samples of the current video block, original samples of the current video block, or prediction samples of the current video block, and/or wherein the adjusting process comprises at least one of the following: reordering the plurality of samples, flipping the plurality of samples, shifting the plurality of samples, rotating the plurality of samples, or transforming the plurality of samples. (The same rationale and motivation apply (i.e. wherein at least one feature is mapped) as given to claims 1-2 above. See also ordering, shifting, transform, etc. techniques similarly implemented in at least [Chap. 7.3].)
Claim 14. VVC/Chen discloses - The method of claim 13, wherein the plurality of samples are transformed according to one of the following: a M-parameter model, M being an integer, an affine model, a linear model, or a projection model, or wherein the plurality of samples are flipped along a horizontal direction or a vertical direction; (e.g. see the affine model implemented in [VVC; Chap. 7.4.] and also similar models in [Chen; page 2].)
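For illustration of the named sample transforms only, the following minimal sketch (in Python, with hypothetical names and values that are not drawn from VVC, Chen, or the instant claims) shows a vertical flip of a block and a simple two-parameter linear model applied per sample:

def flip_vertical(block):
    # Reverse the row order of the block (vertical flip).
    return block[::-1]

def apply_linear_model(block, a, b):
    # Map every sample x to a*x + b (a simple two-parameter model).
    return [[a * x + b for x in row] for row in block]

# Hypothetical usage:
blk = [[1, 2], [3, 4]]
print(flip_vertical(blk))             # [[3, 4], [1, 2]]
print(apply_linear_model(blk, 2, 1))  # [[3, 5], [7, 9]]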
Claim 15. VVC/Chen discloses - The method of claim 1, wherein the adjusting process is applied based on at least one of the following: information of a video block of the video, information of a coding unit (CU) of the video, information of a prediction unit (PU) of the video, or information of a transform unit (TU) of the video, (i.e. adjusting, based on information of a video block of the video, including coding unit (CU), prediction unit (PU), and/or transform unit (TU) information of the video, based on “Syntax semantics”; [Chap. 7.3; 7.4]); or wherein the adjusting process is applied independently from at least one of the following: information of a tile of the video, information of a slice of the video, or information of a picture of the video; (i.e. adjusting, based on information of a video block of the video; [VVC; Chap. 7.3; 7.4]).
Claim 16. VVC/Chen discloses - The method of claim 1, wherein the conversion includes encoding the current video block into the bitstream. (The same rationale and motivation apply as given to claims 1-2 above. See also details for encoder [VVC; 7.4.2.3; A.1] and decoder [VVC; C.5] techniques of the same.)
Claim 17. VVC/Chen discloses - The method of claim 1, wherein the conversion includes decoding the current video block from the bitstream. (The same rationale and motivation apply as given to claims 1-2 above. See also encoder [7.4.2.3; A.1] and decoder [C.5] techniques of the same in [VVC].)
Claim 18. VVC/Chen discloses - An apparatus for video processing comprising a processor and a non-transitory memory with instructions thereon, (e.g. see “storage media” methodology of the same, in accordance with the VVC codec format; [7.4.2.1]) wherein the instructions upon execution by the processor, cause the processor to perform acts comprising: generating, for a conversion between a current video block of a video and a bitstream of the video, a motion candidate list for the current video block, an adjusting process being applied on a plurality of samples of the current video block; and performing the conversion based on the motion candidate list. (The current claim recites all the same elements as claim 1 above, but in apparatus form, and is therefore rejected on the same premise.)
Claim 19. VVC/Chen discloses - A non-transitory computer-readable storage medium storing instructions that cause a processor to perform acts comprising: (e.g. see “storage media” of the same, in accordance with the VVC codec format; [7.4.2.1]) generating, for a conversion between a current video block of a video and a bitstream of the video, a motion candidate list for the current video block, an adjusting process being applied on a plurality of samples of the current video block; and performing the conversion based on the motion candidate list. (The current claim recites all the same elements as claims 1 and 20 above, but in CRM form, and is therefore rejected on the same premise.)
Prior Art Citations
9. The following prior art, made of record and not relied upon, is considered pertinent to applicant's disclosure:
9.1. Patent documentation:
US 11,627,333 B2 Zhang; Li et al. H04N19/30; H04N19/82; H04N19/117;
US 11,882,274 B2 Zhang; Li et al. H04N19/46; H04N19/159; H04N19/11;
US 11,895,318 B2 Zhang; Li et al. H04N19/70; H04N19/159; H04N19/184;
US 11,758,142 B2 Koo; Moonmo et al. H04N19/11; H04N19/12; H04N19/159;
US 11,902,530 B2 Koo; Moonmo et al. H04N19/132; H04N19/593; H04N19/105;
US 11,943,448 B2 Zhao; Liang et al. H04N19/513; H04N19/70; H04N19/172;
US 12,212,756 B2 Koo; Moonmo et al. H04N19/132; H04N19/157; H04N19/70;
US 12,101,509 B2 Koo; Moonmo et al. H04N19/159; H04N19/11; H04N19/12;
US 12,355,962 B2 Xu; Xiaozhong et al. H04N19/119; H04N19/105; H04N19/70;
US 11,503,336 B2 Xu; Xiaozhong et al. H04N19/176; H04N19/61; H04N19/105;
US 11,418,777 B2 Xu; Xiaozhong et al. H04N19/105; H04N19/11; H04N19/159;
US 12,069,282 B2 Xu; Jizheng et al. H04N19/593; H04N19/176; H04N19/96;
US 12,439,044 B2 Deng; et al. H04N19/60; H04N19/186; H04N19/107;
US 12,244,811 B2 Zhu; et al. H04N19/105; H04N19/186; H04N19/174;
US 12,425,610 B2 Yan; Ning et al. H04N19/105; H04N19/159; H04N19/117;
9.2. Non-Patent documentation:
- Chen; “Intra Block Copy Mirror Mode for Screen Content Coding in VVC”; 2021.
- T-REC-H.266; “Versatile Video Coding”; version 1; 2020.
CONCLUSIONS
10. Any inquiry concerning this communication or earlier communications from the Examiner should be directed to LUIS PEREZ-FUENTES (luis.perez-fuentes@uspto.gov), whose telephone number is (571) 270-1168. The examiner can normally be reached Monday-Friday, 8am-5pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, WILLIAM VAUGHN, can be reached at (571) 272-3922. The fax phone number for the organization where this application or proceeding is assigned is (571) 272-1168. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, please call (800) 786-9199 (USA/CANADA) or (571) 272-1000.
/LUIS PEREZ-FUENTES/
Primary Examiner, Art Unit 2481.