DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
2. The information disclosure statement (IDS) was submitted on 02/04/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
3. Claims 1 and 2 are objected to because of the following informalities: two sets of claims numbered 1 and 2 are listed, as reproduced below.
A first set:
1. A method for video processing, comprising:
obtaining, for a conversion between a current video block of a video and a bitstream of the video, information regarding applying a geometric transformation on the current video block;
selecting, based on the information, a set of samples in a first color component of the current video block, the set of samples being used for coding at least one sample in a second color component of the current video block, the second color component being different from the first color component; and
performing the conversion based on the set of samples.
2. The method of claim 1, wherein the information comprises at least one of the following:
whether to apply the geometric transformation on the current video block, or how to apply the geometric transformation on the current video block.
The second set:
1. The method of claim 1, wherein the geometric transformation comprises at least one of the following:
adjusting an orientation of the current video block, or adjusting positions of samples in the current video block, or
wherein the at least one sample in the second color component is determined based on the set of samples in the first color component and a cross-component prediction scheme.
2. The method of claim 3, wherein the cross-component prediction scheme comprises at least one of the following:
a cross-component linear model (CCLM) mode, a multi-model linear model (MMLM) mode, a convolutional cross-component model (CCCM) mode, or a chroma decoder-side intra mode derivation (DIMD) mode, or
wherein if the geometric transformation is applied on the current video block, the set of samples in the first color component are selected by using a first filter, and
if the geometric transformation is not applied on the current video block, the set of samples in the first color component are selected by using a second filter, or
wherein the orientation of the current video block is adjusted by rotating the current video block clockwise with an angle, or
the positions of the samples in the current video block are adjusted by flipping the current video block in a direction.
The Examiner treats the repeated claim numbering of 1 and 2 in the second set as an inadvertent typographical error and renumbers the second set according to the cardinality of the listing, such that the claims erroneously numbered 1 and 2 are renumbered as claims 3 and 4.
This corrective action conforms with Applicant’s DOC Code: AUX.PDF provided on 02/04/2025.
Applicant’s confirmation is respectfully requested.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
4. Claim 6 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being based on a disclosure that does not enable one of ordinary skill in the art to distinctly interpret the claimed subject matter. The claim recites, inter alia:
“….wherein if the color format is the format 4:2:0, the corresponding relationship is adjusted in a first manner, and
if the color format is the format 4:2:2, the corresponding relationship is adjusted in a second manner different from the first manner, or”, is deemed indefinite because the terms “first manner” and “second manner” are claimed to be “different” without support in the Specification as to the meaning or nature of each manner, and without setting forth the boundaries by which the claimed first manner and second manner differ from each other.
Although the limitations are separated by a logical OR, such that only one alternative at a time is interpreted within the process method established by the functional scope of the claim, the indefinite claimed matter cannot be analyzed accordingly, because the disclosure recites the matter in haec verba without providing a definition, hence the claim is deemed indefinite.
See In re Mayhew, 527 F.2d 1229, 188 USPQ 356 (CCPA 1976).
Clarification is respectfully requested.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
The applied references do not have a common inventor with the instant application.
5. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Li Zhang et al. (hereinafter Zhang) (US 2018/0041778) in view of Yixin Du et al. (hereinafter Du) (US 2022/0150517).
Re Claim 1. Zhang discloses, a method for video processing, comprising:
obtaining, for a conversion between a current video block of a video and a bitstream of the video (a conversion method, at Title and Abstract, Par.[0002-0004]), information regarding applying a geometric transformation (the geometric transformation is signaled by syntax elements, Par.[0151-0157], by signaled filter coefficients, Par.[0144-0146], e.g., for every frame when ALF/GALF is enabled, Par.[0162-0167]) on the current video block (Figs.11A-C Par.[0032], Figs.12A-12B Par.[0033] and according to local information component 1530 in Fig.15 signaling the geometry transformation index, Par.[0045], as applied to block-level Par.[0171-0173]);
selecting, based on the information (selecting based on the information at block level, enabling or disabling the ALF/GALF by a flag, Par.[0171], where the implementation of ALF or GALF by a predefined filter, Par.[0004], is based on the syntax elements signaled in the bitstream and received at decoder 30, Par.[0080-0088] Fig.15, i.e., by a flag Par.[0171]), a set of samples in a first color component of the current video block (the set of samples comprises pixel blocks having luma components and two chroma components (Cb, Cr) in a 4:2:0 color format, Par.[0035-0036] and Figs.14B and 14C, Par.[0045]), the set of samples being used for coding at least one sample in a second color component of the current video block, the second color component being different from the first color component (the set of color components comprises a first component, Cb, and a second component, Cr, representing different colors and thus different from each other, where at Fig.18, block 1820, Par.[0255], data is decoded at block-level control of the filtering ALF or the geometry adaptive loop filter (GALF) for chroma components, which include a first chroma component Cb, a second chroma component Cr, or both, Par.[0257]); and
performing the conversion based on the set of samples (performing for the reconstructed video units, at block 1830, 1840, of the method 1800, the conversion based on the set of samples, of the (Cb, Cr) components, Par.[0258-0261]).
Further Du teaches about,
obtaining, for a conversion between a current video block of a video and a bitstream of the video, information regarding applying a geometric transformation on the current video block (performing on the current luma block, a geometric transformation comprising diagonal, vertical and rotation flip of the block, Par.[0134-0135]);
One of ordinary skill in the art would have found it obvious before the effective filing date of the invention, and would have had the incentive, to elaborate on the cross-component ALF/GALF filtering suggested in Zhang (at Par.[0036]) by seeking further details of the similar process applied in the art by Du (Par.[0156]), to further improve coding efficiency by performing a geometric transform before the filtering process in the alternative, hence finding the combination predictable.
Re Claim 2. Zhang and Du disclose, the method of claim 1, wherein the information comprises at least one of the following:
whether to apply the geometric transformation on the current video block (the GALF (ALF) based geometry transformation Par.[0133] is enabled/disabled the same way as the ALF syntax information provided at Par.[0115-0120, 0134] signaled for ALF/GALF Par.[0169]), or
how to apply the geometric transformation on the current video block (about how the filtering is applied to the prediction method and the one bit flag, Par.[0122-0133]).
Re Claim 3. Zhang and Du disclose, the method of claim 1, wherein the geometric transformation comprises at least one of the following:
adjusting an orientation of the current video block (applying ALF/GALF based on orientation of the reconstructed samples, Par.[0003, 0133]), or
adjusting positions of samples in the current video block (based on position, Par.[0128-0129]), or
wherein the at least one sample in the second color component is determined based on the set of samples in the first color component and a cross-component prediction scheme (or adjusting one sample in the second color component e.g., the Cb component, being determined based on the set of samples e.g., {Cb, Cr} in the first color component Par.[0035] Fig.14A and cross component pixels for 4:2:0 color format Par.[0036] and Fig.14B and 14C).
Re Claim 4. Zhang and Du disclose, the method of claim 3, wherein the cross-component prediction scheme comprises at least one of the following:
a cross-component linear model (CCLM) mode, a multi-model linear model (MMLM) mode, a convolutional cross-component model (CCCM) mode (cross component applied to pixels of 4:2:0 color format Par.[0036] and Fig.14B and 14C), or
a chroma decoder-side intra mode derivation (DIMD) mode, or
wherein if the geometric transformation is applied on the current video block, the set of samples in the first color component are selected by using a first filter (or selecting geometry transformation by a first filter from multiple groups of filters, Par.[0123-0124]), and
if the geometric transformation is not applied on the current video block, the set of samples in the first color component are selected by using a second filter (a second filter Par.[0125]), or
wherein the orientation of the current video block is adjusted by rotating the current video block clockwise with an angle (the orientation is determined by rotation of the current block pixels, Par.[0141] Fig.11C), or
the positions of the samples in the current video block are adjusted by flipping the current video block in a direction (performing flipping Par.[0141], Fig.11B).
Du teaches at least about,
a cross-component linear model (CCLM) mode, a multi-model linear model (MMLM) mode, a convolutional cross-component model (CCCM) mode (employing in the prediction process a cross component adaptive loop filter (CC-ALF), Par.[0157-0161] Fig.14A), or
wherein the orientation of the current video block is adjusted by rotating the current video block clockwise with an angle (adjusting the current block by rotation , Par.[0135]), or
the positions of the samples in the current video block are adjusted by flipping the current video block in a direction (flipping the pixel positions at Par.[0134-0135]).
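Explanatory NOTE (for illustration only): the rotation and flipping operations recited in claims 3-4 may be sketched as follows. The helper names and the 2x2 sample block are the examiner's hypothetical illustration, not part of the claims or of the cited references.

```python
# Illustrative sketch only: block-level geometric transformations of the kind
# recited in claims 3-4 (rotating clockwise with an angle, flipping in a
# direction), shown on a hypothetical 2x2 block of sample values.

def rotate_clockwise(block):
    """Rotate a 2-D block of samples 90 degrees clockwise."""
    return [list(row) for row in zip(*block[::-1])]

def flip_horizontal(block):
    """Flip a 2-D block of samples in the horizontal direction."""
    return [row[::-1] for row in block]

block = [[1, 2],
         [3, 4]]

# Rotation adjusts the orientation of the current video block:
assert rotate_clockwise(block) == [[3, 1],
                                   [4, 2]]
# Flipping adjusts the positions of samples within the current video block:
assert flip_horizontal(block) == [[2, 1],
                                  [4, 3]]
```

The sketch merely illustrates that rotation adjusts the orientation of the block, while flipping adjusts the positions of samples within it.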
Re Claim 5. Zhang and Du disclose, the method of claim 3,
wherein the geometric transformation is applied on the current video block to obtain a first video block, and a corresponding relationship between luma and chroma samples for coding the first video block is adjusted for the cross-component prediction scheme (the geometric transformation GALF is applied in predicting of the current block, Par.[0122-0125] in a relationship between luma and chroma components, Par.[0196-0204] and adjusted by generating a correspondent cross-component relationship to pixels per 4:2:0 format, Par.[0036]), or
wherein the geometric transformation is applied on the current video block to obtain the first video block, and information regarding at least one of the following is dependent on a color format of the current video block: whether to adjust the corresponding relationship between the luma and chroma samples for coding the first video block, or how to adjust the corresponding relationship (or in the alternative, adjusting the GALF transformation according to a relationship to calculated gradients at Eq.(11)-(14), and Table 3, Par.[0141] or by adopting the diamond-shaped 5x5 and 7x7, coefficients per Fig.8A and 8B, Par.[0142-0143]).
Du teaches about,
wherein the geometric transformation is applied on the current video block to obtain a first video block, and a corresponding relationship between luma and chroma samples for coding the first video block is adjusted for the cross-component prediction scheme (determining a relationship between luma/chroma pixel components of the YCrCb 4:2:0 or YCrCb 4:4:4 formats, Par.[0080]).
Re Claim 6. Zhang and Du disclose, the method of claim 5,
wherein if the color format is format 4:4:4, the corresponding relationship is not adjusted (the method at 1900, Fig.19, for the 4:4:4 color format, Par.[0262, 0267, 0281] and Fig.14A), or
wherein if the color format is format 4:2:0 or format 4:2:2, the corresponding relationship is adjusted, or
wherein if the color format is the format 4:2:0, the corresponding relationship is adjusted in a first manner (OR, the 4:2:0 color format are adjusted by geometry transform, Par.[0268, 0282 Fig.14A]), and
if the color format is the format 4:2:2, the corresponding relationship is adjusted in a second manner different from the first manner, or
wherein a type of a filter for adjusting the corresponding relationship is dependent on at least one of the following:
a location of the current video block, or
the color format of the current video block, or
wherein the adjusted corresponding relationship is the same as the corresponding relationship between the luma and chroma samples for coding the current video block without applying the geometric transformation (the filter adjusting occurs according to the 4:4:4 or 4:2:0 luma/chroma coefficient relationships, per Fig.14A-14C, Par.[0034-0036], as summarized at claims 34, 47 or 56).
{This claim is rejected for being indefinite under the 35 U.S.C. 112(b) statute, requiring clarification.}
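Explanatory NOTE (for illustration only): the dependence of the luma-chroma corresponding relationship on the color format, at issue in claims 5-6, may be sketched as follows. The helper name is the examiner's hypothetical illustration of standard 4:4:4 / 4:2:2 / 4:2:0 chroma subsampling, not a mapping taken from the references.

```python
# Illustrative sketch only: the luma sample positions that correspond to the
# chroma sample at chroma position (cx, cy) under each color format.

def corresponding_luma_positions(cx, cy, color_format):
    if color_format == "4:4:4":
        # No subsampling: one luma sample per chroma sample.
        return [(cx, cy)]
    if color_format == "4:2:2":
        # Chroma subsampled horizontally only: two luma samples per chroma sample.
        return [(2 * cx, cy), (2 * cx + 1, cy)]
    if color_format == "4:2:0":
        # Chroma subsampled in both directions: four luma samples per chroma sample.
        return [(2 * cx, 2 * cy), (2 * cx + 1, 2 * cy),
                (2 * cx, 2 * cy + 1), (2 * cx + 1, 2 * cy + 1)]
    raise ValueError(color_format)

assert corresponding_luma_positions(1, 1, "4:4:4") == [(1, 1)]
assert len(corresponding_luma_positions(1, 1, "4:2:0")) == 4
```

The sketch illustrates why the corresponding relationship would need no adjustment for 4:4:4 but would need adjustment for 4:2:0 or 4:2:2; it does not resolve the indefiniteness of the claimed "first manner" and "second manner" noted above.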
Re Claim 7. Zhang and Du disclose, the method of claim 1, wherein the at least one sample in the second color component is coded based on the set of samples in the first color component (where the chroma component is surrounded by six luma samples per Fig.14C, Par.[0199]) and a cross-component in-loop filter (and generating correspondent cross-component pixels, per Fig.14B-C, Par.[0036]).
Du teaches about,
wherein the at least one sample in the second color component is coded based on the set of samples in the first color component and a cross-component in-loop filter (determining a relationship between luma/chroma pixel components of the YCrCb 4:2:0 or YCrCb 4:4:4 formats, Par.[0080]).
Re Claim 8. Zhang and Du disclose, the method of claim 7,
wherein the cross-component in-loop filter comprises at least one of the following: a cross-component sample adaptive offset (CC-SAO) (on correspondent cross-component pixels, per Fig.14B-C, Par.[0036] applying SAO filtering, at Par.[0065, 0086]), or
a cross-component adaptive loop filter (CC-ALF) (applying ALF or GALF filtering, Par.[0061] etc.) , or
wherein if the geometric transformation is applied on the current video block, the set of samples in the first color component are selected by using a third filter (while applying the GALF transformation, selecting another filter option as signaled by an index, Par.[0118-0125]), and
if the geometric transformation is not applied on the current video block, the set of samples in the first color component are selected by using a fourth filter different from the third filter (at block level, flag disabling the ALF/GALF by a flag, Par.[0171]).
Du, as analogous art, teaches in detail about,
a cross-component adaptive loop filter (CC-ALF) (the cross-component adaptive loop filter CC-ALF, at Par.[0157-0161] Fig.14 and syntax for Cb, Cr color components at Table 3), or
wherein if the geometric transformation is applied on the current video block, the set of samples in the first color component are selected by using a third filter (when CC-ALF is applied, Par.[0159]), and
if the geometric transformation is not applied on the current video block, the set of samples in the first color component are selected by using a fourth filter different from the third filter (when the CC-ALF is not applied for {Cb, Cr} components, Par.[0161-0162]).
One of ordinary skill in the art would have found it obvious before the effective filing date of the invention, and would have had the incentive, to elaborate on the cross-component ALF/GALF filtering suggested in Zhang (at Par.[0036]) by seeking further details of the similar process applied in the art by Du (Par.[0156]), to further improve coding efficiency by redesigning the filter coefficients according to the ALF on/off results, hence finding the combination predictable.
Re Claim 9. Zhang and Du disclose, the method of claim 1,
wherein the information regarding applying the geometric transformation on the current video block is dependent on coding information of the current video block (the filter support is determined by the encoded/decoded information, Par.[0167]) or
coding information of a neighboring video block of the current video block, or
wherein if the current video block is on an I-slice, the geometric transformation is applied on the current video block, or
wherein the information regarding applying the geometric transformation on the current video block is indicated by at least one syntax element in the bitstream, or
wherein the geometric transformation is applied on the current video block to obtain a first video block (the geometric transformation is applied to block-level coding ALF/GALF as signaled by a flag, Par.[0171] within a block prediction process, i.e., by a video preprocessor Par.[0088]), and
if a dimension of the first video block is different from a dimension of the current video block, dimension information regarding at least one of the following is indicated in the bitstream (geometric transformation is applied according to block size partitions Par.[0159, 0171, 0234]):
a width of the current video block, a height of the current video block, a width of the first video block, or a height of the first video block (according to the block partition size information, Par.[0238]).
Re Claim 10. Zhang and Du disclose, the method of claim 1, wherein the geometric transformation is applied on the current video block, and the method further comprising:
obtaining a first video block, the first video block being generated by applying the geometric transformation on the current video block (applying geometry transform to the current block, i.e., block-based transformation Par.[0002-0007], Figs.11A-11C or Figs.12A-12B at encoding site of Fig.2A, Par.[0061]); and
generating a second video block based on a further transformation on the first video block, the further transformation being an inverse process of the geometric transformation (applying invers geometric transform at decoder site of Fig.2A, Par.[0060]).
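Explanatory NOTE (for illustration only): the forward-then-inverse structure recited in claim 10 may be sketched as follows, using a 90° clockwise rotation as the geometric transformation and its counterclockwise counterpart as the inverse process; the block values and helper names are the examiner's hypothetical illustration.

```python
# Illustrative sketch only: applying a geometric transformation to obtain a
# first video block, then applying the inverse process to obtain a second
# video block that recovers the current video block.

def rotate_clockwise(block):
    """Geometric transformation: rotate 90 degrees clockwise."""
    return [list(row) for row in zip(*block[::-1])]

def rotate_counterclockwise(block):
    """Inverse process: rotate 90 degrees counterclockwise."""
    return [list(row) for row in zip(*block)][::-1]

current_block = [[1, 2, 3],
                 [4, 5, 6]]
first_block = rotate_clockwise(current_block)        # geometric transformation
second_block = rotate_counterclockwise(first_block)  # inverse of the transformation

# The inverse process restores the original sample arrangement:
assert second_block == current_block
```
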
Re Claim 11. Zhang and Du disclose, the method of claim 10, wherein positions of samples in the current video block are adjusted by reordering at least one of the samples (position of the samples is adjusted by vertical or horizontal flipping, Par.[0141], Table 3).
Re Claim 12. Zhang and Du disclose, the method of claim 11,
wherein a position of a first sample in the current video block is reordered in the geometric transformation, such that at least one of the following conditions is satisfied:
a reordered horizontal position of the first sample is not equal to an original horizontal position of the first sample (by the reordering of the horizontal position when no transformation, diagonal flip, vertical flip, or rotation is performed but only a 90° transposition is applied, such that the position of the first sample is not equal to its original horizontal position, Par.[0161]), or
a reordered vertical position of the first sample is not equal to an original vertical position of the first sample, or wherein reordered positions of different samples in the current video block are different.
Re Claim 13. Zhang and Du disclose, the method of claim 10,
Du teaches these limitations,
wherein information regarding at least one of the following is indicated in the bitstream or pre-defined:
whether to apply the further transformation on the first video block, or
how to apply the further transformation on the first video block, or
wherein the further transformation is applied before all in-loop filters in a set of in-loop filters, or
wherein the further transformation is applied after the all in-loop filters in the set of in-loop filters, or
wherein the further transformation is applied before a first in-loop filter in the set of in-loop filters and after a second in-loop filter the set of in-loop filters, or
wherein the further transformation is applied after a post processing filter, or
wherein information regarding at least one of the following is indicated in the bitstream:
where the further transformation is applied, or when the further transformation is applied (in a synoptic review of the claimed limitations it is found that, in any of the OR-established alternatives, the gist of the claim relies on the limitations reciting “whether to apply the further transformation on the first video block, or”, which depend on whether the geometric transform is enabled or disabled before the “in-loop filters in a set of in-loop filters, or”, according to the information signaled by the encoder in the bitstream, per Par.[0133-0137], and signaled over the APS syntax at the respective CTB, Par.[0138-0139]).
Re Claim 14. Zhang and Du disclose, the method of claim 1,
Zhang teaches about, wherein the geometric transformation is determined based on a rate-distortion optimization (RDO) process (considering the rate-distortion of the prediction mode, including the geometric transformation process when enabled -emphasis added-, Par.[0071-0072]).
Du teaches, (considering the pixel samples relationship according to the rate distortion factor, Par.[0080-0081], before applying the geometric transformation as signaled in the adaptation parameter set (APS) for the applied ALF filter parameters, Par.[0133-0136]).
Re Claim 15. Zhang and Du disclose, the method of claim 14,
Zhang teaches about, wherein luma and chroma rate-distortion (RD) costs are determined for a plurality of candidate geometric transformation schemes (considering the rate-distortion to the prediction mode including the geometric transformation process when enabled -emphasis added-, Par.[0071-0072]), and
the geometric transformation comprises a candidate geometric transformation scheme with a least cost, or
wherein the geometric transformation is applied on the current video block, and
at least one coding tool is used with a condition during the determination of the geometric transformation based on the RDO process (where the rate distortion RD/RDO or cost of transformation, is computed according to its least cost, Par.[0072] as established identifying the filter for the video blocks, at least in part on a value of a current pixel and a value of neighboring pixels to the current value, i.e., a cost estimation, Par.[0242] applied to the chroma blocks, Cb or Cr, Par.[0243] by which further determining the geometric transformation for the filter coefficients, at Par.[0244-0253]).
Du teaches, (considering the pixel samples relationship according to the rate distortion factor, Par.[0080-0081], before applying the geometric transformation as signaled in the adaptation parameter set (APS) for the applied ALF filter parameters, Par.[0133-0136]).
Re Claim 16. Zhang and Du disclose, the method of claim 1,
Zhang teaches about, wherein the conversion includes encoding the current video block into the bitstream (encoding at the source device 12 in Fig.1, or Fig.2A).
Re Claim 17. Zhang and Du disclose, the method of claim 1,
Zhang teaches about, wherein the conversion includes decoding the current video block from the bitstream (decoding at the destination device 14 in Fig.1, or Fig.2B).
Re Claim 18. This claim represents the apparatus for video processing comprising a processor (Zhang: processors 1505 and system 1500, Fig.18 Par.[0255], Fig.2A, Fig.2B) and a non-transitory memory (Zhang: non-transitory memory Par.[0226, 0286]) with instructions thereon (Zhang: instructions in the non-transitory computer readable medium, Par.[0286]), causing the implementation of each and every limitation of method claim 1; hence it is rejected on the same evidentiary mapped premises mutatis mutandis.
Re Claim 19. This claim represents the non-transitory computer-readable storage medium (Zhang: processors 1505 and system 1500, Fig.18 Par.[0255], Fig.2A, Fig.2B) comprising a non-transitory memory (Zhang: non-transitory memory Par.[0226, 0286]) storing instructions, thus constituting “printed subject matter” (Zhang: instructions in the non-transitory computer readable medium, Par.[0286]) thereon, causing the implementation of each and every processing limitation of method claim 1; hence it is rejected on the same evidentiary mapped premises mutatis mutandis.
Re Claim 20. This claim represents a non-transitory computer-readable recording medium storing a bitstream of a video in a non-transitory memory (Zhang: non-transitory memory Par.[0226, 0286]) with a bitstream therein (Zhang: storing the bitstream at video memory 78 in Fig.2B, Par.[0079]), where the method of performing the subsequent limitations does not represent support for the storage process but rather recites the functional limitations of claim 19; hence it is solely rejected as representing “printed matter” based on the same mapped premises mutatis mutandis.
Explanatory NOTE:
For reference, please refer to: MPEP 2111.05 Functional and Nonfunctional Descriptive Material [R-07.2022].
Regarding Claims 19-20, the USPTO personnel must consider all claim limitations when determining patentability of an invention over the prior art. In re Gulack, 703 F.2d 1381, 1385, 217 USPQ 401, 403-04 (Fed. Cir. 1983). Since a claim must be read as a whole, USPTO personnel may not disregard claim limitations that include printed matter. See Id. at 1384, 217 USPQ at 403; see also Diamond v. Diehr, 450 U.S. 175, 191, 209 USPQ 1, 10 (1981). The first step of the printed matter analysis is the determination that the limitation in question is in fact directed toward printed matter. “Our past cases establish a necessary condition for falling into the category of printed matter: a limitation is printed matter only if it claims the content of information.” See In re DiStefano, 808 F.3d 845, 848, 117 USPQ2d 1265, 1267 (Fed. Cir. 2015). “[O]nce it is determined that the limitation is directed to printed matter, [the examiner] must then determine if the matter is functionally or structurally related to the associated subsequent claimed limitations and only if the answer is ‘no’ is the printed matter owed no patentable weight.” Id. at 850, 117 USPQ2d at 1268. If a new and nonobvious functional relationship between the printed matter and the substrate i.e., the subsequently claimed limitations, does exist, the examiner should give patentable weight to printed matter. See In re Lowry, 32 F.3d 1579, 1583-84, 32 USPQ2d 1031, 1035 (Fed. Cir. 1994); In re Ngai, 367 F.3d 1336, 70 USPQ2d 1862 (Fed. Cir. 2004); In re Gulack, 703 F.2d 1381, 1385, 217 USPQ 401, 403-04 (Fed. Cir. 1983). The rationale behind the printed matter cases, in which, for example, written instructions are added to a known product, has been extended to method claims in which an instructional limitation is added to a method known in the art. 
Similar to the inquiry for products with printed matter thereon, in such method cases the relevant inquiry is whether a new and nonobvious functional relationship with the known method exists. See In re DiStefano, 808 F.3d 845, 117 USPQ2d 1265 (Fed. Cir. 2015); In re Kao, 639 F.3d 1057, 1072-73, 98 USPQ2d 1799, 1811-12 (Fed. Cir. 2011); King Pharmaceuticals Inc. v. Eon Labs Inc., 616 F.3d 1267, 1279, 95 USPQ2d 1833, 1842 (Fed. Cir. 2010).
Consideration to this interpretation is required.
Conclusion
6. The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, is listed below.
NPL: Wei-Jung Chien et al., "Motion Vector Coding and Block Merging in the Versatile Video Coding Standard ", IEEE Transactions on Circuits and Systems for Video Technology, Vol.31, No. 10, (c) Oct. 2021 IEEE; and
US 12,126,821; US 11,451,773; US 2022/0329826.
See PTO-892 form. Applicant is required under 37 C.F.R. 1.111(c) to consider these references when responding to this action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DRAMOS KALAPODAS whose telephone number is (571)272-4622. The examiner can normally be reached on Monday-Friday 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Czekaj can be reached on 571-272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DRAMOS KALAPODAS/Primary Examiner, Art Unit 2487