DETAILED ACTION
This Office action for U.S. Patent Application No. 17/789,457 is responsive to communications filed 15 December 2025 and 17 December 2025, in reply to the Non-Final Rejection of 16 July 2025.
Claims 1–11 and 14–16 are pending.
In the previous Office action, claims 1–11 and 14–16 were rejected under 35 U.S.C. § 103 as obvious over International Publication No. WO 2018/170279 A1 (“Xiu”) in view of U.S. Patent Application Publication No. 2018/0315199 A1 (“Socek”).
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see p. 4 of the remarks filed 15 December 2025, with respect to the rejection of claim 1 under 35 U.S.C. § 103, namely that Socek does not teach both the non-uniform and the uniform global motion vector fields, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of U.S. Patent Application Publication No. 2020/0260111 A1 (“Liu”). Specifically, the examiner has reviewed Socek and finds that the “motion vectors merger” of paragraph 0098 refers not to a large area of merged blocks sharing a single global motion vector, as alleged in the Non-Final Rejection, but to an adaptive merger of two local motion vector fields, one with 16x16 blocks and one with 8x8 blocks. Liu, however, teaches, for inter prediction in video coding, a blockwise selection between global motion compensation and local motion compensation, followed, for a globally motion-compensated block, by the selection of a global motion model from a plurality of candidate global motion models, which may include uniform global motion such as translation and non-uniform global motion such as affine. As shown in full below, this is sufficient to overcome the noted deficiency of Socek.
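By way of illustration only, the two-level selection taught by Liu may be sketched as follows. The Python sketch and all names in it are the examiner’s hypothetical illustration, not Liu’s disclosure:

    # Hypothetical sketch (examiner's illustration): a block-level GMC flag
    # chooses between global and local motion compensation; for a globally
    # compensated block, a model index then chooses among candidate global
    # motion models, e.g., uniform translation or non-uniform affine.
    def select_compensation(gmc_flag: bool, model_index: int) -> str:
        if not gmc_flag:
            return "local motion compensation"   # block coded with local MVs
        candidate_models = ["translational",     # uniform global motion
                            "affine"]            # non-uniform global motion
        return candidate_models[model_index]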
Claim Rejections - 35 U.S.C. § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1–11 and 14–16 are rejected under 35 U.S.C. § 103 as being unpatentable over International Publication No. WO 2018/170279 A1¹ (“Xiu”) in view of U.S. Patent Application Publication No. 2020/0260111 A1 (“Liu”).
Xiu, directed to motion compensation for a panoramic video codec, teaches with respect to claim 1 a method for providing a bitstream comprising video data encoded by an encoder apparatus (Fig. 7, ¶ 0048; block diagram of video encoder that outputs an encoded bitstream), the method including:
a processor of the encoder apparatus determining a current motion vector of a current block of a current video frame of a sequence of video frames comprising video data (¶ 0058, determining motion vectors at an encoder, in accordance with the HEVC standard),
the current motion vector defining a spatial offset of the current block relative to a prediction block of a previously encoded reference video frame stored in a memory of the encoder apparatus (¶¶ 0051–54, motion vector prediction for block or sub-block from temporal reference pictures; ¶ 0050, reference picture is made of reconstructed blocks and is stored in a reference picture store 264);
the processor determining or receiving motion information (¶¶ 0053–54, motion prediction uses motion information for other blocks), . . .
the processor determining a motion vector predictor candidate (¶¶ 0078–89, motion prediction using candidate motion vectors from different faces), wherein the determining includes:
selecting one of a plurality of motion vector predictor algorithms (¶¶ 0078–79, enabling or disabling various motion vector candidates based on location relative to the current block),
the plurality of motion vector predictor algorithms including at least a first motion vector predictor algorithm and a second motion vector predictor algorithm (¶¶ 0078, 0087; various algorithms including CMP and merge),
the selection being based on the motion information (¶ 0049, coding mode and prediction mode selection based on rate-distortion optimization) . . . ;
determining a list of motion vector predictor candidates based on the selected motion vector predictor algorithm (¶ 0089, motion vector prediction candidates);
selecting the motion vector predictor candidate from the list of motion vector predictor candidates (id., considering a motion vector prediction candidate to be used in the prediction of the current block); wherein determining a list of motion vector predictor candidates includes:
- the first motion vector predictor algorithm determining at least part of the list of motion vector predictor candidates based on one or more motion vectors of one or more already encoded blocks of one of one or more reference video frames stored in the memory of the encoder apparatus (Xiu ¶¶ 0085–86, availability of reference blocks as candidates for a current block depending on their location, such as whether they are in the same face as the current block);
the processor determining a motion vector difference based on the selected motion vector predictor candidate and the current motion vector (¶¶ 0049–50, 0092; prediction residuals); and,
the processor generating a bitstream (Fig. 7, bitstream output),
the generating including encoding the motion vector difference, an indication of the selected motion vector predictor candidate and a residual block, the residual block defining a difference between the current block and the prediction block (¶ 0092, signalling coding mode, motion related information, and residual)
and, inserting an indication of the selected motion vector predictor algorithm or at least part of the motion information into the bitstream (id.).
The claimed invention differs from Xiu in that it specifies that the motion information indicates whether the current block uses a non-uniform global motion vector field. Xiu does not teach this limitation. However, Liu, directed to global motion compensation video coding, teaches with respect to claim 1:
the motion information defining whether the current block is part of a set of blocks in the current video frame that is associated with a non-uniform global motion vector field, in the data of the current video frame (¶¶ 0046–48, 0052; global motion compensation (GMC) enabling flag signalled at block level);
wherein selecting one of a plurality of motion vector predictor algorithms includes:
- selecting the first motion vector predictor algorithm, if the motion information defines that the current block is part of a set of blocks that are associated with a non-uniform motion vector field (¶¶ 0053–54, signalling an affine motion model for global motion for current unit); and
- selecting the second motion vector predictor algorithm, if the motion information signals the processor that the current block is part of a set of blocks that are associated with a uniform motion vector field (id., signalling a translational motion model for global motion for current unit).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Xiu to allow for uniform and non-uniform global motion models for blocks that use global motion, as taught by Liu, in order to increase encoding efficiency. Liu ¶ 0022.
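For illustration only, the claimed selection between the first and second motion vector predictor algorithms, as read on the combination of Xiu and Liu, may be sketched as follows. This is a hypothetical Python sketch; the names are the examiner’s, not the references’:

    # Hypothetical sketch (examiner's illustration): the motion information
    # indicates whether the current block belongs to a set of blocks that is
    # associated with a non-uniform global motion vector field, and that
    # indication selects between the two predictor algorithms.
    def choose_predictor_algorithm(in_nonuniform_global_field: bool) -> str:
        if in_nonuniform_global_field:
            return "first motion vector predictor algorithm"   # e.g., affine global model (Liu)
        return "second motion vector predictor algorithm"      # e.g., translational global model (Liu)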
Regarding claim 2, Xiu in view of Liu teaches the method according to claim 1 wherein the first motion vector predictor algorithm determining at least part of the list of motion vector predictor candidates based on a parametric model of the non-uniform global motion vector field includes the parametric model representing a parametric algorithm configured to compute the non-uniform motion vector field at the position of the current block (Liu ¶¶ 0013–14, affine global model parameters).
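By way of illustration, a six-parameter affine model of the kind Liu describes yields a motion vector that depends on the block position, i.e., a non-uniform field. The following is the standard affine form as a hypothetical Python sketch; the parameter names are the examiner’s, not Liu’s:

    # Standard six-parameter affine motion model (illustrative sketch): the
    # motion vector (vx, vy) varies with block position (x, y), so the
    # resulting motion vector field is non-uniform across the frame.
    def affine_motion_vector(x, y, a, b, c, d, e, f):
        vx = a * x + b * y + e
        vy = c * x + d * y + f
        return vx, vy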
Regarding claim 3, Xiu in view of Liu teaches a method according to claim 1 wherein the first motion vector predictor algorithm determining at least part of the list of motion vector predictor candidates based on one or more motion vectors of one or more already encoded blocks of one or more reference frames stored in the memory of the encoder apparatus includes:
only evaluating temporal motion vector predictor candidates (Xiu ¶ 0086, “Motion candidates may be fetched from the spatial and/or temporal neighboring blocks of the current block”).
Regarding claim 4, Xiu in view of Liu teaches the method according to claim 1 wherein determining a list of motion vector predictor candidates includes:
the second motion vector predictor algorithm determining at least part of the list of motion vector predictor candidates based on first evaluating one or more motion vectors of one or more already encoded blocks of the current video frame (Xiu ¶ 0048, spatial prediction uses pixels from already encoded neighboring blocks in the same video picture); and,
after evaluating one or more motion vectors of one or more already encoded blocks of the current video frame, evaluating one or more motion vectors of one or more already encoded blocks of one or more reference video frames stored in the memory of the encoder apparatus (id., temporal prediction uses pixels from already coded video pictures to predict a current video block).
Regarding claim 5, Xiu in view of Liu teaches the method according to claim 1, wherein the motion information includes one or more parameters for a map function . . . being configured to determine one or more first regions in the current frame for which the first motion vector predictor algorithm is used (Liu ¶ 0056, signalling particular global motion model at slice or region level).
Regarding claim 6, Xiu in view of Liu teaches the method according to claim 1 wherein the motion information includes a value for signaling the processor to use the first motion vector predictor algorithm or the second motion vector predictor algorithm (Liu ¶¶ 0053–54, global motion model index or motion type number).
Regarding claim 7, Xiu in view of Liu teaches the method according to claim 1 wherein the motion information includes a map . . . including a plurality of data units (Liu ¶ 0056, Table 6; signalling the use of global motion at a higher level such as slice level),
each data unit being associated with a block of the current frame, each data unit including a value for signaling the processor to use the first motion vector predictor algorithm or the second motion vector predictor algorithm (id., selecting particular global motion index at lower level from the global motion model set or subset available from the higher level).
Regarding claim 8, Xiu in view of Liu teaches a method according to claim 1 wherein video frames of the sequence of video frames comprise spherical data projected onto a rectangular video frame based on a projection model (Xiu Figs. 1A–1B, 2A–2B, 3; spherical panoramic image segmented and converted into a rectangular frame).
Regarding claim 9, Xiu in view of Liu teaches the method according to claim 1 wherein the encoding process is based on a block-based video coding standard (Xiu ¶ 0054, operation on a “current block” in HEVC video).
Regarding claim 10, Xiu in view of Liu teaches a method according to claim 1 wherein the processor determining motion information includes:
comparing a magnitude and/or a direction of motion vectors of blocks in a region of the current video frame (Liu ¶¶ 0007–0021, determining global motion models from motion vector behavior within an area of interest); and
determining that the motion vectors in the region belong to a non-uniform motion field based on the compared magnitudes and/or directions (¶¶ 0023, 0025–26, 0053; GMC and global motion model indication at the slice, tile, or region level).
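For illustration only, the comparison recited in claim 10 may be sketched as follows; the Python sketch, including its tolerance values, is the examiner’s hypothetical illustration and is not taken from the references:

    import math

    # Hypothetical sketch (examiner's illustration): a region's field is
    # treated as non-uniform when the magnitudes or directions of its block
    # motion vectors spread beyond assumed tolerances. The simple angle
    # spread below ignores wrap-around at +/-pi.
    def region_is_nonuniform(motion_vectors, mag_tol=0.5, ang_tol=0.1):
        mags = [math.hypot(vx, vy) for vx, vy in motion_vectors]
        angs = [math.atan2(vy, vx) for vx, vy in motion_vectors]
        return (max(mags) - min(mags) > mag_tol) or (max(angs) - min(angs) > ang_tol)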
Regarding claim 11, which recites limitations corresponding to those of claim 1, Xiu at Figure 8 illustrates a corresponding decoder for decoding and outputting the bitstream encoded with the Figure 7 encoder.
Regarding claims 14–16, which recite limitations corresponding to those of claims 1 and 11, at least Liu teaches a computer-implemented embodiment. Liu ¶ 0117.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:
US 2004/0252759 A1
US 2016/0277645 A1
US 2019/0045193 A1
US 2019/0045192 A1
US 2022/0038709 A1
US 2022/0030264 A1
US 2008/0151997 A1
US 2009/0268819 A1
US 2005/0094852 A1
The following prior art was found by an Artificial Intelligence assisted search using an internal AI tool that uses the classification of the application under the Cooperative Patent Classification (CPC) system, as well as the specification of the application, including the claims and abstract, as contextual information. The documents are ranked from most to least relevant. Where possible, English-language equivalents are given, and redundant results within the same patent families are eliminated. See “New Artificial Intelligence Functionality in PE2E Search,” 1504 OG 359 (15 November 2022); “Automated Search Pilot Program,” 90 Fed. Reg. 48,161 (8 October 2025).
US 2013/0114725 A1
US 2021/0400300 A1
US 2019/0342572 A1
US 2021/0352275 A1
Any inquiry concerning this communication or earlier communications from the examiner should be directed to David N Werner, whose telephone number is (571) 272-9662. The examiner can normally be reached M–F, 7:30–4:00 Central.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dave Czekaj, can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/David N Werner/Primary Examiner, Art Unit 2487
¹ This reference was cited as a “Y” reference for corresponding International Application PCT/EP2020/087852, and was listed in the 27 June 2022 Information Disclosure Statement.