Prosecution Insights
Last updated: April 19, 2026
Application No. 18/692,814

VIDEO SIGNAL PROCESSING METHOD USING OBMC, AND DEVICE THEREFOR

Non-Final OA (§102, §103, §112)
Filed: Oct 17, 2024
Examiner: HODGES, SUSAN E
Art Unit: 2425
Tech Center: 2400 (Computer Networks)
Assignee: Wilus Institute Of Standards And Technology Inc.
OA Round: 1 (Non-Final)
Grant Probability: 67% (Favorable)
OA Rounds: 1-2
To Grant: 2y 4m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 67% (250 granted / 375 resolved; +8.7% vs TC avg, above average)
Interview Lift: +14.4% (moderate; among resolved cases with an interview)
Avg Prosecution: 2y 4m (typical timeline)
Total Applications: 406 (career history, across all art units; 31 currently pending)
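A quick arithmetic check (assuming the dashboard derives its percentages in the straightforward way) shows the headline numbers above are mutually consistent:

```python
# Sanity-check the dashboard figures against the underlying counts.
granted, resolved = 250, 375

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 66.7%, displayed as 67%

# Applying the +14.4% interview lift to the 67% base reproduces the
# 81% with-interview estimate (67% + 14.4% = 81.4%).
with_interview = 0.67 + 0.144
print(f"With interview: {with_interview:.1%}")
```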

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§102: 20.9% (-19.1% vs TC avg)
§112: 22.6% (-17.4% vs TC avg)
Deltas are relative to the estimated Tech Center average; based on career data from 375 resolved cases.
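One hedged observation about this table: subtracting each "vs TC avg" delta from the corresponding rate recovers the same baseline (40.0%) for all four statutes, consistent with a single Tech Center average having been used for every row. A small script makes the check explicit:

```python
# Each row lists (examiner rate %, delta vs Tech Center average %).
rows = {
    "101": (6.0, -34.0),
    "103": (48.7, +8.7),
    "102": (20.9, -19.1),
    "112": (22.6, -17.4),
}

# rate - delta should recover the Tech Center baseline for that statute.
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in rows.items()}
print(baselines)  # every statute implies the same 40.0% baseline
```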

Office Action

Rejections under §102, §103, and §112
DETAILED ACTION

This office action is in response to the application filed on October 17, 2024. Claims 21-40 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. KR10-2021-0125137, filed on September 17, 2021, Application No. KR10-2021-0134289, filed on October 8, 2021, Application No. KR10-2022-0078491, filed on June 27, 2022, Application No. KR10-2022-0083184, filed on July 6, 2022, and Application No. KR10-2022-0087811, filed on July 15, 2022.

Information Disclosure Statement

The information disclosure statements (IDS) were submitted on March 16, 2024 and February 26, 2025. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the Examiner.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 27 and 35 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.
Regarding Claims 27 and 35, they recite the limitation “when a MHP (Multi-hypothesis prediction) mode is not applied to the current block, wherein the OBMC is not applied to the current block regardless of the predetermined condition”. This limitation is indefinite. The phrase “not applied” renders the claims indefinite because the claims include elements not actually disclosed (those encompassed by “a MHP (Multi-hypothesis prediction) mode”), thereby rendering the scope of the claims unascertainable. The claims should clearly point out and set forth what the applicant is trying to encompass, positively reciting the elements directed to “Overlapped Block Motion Compensation of the current block”. One of ordinary skill in the art would not be able to ascertain the scope of the invention as claimed, since reciting a particular preference or way in this manner leads to confusion over the intended scope of the claim.

Furthermore, the specification states in Par. [0193] that “when the MHP mode is applied to the current block, the OBMC may not be applied to the current block or a sub-block of the current block. In addition, when the MHP mode is applied to the current block, the decoder may not parse a syntax element related to the OBMC. For example, when the MHP mode is applied to the current block, the value of obmc_flag may be inferred to be 0. Conversely, when the MHP mode is applied to the current block to improve performance, the value of obmc_flag may be inferred to be 1”. Nowhere does the specification further reference, describe, or define the MHP mode as being not applied to the current block. Therefore, the metes and bounds of the claim requirement that the MHP mode is not applied are unclear. The specification does not support the interpretation of when a MHP (Multi-hypothesis prediction) mode is not applied.
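To make the gap concrete, the obmc_flag behavior that Par. [0193] does describe can be sketched as decoder-side logic (the helper and its parameters are hypothetical illustrations, not text from the specification or claims):

```python
def decode_obmc_flag(mhp_applied: bool, read_flag_from_bitstream=None) -> int:
    """Sketch of the obmc_flag handling quoted from Par. [0193].

    When MHP is applied, the decoder does not parse the OBMC syntax
    element and infers the flag instead (0 in the baseline variant
    described; 1 in the performance-oriented variant).
    """
    if mhp_applied:
        return 0  # inferred, not parsed (baseline variant of Par. [0193])
    # When MHP is NOT applied, the quoted specification text is silent;
    # this branch is exactly the gap on which the 112(b) rejection turns.
    # Parsing the flag here is an assumption for illustration only.
    return read_flag_from_bitstream() if read_flag_from_bitstream else 0
```

The rejected claim language lives entirely in the second branch, where the specification, as quoted, says nothing.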
For examination purposes the Examiner has broadly interpreted the claim limitation “when a MHP (Multi-hypothesis prediction) mode is not applied to the current block, wherein the OBMC is not applied to the current block regardless of the predetermined condition” to mean “when a MHP (Multi-hypothesis prediction) mode is

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 37-40 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by LIU et al. (US 2021/0250587 A1), referred to as LIU hereinafter.

Regarding Claim 37, LIU discloses a computer-readable non-transitory storage medium (Par. [0355], The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more them) configured to store a bitstream (Par. [0173], The memory (memories) 1104 may be used for storing data (i.e. bitstream) and code used for implementing the methods and techniques described herein, Claim 20.
A non-transitory computer-readable recording medium storing a bitstream of a video), wherein the bitstream is decoded by a decoding method, the decoding method comprising: obtaining first motion information of a current block; obtaining second motion information related to a neighboring block of the current block; obtaining a first prediction block based on the first motion information; obtaining a second prediction block based on the second motion information; determining whether OBMC (Overlapped Block Motion Compensation) is applied to the current block based on a predetermined condition; when the OBMC is applied to the current block, obtaining a prediction block for the current block based on the first prediction block and the second prediction block; and when the OBMC is not applied to the current block, obtaining the prediction block for the current block based on the first prediction block (See MPEP §2113, it recites: "Product-by-Process claims are not limited to the manipulations of the recited steps, only the structure implied by the steps").

Regarding Claim 38, LIU discloses Claim 37. LIU further discloses a computer-readable non-transitory storage medium (Par. [0355], The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more them) to store a bitstream (Par. [0173], The memory (memories) 1104 may be used for storing data (i.e. bitstream) and code used for implementing the methods and techniques described herein, Claim 20.
A non-transitory computer-readable recording medium storing a bitstream of a video), wherein the predetermined condition is a condition based on similarity between the first prediction block and the second prediction block (See MPEP §2113 it recites: "Product-by-Process claims are not limited to the manipulations of the recited steps, only the structure implied by the steps").

Regarding Claim 39, LIU discloses Claim 38. LIU further discloses a computer-readable non-transitory storage medium (Par. [0355], The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more them) to store a bitstream (Par. [0173], The memory (memories) 1104 may be used for storing data (i.e. bitstream) and code used for implementing the methods and techniques described herein, Claim 20. A non-transitory computer-readable recording medium storing a bitstream of a video), wherein the predetermined condition is a condition based on a result of comparing a value related to the similarity with a predetermined value (See MPEP §2113 it recites: "Product-by-Process claims are not limited to the manipulations of the recited steps, only the structure implied by the steps").

Regarding Claim 40, LIU discloses Claim 38. LIU further discloses a computer-readable non-transitory storage medium (Par. [0355], The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more them) to store a bitstream (Par. [0173], The memory (memories) 1104 may be used for storing data (i.e. bitstream) and code used for implementing the methods and techniques described herein, Claim 20.
A non-transitory computer-readable recording medium storing a bitstream of a video), wherein the similarity is determined based on a pixel value of the first prediction block and a pixel value of the second prediction block (See MPEP §2113 it recites: "Product-by-Process claims are not limited to the manipulations of the recited steps, only the structure implied by the steps").

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 21-25, 28-33, and 36 are rejected under 35 U.S.C. 103 as being unpatentable over LIU et al. (US 2021/0250587 A1), referred to as LIU hereinafter, and in view of Zhang et al. (US 2020/0288168 A1), referred to as Zhang hereinafter.

Regarding Claim 21, LIU teaches a video signal decoding device (Par. [0004], video decoder or encoder embodiments for in which overlapped block motion compensation with derived motion from neighbors) comprising a processor, wherein the processor is configured to (Fig. 11, Par. [0009], a processor to carry out a method, Par. [0173] video processing apparatus 1100 which may include one or more processors 1102): obtain first motion information of a current block (Fig. 3, Par. [0061] motion vector of above (i.e.
first motion information) neighboring sub-block, PN1, Prediction block based on motion vectors of a neighbouring sub-block is denoted as PN, with N indicating an index for the neighbouring above, below, left and right sub-blocks and prediction block based on motion vectors of the current sub-block is denoted as PC), obtain second motion information related to a neighboring block of the current block (Par. [0061] motion vector of left (i.e. second motion information) neighboring sub-block, PN2, Prediction block based on motion vectors of a neighbouring sub-block is denoted as PN, with N indicating an index for the neighbouring above, below, left and right sub-blocks and prediction block based on motion vectors of the current sub-block is denoted as PC), obtain a first prediction block based on the first motion information (Par. [0061], prediction block based on motion vectors above (i.e. first prediction block) neighboring of the current sub-block is denoted as PC), obtain a second prediction block based on the second motion information (Par. [0061], prediction block based on motion vectors left (i.e. second prediction block) neighboring of the current sub-block is denoted as PC), determine whether OBMC (Overlapped Block Motion Compensation) is applied to the current block based on a predetermined condition (Par. [0063] for a CU with size less than or equal to (i.e. predetermined condition) 256 luma samples, a CU level flag is signaled to indicate whether OBMC is applied or not for the current CU), when the OBMC is applied to the current block (Par. [0060] When OBMC applies to the current sub-block, besides current motion vectors, motion vectors of four connected neighbouring sub-blocks, if available and are not identical to the current motion vector, are also used to derive prediction block for the current sub-block), obtain a prediction block for the current block based on the first prediction block and the second prediction block (Par. 
[0063], The prediction signal formed by OBMC using motion information of the top neighbouring block (i.e. first prediction block) and the left neighbouring block (i.e. second prediction block) is used to compensate the top and left boundaries of the original signal of the current CU, and then the normal motion estimation process is applied), and when the OBMC is not applied to the current block (Par. [0061], When PN is based on the motion information of a neighbouring sub-block that contains the same motion information to the current sub-block, the OBMC is not performed from PN. Par. [0063] for a CU with size less than or equal to 256 luma samples, a CU level flag is signaled to indicate whether OBMC is applied or not for the current CU).

LIU does not specifically teach the next step when the OBMC is not applied to the current block. Therefore, LIU fails to explicitly teach when the OBMC is not applied to the current block, obtain the prediction block for the current block based on the first prediction block. However, Zhang teaches when the OBMC is not applied to the current block (Fig. 11, Par. [0115], OBMC based on one or more neighboring blocks of a current block may be skipped for the current block or for one or more video components (e.g., the chroma components) of the current block. A decision to skip OBMC may be based on similarities between the motion vector associated with a neighbor block and the motion vector associated with the current block), obtain the prediction block for the current block based on the first prediction block (Par. [0003], a first motion vector associated with the current video block may be determined that refers to a specific reference picture. A second motion vector associated with a neighboring video block may be determined to also refer to the reference picture. Further, the current video block (i.e.
based on first prediction block) and the neighboring video block may both be predicted using a same directional prediction mode (e.g., a unidirectional mode or a bidirectional mode), and the difference between the first and second motion vectors (e.g., based on a sum of absolute difference (SAD) between the first motion vector and the second motion vector) may be determined to be not substantial (e.g., less than a threshold value). Under these conditions, OBMC based on the neighboring video block may be omitted for the current video block (i.e. obtain prediction block)).

References LIU and Zhang are considered to be analogous art because they relate to overlap block motion compensation in video coding. Therefore, it would have been obvious that one of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize the advantage of further specifying obtain the prediction block when the OBMC is not applied as suggested by Zhang in the invention of LIU in order that OBMC based on a neighboring block may be skipped (See Zhang, Par. [0117]).

Regarding Claim 22, LIU in view of Zhang teaches Claim 21. LIU further teaches wherein the predetermined condition is a condition based on similarity between the first prediction block and the second prediction block (Par. [0060] When OBMC applies to the current sub-block, besides current motion vectors, motion vectors of four connected neighbouring sub-blocks, if available and are not identical (i.e. based on similarity) to the current motion vector, are also used to derive prediction block for the current sub-block).

Regarding Claim 23, LIU in view of Zhang teaches Claim 22. LIU further teaches wherein the predetermined condition is a condition (Par.
[0061] When PN is based on the motion information of a neighbouring sub-block that contains the same motion information to the current sub-block, the OBMC is not performed from PN) based on a result of comparing a value related to the similarity with a predetermined value (Par. [0105] If multiple reference blocks are found to match the current block with the same hash key, the block vector costs (i.e. a value) of each candidates are calculated (i.e. compared) and the one with minimum cost (i.e. predetermined value) is selected. In block matching search, the search range is set to be 64 pixels to the left and on top of current block).

Regarding Claim 24, LIU in view of Zhang teaches Claim 22. LIU further teaches wherein the similarity is determined based on a pixel value of the first prediction block and a pixel value of the second prediction block (Par. [0248], Overlapped Block Motion Compensation (OBMC) for the current block based on the motion information of the at least one neighboring block, wherein the OBMC tool includes using an intermediate prediction values (i.e. pixel value) of one sub-block of the current block and a prediction values (i.e. pixel value) of at least one neighboring sub-block to generating a final prediction values of the sub-block).

Regarding Claim 25, LIU in view of Zhang teaches Claim 21. LIU further teaches wherein the prediction block for the current block is obtained by a weighted average of the first prediction block and the second prediction block (Par. [0119] - [0120], each of the selected motion information may be firstly scaled to the same reference picture (e.g., for each prediction direction) of the current video unit, then the scaled MV (denoted as neigScaleMvLX) and MV of the current video unit (denoted as currMvLX) may be jointly used to derive final MVs (e.g., using weighted averaged) for MC of the video unit.
When multiple sets of motion information are selected, neigScaleMvLX may derived from multiple scaled motion vectors, e.g., using weighted average or average of all scaled motion vectors).

Regarding Claim 28, LIU in view of Zhang teaches Claim 21. LIU further teaches wherein the coding block is one of sub-blocks of a coding block (Fig. 3, Par. [0015], sub-blocks where overlapped block motion compensation (OBMC) applies. Par. [0060] When OBMC applies to the current sub-block, besides current motion vectors, motion vectors of four connected neighbouring sub-blocks, if available and are not identical to the current motion vector, are also used to derive prediction block for the current sub-block. These multiple prediction blocks based on multiple motion vectors are combined to generate the final prediction signal of the current sub-block).

Regarding Claim 29, LIU teaches a video signal encoding device (Par. [0004], video decoder or encoder embodiments for in which overlapped block motion compensation with derived motion from neighbors) comprising a processor (Fig. 11, Par. [0173] video processing apparatus 1100, which may include one or more processors 1102), wherein the processor is configured to (Par. [0173] video processing apparatus 1100 which may include one or more processors 1102) acquire a bitstream decoded by a decoding method (Par. [0035], a decoder of video to improves the quality of decompressed or decoded digital video. A video encoder for encoding in order to reconstruct decoded frames used for further encoding), the decoding method comprising: obtain first motion information of a current block (Fig. 3, Par. [0061] motion vector of above (i.e.
first motion information) neighboring sub-block, PN1, Prediction block based on motion vectors of a neighbouring sub-block is denoted as PN, with N indicating an index for the neighbouring above, below, left and right sub-blocks and prediction block based on motion vectors of the current sub-block is denoted as PC), obtain second motion information related to a neighboring block of the current block (Par. [0061] motion vector of left (i.e. second motion information) neighboring sub-block, PN2, Prediction block based on motion vectors of a neighbouring sub-block is denoted as PN, with N indicating an index for the neighbouring above, below, left and right sub-blocks and prediction block based on motion vectors of the current sub-block is denoted as PC), obtain a first prediction block based on the first motion information (Par. [0061], prediction block based on motion vectors above (i.e. first prediction block) neighboring of the current sub-block is denoted as PC), obtain a second prediction block based on the second motion information (Par. [0061], prediction block based on motion vectors left (i.e. second prediction block) neighboring of the current sub-block is denoted as PC), determine whether OBMC (Overlapped Block Motion Compensation) is applied to the current block based on a predetermined condition (Par. [0063] for a CU with size less than or equal to (i.e. predetermined condition) 256 luma samples, a CU level flag is signaled to indicate whether OBMC is applied or not for the current CU), when the OBMC is applied to the current block (Par. [0060] When OBMC applies to the current sub-block, besides current motion vectors, motion vectors of four connected neighbouring sub-blocks, if available and are not identical to the current motion vector, are also used to derive prediction block for the current sub-block), obtain a prediction block for the current block based on the first prediction block and the second prediction block (Par.
[0063], The prediction signal formed by OBMC using motion information of the top neighbouring block (i.e. first prediction block) and the left neighbouring block (i.e. second prediction block) is used to compensate the top and left boundaries of the original signal of the current CU, and then the normal motion estimation process is applied), and when the OBMC is not applied to the current block (Par. [0061], When PN is based on the motion information of a neighbouring sub-block that contains the same motion information to the current sub-block, the OBMC is not performed from PN. Par. [0063] for a CU with size less than or equal to 256 luma samples, a CU level flag is signaled to indicate whether OBMC is applied or not for the current CU).

LIU does not specifically teach the next step when the OBMC is not applied to the current block. Therefore, LIU fails to explicitly teach when the OBMC is not applied to the current block, obtain the prediction block for the current block based on the first prediction block. However, Zhang teaches when the OBMC is not applied to the current block (Fig. 11, Par. [0115], OBMC based on one or more neighboring blocks of a current block may be skipped for the current block or for one or more video components (e.g., the chroma components) of the current block. A decision to skip OBMC may be based on similarities between the motion vector associated with a neighbor block and the motion vector associated with the current block), obtain the prediction block for the current block based on the first prediction block (Par. [0003], a first motion vector associated with the current video block may be determined that refers to a specific reference picture. A second motion vector associated with a neighboring video block may be determined to also refer to the reference picture. Further, the current video block (i.e.
based on first prediction block) and the neighboring video block may both be predicted using a same directional prediction mode (e.g., a unidirectional mode or a bidirectional mode), and the difference between the first and second motion vectors (e.g., based on a sum of absolute difference (SAD) between the first motion vector and the second motion vector) may be determined to be not substantial (e.g., less than a threshold value). Under these conditions, OBMC based on the neighboring video block may be omitted for the current video block (i.e. obtain prediction block)).

References LIU and Zhang are considered to be analogous art because they relate to overlap block motion compensation in video coding. Therefore, it would have been obvious that one of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize the advantage of further specifying obtain the prediction block when the OBMC is not applied as suggested by Zhang in the invention of LIU in order that OBMC based on a neighboring block may be skipped (See Zhang, Par. [0117]).

Regarding Claim 30, LIU in view of Zhang teaches Claim 29. LIU further teaches wherein the predetermined condition is a condition based on similarity between the first prediction block and the second prediction block (Par. [0060] When OBMC applies to the current sub-block, besides current motion vectors, motion vectors of four connected neighbouring sub-blocks, if available and are not identical (i.e. based on similarity) to the current motion vector, are also used to derive prediction block for the current sub-block).

Regarding Claim 31, LIU in view of Zhang teaches Claim 30. LIU further teaches wherein the predetermined condition is a condition (Par.
[0061] When PN is based on the motion information of a neighbouring sub-block that contains the same motion information to the current sub-block, the OBMC is not performed from PN) based on a result of comparing a value related to the similarity with a predetermined value (Par. [0105] If multiple reference blocks are found to match the current block with the same hash key, the block vector costs (i.e. a value) of each candidates are calculated (i.e. compared) and the one with minimum cost (i.e. predetermined value) is selected. In block matching search, the search range is set to be 64 pixels to the left and on top of current block).

Regarding Claim 32, LIU in view of Zhang teaches Claim 30. LIU further teaches wherein the similarity is determined based on a pixel value of the first prediction block and a pixel value of the second prediction block (Par. [0248], Overlapped Block Motion Compensation (OBMC) for the current block based on the motion information of the at least one neighboring block, wherein the OBMC tool includes using an intermediate prediction values (i.e. pixel value) of one sub-block of the current block and a prediction values (i.e. pixel value) of at least one neighboring sub-block to generating a final prediction values of the sub-block).

Regarding Claim 33, LIU in view of Zhang teaches Claim 29. LIU further teaches wherein the prediction block for the current block is obtained by a weighted average of the first prediction block and the second prediction block (Par. [0119] - [0120], each of the selected motion information may be firstly scaled to the same reference picture (e.g., for each prediction direction) of the current video unit, then the scaled MV (denoted as neigScaleMvLX) and MV of the current video unit (denoted as currMvLX) may be jointly used to derive final MVs (e.g., using weighted averaged) for MC of the video unit.
When multiple sets of motion information are selected, neigScaleMvLX may derived from multiple scaled motion vectors, e.g., using weighted average or average of all scaled motion vectors).

Regarding Claim 36, LIU in view of Zhang teaches Claim 29. LIU further teaches wherein the coding block is one of sub-blocks of a coding block (Fig. 3, Par. [0015], sub-blocks where overlapped block motion compensation (OBMC) applies. Par. [0060] When OBMC applies to the current sub-block, besides current motion vectors, motion vectors of four connected neighbouring sub-blocks, if available and are not identical to the current motion vector, are also used to derive prediction block for the current sub-block. These multiple prediction blocks based on multiple motion vectors are combined to generate the final prediction signal of the current sub-block).

Claims 26 and 34 are rejected under 35 U.S.C. 103 as being unpatentable over LIU (US 2021/0250587 A1), in view of Zhang (US 2020/0288168 A1), and in further view of LIM et al. (US 2022/0312005 A1), referred to as LIM hereinafter.

Regarding Claim 26, LIU in view of Zhang teaches Claim 21. LIU further teaches where when the OBMC is applied to the current block, a deblock filtering (Par. [0111] It is proposed that whether to and how to apply deblocking filter may depend on whether dependent scalar quantization is used or not). LIU does not specifically teach deblocking filtering is not performed. Therefore, LIU in view of Zhang fails to explicitly teach where when the OBMC is applied to the current block, a deblock filtering is not performed. However, LIM teaches where when the OBMC is applied to the current block (Fig. 23, Par. [0935] Condition 3) Block A does not perform OBMC that uses the motion information of block B), a deblock filtering is not performed (Fig. 23, Par.
[0932], At step 2360, filtering strength may be determined such that, when at least one of the following condition 1, condition 2, and condition 3 is satisfied, low filtering strength is used for filtering (e.g., BS=1). Par. [0938] filtering strength BS may be determined to have a value of 0 (i.e. not used) when OBMC is used (i.e. is applied). When OBMC is not used, filtering strength BS may be determined to have a value of 1).

References LIU, Zhang and LIM are considered to be analogous art because they relate to overlap block motion compensation in video coding. Therefore, it would have been obvious that one of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize the advantage of further specifying not performing deblock filtering when the OBMC is applied as suggested by LIM in the inventions of LIU and Zhang in order to lower filtering strength and thus improve image quality (See LIM, Par. [0937]).

Regarding Claim 34, LIU in view of Zhang teaches Claim 29. LIU further teaches where when the OBMC is applied to the current block, a deblock filtering (Par. [0111] It is proposed that whether to and how to apply deblocking filter may depend on whether dependent scalar quantization is used or not). LIU does not specifically teach deblocking filtering is not performed. Therefore, LIU in view of Zhang fails to explicitly teach where when the OBMC is applied to the current block, a deblock filtering is not performed. However, LIM teaches where when the OBMC is applied to the current block (Fig. 23, Par. [0935] Condition 3) Block A does not perform OBMC that uses the motion information of block B), a deblock filtering is not performed (Fig. 23, Par. [0932], At step 2360, filtering strength may be determined such that, when at least one of the following condition 1, condition 2, and condition 3 is satisfied, low filtering strength is used for filtering (e.g., BS=1). Par.
[0938] filtering strength BS may be determined to have a value of 0 (i.e. not used) when OBMC is used (i.e. is applied). When OBMC is not used, filtering strength BS may be determined to have a value of 1). References LIU, Zhang and LIM are considered to be analogous art because they relate to overlap block motion compensation in video coding. Therefore, it would have been obvious that one of ordinary skill in the art, before the effective filing date of the claimed invention, would recognize the advantage of further specifying not performing deblock filtering when the OBMC is applied as suggested by LIM in the inventions of LIU and Zhang in order to lower filtering strength and thus improve image quality (See LIM, Par. [0937]). Claims 27 and 35 are rejected under 35 U.S.C. 103 as being unpatentable over LIU (US 2021/0250587 A1), in view of Zhang (US 2020/0288168 A1), and in further view of LIN et al. (US 2020/0021845 A1) referred to as LIN hereinafter. Regarding Claim 27, LIU in view of Zhang teaches Claim 21, LIU in view of Zhang does not specifically teach MHP(Multi-hypothesis prediction) mode. Therefore, LIU in view of Zhang fails to explicitly teaches wherein when a MHP(Multi-hypothesis prediction) mode is not applied to the current block, wherein the OBMC is not applied to the current block regardless of the predetermined condition. However, LIN teaches wherein when a MHP (Multi-hypothesis prediction) mode is not applied to the current block, wherein the OBMC is not applied to the current block regardless of the predetermined condition (Par. [0141], For example, if the current inter-mode is multi-hypothesis mode, then the OBMC is turned off implicitly. See 35 U.S.C. 112(b) rejection). References LIU, Zhang and LIN are considered to be analogous art because they relate to overlap block motion compensation in video coding. 
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to recognize the advantage of not applying the OBMC when the MHP mode is not applied, as suggested by LIN, in the inventions of LIU and Zhang, in order to exclude OBMC for some inter prediction modes (see LIN, Par. [0141]).

Regarding Claim 35, LIU in view of Zhang teaches Claim 29. LIU in view of Zhang does not specifically teach an MHP (multi-hypothesis prediction) mode, and therefore fails to explicitly teach wherein, when an MHP mode is not applied to the current block, the OBMC is not applied to the current block regardless of the predetermined condition. However, LIN teaches wherein, when an MHP mode is not applied to the current block, the OBMC is not applied to the current block regardless of the predetermined condition (Par. [0141], for example, if the current inter mode is the multi-hypothesis mode, then the OBMC is turned off implicitly; see the 35 U.S.C. 112(b) rejection). References LIU, Zhang and LIN are considered to be analogous art because they relate to overlapped block motion compensation in video coding. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to recognize the advantage of not applying the OBMC when the MHP mode is not applied, as suggested by LIN, in the inventions of LIU and Zhang, in order to exclude OBMC for some inter prediction modes (see LIN, Par. [0141]).

Conclusion

Any inquiry concerning this communication should be directed to SUSAN E HODGES, whose telephone number is (571) 270-0498. The Examiner can normally be reached Monday - Friday from 8:00 am (EST) to 4:00 pm (EST). If attempts to reach the Examiner by telephone are unsuccessful, the Examiner's supervisor, Brian T. Pendleton, can be reached on (571) . The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/Susan E. Hodges/
Primary Examiner, Art Unit 2425
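The two prior-art mechanisms the rejections turn on are (1) LIU's OBMC blending, where a sub-block's prediction is combined with predictions derived from neighbouring sub-blocks' motion vectors, and (2) LIM's deblocking rule, where boundary strength BS is set to 0 (filtering off) when OBMC is applied and 1 otherwise. The sketch below illustrates both; it is not code from any cited reference, and the function names, the equal-weight blend, and the `obmc_applied` flag are illustrative assumptions only.

```python
def blend_obmc(current_pred, neighbour_preds):
    """Combine a sub-block's prediction with predictions obtained from
    neighbouring sub-blocks' motion vectors (cf. LIU, Par. [0060]).
    Equal weighting is an illustrative assumption; a real codec would
    use position-dependent weights near the sub-block boundary."""
    preds = [current_pred] + list(neighbour_preds)
    n = len(preds)
    # Average sample-by-sample across all prediction blocks.
    return [sum(samples) / n for samples in zip(*preds)]

def boundary_strength(obmc_applied):
    """Deblocking boundary strength per the rule LIM describes
    (Par. [0938]): BS = 0 (deblocking not used) when OBMC is applied,
    since OBMC already smooths the boundary; BS = 1 (weak) otherwise."""
    return 0 if obmc_applied else 1
```

For example, blending a flat prediction of 100 with neighbour-MV predictions of 110 and 90 leaves the samples at 100, showing why an additional deblocking pass adds little once OBMC has run.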

Prosecution Timeline

Oct 17, 2024
Application Filed
Dec 23, 2025
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603982
STEREOSCOPIC HIGH DYNAMIC RANGE VIDEO
2y 5m to grant Granted Apr 14, 2026
Patent 12604008
ADAPTIVE CLIPPING IN MODELS PARAMETERS DERIVATIONS METHODS FOR VIDEO COMPRESSION
2y 5m to grant Granted Apr 14, 2026
Patent 12574558
Method and Apparatus for Sign Coding of Transform Coefficients in Video Coding System
2y 5m to grant Granted Mar 10, 2026
Patent 12568212
ADAPTIVE LOOP FILTERING ON OUTPUT(S) FROM OFFLINE FIXED FILTERING
2y 5m to grant Granted Mar 03, 2026
Patent 12556671
THREE DIMENSIONAL STROBO-STEREOSCOPIC IMAGING SYSTEMS AND ASSOCIATED METHODS
2y 5m to grant Granted Feb 17, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
67%
Grant Probability
81%
With Interview (+14.4%)
2y 4m
Median Time to Grant
Low
PTA Risk
Based on 375 resolved cases by this examiner. Grant probability derived from career allow rate.
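The headline projection figures are consistent with a simple derivation from the career totals shown above. The tool's exact formula is not disclosed, so the following is a back-of-envelope reconstruction (the rounding step and the additive interview lift are assumptions), sketched in Python:

```python
# Figures reported above for this examiner's career record.
granted, resolved = 250, 375

# Career allow rate: 250 / 375 = 66.7%, displayed as 67%.
allow_rate_pct = round(granted / resolved * 100)

# Adding the reported +14.4-point interview lift gives 81.4%,
# displayed as the 81% "With Interview" figure.
with_interview_pct = round(allow_rate_pct + 14.4)
```

That this reconstruction reproduces the displayed 67% and 81% supports the footnote's statement that grant probability is derived directly from the career allow rate.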
