Prosecution Insights
Last updated: April 19, 2026
Application No. 18/818,831

IMAGE ENCODING, DECODING METHOD AND DEVICE, CODER-DECODER

Non-Final OA: §101, §103, §DP
Filed
Aug 29, 2024
Examiner
ZHOU, ZHIHAN
Art Unit
2482
Tech Center
2400 — Computer Networks
Assignee
BOE TECHNOLOGY GROUP CO., LTD.
OA Round
1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 3m
With Interview: 81%

Examiner Intelligence

Grants 79% of cases — above average.

Career Allow Rate: 79% (784 granted / 987 resolved; +21.4% vs TC avg)
Interview Lift: +1.3% across resolved cases with interview (minimal, ~+1% lift)
Typical Timeline: 2y 3m avg prosecution; 28 applications currently pending
Career History: 1,015 total applications across all art units
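As a rough check, the headline figures above follow from the raw counts. This is a hypothetical back-of-envelope sketch of the dashboard's math; the Tech Center average is inferred from the stated +21.4% delta, not reported directly.

```python
# Assumed derivation of the examiner stats shown above (illustrative only).
granted, resolved = 784, 987
allow_rate = granted / resolved        # ~0.794, displayed as 79%
tc_avg = allow_rate - 0.214            # implied Tech Center average, ~58%
print(f"allow rate {allow_rate:.1%}, implied TC average {tc_avg:.1%}")
```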

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 54.8% (+14.8% vs TC avg)
§102: 18.5% (-21.5% vs TC avg)
§112: 2.0% (-38.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 987 resolved cases.

Office Action

§101 §103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office action is in response to a continuation application in which claims 1-17 of the instant application are pending and ready for examination.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground, provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
Claims 1-6 and 8-16 are rejected on the ground of nonstatutory double patenting over claims 1-14 of U.S. Patent No. 12,177,489. The subject matter claimed in the instant application is fully disclosed in the patent and is covered by the patent, since the patent and the application claim common subject matter, as follows: although the conflicting claims are not identical, they are not patentably distinct from each other because claims 1-14 of U.S. Patent No. 12,177,489, either singularly or in combination, contain each and every element of claims 1-6 and 8-16 of the instant application and/or render each such element obvious. The claims of the instant application therefore are not patentably distinct from the issued patent claims and, as such, are unpatentable under obviousness-type double patenting. More specifically, claims 7 and 14 of U.S. Patent No. 12,177,489 disclose all the elements and steps of independent claims 1, 8, and 11 of the instant application and, as such, anticipate each and every feature of independent claims 1, 8, and 11 of the instant application.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

A rejection based on double patenting of the “same invention” type finds its support in the language of 35 U.S.C. 101, which states that “whoever invents or discovers any new and useful process... may obtain a patent therefor...” (emphasis added). Thus, the term “same invention,” in this context, means an invention drawn to identical subject matter. See Miller v. Eagle Mfg. Co., 151 U.S. 186 (1894); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Ockert, 245 F.2d 467, 114 USPQ 330 (CCPA 1957). A statutory type (35 U.S.C. 101) double patenting rejection can be overcome by canceling or amending the claims that are directed to the same invention so they are no longer coextensive in scope. The filing of a terminal disclaimer cannot overcome a double patenting rejection based upon 35 U.S.C. 101.

Claims 7 and 17 are rejected under 35 U.S.C. 101 as claiming the same invention as that of claims 7 and 14 of U.S. Patent No. 12,177,489, respectively. This is a statutory double patenting rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 8, and 10-15 are rejected under 35 U.S.C. 103 as being unpatentable over Rijnders (US 2018/0240221) in view of Rosman (US 2022/0153278) and further in view of Himawan (“Adaptive Bilateral Filtering Using Saliency Map for Deblocking Low Bit Rate Videos”).
As to claim 11, Rijnders teaches an image encoding device, comprising: a processor; and a memory, having a computer program stored thereon that, when executed by the processor, causes the processor to: filter an image of a current frame to obtain a target image ([0107]-[0108], [0134], and [0258]-[0263]); acquire, by using the target image and an input image of a next frame, a motion estimation vector and a target prediction image of the input image of the next frame ([0030], [0110]-[0113], [0115], [0118]-[0119], [0133], and [0136]-[0141]); and encode a difference image between the input image of the next frame and the target prediction image and the motion estimation vector ([0030], [0110]-[0113], [0115], [0118]-[0119], [0133], and [0136]-[0141]).

Rijnders does not teach acquiring a visual saliency heat map of an image of a current frame, and filtering, by using the visual saliency heat map of the image of the current frame, the image of the current frame to obtain a target image, wherein filtering, by using the visual saliency heat map of the image of the current frame, the image of the current frame to obtain the target image comprises: determining a saliency score of each area in the visual saliency heat map of the image of the current frame; determining a filtering mechanism of each area of the image of the current frame according to the saliency score; and filtering the image of the current frame according to the filtering mechanism of each area to obtain the target image. However, Rijnders does teach obtaining data for object detection or visual saliency to use for video compression, including salience-based video compression that uses saliency maps (abstract, [0009]-[0010], [0058]-[0060], [0073]-[0074], [0098]-[0106], and [0225]-[0228]). In addition, Rosman teaches a visual saliency heat map involving convolutional encoder-decoder neural networks that evaluate scene information from image/video for gaze and awareness estimation. The visual saliency heat map of an environment is generated through the implementation of a machine learning model (FIG. 3 and [0040], [0046], [0074], [0079]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Rijnders’s system with Rosman’s system to show acquiring a visual saliency heat map of an image of a current frame, and filtering, by using the visual saliency heat map of the image of the current frame, the image of the current frame to obtain a target image, in order to refine noisy and/or biased gaze sequences by leveraging the visual saliency of a scene a person is viewing. The external imagery data is processed with a neural network configured to identify visually salient regions in the environment in combination with the gaze sequences (Rosman; [0027] and [0032]).

The combination of Rijnders and Rosman does not teach wherein filtering, by using the visual saliency heat map of the image of the current frame, the image of the current frame to obtain the target image comprises: determining a saliency score of each area in the visual saliency heat map of the image of the current frame; determining a filtering mechanism of each area of the image of the current frame according to the saliency score; and filtering the image of the current frame according to the filtering mechanism of each area to obtain the target image. However, Rijnders does teach obtaining data for object detection or visual saliency to use for video compression, including salience-based video compression that uses saliency maps (abstract, [0009]-[0010], [0058]-[0060], [0073]-[0074], [0098]-[0106], and [0225]-[0228]). In addition, Rosman teaches a visual saliency heat map involving convolutional encoder-decoder neural networks that evaluate scene information from image/video for gaze and awareness estimation. The visual saliency heat map of an environment is generated through the implementation of a machine learning model (FIG. 3 and [0040], [0046], [0074], [0079]).
The visual saliency heat map of an environment is generated through the implementation of a machine learning model (FIG. 3 and [0040], [0046], [0074], [0079]). In that regard, Himawan teaches determining a saliency score of each area in a visual saliency map of an image; determining a filtering mechanism of each area of the image according to the saliency score; and filtering the image according to the filtering mechanism of each area to obtain a filtered image (see abstract – “using a saliency map to control the strength of the filter for each individual point in the image based on its perceptual importance”; see Section 2, Page 2, left-hand column and right-hand column – “less smoothing is applied to the salient regions which typically constitute the region of interest; the proposed technique considers salient regions to give the measure of perceptual importance”; see Section 3.1, Page 3, left-hand column to Page 4, left-hand column for controlling filter mechanisms such as independently tuning the shape of the filter for each individual point in the image based on its perceptual importance using a saliency map comprising saliency values; see Section 6, Page 5, right-hand column). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Rijnders’s system and Rosman’s system with Himawan’s system to show wherein filtering, by using the visual saliency heat map of the image of the current frame, the image of the current frame to obtain the target image comprises: determining a saliency score of each area in the visual saliency heat map of the image of the current frame; determining a filtering mechanism of each area of the image of the current frame according to the saliency score; and filtering the image of the current frame according to the filtering mechanism of each area to obtain the target image. 
Himawan’s disclosure presents a novel approach to video deblocking using adaptive bilateral filtering based on a saliency detection model. The model considers color, intensity, and temporal changes between frames to give a measure of the perceptual significance of an image region. By adapting the parameters of a bilateral filter based on a saliency map, each pixel in the image can be adaptively tuned for improved perceptual quality. Results show that the proposed algorithm improves the objective quality of highly compressed video sequences (i.e., H.264/AVC format with in-loop filtering disabled). Moreover, over-blurring of edges and textures in salient regions of the image is avoided (Himawan; see Section 6, Page 5, right-hand column).

As to claim 1, the aforementioned claim is rejected similarly to claim 11.

As to claim 8, Rijnders teaches a computer-implemented image decoding method, comprising: acquiring a reference prediction image of a current frame; obtaining a decoded difference image by decoding encoded data of a difference image between an input image of the current frame and the reference prediction image; obtaining a to-be-processed image according to the decoded difference image and the reference prediction image ([0030], [0110]-[0113], [0115], [0118]-[0119], [0133], and [0136]-[0141]); and filtering the to-be-processed image to obtain an output image of the current frame ([0107]-[0108], [0134], and [0258]-[0263]).
Rijnders does not teach acquiring a visual saliency heat map of the to-be-processed image of the current frame, and filtering, by using the visual saliency heat map of the to-be-processed image of the current frame, the to-be-processed image to obtain an output image of the current frame, wherein filtering, by using the visual saliency heat map of the to-be-processed image of the current frame, the to-be-processed image to obtain the output image of the current frame comprises: determining a saliency score of each area in the visual saliency heat map of the to-be-processed image of the current frame; determining a filtering mechanism of each area of the to-be-processed image of the current frame according to the saliency score; and filtering the to-be-processed image of the current frame according to the filtering mechanism of each area to obtain the output image. However, Rijnders does teach obtaining data for object detection or visual saliency to use for video compression, including salience-based video compression that uses saliency maps (abstract, [0009]-[0010], [0058]-[0060], [0073]-[0074], [0098]-[0106], and [0225]-[0228]). In addition, Rosman teaches a visual saliency heat map involving convolutional encoder-decoder neural networks that evaluate scene information from image/video for gaze and awareness estimation. The visual saliency heat map of an environment is generated through the implementation of a machine learning model (FIG. 3 and [0040], [0046], [0074], [0079]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Rijnders’s system with Rosman’s system to show acquiring a visual saliency heat map of the to-be-processed image of the current frame, and filtering, by using the visual saliency heat map of the to-be-processed image of the current frame, the to-be-processed image to obtain an output image of the current frame, in order to refine noisy and/or biased gaze sequences by leveraging the visual saliency of a scene a person is viewing. The external imagery data is processed with a neural network configured to identify visually salient regions in the environment in combination with the gaze sequences (Rosman; [0027] and [0032]).

The combination of Rijnders and Rosman does not teach wherein filtering, by using the visual saliency heat map of the to-be-processed image of the current frame, the to-be-processed image to obtain the output image of the current frame comprises: determining a saliency score of each area in the visual saliency heat map of the to-be-processed image of the current frame; determining a filtering mechanism of each area of the to-be-processed image of the current frame according to the saliency score; and filtering the to-be-processed image of the current frame according to the filtering mechanism of each area to obtain the output image. However, Rijnders does teach obtaining data for object detection or visual saliency to use for video compression, including salience-based video compression that uses saliency maps (abstract, [0009]-[0010], [0058]-[0060], [0073]-[0074], [0098]-[0106], and [0225]-[0228]). In addition, Rosman teaches a visual saliency heat map involving convolutional encoder-decoder neural networks that evaluate scene information from image/video for gaze and awareness estimation. The visual saliency heat map of an environment is generated through the implementation of a machine learning model (FIG. 3 and [0040], [0046], [0074], [0079]). In that regard, Himawan teaches determining a saliency score of each area in a visual saliency map of an image; determining a filtering mechanism of each area of the image according to the saliency score; and filtering the image according to the filtering mechanism of each area to obtain a filtered image (see abstract – “using a saliency map to control the strength of the filter for each individual point in the image based on its perceptual importance”; see Section 2, Page 2, left-hand column and right-hand column – “less smoothing is applied to the salient regions which typically constitute the region of interest; the proposed technique considers salient regions to give the measure of perceptual importance”; see Section 3.1, Page 3, left-hand column to Page 4, left-hand column for controlling filter mechanisms such as independently tuning the shape of the filter for each individual point in the image based on its perceptual importance using a saliency map comprising saliency values; see Section 6, Page 5, right-hand column).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Rijnders’s system and Rosman’s system with Himawan’s system to show wherein filtering, by using the visual saliency heat map of the to-be-processed image of the current frame, the to-be-processed image to obtain the output image of the current frame comprises: determining a saliency score of each area in the visual saliency heat map of the to-be-processed image of the current frame; determining a filtering mechanism of each area of the to-be-processed image of the current frame according to the saliency score; and filtering the to-be-processed image of the current frame according to the filtering mechanism of each area to obtain the output image.
Himawan’s disclosure presents a novel approach to video deblocking using adaptive bilateral filtering based on a saliency detection model. The model considers color, intensity, and temporal changes between frames to give a measure of the perceptual significance of an image region. By adapting the parameters of a bilateral filter based on a saliency map, each pixel in the image can be adaptively tuned for improved perceptual quality. Results show that the proposed algorithm improves the objective quality of highly compressed video sequences (i.e., H.264/AVC format with in-loop filtering disabled). Moreover, over-blurring of edges and textures in salient regions of the image is avoided (Himawan; see Section 6, Page 5, right-hand column).

As to claims 2 and 12, Rijnders further teaches wherein the image of the current frame is an input image of the current frame ([0030], [0110]-[0113], [0115], [0118]-[0119], [0133], and [0136]-[0141]).

As to claims 3 and 13, Rijnders further teaches wherein the image of the current frame is a to-be-processed image of an input image of the current frame ([0030], [0110]-[0113], [0115], [0118]-[0119], [0133], and [0136]-[0141]).

As to claims 4 and 14, Rijnders further teaches wherein acquiring the to-be-processed image of the current frame comprises: acquiring the input image of the current frame and a reference prediction image of the current frame; decoding encoded data of a difference image between the input image and the reference prediction image of the current frame to obtain a decoded difference image; and obtaining the to-be-processed image according to the decoded difference image and the reference prediction image ([0030], [0110]-[0113], [0115], [0118]-[0119], [0133], and [0136]-[0141]).
As to claims 5 and 15, Rijnders further teaches wherein acquiring the reference prediction image of the current frame comprises: performing, by using a to-be-processed image of a previous frame and the input image of the current frame, motion estimation to obtain the reference prediction image of the current frame ([0030], [0110]-[0113], [0115], [0118]-[0119], [0133], and [0136]-[0141]).

As to claim 10, Rijnders further teaches wherein acquiring the reference prediction image of the current frame comprises: acquiring the reference prediction image by using an output image of a previous frame and the motion estimation vector of the output image of the previous frame and the input image of the current frame ([0030], [0110]-[0113], [0115], [0118]-[0119], [0133], and [0136]-[0141]).

Allowable Subject Matter

Claims 6, 9, and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if the above nonstatutory double patenting rejections are overcome. Claims 7 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if the above § 101 statutory double patenting rejections are overcome.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHIHAN ZHOU, whose telephone number is (571) 270-7284. The examiner can normally be reached Mondays-Fridays, 8:30am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Kelley, can be reached at 571-272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZHIHAN ZHOU/
Primary Examiner, Art Unit 2482
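The per-area saliency-adaptive filtering at the center of the §103 combination (score each area of a saliency heat map, choose a filtering mechanism per area from that score, then filter area by area) can be sketched as follows. This is an illustrative reconstruction, not the applicant's claimed implementation or Himawan's actual filter: the 8x8 block size, the 0.5 saliency threshold, and the mean-blur fallback are all assumptions; Himawan uses an adaptive bilateral filter instead.

```python
import numpy as np

def saliency_adaptive_filter(image, heat_map, block=8, threshold=0.5):
    """Per-area saliency-adaptive filtering (illustrative sketch).

    Mirrors the claim language: (1) determine a saliency score for each
    area of the heat map, (2) determine a filtering mechanism per area
    from that score, (3) filter the frame area by area. Here, low-saliency
    areas get a 3x3 mean blur and salient areas are left sharp.
    """
    out = image.astype(float).copy()
    h, w = image.shape
    # 3x3 mean blur of the whole frame, computed once up front
    padded = np.pad(out, 1, mode="edge")
    blurred = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    for y in range(0, h, block):
        for x in range(0, w, block):
            score = heat_map[y:y + block, x:x + block].mean()  # (1) area score
            if score < threshold:                              # (2) pick mechanism
                # (3) apply the chosen filter to this area only
                out[y:y + block, x:x + block] = blurred[y:y + block, x:x + block]
    return out

# Usage: a striped test frame whose left half is marked salient stays sharp;
# the non-salient right half is smoothed.
img = np.zeros((16, 16)); img[::2] = 1.0
heat = np.zeros((16, 16)); heat[:, :8] = 1.0
result = saliency_adaptive_filter(img, heat)
```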

Prosecution Timeline

Aug 29, 2024
Application Filed
Jan 06, 2026
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602830 — METHOD FOR CALIBRATING CAMERAS OF A MULTICHANNEL MEDICAL VISUALIZATION SYSTEM AND MEDICAL VISUALIZATION SYSTEM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12604043 — Sample Adaptive Offset (SAO) Parameter Signaling
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597167 — SYSTEM AND METHODS FOR DETERMINING CAMERA PARAMETERS AND USES THEREOF
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12593055 — SELECTIVE JUST-IN-TIME TRANSCODING
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12593039 — CROSS-COMPONENT SAMPLE OFFSET (CCSO) WITH ADAPTIVE MULTI-TAP-FILTER CLASSIFIERS
Granted Mar 31, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 81% (+1.3%)
Median Time to Grant: 2y 3m
PTA Risk: Low

Based on 987 resolved cases by this examiner. Grant probability is derived from the career allow rate.
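The projection figures appear to combine as follows. Treating the interview lift as a simple additive 1.3-point bump on the career allow rate is an assumption about how the dashboard arrives at the 81% figure, not a documented formula.

```python
# Assumed composition of the projection numbers (illustrative only).
baseline = 784 / 987                 # career allow rate, ~79%
with_interview = baseline + 0.013    # plus the stated 1.3-point interview lift
print(f"baseline {baseline:.0%}, with interview {with_interview:.0%}")
```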
