Prosecution Insights
Last updated: April 19, 2026
Application No. 18/357,951

VIDEO DECODER GUIDED REGION AWARE FILM GRAIN SYNTHESIS

Non-Final OA: §101, §103, §112
Filed
Jul 24, 2023
Examiner
SULLIVAN, TYLER
Art Unit
2487
Tech Center
2400 — Computer Networks
Assignee
Beijing Yojaja Software Technology Development Co. Ltd.
OA Round
1 (Non-Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 66% (251 granted / 380 resolved; +8.1% vs TC avg; above average)
Interview Lift: +31.6% (strong) in resolved cases with interview
Typical Timeline: 2y 7m average prosecution; 31 applications currently pending
Career History: 411 total applications across all art units
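The headline figures above are mutually consistent; a quick sketch (using only the numbers shown on this page, with illustrative variable names that are not from any real API) confirms the arithmetic:

```python
# Reconstructing the dashboard's headline figures from the raw career
# counts shown above (251 granted of 380 resolved).
granted, resolved = 251, 380

career_allow_rate = granted / resolved           # ~0.6605, displayed as 66%
print(round(career_allow_rate * 100))            # 66

# Adding the reported +31.6% interview lift to the baseline reproduces
# the displayed 98% with-interview figure after rounding.
interview_lift = 0.316
print(round((career_allow_rate + interview_lift) * 100))  # 98
```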

Statute-Specific Performance

§101: 8.5% (-31.5% vs TC avg)
§103: 45.6% (+5.6% vs TC avg)
§102: 2.8% (-37.2% vs TC avg)
§112: 30.3% (-9.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 380 resolved cases.
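Each statute's delta implies the same Tech Center baseline; a small sketch (values read off the list above, dict structure illustrative) recovers it:

```python
# Each statute's rate and its delta vs. the Tech Center average, as
# listed above. Subtracting the delta from the rate recovers the
# implied TC-average baseline for each statute.
statute_stats = {          # statute: (rate %, delta vs TC avg %)
    "§101": (8.5, -31.5),
    "§103": (45.6, +5.6),
    "§102": (2.8, -37.2),
    "§112": (30.3, -9.7),
}
implied_tc_avg = {s: round(rate - delta, 1)
                  for s, (rate, delta) in statute_stats.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% baseline
```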

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55 (Chinese Application CN 2023-10869073.3 filed on July 14th, 2023).

Election/Restrictions

Applicant's election with traverse of Group I in the reply filed on January 21st, 2026 is acknowledged. The traversal is on the ground(s) that all claims relate to “regions” and thus there is no search burden [Page 1 lines 1 – 11]. This is not found persuasive because the Examiner has shown distinct groups with additional considerations claimed beyond merely searching for “regions” as alleged by the Applicant, in which distinct classifications and different search techniques for each group were shown. Further, while the Applicant cites claim 8 and claim 1 to show “region types” [Page 1 lines 6 – 10] are in the claims, the Applicant does not show the features to be obvious variants or likely found within the same cited reference. The Examiner notes different classifications / search strategies would be needed (claim 1 does not require the particulars of claim 8), thus rebutting Applicant's broad assertions that mere “region type” searching is not a burden despite the numerous specifics claimed [Page 1 lines 10 – 11]. Thus, the Examiner maintains the Restriction Requirement. The requirement is still deemed proper and is therefore made FINAL.

Claims 7 – 16 and 19 are withdrawn from further consideration pursuant to 37 CFR 1.142(b), as being drawn to nonelected Inventions (II, III, and IV), there being no allowable generic or linking claim. Applicant timely traversed the restriction (election) requirement in the reply filed on January 21st, 2026. 
Information Disclosure Statement

The information disclosure statements (IDS) submitted on October 29th, 2024; December 9th, 2024; and February 5th, 2025 were filed before the mailing date of the First Action on the Merits (this Office Action). The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the Examiner.

The information disclosure statement filed October 29th, 2024 fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. It has been placed in the application file, but the information referred to therein has not been considered.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference character “112” has been used to designate both “Interface” [Figure 1] and “Decoder System” [Figure 2]. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: “114” [Paragraph 25 in Figure 2]. 
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: “No” and “Yes” [Figure 5]. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The disclosure is objected to because of the following informalities: In Paragraph 77 lines 1 – 2, the acronym “GPU” is not defined for reference character 720, which is used in the Drawings (Figure 7). 
Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3 – 6 and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claims 3 – 6, the claims rely on decoding features or decoding an encoded bitstream, but the independent claim appears to be directed to an encoder in which the bitstream is not generated; the decoding features claimed therefore have not been generated, and the claims have indefinite metes and bounds.

Claim limitation “a computer-readable storage medium comprising instructions for controlling the one or more computer processors to be operable for:” [Claim 20] has been evaluated under the three-prong test set forth in MPEP § 2181, subsection I, but the result is inconclusive. Thus, it is unclear whether this limitation should be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the “storage medium” may encompass transitory embodiments [Specification Paragraphs 75 and 79 at least] and non-statutory subject matter, and thus renders the claim indefinite as to whether the elements are all directed towards statutory subject matter and terms connoting sufficient structure. 
The boundaries of this claim limitation are ambiguous; therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. In response to this rejection, applicant must clarify whether this limitation should be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Mere assertion regarding applicant’s intent to invoke or not invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph is insufficient. Applicant may:

(a) Amend the claim to clearly invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, by reciting “means” or a generic placeholder for means, or by reciting “step.” The “means,” generic placeholder, or “step” must be modified by functional language, and must not be modified by sufficient structure, material, or acts for performing the claimed function;

(b) Present a sufficient showing that 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, should apply because the claim limitation recites a function to be performed and does not recite sufficient structure, material, or acts to perform that function;

(c) Amend the claim to clearly avoid invoking 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, by deleting the function or by reciting sufficient structure, material or acts to perform the recited function; or

(d) Present a sufficient showing that 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, does not apply because the limitation does not recite a function or does recite a function along with sufficient structure, material or acts to perform that function.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 20 is rejected under 35 U.S.C. 
101 because the claimed invention is directed to non-statutory subject matter. The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter. Regarding claim 20, the claim recites a “computer-readable storage medium”; however, the Specification lists “non-transitory” embodiments only as exemplary (e.g. Paragraphs 75 and 79), so the claim may include transitory embodiments, which are non-statutory subject matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 – 6, 17 – 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Demarty, et al. (WO2024/156660 A1, referred to as “Demarty” throughout), and further in view of Sjoeberg, et al. (WO2023/274688 A1, referred to as “Sjoeberg” throughout) [cited in Applicant’s December 9th, 2024 IDS as FOR Item #1] and Olekas, et al. (US PG PUB 2022/0030247 A1, referred to as “Olekas” throughout).

Regarding claim 1, see claim 20, which is the apparatus performing the steps of the claimed method. Regarding claim 17, see claim 20, which is the apparatus performing the steps of the claimed program. Regarding claim 18, see claim 3, which is the method steps performed by the claimed program.

Regarding claim 20, Demarty teaches signaling the use of film grain synthesis in video sequences with synthesis techniques and region considerations. Sjoeberg teaches additional modifications to Demarty to perform region based film grain synthesis processing and additional SEI messaging considerations. Olekas teaches classifying the region types for film grain synthesis parameters / technique to use (e.g. adding film grain or not). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Demarty with the segmentation / region based film grain synthesis processing of Sjoeberg with the region classification / type determinations taught by Olekas for noise level consideration including adding film grain. The combination teaches one or more computer processors [Demarty Figure 1 (see at least reference character 110) as well as Page 7 lines 21 – 32 (processors for encoding / decoding), Page 9 lines 14 – 23 and Page 46 lines 20 – 30 (processor based implementation)]; and a computer-readable storage medium comprising instructions for controlling the one or more computer processors to be operable for [Demarty Figure 1 (see at least reference characters 110, 120, and 140) as well as Page 6 lines 1 – 23 (memory / storage media implementations), Page 9 lines 14 – 23 (software implementations using memory / storage media and processor), and Page 46 lines 20 – 30 (processor based implementation using code on memory / storage media)]: segmenting a frame of a video into a plurality of regions [Demarty Figures 2, 4, 6 (region based processing with information and null space and see at least reference character 202), and 8 – 9 as well as Page 10 lines 1 – 15 (block based partitioning / segmenting), Page 15 lines 1 – 29 (film grain applied block based or segmented based on textures / regions); Sjoeberg Figure 6 (subfigures included) as well as Paragraphs 114 – 118 (partitioning pictures into regions), 130, and 161 (example partitions into regions)]; classifying a region in the plurality of regions with a region type in a plurality of region types [Demarty Page 31 lines 8 – 30 (homogeneous regions or not classifying region type), Page 34 lines 1 – 30 (adapting film grain based on content) and Page 50 line 28 – Page 51 line 18 (analyzing the whole image in a region based approach) to combine with Olekas Figures 6 and 9 as well as 
Paragraphs 44 and 54 (content analysis (e.g. reference character 602) for classifications of region / type of the region such as on complexity)]; generating film grain synthesis information for the region in the plurality of regions based on the region type that is associated with the region [Demarty Figures 8, 10, 12 (subfigures included) as well as Page 14 line 15 – Page 15 line 29 (content aware / region based film grain to add / incorporate into images), Page 25 lines 3 – 29 (film grain added to images based on characteristics / regions / types) and Page 34 line 1 – Page 35 line 17 (adapting film grain based on content with characteristics considered); Sjoeberg Figure 3 (see at least reference characters 341 and 347) as well as Paragraphs 63, 73, and 108 (adding film grain to regions / images); Olekas Figures 6 and 9 as well as Paragraphs 44 – 54 (classifications of region / type of the region such as on complexity which is used to determine film grain synthesis / noise level to the region and the parameters with the classification)]; and outputting the film grain synthesis information for adding film grain to the region of the frame [Demarty Figures 8, 10, 12, 13B (subfigures included) as well as Page 14 line 15 – Page 15 line 29 (content aware / region based film grain to add / incorporate into images), Page 25 lines 3 – 29 (film grain added to images based on characteristics / regions / types), Page 34 line 1 – Page 35 line 17 (adapting film grain based on content with characteristics considered and output image with film grain added), and Page 51 lines 1 – 18 (outputting images with film grain synthesis); Sjoeberg Figures 3 (see at least reference characters 341, 344, and 347) and 8 as well as Paragraphs 63, 73, and 108 (adding film grain to regions / images)]. 
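The claim 20 steps mapped above (segmenting a frame into regions, classifying each region by type, generating per-region film grain synthesis information, and outputting it) can be sketched in a few lines. This is a purely illustrative toy, not the application's or the cited references' actual design; the block size, the variance threshold, and every name below are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Region:
    x: int
    y: int
    size: int
    region_type: str = "unclassified"

def segment_frame(width: int, height: int, block: int = 64) -> list[Region]:
    """One possible segmentation: fixed-size blocks tiling the frame."""
    return [Region(x, y, block)
            for y in range(0, height, block)
            for x in range(0, width, block)]

def classify(variance: float) -> str:
    """Toy classifier: homogeneous vs. textured by a luma-variance threshold."""
    return "homogeneous" if variance < 10.0 else "textured"

def fgs_info(region: Region) -> dict:
    """Per-region film grain synthesis parameters keyed off the region type."""
    strength = 0.0 if region.region_type == "homogeneous" else 1.0
    return {"xy": (region.x, region.y), "size": region.size,
            "grain_strength": strength}

# Segment, classify (with a stand-in variance per region), and output.
regions = segment_frame(1920, 1080)
for r in regions:
    r.region_type = classify(variance=12.3)
synthesis_info = [fgs_info(r) for r in regions]
```

In a real codec the classification would come from content analysis or decoded bitstream features, and the output would be signaled (e.g. in SEI messages) rather than returned as dicts; this sketch only fixes the shape of the four claimed steps.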
The motivation to combine Sjoeberg with Demarty is to combine features in the same / related field of invention of film grain synthesis applied on a region / subpicture basis [Sjoeberg Paragraphs 1 and 6] in order to improve signaling of parameter information and have larger / region based processing of film grain synthesis [Sjoeberg Paragraphs 4, 6, and 62 – 65, where the Examiner observes at least KSR Rationales (D) or (F) are also applicable]. The motivation to combine Olekas with Sjoeberg and Demarty is to combine features in the same / related field of invention of video content analysis for encoding / decoding parameter selection [Olekas Paragraphs 2 – 3 and 44] in order to improve video quality presented to a user and improve encoding / decoding performance processing video above a pixel level [Olekas Paragraphs 3 and 28 – 30, where the Examiner observes at least KSR Rationales (D) or (F) are also applicable]. This is the motivation to combine Demarty, Sjoeberg, and Olekas which will be used throughout the Rejection.

Regarding claim 2, Demarty teaches signaling the use of film grain synthesis in video sequences with synthesis techniques and region considerations. Sjoeberg teaches additional modifications to Demarty to perform region based film grain synthesis processing and additional SEI messaging considerations. Olekas teaches classifying the region types for film grain synthesis parameters / technique to use (e.g. adding film grain or not). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Demarty with the segmentation / region based film grain synthesis processing of Sjoeberg with the region classification / type determinations taught by Olekas for noise level consideration including adding film grain. 
The combination teaches analyzing content in the frame to determine regions in the plurality of regions [Demarty Figure 4 as well as Page 12 lines 14 – 33 (analyzing / masking frames / pictures to determine regions to add film grain synthesis), Page 31 lines 8 – 30 (homogeneous regions or not classifying region type), Page 33 lines 8 – 30 (analysis of input region for smooth / homogeneous / textured regions – to combine with Olekas), Page 34 lines 1 – 30 (adapting film grain based on content), and Page 50 line 28 – Page 51 line 18 (analyzing the whole image in a region based approach) to combine with Olekas Figures 6 and 9 as well as Paragraphs 44 and 54 (content analysis (e.g. reference character 602) for classifications of region / type of the region such as on complexity)]. See claim 1 for the motivation to combine Demarty, Sjoeberg, and Olekas.

Regarding claim 3, Demarty teaches signaling the use of film grain synthesis in video sequences with synthesis techniques and region considerations. Sjoeberg teaches additional modifications to Demarty to perform region based film grain synthesis processing and additional SEI messaging considerations. Olekas teaches classifying the region types for film grain synthesis parameters / technique to use (e.g. adding film grain or not). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Demarty with the segmentation / region based film grain synthesis processing of Sjoeberg with the region classification / type determinations taught by Olekas for noise level consideration including adding film grain. 
The combination teaches analyzing decoding features for the frame to determine the region in the plurality of regions in the frame, wherein the decoding features are based on information related to decoding the frame [Demarty Figures 8 – 12 (subfigures included of encoders / decoders for region analysis to add film grain to) as well as Page 11 lines 2 – 14 (decoder using encoded information to arrive at partitions / regions determined in encoding), Page 14 line 15 – Page 15 line 29 (content aware / region based film grain to add / incorporate into images), Page 25 lines 3 – 29 (film grain added to images based on characteristics / regions / types), Page 31 lines 8 – 30 (homogeneous regions or not classifying region type), Page 34 lines 1 – 30 (adapting film grain based on content), Page 38 lines 1 – 22 (decoder side film grain synthesis signaled by the encoder), and Page 50 line 28 – Page 51 line 18 (analyzing the whole image in a region based approach) to combine with Olekas Figures 6 and 9 as well as Paragraphs 44 – 54 (content analysis (e.g. reference character 602) for classifications of region / type of the region such as on complexity); Sjoeberg Figure 3 (see at least reference characters 341 and 347) as well as Paragraphs 63 and 70 – 75 (including Tables 6 and 7 of film grain parameters with sub picture considerations), 108 (adding film grain to regions / images), and Paragraphs 297 – 300 (Table 9 for film grain on sub picture / region based level); Olekas Figures 6 and 9 as well as Paragraphs 44 – 54 (classifications of region / type of the region such as on complexity which is used to determine film grain synthesis / noise level to the region and the parameters with the classification)]. See claim 1 for the motivation to combine Demarty, Sjoeberg, and Olekas.

Regarding claim 4, Demarty teaches signaling the use of film grain synthesis in video sequences with synthesis techniques and region considerations. 
Sjoeberg teaches additional modifications to Demarty to perform region based film grain synthesis processing and additional SEI messaging considerations. Olekas teaches classifying the region types for film grain synthesis parameters / technique to use (e.g. adding film grain or not). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Demarty with the segmentation / region based film grain synthesis processing of Sjoeberg with the region classification / type determinations taught by Olekas for noise level consideration including adding film grain.

The combination teaches wherein the decoding features are based on the encoded bitstream or information from the decoding process of the frame [Demarty Figures 8 – 12 (subfigures included of encoders / decoders for region analysis to add film grain to) as well as Page 11 lines 2 – 14 (decoder using encoded information to arrive at partitions / regions determined in encoding and forming a bitstream with the parameters / style vector for the film grain, also in Page 12 lines 25 – 33), Page 14 line 15 – Page 15 line 29 (content aware / region based film grain to add / incorporate into images), Page 25 lines 3 – 29 (film grain added to images based on characteristics / regions / types), Page 31 lines 8 – 30 (homogeneous regions or not classifying region type), Page 34 lines 1 – 30 (adapting film grain based on content), Page 38 lines 1 – 22 (decoder side film grain synthesis signaled by the encoder and signaled in bitstream in Page 40 lines 1 – 13), and Page 50 line 28 – Page 51 line 18 (analyzing the whole image in a region based approach); Sjoeberg Figure 3 (see at least reference characters 341 and 347) as well as Paragraphs 63 and 70 – 75 (including Tables 6 and 7 of film grain parameters with sub picture considerations), 108 (adding film grain to regions / images), and Paragraphs 252, 266 – 270 and 297 – 301 (Table 9 for film grain on 
sub picture / region based level for the bitstream)]. See claim 1 for the motivation to combine Demarty, Sjoeberg, and Olekas.

Regarding claim 5, Demarty teaches signaling the use of film grain synthesis in video sequences with synthesis techniques and region considerations. Sjoeberg teaches additional modifications to Demarty to perform region based film grain synthesis processing and additional SEI messaging considerations. Olekas teaches classifying the region types for film grain synthesis parameters / technique to use (e.g. adding film grain or not). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Demarty with the segmentation / region based film grain synthesis processing of Sjoeberg with the region classification / type determinations taught by Olekas for noise level consideration including adding film grain.

The combination teaches analyzing decoding features for the frame to determine a region type in the plurality of region types, wherein the decoding features are based on information related to decoding the frame [Demarty Figures 8 – 12 (subfigures included of encoders / decoders for region analysis to add film grain to) as well as Page 11 lines 2 – 14 (decoder using encoded information to arrive at partitions / regions determined in encoding), Page 14 line 15 – Page 15 line 29 (content aware / region based film grain to add / incorporate into images), Page 25 lines 3 – 29 (film grain added to images based on characteristics / regions / types), Page 31 lines 8 – 30 (homogeneous regions or not classifying region type), Page 34 lines 1 – 30 (adapting film grain based on content), Page 38 lines 1 – 22 (decoder side film grain synthesis signaled by the encoder), and Page 50 line 28 – Page 51 line 18 (analyzing the whole image in a region based approach) to combine with Olekas Figures 6 and 9 as well as Paragraphs 44 – 54 (content analysis (e.g. 
reference character 602) for classifications of region / type of the region such as on complexity); Sjoeberg Figure 3 (see at least reference characters 341 and 347) as well as Paragraphs 63 and 70 – 75 (including Tables 6 and 7 of film grain parameters with sub picture considerations), 108 (adding film grain to regions / images), 164 – 170 (region mode values), 252 and 266 – 274 (encoding / decoding region information and type / seed value to use / mode value for a region) and Paragraphs 297 – 300 (Table 9 for film grain on sub picture / region based level); Olekas Figures 6 and 9 as well as Paragraphs 44 – 54 (classifications of region / type of the region such as on complexity which is used to determine film grain synthesis / noise level to the region and the parameters with the classification)]. See claim 1 for the motivation to combine Demarty, Sjoeberg, and Olekas.

Regarding claim 6, Demarty teaches signaling the use of film grain synthesis in video sequences with synthesis techniques and region considerations. Sjoeberg teaches additional modifications to Demarty to perform region based film grain synthesis processing and additional SEI messaging considerations. Olekas teaches classifying the region types for film grain synthesis parameters / technique to use (e.g. adding film grain or not). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Demarty with the segmentation / region based film grain synthesis processing of Sjoeberg with the region classification / type determinations taught by Olekas for noise level consideration including adding film grain. 
The combination teaches analyzing decoding features for the frame to determine a region type in the plurality of region types, wherein the decoding features are based on information related to decoding the frame [Demarty Figures 8 – 12 (subfigures included of encoders / decoders for region analysis to add film grain to) as well as Page 11 lines 2 – 14 (decoder using encoded information to arrive at partitions / regions determined in encoding and forming a bitstream with the parameters / style vector for the film grain, also in Page 12 lines 25 – 33), Page 14 line 15 – Page 15 line 29 (content aware / region based film grain to add / incorporate into images), Page 25 lines 3 – 29 (film grain added to images based on characteristics / regions / types), Page 31 lines 8 – 30 (homogeneous regions or not classifying region type), Page 34 lines 1 – 30 (adapting film grain based on content), Page 38 lines 1 – 22 (decoder side film grain synthesis signaled by the encoder and signaled in bitstream in Page 40 lines 1 – 13), and Page 50 line 28 – Page 51 line 18 (analyzing the whole image in a region based approach); Sjoeberg Figure 3 (see at least reference characters 341 and 347) as well as Paragraphs 63 and 70 – 75 (including Tables 6 and 7 of film grain parameters with sub picture considerations), 108 (adding film grain to regions / images), 164 – 170 (region mode values), 252, 266 – 270 and 297 – 301 (Table 9 for film grain on sub picture / region based level for the bitstream and mode value / type of region)]. See claim 1 for the motivation to combine Demarty, Sjoeberg, and Olekas.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tyler W Sullivan whose telephone number is (571) 270-5684. The examiner can normally be reached IFP. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Czekaj, can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TYLER W. SULLIVAN/
Primary Examiner, Art Unit 2487

Prosecution Timeline

Jul 24, 2023
Application Filed
Feb 23, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594884
TRAILER ALIGNMENT DETECTION FOR DOCK AUTOMATION USING VISION SYSTEM AND DYNAMIC DEPTH FILTERING
2y 5m to grant Granted Apr 07, 2026
Patent 12593027
INTRA PREDICTION FOR SQUARE AND NON-SQUARE BLOCKS IN VIDEO COMPRESSION
2y 5m to grant Granted Mar 31, 2026
Patent 12563211
VIDEO DATA ENCODING AND DECODING USING A CODED PICTURE BUFFER WHOSE SIZE IS DEFINED BY PARAMETER DATA
2y 5m to grant Granted Feb 24, 2026
Patent 12542894
Method, An Apparatus and a Computer Program Product for Implementing Gradual Decoding Refresh
2y 5m to grant Granted Feb 03, 2026
Patent 12541880
CAMERA CALIBRATION METHOD, AND STEREO CAMERA DEVICE
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 66%
With Interview: 98% (+31.6%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 380 resolved cases by this examiner. Grant probability derived from career allow rate.
