Prosecution Insights
Last updated: April 19, 2026
Application No. 18/449,238

CONTEXT PRESERVATION FOR SYNTHETIC IMAGE AUGMENTATION USING DIFFUSION

Non-Final OA: §103, §112
Filed: Aug 14, 2023
Examiner: MOYER, ANDREW M
Art Unit: 2675
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability With Interview: 89%

Examiner Intelligence

Career Allow Rate: 76% — above average (326 granted / 427 resolved; +14.3% vs TC avg)
Interview Lift: +12.8% on resolved cases with interview (moderate lift)
Average Prosecution: 2y 7m; 8 applications currently pending
Career History: 435 total applications across all art units

Statute-Specific Performance

§101:  8.3% (-31.7% vs TC avg)
§102: 18.6% (-21.4% vs TC avg)
§103: 41.4% (+1.4% vs TC avg)
§112: 22.8% (-17.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 427 resolved cases.

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending.

Claim Interpretation

The term "circuit" in claim 8 is interpreted to be an arrangement of interconnected electrical components, which is a structural machine, consistent with the specification and the plain meaning. The term "processor" in claim 16 is interpreted to be short for "microprocessor" or equivalent, which is a structural machine, consistent with the specification and the plain meaning.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 9-12 and 17-19 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention. Claims 9-12 and 17-19 each recite the limitation "the object". There is insufficient antecedent basis for this limitation in the claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 5-8, 10, 12-16, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mironica et al. (hereafter "Mironica", US 2024/0420394) in view of C et al. (hereafter "C", US 10,319,116).

Regarding claim 1, Mironica discloses a computer-implemented method (Figs. 2-5) comprising: removing text from an object represented in a first image (Fig. 3: image 305 represents a captured scene (object) that contains text; Fig. 2 and para. [0057]: "noise component 220 adds noise to the image in the region corresponding to the text to obtain a noisy image". Text is effectively removed by adding noise to the text regions, resulting in a noisy image; Figs. 4 and 5 show details of generating the "noisy image"); providing the first image of the object, without the removed text, as input to a generative diffusion model; receiving, as output of the generative diffusion model, a second image of the object including one or more annotations to the object (para. [0028]: "generate a new modified image, which can remain largely similar to the original background image, but now contains the contrasting color in the text overlap region"; Figs. 4 and 5, final step: "Generate new modified image using noisy image as condition to generative diffusion model". The new modified image corresponds to the "second image" that includes the contrasting color, i.e., the annotation); and blending the text into the second image of the object to cause the one or more annotations to be represented as being applied to the object (Figs. 4 and 5, final step: "… and combine original text with new modified image"; para. [0044]: "superimposes the text on the modified image to obtain a composite image").

Mironica does not expressly disclose causing the contrasting color to be represented as being applied to the text. However, combining two images into a composite image using a weighted average to harmonize the blending is well known and common practice in the art, as disclosed for example in C (Fig. 3 and col. 11, lines 33-63: blending text with a theme background through a "weighted average of the theme background color code value and the text background color code value"). Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to incorporate the well-known techniques described in C into Mironica's system to yield the invention of claim 1, because blending through a weighted average effectively causes the contrasting color to be represented as being applied to the text. This combination could be made using known methods, with no changes to the operating principles of either reference, to produce nothing more than the highly predictable result of harmonizing images and text.

Regarding claim 3, Mironica in view of C discloses the computer-implemented method of claim 1 but fails to expressly disclose providing, as additional input to the generative diffusion model, a text prompt indicating at least a type or a magnitude of the one or more annotations to be generated for the object in the second image. However, as disclosed in Mironica (Fig. 6 and paras. [0092]-[0095]), a user may provide a design with text (605) that provides additional information about the type (e.g., flower) of modified image to be generated by the generative diffusion model (i.e., flowers with a color that contrasts with the text). It is clearly desirable to give designers easy ways, such as a text prompt, to modify a design. Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to yield the invention of claim 3 from the teachings of Mironica in view of C.

Regarding claim 5, Mironica in view of C discloses the computer-implemented method of claim 1, wherein blending the text into the second image of the object allows the one or more annotations to be generated for the object in the second image without modification of a semantic meaning of the text by the generative diffusion model (Mironica, Figs. 4 and 5: combining the original text with the new modified image should not modify the semantic meaning of the text; the purpose of changing the image underneath the text is to increase the contrast and legibility of the text).

Regarding claim 6, Mironica in view of C discloses the computer-implemented method of claim 1, wherein the blending is performed using a text blend weight and an annotation blend weight (see the analysis of claim 1; C, Fig. 3 and col. 11, lines 33-63: blending text with a theme background through a "weighted average of the theme background color code value and the text background color code value").

Regarding claim 7, Mironica in view of C discloses the computer-implemented method of claim 6, wherein the blending includes calculating a weighted pixel value average for one or more pixel locations corresponding to the text, performed according to at least one of the text blend weight or the annotation blend weight (see the analysis of claim 6).
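The weighted pixel value average recited in claims 6 and 7 can be pictured with a short sketch. This is purely illustrative and is not code from the application or either cited reference; the function name, the mask convention, and the default weights are all assumptions:

```python
import numpy as np

def blend_text(annotated_img: np.ndarray,
               text_img: np.ndarray,
               text_mask: np.ndarray,
               text_weight: float = 0.7,
               annotation_weight: float = 0.3) -> np.ndarray:
    """Blend text pixels back into a model-generated image.

    At each pixel location covered by the text mask, the output is a
    weighted average of the original text pixel and the annotated
    (model-generated) pixel; everywhere else the annotated image is
    kept unchanged. Expects H x W x C uint8 images and an H x W mask.
    """
    out = annotated_img.astype(np.float64).copy()
    m = text_mask.astype(bool)
    out[m] = (text_weight * text_img.astype(np.float64)[m]
              + annotation_weight * annotated_img.astype(np.float64)[m])
    return np.clip(out, 0, 255).astype(np.uint8)
```

With `text_weight = 1.0` the text would be pasted back unchanged; intermediate weights let the generated annotation show through the text region, which is the harmonizing effect the rejection attributes to C's weighted average.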
Claims 8, 10, 12-14, 16 and 18 have been analyzed and are rejected for the same reasons as outlined above in the rejections of claims 1, 3, 5-7, 1 and 3, respectively. The term "texture" in claims 8 and 16 corresponds to "object" in claim 1. The terms "semantic content" and "augmentations" in claim 16 correspond, respectively, to "text" and "annotations" in claim 1. Both Mironica's and C's systems are computer-based; processor(s) and storage are the main building blocks of a computer system. Regarding claims 15 and 20, Mironica's system is a system for rendering graphical output (Figs. 2-5).

Claims 4, 11 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Mironica (US 2024/0420394) in view of C (US 10,319,116), and further in view of Gokturk et al. (hereafter "Gokturk", US 2006/0253491).

Regarding claim 4, Mironica in view of C discloses the computer-implemented method of claim 1, the removing comprising removing the text from the first image by setting pixel values, at one or more pixel locations corresponding to the identified text, to pixel values determined based in part on pixel values of the object proximate the one or more pixel locations (Mironica, e.g., Fig. 4, step 410: "Generate a Gaussian mask corresponding to the region(s) and blurred by Gaussian noise"). Mironica does not expressly disclose identifying at least a portion of the first image as the text using an optical character recognition (OCR) model. However, using an OCR model to detect text is well known and common practice in the art, as disclosed for example in Gokturk (para. [0170]: "text detection and OCR can be used jointly, for example using an iterative process where the text detection first performs a crude segmentation of the image, and OCR then identifies likely text regions. The likely text regions are passed to the text detection and normalization to be refined"). Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to incorporate the well-known techniques described in Gokturk into the system of Mironica in view of C to yield the invention of claim 4. This combination could be made using known methods, with no changes to the operating principles of the references, to produce nothing more than highly predictable results.

Claims 11 and 19 have been analyzed and are rejected for the same reasons as outlined above in the rejection of claim 3. Both the system of Mironica in view of C and Gokturk's system are computer-based; processor(s) and storage are the main building blocks of a computer system.

Allowable Subject Matter

Claim 2 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim. Claims 9 and 17 would be allowable if the rejections under 35 U.S.C. 112(b) above are overcome and the claims are rewritten in independent form including all of the limitations of the base claim. As allowable subject matter has been indicated, applicant's reply must either comply with all formal requirements or specifically traverse each requirement not complied with. See 37 CFR 1.111(b) and MPEP § 707.07(a).

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LI LIU, whose telephone number is (571) 270-5363. The examiner can normally be reached Monday-Friday, 8:00 AM-4:30 PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/LI LIU/
Primary Examiner, Art Unit 2666
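The text-removal step discussed in the claim 4 rejection (setting pixel values in the text region from nearby object pixels, then adding Gaussian noise there so a diffusion model treats the region as unconstrained) can be sketched roughly as follows. This is a hypothetical, unoptimized illustration under assumed names and conventions, not the method of Mironica or of the application:

```python
import numpy as np

def remove_text(image: np.ndarray, text_mask: np.ndarray,
                noise_sigma: float = 12.0, seed: int = 0) -> np.ndarray:
    """Crude text removal for an H x W x 3 uint8 image.

    Each masked (text) pixel is iteratively replaced with the mean of
    its already-known 4-neighbours, growing inward from the mask edge;
    Gaussian noise is then added inside the original text region only.
    """
    img = image.astype(np.float64).copy()
    mask = text_mask.astype(bool).copy()
    h, w = mask.shape
    while mask.any():
        filled = np.zeros_like(mask)
        for y in range(h):
            for x in range(w):
                if not mask[y, x]:
                    continue
                # Unmasked 4-neighbours available at the start of this pass.
                neigh = [(y2, x2) for y2, x2 in
                         ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                         if 0 <= y2 < h and 0 <= x2 < w and not mask[y2, x2]]
                if neigh:
                    img[y, x] = np.mean([img[p] for p in neigh], axis=0)
                    filled[y, x] = True
        if not filled.any():
            break  # fully masked image: nothing to sample from
        mask &= ~filled
    # Add noise only where the original text was.
    rng = np.random.default_rng(seed)
    noisy = img + rng.normal(0.0, noise_sigma, img.shape) * \
        text_mask.astype(bool)[..., None]
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

The pure-Python loop is only for clarity; a practical version would use a vectorized inpainting routine and, per Mironica's step 410 as quoted, a Gaussian-blurred mask rather than the hard mask assumed here.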

Prosecution Timeline

Aug 14, 2023
Application Filed
Dec 15, 2025
Non-Final Rejection — §103, §112
Apr 14, 2026
Applicant Interview (Telephonic)
Apr 14, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12580065 — IMAGE QUALITY RELATIVE TO MACHINE LEARNING DATA
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12551121 — BLOOD PRESSURE PREDICTION METHOD AND DEVICE FUSING NOMINAL PHOTOPLETHYSMOGRAPHY (PPG) SIGNAL DATA
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12437205 — FOCUSED HYPERPARAMETER TUNING USING ATTRIBUTION
Granted Oct 07, 2025 (2y 5m to grant)

Patent 12236635 — DIGITAL PERSON TRAINING METHOD AND SYSTEM, AND DIGITAL PERSON DRIVING SYSTEM
Granted Feb 25, 2025 (2y 5m to grant)

Patent 12223693 — OBJECT DETECTION METHOD, OBJECT DETECTION APPARATUS, AND OBJECT DETECTION SYSTEM
Granted Feb 11, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 89% (+12.8%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 427 resolved cases by this examiner. Grant probability derived from career allow rate.
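The "with interview" figure follows from adding the interview lift to the career allow rate. A minimal sketch of that arithmetic (assuming, as the footnote suggests, a purely additive model; the variable names are illustrative):

```python
base_grant = 326 / 427     # career allow rate: 326 granted / 427 resolved, about 76%
interview_lift = 0.128     # observed lift on resolved cases with an interview

with_interview = round(base_grant, 2) + interview_lift
print(f"{with_interview:.0%}")  # → 89%
```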
