Prosecution Insights
Last updated: April 19, 2026
Application No. 18/580,507

Guided Contextual Attention Map for Inpainting Tasks

Non-Final OA: §102, §103, §112

Filed: Jan 18, 2024
Examiner: YANG, QIAN
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% (709 granted / 963 resolved; +11.6% vs TC avg — above average)
Interview Lift: +31.3% among resolved cases with interview
Avg Prosecution: 2y 7m typical; 34 applications currently pending
Career History: 997 total applications across all art units
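The headline figures in this card follow directly from the raw counts. A minimal sketch of the arithmetic, noting that the Tech Center baseline is not shown directly here and the `tc_avg` value is an assumption backed out of the stated "+11.6% vs TC avg" delta:

```python
# Recompute the examiner-card statistics from the raw counts above.
# tc_avg is an assumed Tech Center 2600 baseline, inferred from the
# stated +11.6% delta; it is not given directly by the source.

granted, resolved = 709, 963

allow_rate = granted / resolved          # career allow rate
tc_avg = 0.62                            # assumed TC baseline
delta_vs_tc = allow_rate - tc_avg

print(f"Career allow rate: {allow_rate:.0%}")   # 74%
print(f"Delta vs TC avg: {delta_vs_tc:+.1%}")   # +11.6%
```

The displayed 74% is simply 709/963 rounded to a whole percentage.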

Statute-Specific Performance

§101: 15.3% (-24.7% vs TC avg)
§103: 48.3% (+8.3% vs TC avg)
§102: 21.2% (-18.8% vs TC avg)
§112: 11.1% (-28.9% vs TC avg)

Deltas are relative to an estimated Tech Center average. Based on career data from 963 resolved cases.
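Each statute line gives both the examiner's rate and its delta against the Tech Center estimate, so the implied baseline can be recovered. A quick check (the exact metric behind these percentages is whatever the tool measures per statute; the arithmetic itself is all that is shown here) finds that all four statutes resolve to the same 40.0% baseline, consistent with a single Tech Center average estimate:

```python
# Back out the Tech Center average implied by each (rate, delta) pair
# above. All four statutes recover the same 40.0% baseline.

rates  = {"§101": 15.3, "§103": 48.3, "§102": 21.2, "§112": 11.1}    # examiner, %
deltas = {"§101": -24.7, "§103": +8.3, "§102": -18.8, "§112": -28.9} # vs TC avg, %

baselines = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(baselines)  # every value is 40.0
```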

Office Action

§102 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Claims 8-20 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to nonelected groups, there being no allowable generic or linking claim. Applicant timely traversed the restriction (election) requirement in the reply filed on December 9, 2025. However, Applicant does not provide reasoning for why the restriction is traversed. Applicant is recommended to cancel the nonelected claims in a further formal response.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 3 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 3 recites the limitation "the blend model". There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 6 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Liu et al. ("Coherent Semantic Attention for Image Inpainting", 2019 IEEE/CVF International Conference on Computer Vision (ICCV)), hereinafter referred to as Liu.

Regarding claim 1, Liu discloses a computer-implemented method for training an inpainting model (abstract), the method comprising: receiving, by a computing system comprising one or more processors, an input image and a ground truth image, wherein the ground truth image depicts a scene, and wherein the input image depicts the scene with one or more occlusions (Fig. 2, section 3: Igt is a ground truth image depicting a scene; Iin is an input image depicting the scene with one or more occlusions); processing, by the computing system, the ground truth image with a contextual attention model to generate a contextual attention output (Fig. 2, section 3: processing Igt with a rough network (contextual attention model) to generate a (contextual attention) output Ip); processing, by the computing system, the input image and the contextual attention output with an augmentation model to generate a prediction image (Fig. 2, section 3: processing Ip and Iin with a refinement network (augmentation model) to generate a prediction image Ir); evaluating, by the computing system, a loss function that evaluates a difference between the prediction image and the ground truth image (section 3.5, equation (7)); and adjusting, by the computing system, one or more parameters of the augmentation model based at least in part on the loss function (section 3.4, "This loss can help refinement network benefit from the gradients from both generated data and real data in adversarial training"; the purpose of the loss function is to optimize the refinement network parameters to minimize the loss function taught in sections 1-4).

Regarding claim 6 (depends on claim 1), Liu discloses the method wherein the contextual attention model is trained by: processing, by the computing system, one or more training images with the contextual attention model to generate training contextual attention outputs (Fig. 2, section 3, a rough network (contextual attention model); training is also discussed); processing, by the computing system, the training contextual attention outputs with an inpainting model to generate a training augmented image (Fig. 2, section 3, refinement network (augmentation model); training is also discussed); evaluating, by the computing system, a training loss function that evaluates a difference between the training augmented image and the ground truth image (sections 3.3-3.5); and adjusting, by the computing system, one or more contextual attention parameters of the contextual attention model based at least in part on the training loss function (the purpose of the loss function is to optimize the refinement network parameters to minimize the loss function taught in sections 1-4).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Iizuka et al. ("Globally and locally consistent image completion", ACM Transactions on Graphics (TOG), Volume 36, Issue 4 (August 2017)), hereinafter referred to as Iizuka.

Regarding claim 2 (depends on claim 1), Liu discloses the method wherein the augmentation model comprises a prediction model, a blend model, and an occlusion model, and wherein processing the input image and the contextual attention output with the augmentation model comprises: processing, by the computing system, the input image with the prediction model to generate predicted contextual attention data (Fig. 2, section 3: processing Igt with a rough network (contextual attention model) to generate a (contextual attention) output Ip); and processing, by the computing system, the predicted contextual attention data and the input image to generate the prediction image (Fig. 2, section 3: processing Ip and Iin with a refinement network (augmentation model) to generate a prediction image Ir). However, Liu fails to explicitly disclose processing, by the computing system, the predicted contextual attention data and the contextual attention output with a blend model to generate blended data between the prediction model and the refinement model.

However, in a similar field of endeavor, Iizuka discloses a method for globally and locally consistent image completion (abstract). In addition, Iizuka discloses processing a model output with a blend model to generate blended data as post-processing (section 3.5.1). There is some teaching, suggestion, or motivation, either in the references themselves or in the knowledge generally available to one of ordinary skill in the art, to modify the reference or to combine the reference teachings. There was a reasonable expectation of success in achieving the claimed limitation of inserting a blend model between the prediction model and the refinement model by modifying the reference or combining the reference teachings (KSR scenario G). Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the invention of Liu by processing, by the computing system, the predicted contextual attention data and the contextual attention output with a blend model to generate blended data between the prediction model and the refinement model. The motivation for doing this is to achieve image consistency for better quality.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Takahashi et al. ("RICAP: Random Image Cropping and Patching Data Augmentation for Deep CNNs", Proceedings of Machine Learning Research 95:786-798, 2018), hereinafter referred to as Takahashi.

Regarding claim 3 (depends on claim 1), Liu fails to explicitly disclose the method wherein the blend model is trained to randomly blend the predicted contextual attention data and the contextual attention output. However, in a similar field of endeavor, Takahashi discloses a method for image processing (abstract). In addition, Takahashi discloses a blend model trained to randomly blend two image data (section 3). There is some teaching, suggestion, or motivation, either in the references themselves or in the knowledge generally available to one of ordinary skill in the art, to modify the reference or to combine the reference teachings. There was a reasonable expectation of success in achieving the claimed limitation by modifying the reference or combining the reference teachings (KSR scenario G). Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the invention of Liu such that the blend model is trained to randomly blend the predicted contextual attention data and the contextual attention output. The motivation for doing this is to avoid any blending bias.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Goswami et al. (US Patent Application Publication 2019/0238568), hereinafter referred to as Goswami, and further in view of Price et al. (US Patent Application Publication 2018/0108137), hereinafter referred to as Price.

Regarding claim 4 (depends on claim 1), Liu fails to explicitly disclose the method wherein the input image is generated by adding one or more occlusions to the ground truth image. However, in a similar field of endeavor, Goswami discloses a method for image processing (abstract). In addition, Goswami discloses adding one or more occlusions to an image (Fig. 2, [0033]). In a similar field of endeavor, Price discloses a method for image processing (abstract). In addition, Price discloses adding noise to a ground truth image ([0042]). There is some teaching, suggestion, or motivation, either in the references themselves or in the knowledge generally available to one of ordinary skill in the art, to modify the reference or to combine the reference teachings. There was a reasonable expectation of success in achieving the claimed limitation by modifying the reference or combining the reference teachings (KSR scenario G). Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the invention of Liu such that the input image is generated by adding one or more occlusions to the ground truth image. The motivation for doing this is that the application of Liu can be extended so that training data can be easily obtained.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Yu et al. ("Generative Image Inpainting with Contextual Attention", IDS), hereinafter referred to as Yu.

Regarding claim 5 (depends on claim 1), Liu discloses the method wherein the contextual attention model comprises a convolutional neural network (sections 3.1 and 3.2, convolution). However, Liu fails to explicitly disclose the method wherein the contextual attention model comprises one or more contextual attention blocks. However, in a similar field of endeavor, Yu discloses a method for generative image inpainting with contextual attention (abstract). In addition, Yu discloses a contextual attention model comprising one or more contextual attention blocks (Fig. 4, section 4). There is some teaching, suggestion, or motivation, either in the references themselves or in the knowledge generally available to one of ordinary skill in the art, to modify the reference or to combine the reference teachings. There was a reasonable expectation of success in achieving the claimed limitation by modifying the reference or combining the reference teachings (KSR scenario G). Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the invention of Liu such that the contextual attention model comprises one or more contextual attention blocks. The motivation for doing this is that the convolutional network can be more powerful.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Kopylov (US Patent Application Publication 2018/0239867).

Regarding claim 7 (depends on claim 1), Liu discloses the method further comprising: receiving, by the computing system, the input image; and wherein the prediction image is generated based at least in part on the one or more inputs (Fig. 2, section 3). However, Liu fails to disclose receiving one or more inputs descriptive of a selection of a portion of the input image. However, in a similar field of endeavor, Kopylov discloses a method for image processing (abstract). In addition, Kopylov discloses image data associated with metadata corresponding to a selected portion of the image ([0030]-[0033], [0073]). There is some teaching, suggestion, or motivation, either in the references themselves or in the knowledge generally available to one of ordinary skill in the art, to modify the reference or to combine the reference teachings. There was a reasonable expectation of success in achieving the claimed limitation by modifying the reference or combining the reference teachings (KSR scenario G). Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was made to modify the invention of Liu by receiving one or more inputs descriptive of a selection of a portion of the input image. The motivation for doing this is that the application of Liu can be more flexible in handling different portions of images.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to QIAN YANG, whose telephone number is (571) 270-7239. The examiner can normally be reached Monday-Thursday, 8am-6pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at 571-270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/QIAN YANG/
Primary Examiner, Art Unit 2677
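The claim-1 training method that the §102 rejection maps onto Liu's rough/refinement pipeline can be sketched as a toy training loop. This is an illustrative stand-in under stated assumptions, not Liu's actual CSA networks: plain linear maps play the contextual attention and augmentation models, the occlusion is a random zero-mask, the loss is a simple L2 difference, and (as the claim recites) only the augmentation model's parameters are updated.

```python
import numpy as np

# Toy sketch of the claim-1 flow: contextual attention output computed
# from the ground truth feeds the augmentation model together with the
# occluded input image; an L2 loss against the ground truth drives
# updates to the augmentation model only. All names and the linear
# "models" are hypothetical stand-ins for illustration.

rng = np.random.default_rng(0)
D = 16                                            # toy flattened image size

ground_truth = rng.standard_normal(D)             # I_gt: scene, no occlusions
mask = rng.random(D) < 0.3                        # occluded pixels
input_image = np.where(mask, 0.0, ground_truth)   # I_in: scene with occlusions

W_ctx = 0.1 * rng.standard_normal((D, D))         # contextual attention model (frozen)
W_aug = 0.1 * rng.standard_normal((D, 2 * D))     # augmentation model (trained)

lr = 0.02
for _ in range(800):
    ctx_out = W_ctx @ ground_truth                     # contextual attention output
    features = np.concatenate([input_image, ctx_out])  # input image + ctx output
    prediction = W_aug @ features                      # prediction image
    residual = prediction - ground_truth               # loss = sum(residual**2)
    W_aug -= lr * np.outer(residual, features)         # update augmentation model only

loss = float(np.sum((prediction - ground_truth) ** 2))
print(f"final loss: {loss:.2e}")
```

Because the features are fixed in this single-sample sketch, the residual shrinks geometrically and the final loss is effectively zero; the point is only to make the data flow of the claimed steps concrete.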

Prosecution Timeline

Jan 18, 2024: Application Filed
Jan 25, 2026: Non-Final Rejection (§102, §103, §112)
Apr 02, 2026: Examiner Interview Summary
Apr 02, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598273: Camera Platform Incorporating Schedule and Stature. Granted Apr 07, 2026 (2y 5m to grant).
Patent 12586560: Electronic Apparatus, Terminal Apparatus and Controlling Method Thereof. Granted Mar 24, 2026 (2y 5m to grant).
Patent 12586239: Smart Image Processing Method and Device Using Same. Granted Mar 24, 2026 (2y 5m to grant).
Patent 12579432: Methods and Apparatus for Automated Specimen Characterization Using Diagnostic Analysis System with Continuous Performance Based Training. Granted Mar 17, 2026 (2y 5m to grant).
Patent 12579686: Mixed Depth Object Detection. Granted Mar 17, 2026 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 99% (+31.3%)
Median Time to Grant: 2y 7m
PTA Risk: Low

Based on 963 resolved cases by this examiner. Grant probability derived from the career allow rate.
