Prosecution Insights
Last updated: April 19, 2026
Application No. 18/456,595

SYSTEM AND A METHOD FOR DETECTING COMPUTER-GENERATED IMAGES

Final Rejection — §102, §103, §112
Filed: Aug 28, 2023
Examiner: PEDAPATI, CHANDHANA
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: Centre For Intelligent Multidimensional Data Analysis Limited
OA Round: 2 (Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
Grant Probability With Interview: 96%

Examiner Intelligence

Career Allow Rate: 64% (14 granted / 22 resolved; +1.6% vs Tech Center average) — grants 64% of resolved cases.
Interview Lift: +32.5% (strong), comparing resolved cases with and without an interview.
Typical Timeline: 2y 10m average prosecution; 26 applications currently pending.
Career History: 48 total applications across all art units.

Statute-Specific Performance

§101: 11.7% (-28.3% vs TC avg)
§103: 47.0% (+7.0% vs TC avg)
§102: 18.1% (-21.9% vs TC avg)
§112: 20.9% (-19.1% vs TC avg)
Comparisons are against an estimated Tech Center average • Based on career data from 22 resolved cases

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to the Applicant

Limitations appearing inside {} are intended to indicate the limitations not taught by said prior art(s)/combinations. Claims 1-6, 10, and 12-20 are pending in the application.

Response to Amendments

The Amendment filed 02 February 2026 in response to the Non-Final Office Action mailed 01 October 2025 has been entered. Claims 1, 10, and 13 are amended. Claims 7-9 and 11 are newly canceled. Claims 2-6, 12, and 14-20 are original. The amended claims have overcome the prior art rejections.

Response to Arguments/Remarks

Applicant's arguments with respect to 35 USC §112(f) have been considered. Examiner respectfully agrees with applicant that "deep parsing network" in claim 11, "fully connected layer" in claim 16, and "softmax layer" in claim 16 are specific structures, and withdraws the invocation of claim interpretation under 35 USC §112(f) for these structures.

Regarding the "image processing engine" in claim 1, the "global texture representation module" in claim 4, and the "texture enhancement module" in claim 7, these continue to use a generic placeholder coupled to function without sufficient structure and therefore invoke interpretation under 35 USC §112(f). Examiner thanks applicant for pointing to the corresponding structure in the specification for these terms, which affirms that these claim limitations invoke 35 USC §112(f). Sufficient disclosure in the specification is not a criterion for determining whether a limitation invokes 35 USC §112(f); rather, it determines whether a limitation that does invoke 35 USC §112(f) complies with 35 USC §112(b) and 35 USC §112(a).

A further note regarding the "global texture representation module" of claim 5: while it incorporates ResNet architecture, under the broadest reasonable interpretation the module may include other structures, so the claim recites insufficient structure to perform the claimed function, and claim 5 therefore invokes interpretation under 35 USC §112(f). However, the "global texture representation module" of claim 6 provides additional structure, which may be considered a sufficient showing that the claim limitation recites sufficient structure to perform the claimed function and does not invoke interpretation under 35 USC §112(f).

Regarding the "attention-based feature perception module" in claim 13, the term is considered a placeholder coupled to function without sufficient structure, thereby invoking claim interpretation under 35 USC §112(f). However, the same term in claim 14 recites sufficient structure and therefore does not invoke claim interpretation under 35 USC §112(f).

Regarding the "channel-spatial attention module", introduced in claim 14 and further limited in claim 15, the claim does not appear to provide any functional limitation for the module; "channel-spatial attention" arguably describes what the module is by name, not what function it performs. The same applies to the "channel attention submodule" and "spatial attention submodule" in claim 15, which are interpreted as functions for which the specification provides sufficient structure, thus invoking interpretation under 35 USC §112(f). Claim interpretations are reiterated later in this Office Action in order to ensure a clear and complete prosecution history.
Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Information Disclosure Statement

No Information Disclosure Statement (IDS) was filed; therefore, no applicant-submitted references were considered.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

"image processing engine" in claim 1;
"global texture representation module" in claim 4;
"texture enhancement module" in claim 7;
"attention-based feature perception module" in claim 13;
"channel-spatial attention module" in claim 15;
"channel attention submodule" in claim 15; and
"spatial attention submodule" in claim 15.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1 and 4 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The amendment results in a double inclusion of "a global texture representation module" in claims 1 and 4. It is unclear whether the modules are one module or two separate modules. If they are the same module, consider amending claim 4, line 2 as follows: "[[a]] the global texture representation module".

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 10, and 12-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Xu (Qiang Xu, Shan Jia, Xinghao Jiang, Tanfeng Sun, Zhe Wang, and Hong Yan, "Joint Learning of Deep Texture and High-Frequency Features for Computer-Generated Image Detection", arXiv, 2022, arXiv:2209.03322).

Regarding claim 1, Xu teaches a system for detecting computer-generated images, comprising an image processing engine arranged to analyze an input digital image embedded with image traces created during generation and/or post-generation processing operation of the input digital image (Xu, [§I, p 2, Col 1, ¶1]; the inherent traces of CG and PG images caused by different generation and processing operations are analyzed), and to determine whether the input digital image is a computer-generated image or a natural photographic image based on the analysis of the image traces (Xu, [p 5, Fig 4 caption]; output probability of whether the input image is a PG or CG image); wherein the image processing engine further comprises a texture enhancement module arranged to amplify texture differences of multi-scale texture patterns (Xu, [§I, p 2, Col 1, ¶2]; we design a texture rendering module to enhance the discriminative traces; deep texture and high-frequency features fusion); wherein the texture enhancement module is arranged to amplify discriminative traces associated with image features thereby to facilitate capturing of relationship and differences of the multi-scale texture patterns by a global texture representation module (Xu, [§I, p 1, Col 2, ¶2]; by adopting a separation-fusion detection strategy equipped with the attention mechanism, the proposed network can effectively learn the representative information of texture perturbation, high-frequency residual, and the global spatial trace in images); wherein the discriminative traces are amplified based on a semantic segmentation map guided affine transformation operation and convolutional neural networks-based texture recovery (Xu, [Abstract]; the semantic segmentation map is generated to guide the affine transformation operation, which is used to recover the texture in different regions of the input image); and wherein the image processing engine further comprises a deep parsing network arranged to generate a segmentation map for the semantic segmentation map guided affine transformation operation (Xu, see Fig 4 and [§IV, p 4, Col 2, ¶2]; Deep Parsing Network [49] for semantic segmentation; the segmentation map is used to generate the spatial feature transformation and the produced feature maps).
Regarding claim 10, Xu teaches the system for detecting computer-generated images of claim 1. Xu further teaches wherein the texture enhancement module comprises at least one convolution layer, semantic segmentation map-guided residual blocks, an affine transformation module and an upsampling module (Xu, see Fig 5 and [§IV.C., p 6, Col 1, ¶1]; the module mainly contains convolutional layers, semantic segmentation map-guided residual blocks, associated affine transformations and an upsampling module).

Regarding claim 12, Xu teaches the system for detecting computer-generated images of claim 1. Xu further teaches wherein the deep parsing network is further arranged to generate intermediate spatial feature transformation maps and feature maps associated with the image features for further processing by the affine transformation module (Xu, [§IV.C., p 6, Col 2, ¶3]; semantic segmentation maps in the segmentation response module to obtain the produced feature map to guide the affine transformation operation; specifically, we adopt the Deep Parsing Network [49] for semantic segmentation).

Regarding claim 13, Xu teaches the system for detecting computer-generated images of claim 1. Xu further teaches wherein the image processing engine further comprises an attention-based feature perception module arranged to facilitate trace exploration in spatial and channel dimensions (Xu, [§IV.D., p 7, Col 1, ¶1]; the channel attention submodule ATc(·) and the spatial attention submodule ATs(·)).

Regarding claim 14, Xu teaches the system for detecting computer-generated images of claim 13. Xu further teaches wherein the attention-based feature perception module comprises a convolution layer, an average-pooling layer and a channel-spatial attention module (Xu, see Fig 4, reproduced below; Branches 1, 2, and 3 exhibit a convolution layer followed by an average-pooling layer, which are then followed by the channel-spatial attention module).

[Image: Xu, Fig 4 (greyscale network diagram) reproduced in the Office Action.]

Regarding claim 15, Xu teaches the system for detecting computer-generated images of claim 14. Xu further teaches wherein the channel-spatial attention module comprises a channel attention submodule connected to a spatial attention submodule in a sequential order (Xu, [§IV.D., p 7, Col 1, ¶1]; the channel attention submodule ATc(·) and the spatial attention submodule ATs(·) are connected in a sequential order).

Regarding claim 16, Xu teaches the system for detecting computer-generated images of claim 14. Xu further teaches wherein the image processing engine further comprises a fully connected layer and a softmax layer arranged to determine an output probability of whether the input digital image is a computer-generated image or a natural photographic image (Xu, Fig 4 caption; a softmax layer is added to the [fully connected] model to obtain the output probability of whether the input image is a PG or CG image) based on concatenated high-level features obtained by the global texture representation module, the texture enhancement module and the attention-based feature perception module (Xu, [§IV.A., p 6, Col 1, ¶1]; the features of these branches are concatenated for classification, and the classification results on whether the input image is a CG or PG can be reported in the form of 0 or 1 through a fully-connected layer and a softmax layer).
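To make the structure mapped for claims 14-16 concrete, the following is a hedged sketch of a channel attention submodule connected sequentially to a spatial attention submodule, plus a classification head that concatenates branch features and applies a fully connected layer with softmax. Module names, the reduction ratio, and all dimensions are assumptions for illustration, not Xu's published code.

# Illustrative sketch (assumed names): sequential channel-then-spatial attention
# and a concatenate -> fully connected -> softmax classification head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        # Pool over spatial dims, weight each channel, rescale the input.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        weights = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * weights

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Pool over channels, produce a per-pixel attention map.
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn

class ChannelSpatialAttention(nn.Module):
    # Channel attention connected to spatial attention in a sequential order.
    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x):
        return self.spatial(self.channel(x))

class ClassificationHead(nn.Module):
    # Concatenate per-branch feature vectors, then FC + softmax (CG vs PG).
    def __init__(self, branch_dims, num_classes: int = 2):
        super().__init__()
        self.fc = nn.Linear(sum(branch_dims), num_classes)

    def forward(self, branch_features):
        fused = torch.cat(branch_features, dim=1)
        return F.softmax(self.fc(fused), dim=1)

# Example usage with hypothetical shapes:
# x = torch.randn(1, 64, 28, 28)
# y = ChannelSpatialAttention(64)(x)                                   # same shape as x
# p = ClassificationHead([128, 128, 128])([torch.randn(1, 128)] * 3)   # (1, 2) probabilities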
Regarding claim 17, Xu teaches the system for detecting computer-generated images of claim 1. Xu further teaches wherein the image traces include texture perturbation, high-frequency residual or global spatial trace in the input digital image (Xu, [§I, p 2, Col 1, ¶3]; the representative information of texture perturbation, high-frequency residual, and the global spatial trace in images).

Regarding claim 18, Xu teaches the system for detecting computer-generated images of claim 1. Xu further teaches wherein the computer-generated image is generated by geometric data modeling, photorealistic rendering or is generated based on an artificial intelligence generative model (Xu, [§III, p 3, Col 2, ¶3 – p 4, Col 1, ¶1]; the geometric entities of an original design are modified to maximize performance and satisfy constraints via the representation of the geometric data; next, a series of photorealistic rendering techniques are adopted to render the model).

Regarding claim 19, Xu teaches the system for detecting computer-generated images of claim 18. Xu further teaches wherein the image processing engine is trained by providing both a plurality of computer-generated images and a plurality of natural photographic images (Xu, [§V.A., p 8, Cols 1-2, ¶1-3]; the KGRA training dataset includes 6100 CG and 6100 PG images, where CG are computer-generated images and PG are natural photographic images) as positive samples and negative samples (Xu, [§V.A., p 9, Col 1, ¶1]; P and N represent the number of positive and negative samples) so as to train the image processing engine in a machine learning process (Xu, [§V.A., p 8, Col 2, ¶4]; datasets are used for training and validation of the proposed approach), wherein the natural photographic images are generated by a digital camera (Xu, [§V.A., p 8, Col 1, ¶2]; regarding the KGRA dataset: "For images in the PG set, 102 images with different contents were taken by the authors in Singapore using a NIKON D5200 camera. The rest images (5998 in total) were downloaded from the RAISE dataset, and directly converted from RAW format to JPEG. RAISE is primarily designed for the evaluation of digital forgery detection algorithms. All the images have been collected from four photographers, capturing different scenes and moments in over 80 places in Europe employing three different cameras [56].").
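As a rough sketch of the training regime described for claim 19 (computer-generated images as positive samples, natural photographic images as negative samples, trained in a machine learning process), a minimal binary-classification training loop might look like the following. The function name, model interface, and hyperparameters are placeholders, not the applicant's or Xu's actual setup.

# Illustrative only: CG images are positive (label 1), PG images negative (label 0).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_detector(model: nn.Module, cg_images: torch.Tensor, pg_images: torch.Tensor,
                   epochs: int = 1, lr: float = 1e-4) -> nn.Module:
    images = torch.cat([cg_images, pg_images])
    labels = torch.cat([torch.ones(len(cg_images)), torch.zeros(len(pg_images))])
    loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()  # assumes the model outputs one logit per image
    for _ in range(epochs):
        for batch, target in loader:
            optimizer.zero_grad()
            logits = model(batch).squeeze(1)
            loss = loss_fn(logits, target)
            loss.backward()
            optimizer.step()
    return model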
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Xu in view of Yang (J. Yang, A. Li, S. Xiao, W. Lu and X. Gao, "MTD-Net: Learning to Detect Deepfakes Images by Multi-Scale Texture Difference," IEEE Transactions on Information Forensics and Security, vol. 16, pp. 4234-4245, 2021).

Regarding claim 2, Xu teaches the system for detecting computer-generated images of claim 1. Xu further teaches wherein the image traces include {multi-scale texture patterns} of image features in the input digital image (Xu, [§IV, p 4, Col 2, ¶1]; "deep texture and high-frequency features learning to explore inherent traces left in CG images"). Xu does not explicitly disclose wherein the image traces include multi-scale texture patterns of image features in the input digital image. However, Yang teaches wherein the image traces include multi-scale texture patterns of image features in the input digital image (Yang, [§IV, p. 4237, col. 2, ¶1]; we propose a multi-scale texture difference model inspired by how CNN models can behave to anticipate whether a face image is real or fake). Xu and Yang are analogous art because they are from the same field of endeavor of using CNN models to distinguish fake and real images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include a multi-scale texture difference model as taught by Yang in the invention of Xu. The motivation to do so would be to give an accurate prediction of fake images.

Regarding claim 3, the combination of Xu and Yang teaches the system for detecting computer-generated images of claim 2. Xu further teaches wherein the image processing engine includes a machine-learning based processing engine (Xu, see Fig 4 and [§IV, p 4, Col 2, ¶2]; the Deep Parsing Network [49] is a machine-learning based processing engine).

Regarding claim 4, the combination of Xu and Yang teaches the system for detecting computer-generated images of claim 3. Xu further teaches wherein the image processing engine comprises a global texture representation module arranged to capture relationship and differences of the multi-scale texture patterns so as to determine whether the image features are generated by a computing process or by a photographical means (Xu, [§IV.A., p 5, Col 1, ¶2]; Branch-2 is complementarily incorporated to learn the global representative information of the input image; Xu teaches earlier that differences in texture patterns may be used to determine whether an image is generated by computer or by natural photography in [§III, p 4, Col 1, ¶3 – Col 2, ¶1]; the different acquisitions will lead to texture dissimilarity in the local repetitive patterns and their arranged rules in PG and CG images; the local repetitive patterns and the arranged rules of the texture of a CG image are rougher and more irregular than those of the PG image).

Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Xu in view of Yang and further in view of Liu (Z. Liu, X. Qi and P. H. S. Torr, "Global Texture Enhancement for Fake Face Detection in the Wild," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2020, pp. 8057-8066, doi: 10.1109/CVPR42600.2020.00808), as cited in the IDS.

Regarding claim 5, the combination of Xu and Yang teaches the system for detecting computer-generated images of claim 4. The combination does not explicitly disclose wherein the global texture representation module incorporates ResNet architecture. However, Liu, in a similar field of endeavor of discerning real or fake images, teaches wherein the global texture representation module incorporates ResNet architecture (Liu, [§4.2, p. 8064, col. 1, ¶1]; Gram Blocks are added to the ResNet architecture on the input image and before every downsampling layer, incorporating global image texture information in different semantic levels). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include ResNet as taught by Liu in the combined invention of Xu and Yang. The motivation to do so would be that the ResNet architecture detects untouched fake faces if the training and testing data are from the same face and can therefore be effective as a backbone.

Regarding claim 6, the combination of Xu, Yang and Liu teaches the system for detecting computer-generated images of claim 5. Liu further teaches wherein the global texture representation module comprises at least one convolution layer, a Gram matrix-based activation layer and a global pooling layer (Liu, [§4.2, p. 8064, col. 1, ¶1]; Gram matrix calculation layer, two conv-bn-relu layers… and a global-pooling layer). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include at least one convolution layer, a Gram matrix layer and a global pooling layer as taught by Liu in the combined invention of Xu and Yang. The motivation to do so would be that the Gram matrix is more robust to image edits, has improved performance over GAN models in detecting fake images, and can be used to extract a global image texture feature, with the convolution layer refining and aligning the gram-style feature with the ResNet backbone.
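For context on the claim 6 mapping, a speculative sketch of a Gram-matrix-based global texture block of the kind Liu describes (a Gram matrix calculation layer, conv-bn-relu layers, and global pooling) might look like the following. The layer arrangement and dimensions are assumptions for illustration, not Liu's published Gram Block.

# Illustrative sketch: a Gram matrix over feature channels as a global texture
# statistic, refined by conv-bn-relu layers, then globally pooled to a descriptor.
import torch
import torch.nn as nn

class GramTextureBlock(nn.Module):
    def __init__(self, channels: int, hidden: int = 32):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(1, hidden, 3, padding=1), nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        flat = x.view(n, c, h * w)
        # Gram matrix: channel-by-channel correlations capture global texture.
        gram = torch.bmm(flat, flat.transpose(1, 2)) / (h * w)   # (n, c, c)
        refined = self.refine(gram.unsqueeze(1))                 # treat the Gram matrix as a 1-channel map
        return self.pool(refined).flatten(1)                     # (n, hidden) texture descriptor

# Example usage with hypothetical shapes:
# feats = torch.randn(2, 64, 56, 56)
# desc = GramTextureBlock(64)(feats)   # -> (2, 32)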
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Xu in view of Liu.

Regarding claim 20, Xu teaches the system for detecting computer-generated images of claim 19. Xu further teaches wherein the negative samples and the positive samples include, respectively, {natural photographic images and} computer-generated images added with image noise and/or compression traces (Xu, [§V.D., p 10, Col 1, ¶1]; we conduct two types of postprocessing operations (i.e., JPEG compression and adding noise) on the CG images in the testing subsets). Xu does not explicitly teach adding image noise and/or compression traces to natural photographic images and computer-generated images. However, Liu teaches natural photographic images and computer-generated images added with image noise and/or compression traces (Liu, [§1, Contribution 3, p. 8061, col. 1, ¶2]; the proposed Gram-Net is robust for detecting fake faces which are edited by resizing (10% improvement), blurring (15% improvement), adding noise (13% improvement) and JPEG compressing (9% improvement)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply such editing to both natural photographs and computer-generated images as taught by Liu in the invention of Xu. The motivation to do so would be to train a model that is robust to image edits.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Zhao (Zhao, L., Zhang, M., Ding, H., & Cui, X. (2021). MFF-Net: Deepfake Detection Network Based on Multi-Feature Fusion. Entropy, 23(12), 1692. https://doi.org/10.3390/e23121692) teaches deepfake detection using the following: (1) textural and frequency information extraction; (2) a texture enhancement module; (3) an attention module to force the classifier to focus on the forged part; and (4) fusion of textural features from the shallow RGB branch and feature extraction module, followed by fusion of textural features and semantic information. Zhao was not relied upon because it does not teach at least a deep parsing network for semantic segmentation.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHANDHANA PEDAPATI whose telephone number is 571-272-5325. The examiner can normally be reached M-F 8:30am-6pm (ET). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chan Park, can be reached at 571-272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHANDHANA PEDAPATI/
Examiner, Art Unit 2669

/CHAN S PARK/
Supervisory Patent Examiner, Art Unit 2669

Prosecution Timeline

Aug 28, 2023
Application Filed
Sep 24, 2025
Non-Final Rejection — §102, §103, §112
Feb 02, 2026
Response Filed
Mar 23, 2026
Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602896
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant · Granted Apr 14, 2026
Patent 12597095
INTELLIGENT SYSTEM AND METHOD OF ENHANCING IMAGES
2y 5m to grant · Granted Apr 07, 2026
Patent 12571683
ELEVATED TEMPERATURE SCREENING SYSTEMS AND METHODS
2y 5m to grant · Granted Mar 10, 2026
Patent 12548180
HOLE DIAMETER MEASURING DEVICE
2y 5m to grant · Granted Feb 10, 2026
Patent 12541829
MOTION-BASED PIXEL PROPAGATION FOR VIDEO INPAINTING
2y 5m to grant · Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 64%
With Interview: 96% (+32.5%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
