Prosecution Insights
Last updated: April 19, 2026
Application No. 18/747,778

CONDITION-BASED IMAGE EDITING

Non-Final OA · §102, §103
Filed: Jun 19, 2024
Examiner: VU, KHOA
Art Unit: 2611
Tech Center: 2600 (Communications)
Assignee: Adobe Inc.
OA Round: 1 (Non-Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 68% (above average; 234 granted / 345 resolved; +5.8% vs TC avg)
Interview Lift: +15.8% (strong) among resolved cases with interview
Avg Prosecution: 3y 1m typical timeline (27 currently pending)
Total Applications: 372 across all art units
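The headline numbers above combine by simple addition: the career allow rate plus the interview lift yields the "with interview" figure. A minimal editorial sketch of that arithmetic (the function name is illustrative, not from the source):

```python
def with_interview(base_allow_rate: float, interview_lift: float) -> float:
    """Apply an interview lift, in percentage points, to a base allow
    rate, capping the result at 100%."""
    return min(base_allow_rate + interview_lift, 100.0)

# Career allow rate of 68.0% plus the +15.8% interview lift
adjusted = with_interview(68.0, 15.8)
print(round(adjusted))  # rounds to the dashboard's headline 84%
```

The "+16%" headline is the same +15.8% figure rounded.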

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§103: 73.3% (+33.3% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 5.9% (-34.1% vs TC avg)
Deltas measured against the Tech Center average estimate • Based on career data from 345 resolved cases
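The per-statute deltas are internally consistent: subtracting each "vs TC avg" delta from its statute's rate recovers the same baseline, consistent with the single Tech Center average estimate the caption mentions. A quick check with the values transcribed from the table (illustrative editorial code, not from the source):

```python
# (statute, examiner rate %, delta vs TC avg %) transcribed from the table
rows = [
    ("§101", 8.2, -31.8),
    ("§103", 73.3, +33.3),
    ("§102", 8.1, -31.9),
    ("§112", 5.9, -34.1),
]

# Implied TC average for each statute = examiner rate minus delta
baselines = [round(rate - delta, 1) for _, rate, delta in rows]
print(baselines)  # every statute implies the same 40.0% baseline
```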

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 5, 6, 8 and 16-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Petrangeli et al. (U.S. 2023/0162330 A1).

Regarding Claim 1, Petrangeli discloses a method (Petrangeli, [0006] “a computer-implemented method for image”) comprising: obtaining a source image (Petrangeli, [0029] “retaining the source image”) and a modification input that indicates a target edit to the source image (Petrangeli, abstract, “An input image including a target region to be edited”; Petrangeli teaches a modification input that indicates a target region to be edited); generating a modification encoding representing the target edit (Petrangeli, [0006] “receiving an input image comprising a target region and an edit parameter specifying a modification to the target region; generating a parsing map of the input image” and [0046] “The shape editing subsystem 110 includes an encoder that takes as input the source parsing map and generates a latent representation of the source parsing map”; Petrangeli teaches generating a modification encoding (an encoder takes as input the source, which includes a target region) by a shape editing subsystem); and generating, using an image generation model, an output image that depicts the source image with the target edit based on the source image and the modification encoding, wherein the image generation model is trained to perform a pose modification task and a part replacement task (Petrangeli, [0006] “receiving an input image comprising a target region…specifying a modification to the target region”; [0046] “The shape editing subsystem 110 includes an encoder that takes as input the source parsing map and generates a latent representation of the source parsing map”; [0049] “The encoder 114 includes one or more machine learning models trained to generate a latent representation of an input image”; [0118] “results in the shape attribute editing task, even for asymmetric poses and challenging tasks that involve multiple regions to be edited, as in the cloth length manipulation”; [0119] “FIG. 6, a single source image 602 presenting a challenging asymmetric pose”; and [0120] “In columns 604, the sleeve length is edited, and in columns 606, the shirt length is edited”. Petrangeli teaches an image generation model that is trained (machine learning models trained) to generate the source image with the target edit (source image 602, Fig. 6) and to perform a pose modification task (modifying the appearance and the pose of a person wearing a shirt in source image 602) and a part replacement task (in columns 604 the sleeve length is edited, and in columns 606 the shirt length is edited, Fig. 6)).

Regarding Claim 2, Petrangeli discloses the method of claim 1, wherein generating the modification encoding comprises: encoding an image depicting a target replacement element for an element of the source image (Petrangeli, [0046] “The shape editing subsystem 110 includes an encoder that takes as input the source parsing map and generates a latent representation of the source parsing map”; [0118] “results in the shape attribute editing task, even for asymmetric poses and challenging tasks that involve multiple regions to be edited, as in the cloth length manipulation”; [0119] “FIG. 6, a single source image 602 presenting a challenging asymmetric pose”; and [0120] “In columns 604, the sleeve length is edited, and in columns 606, the shirt length is edited”. Petrangeli teaches encoding an image that shows a target replacement element for an element of the source image (in columns 604 the target sleeve length is edited, and in columns 606 the target shirt length is edited, Fig. 6)).

Regarding Claim 3, Petrangeli discloses the method of claim 1, wherein generating the modification encoding comprises: generating a pose-warped texture based on the source image and the modification input, wherein the modification encoding is based on the pose-warped texture (Petrangeli, [0079] “the target region is a shirt, and the shirt is to be lengthened. The color, texture, and other properties of the shirt are extended into the masked region to lengthen the shirt”; [0119] “FIG. 6, a single source image 602 presenting a challenging asymmetric pose”; and [0120] “In columns 604, the sleeve length is edited, and in columns 606, the shirt length is edited”. Petrangeli teaches generating a pose-warped texture based on the source image and the modification input, e.g., a pose-warped texture of the pose of a person wearing a short-sleeved shirt (pose information) in the source image 602 is modified with the sleeve length (604) or the shirt length (606), Fig. 6).

Regarding Claim 5, Petrangeli discloses the method of claim 1, further comprising: identifying a background portion of the source image, wherein the modification encoding is generated based on the background portion (Petrangeli, [0046] “The shape editing subsystem 110 includes an encoder that takes as input the source parsing map” and [0069] “the mapper 112 can be a neural network trained to generate a parsing map of an input image, where the parsing map identifies different regions of the image. For example, the parsing map separates the image into different regions such as a shirt, a face, an arm, a skirt, and a background”. Petrangeli teaches that the modification encoding (the source parsing map, encoded by an encoder) is generated based on the background region).

Regarding Claim 6, Petrangeli discloses the method of claim 1, further comprising: obtaining a text prompt describing the target edit (Petrangeli, Fig. 1, [0040] “the editor interface 104 responds to user selection of an upload element by transitioning to a view showing available files to upload, prompt a user to take a photo” and [0048] “An editor 115 manipulates the latent representation of the source parsing map to modify a shape of a target region”. Petrangeli teaches obtaining a text prompt (from the editor interface) describing the target edit (modifying a shape of a target region)); and encoding the text prompt to obtain a text encoding, wherein the output image is generated based on the text encoding (Petrangeli, [0046] “The shape editing subsystem 110 includes an encoder that takes as input the source parsing map” and [0050] “The encoder 114 is a machine learning model trained to generate such a latent representation”. Petrangeli teaches that the editor interface includes a text prompt for user selection of the input source image, and that the encoder takes the source as input to generate a text encoding (e.g., a latent representation)).

Regarding Claim 8, Petrangeli discloses the method of claim 1, wherein: the modification input comprises at least one of a part replacement input or a pose modification (Petrangeli, [0049] “The encoder 114 includes one or more machine learning models trained to generate a latent representation of an input image”; [0118] “results in the shape attribute editing task, even for asymmetric poses and challenging tasks that involve multiple regions to be edited, as in the cloth length manipulation”; [0119] “FIG. 6, a single source image 602 presenting a challenging asymmetric pose”; and [0120] “In columns 604, the sleeve length is edited, and in columns 606, the shirt length is edited”. Petrangeli teaches that the modification input (modifying the appearance and the pose of a person wearing a shirt in source image 602, Fig. 6) includes a part replacement input (in columns 604 the sleeve length is edited, and in columns 606 the shirt length is edited, Fig. 6)).
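Petrangeli's parsing map, cited for claim 5 above and relied on again for the apparatus claims, assigns each pixel a region label such as shirt, face, arm, skirt, or background. As a hedged illustration of that data structure (the label values and helper function are hypothetical, not Petrangeli's implementation), an integer label map can be split into per-region binary masks:

```python
import numpy as np

# Hypothetical label values mirroring the regions named in Petrangeli [0069]
LABELS = {0: "background", 1: "shirt", 2: "face", 3: "arm", 4: "skirt"}

def region_masks(parsing_map: np.ndarray) -> dict:
    """Split an integer label map into one boolean mask per named region."""
    return {name: parsing_map == value for value, name in LABELS.items()}

# A tiny 2x3 parsing map: top row is background; bottom row shirt/face/arm
pm = np.array([[0, 0, 0],
               [1, 2, 3]])
masks = region_masks(pm)
print(int(masks["shirt"].sum()))  # 1 shirt pixel
```

Masks like these are what allow an encoder to condition on (or exclude) the background region, as discussed for claim 5 above.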
Regarding Claim 16, Petrangeli discloses an apparatus (Petrangeli, [0139] “a general purpose computing apparatus”) comprising: at least one processor (Petrangeli, [0125] “a processor 802”); at least one memory storing instructions executable by the at least one processor (Petrangeli, [0125] “The processor 802 executes computer-executable program code stored in a memory device 804”); a part encoder comprising parameters stored in the at least one memory and trained to generate a part encoding based on a source image and a part image indicating a target part (Petrangeli, Fig. 1, [0056] “The generator 136 may be trained to produce an image by inpainting a specific region identified by the shape editing subsystem 110. In other words, the generator 136 generates a targeted region of the image (e.g., sleeves if sleeves are added, arms if sleeves are removed, and so forth)”. Petrangeli teaches a part encoder (referred to as generator 136) that includes parameters (e.g., sleeves) and is trained to generate a part based on a source image (image 602, AMGAN, Fig. 6) and a part image indicating a target part, e.g., sleeves if sleeves are added, arms if sleeves are removed (image 604, Fig. 6)); a condition encoder comprising parameters stored in the at least one memory and trained to generate a condition encoding based on the source image and pose information indicating a target pose (Petrangeli, Fig. 1, [0049] “The encoder 114 includes one or more machine learning models trained to generate a latent representation of an input image”; [0118] “results in the shape attribute editing task, even for asymmetric poses and challenging tasks that involve multiple regions to be edited, as in the cloth length manipulation”; [0119] “FIG. 6, a single source image 602 presenting a challenging asymmetric pose”; and [0120] “In columns 604, the sleeve length is edited, and in columns 606, the shirt length is edited”. Petrangeli teaches a condition encoder (encoder 114) that includes parameters (e.g., sleeve length, shirt length) stored in memory and is trained to generate a condition editing a shape attribute (cloth length manipulation) based on the source image and pose information indicating a target pose (a pose of a person wearing a shirt in source image 602, Fig. 6)); and an image generation model comprising parameters stored in the at least one memory and trained to generate an output image that depicts an entity from the source image with the target pose or the target part based on the source image, the part encoding, and the condition encoding (Petrangeli, Fig. 1, [0049], [0118], [0119], and [0120], cited above. Petrangeli teaches an image generation model (a trained machine learning model) that includes parameters (e.g., cloth length, sleeve, shirt, arm) stored in memory and is trained to generate a condition editing cloth (e.g., sleeve length, shirt length, Fig. 6) or a target part (e.g., sleeves if sleeves are added, arms if sleeves are removed, Fig. 6), based on the source image and pose information indicating a target pose (an asymmetric pose of a person wearing a shirt in source image 602, Fig. 6)).

Regarding Claim 17, Petrangeli discloses the apparatus of claim 16, further comprising: a pose-warping mode configured to generate a pose-warped texture based on the source image and the pose information (Petrangeli, Fig. 2, [0066] “at 202, the image editing system (e.g., the shape editing subsystem) obtains an input image and an edit parameter, the input image depicts a person wearing a short-sleeved shirt, and the edit parameter specifies that the sleeves should be made longer”; [0079] “the target region is a shirt, and the shirt is to be lengthened. The color, texture, and other properties of the shirt are extended into the masked region to lengthen the shirt”; [0119] “FIG. 6, a single source image 602 presenting a challenging asymmetric pose”; and [0120] “In columns 604, the sleeve length is edited, and in columns 606, the shirt length is edited”. Petrangeli teaches a pose-warping mode to generate a pose-warped texture based on the source image and the pose information, e.g., a pose-warped texture of the person wearing a short-sleeved shirt (pose information) in the source image 602 is modified with the sleeve length (604) or the shirt length (606)).

Regarding Claim 18, Petrangeli discloses the apparatus of claim 16, further comprising: a text encoder configured to generate a text encoding based on a text prompt (Petrangeli, Fig. 1, [0040] “the editor interface 104 responds to user selection of an upload element by transitioning to a view showing available files to upload, prompt a user to take a photo”; [0049] “The encoder takes as input the source parsing map and transforms it into a latent representation of the parsing map”; [0050] “The encoder 114 is a machine learning model trained to generate such a latent representation”; and [0052] “The editor 115 applies changes to the latent representation based upon edit parameters 108”. Petrangeli teaches that the editor interface includes a text prompt for user selection of the input source image, and that the encoder takes the source as input to generate a text encoding (e.g., a latent representation)).

Regarding Claim 19, Petrangeli discloses the apparatus of claim 16, further comprising: a pose detector configured to generate the pose information based on the source image (Petrangeli, [0069] “The parsing map identifies regions in the input image including the target region”; FIG. 1, “the mapper 112 can be a neural network trained to generate a parsing map of an input image, where the parsing map identifies different regions of the image, the parsing map separates the image into different regions such as a shirt, a face, an arm, a skirt, and a background”. Petrangeli teaches a pose detector (referred to as the mapper 112) to generate (identify) the pose information (different regions such as a shirt, a face, an arm, a skirt, and a background) based on the source image).

Regarding Claim 20, Petrangeli discloses the apparatus of claim 16, further comprising: a segmentation model configured to generate the part image based on the source image (Petrangeli, [0069] “The parsing map identifies regions in the input image including the target region”; FIG. 1, “the mapper 112 can be a neural network trained to generate a parsing map of an input image, where the parsing map identifies different regions of the image, the parsing map separates the image into different regions such as a shirt, a face, an arm, a skirt, and a background”. Petrangeli teaches a segmentation model (referred to as a parsing map) to generate (separate) the image into different parts (regions) such as a shirt, a face, an arm, a skirt, and a background, based on the source image).

Claim Rejections - 35 U.S.C. § 103

Claims 4, 7, and 9-15 are rejected under 35 U.S.C. 103 as being unpatentable over Petrangeli et al. (U.S. 2023/0162330 A1) in view of Kheradmand et al. (U.S. 12,354,337 B1).

Regarding Claim 4, as to the method of claim 3, Petrangeli does not explicitly teach: selecting a mode from a set of pose-warping modes including a dense warping mode and a sparse warping mode, wherein the pose-warped texture is generated based on the selected mode. However, Kheradmand teaches selecting a mode from a set of pose-warping modes including a dense warping mode (Kheradmand, Col. 9, lines 5-10, “Garments are warped from the ith source pose PSi to the target pose PT, to synthesize the target garments. A DensePose-based garment deformation method can perform well with drastic view changes for the input images”. Kheradmand teaches selecting a DensePose warping mode for a garment) and a sparse warping mode (Kheradmand, Col. 3, lines 4-10, “Conventional techniques often take a single-view reference image as input. Even when garment warping is used, the quality suffers with drastic view changes due to occlusions and limited visible regions from a single view”. Kheradmand teaches a sparse warping mode (a limited warping mode) for a garment), wherein the pose-warped texture is generated based on the selected mode (Kheradmand, Col. 3, lines 1-3, “In addition, realistic garments are hard to synthesize.
As non-rigid objects, the garments may fold and their texture may alter differently on different parts of the human body”. Kheradmand teaches that the pose-warped texture is generated based on the selected mode (the garments may fold and their texture may alter differently on different parts of the human body)).

Petrangeli and Kheradmand are combinable because they are from the same field of endeavor (systems and methods for image processing) and try to solve similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Petrangeli to include a dense warping mode and a sparse warping mode (as taught by Kheradmand), because Kheradmand provides selecting a mode from a DensePose warping mode of a garment and a sparse warping mode (a limited warping mode) of a garment (Kheradmand, Col. 9, lines 5-10; Col. 3, lines 4-10). Doing so may provide directly synthesized human images with enhanced textural quality and photorealism (Kheradmand, Col. 3, lines 17-18).

Regarding Claim 7, as to the method of claim 1, Petrangeli does not explicitly teach wherein: the target edit comprises a replacement of at least one of an article of clothing, a hair style, a makeup style, or a body art style, and wherein the output image comprises a virtual try-on based on the replacement. However, Kheradmand teaches this limitation (Kheradmand, Col. 4, lines 29-33, “the request 112 may be a try-on request for a garment (e.g., a shirt) worn by an element (e.g., a person). The request 112 may include data that indicates an image that shows a shirt of a person in a target pose along with a try-on request of the shirt”. Kheradmand teaches that the target edit includes a replacement of an article of clothing (a shirt) and that the output image includes a virtual try-on based on the replacement (the try-on request of the shirt)). Petrangeli and Kheradmand are combinable; see the rationale in claim 4.

Regarding Claim 9, Petrangeli discloses a method for training a machine learning model (Petrangeli, [0006] “a computer-implemented method for image” and [0045] “the image editing system 102 include trained machine learning models”), the method comprising: obtaining a training set including a ground-truth image depicting an entity, pose information indicating a target pose of the entity (Petrangeli, [0044] “the shape editing subsystem 110 could be a separate entity from the appearance…the training subsystem 140, or the same entity” and [0069] “the parsing map separates the image into different regions such as a shirt, a face, an arm, a skirt, and a background. These regions may be defined using binary attributes corresponding to the semantic parsing of the elements of a target image”), and a part image depicting a target part of the entity (Petrangeli, [0119] “FIG. 6, a single source image 602 presenting a challenging asymmetric pose” and [0120] “In columns 604, the sleeve length is edited, and in columns 606, the shirt length is edited”. Petrangeli teaches a part image (Fig. 6) showing a target part of the entity (the sleeve and the shirt are edited, Fig. 6)); and training, using the training set, an image generation model to generate an output image that depicts the entity with the target pose and the target part (Petrangeli, [0119] “FIG. 6, a single source image 602 presenting a challenging asymmetric pose” and [0120] “In columns 604, the sleeve length is edited, and in columns 606, the shirt length is edited”. Petrangeli teaches training, using the training set (source image 602), to edit a shape attribute (cloth length manipulation) based on the source image, outputting an image that depicts the entity (sleeve, shirt) with the target pose (the pose of a person wearing a shirt in source image 602, presenting an asymmetric pose, Fig. 6) and the target part (in columns 604 the sleeve length is edited, and in columns 606 the shirt length is edited)). However, Petrangeli does not explicitly teach a ground-truth image depicting an entity, pose information. Kheradmand teaches a ground-truth image depicting an entity, pose information (Kheradmand, Col. 8, lines 33-36, “The output ML model 562 may further be trained based on a distance minimization between a generated image and a ground truth image in a feature space” and Col. lines “The first image may be input to an ML model, which can output pose data of the first pose…poses include front upper body, front full body, left side view, right side view, back upper body, back lower body, etc”. Kheradmand teaches a ground truth image depicting an entity (body) and pose information (upper body, lower body, etc.)). Petrangeli and Kheradmand are combinable; see the rationale in claim 4.

Regarding Claim 10, as to the method of claim 9, Petrangeli does not explicitly teach wherein training the image generation model comprises: computing a multi-task loss function including an entity-part loss term and a pose-warp loss term; and updating parameters of the image generation model based on the multi-task loss function. However, Kheradmand teaches computing a multi-task loss function including an entity-part loss term and a pose-warp loss term (Kheradmand, Col. 3, lines 36-37, “A conditional patch loss is also introduced to improve the fidelity and details of the generated images”; Col. 10, lines 56-60, “Another training loss is a combination of the above-mentioned loss functions: L=LGAN+λ1Lrec+λ2Lface+λ3Lpatch”; and Col. 10, lines 41-43, “In addition to the patch loss Lpatch, the full image adversarial LGAN used in StyleGAN2, and the face identify loss Lface used in PWS”. Kheradmand teaches a multi-task loss function (L) that includes an entity-part loss term (LGAN, used as part of StyleGAN2) and a pose-warp loss term (the face identity loss Lface used in PWS)); and updating parameters of the image generation model based on the multi-task loss function (Kheradmand, Col. lines “The output ML model 562 may be a conditional patch discriminator Dpatch that enforces the realism of patches… The corresponding patch loss for the discriminator is calculated as: Lpatch= E[log(Dpatch(patch(LT),patch(IT)|patch(IS)))]  (2)”. Kheradmand teaches updating (calculating) parameters (patches) of the image generation model (ML model 562) based on the multi-task loss function (Lpatch)). Petrangeli and Kheradmand are combinable; see the rationale in claim 4.

Regarding Claim 11, as to the method of claim 10, Petrangeli does not explicitly teach wherein: the entity-part loss term is based on a segmentation map for the target part of the entity. However, Kheradmand teaches this limitation (Kheradmand, Col. 3, lines 36-37, “A conditional patch loss is also introduced to improve the fidelity and details of the generated images” and Col. 9, lines 13-17, “For a source appearance image ISi of the ith view, an upper body garment and a lower body garment are segmented with an off-the-shelf clothed human image segmentation algorithm. The upper and lower body garments go through the same procedure for deforming to the target pose”. Kheradmand teaches that the entity-part loss term is based on a segmentation map for the target part of the entity (an upper body garment and a lower body garment are segmented with an off-the-shelf clothed human image segmentation algorithm)). Petrangeli and Kheradmand are combinable; see the rationale in claim 4.

Regarding Claim 12, as to the method of claim 10, Petrangeli does not explicitly teach wherein: the pose-warp loss term is based on a visibility map corresponding to the target pose of the entity. However, Kheradmand teaches this limitation (Kheradmand, Col. 3, lines 36-37, “A conditional patch loss is also introduced to improve the fidelity and details of the generated images”; Fig. 6, Col. 8, lines 3-10, “A visibility map may additionally be used to generate the appearance feature for each of the input images 540A-C… The visibility map is determined based on the pose and the target pose, where the visibility map indicates a region in the target image that is also available in the input image”; and Col. 12, lines 22-24, “The warping may involve a TPS transformation that transforms the garment to the target pose using the visibility map”. Kheradmand teaches that the pose-warp loss term (the patch loss applied to the details of the generated, warped images) is based on a visibility map corresponding to the target pose of the entity (the garment), Fig. 6). Petrangeli and Kheradmand are combinable; see the rationale in claim 4.
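The combined training loss quoted from Kheradmand for claim 10, L = LGAN + λ1Lrec + λ2Lface + λ3Lpatch, is a standard weighted multi-task sum. A minimal editorial sketch of how such terms combine (the λ weights and term values here are hypothetical; the quoted passage does not disclose the actual weights used):

```python
def combined_loss(l_gan: float, l_rec: float, l_face: float, l_patch: float,
                  lam1: float = 1.0, lam2: float = 1.0, lam3: float = 1.0) -> float:
    """Weighted multi-task loss in the form quoted from Kheradmand:
    L = L_GAN + lam1*L_rec + lam2*L_face + lam3*L_patch."""
    return l_gan + lam1 * l_rec + lam2 * l_face + lam3 * l_patch

# With unit weights the four per-task terms simply sum
print(round(combined_loss(0.5, 0.2, 0.1, 0.3), 6))  # 1.1
```

In training, the λ weights trade off adversarial realism (LGAN), reconstruction fidelity (Lrec), face identity (Lface), and patch-level detail (Lpatch).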
Regarding Claim 13, the combination of Petrangeli and Kheradmand discloses the method of claim 10, wherein: the multi-task loss function includes a diffusion loss term (Petrangeli, [0008] “a loss function comprising the reconstruction loss, the adversarial loss, and the attribute manipulation loss” and [0094] “while the reconstruction loss and adversarial loss teach the second neural network to accurately reproduce the input parsing map”. Petrangeli teaches that the loss function includes a diffusion loss term (referred to as the reconstruction loss) to accurately reproduce the input parsing map).

Regarding Claim 14, the combination of Petrangeli and Kheradmand discloses the method of claim 9, wherein obtaining the training set comprises: applying a pose detection model to the ground-truth image to obtain the pose information (Petrangeli, [0069] “The parsing map identifies regions in the input image including the target region”; FIG. 1, “the mapper 112 can be a neural network trained to generate a parsing map of an input image, where the parsing map identifies different regions of the image, the parsing map separates the image into different regions such as a shirt, a face, an arm, a skirt, and a background”. Petrangeli teaches applying a pose detector (referred to as the mapper 112) to the ground-truth image (referred to as the input image including the target region) to obtain the pose information (different regions such as a shirt, a face, an arm, a skirt, and a background) based on the source input image).

Regarding Claim 15, the combination of Petrangeli and Kheradmand discloses the method of claim 9, wherein obtaining the training set comprises: applying a segmentation model to the ground-truth image to obtain the part image (Petrangeli, [0069] “The parsing map identifies regions in the input image including the target region”; FIG. 1, “the mapper 112 can be a neural network trained to generate a parsing map of an input image, where the parsing map identifies different regions of the image, the parsing map separates the image into different regions such as a shirt, a face, an arm, a skirt, and a background”. Petrangeli teaches applying a segmentation model (referred to as a parsing map) to the ground-truth image (referred to as the input image including the target region) to obtain the part image, e.g., different parts (regions) of the image such as a shirt, a face, an arm, a skirt, and a background, based on the source input image).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Mitra et al. (U.S. 2022/0028139 A1) and Ozkan et al. (U.S. 2024/0303883 A1).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHOA VU, whose telephone number is (571) 272-5994. The examiner can normally be reached 8:00-4:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KHOA VU/
Examiner, Art Unit 2611

/KEE M TUNG/
Supervisory Patent Examiner, Art Unit 2611

Prosecution Timeline

Jun 19, 2024
Application Filed
Feb 05, 2026
Non-Final Rejection — §102, §103
Apr 10, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598266
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597087
HIGH-PERFORMANCE AND LOW-LATENCY IMPLEMENTATION OF A WAVELET-BASED IMAGE COMPRESSION SCHEME
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12578941
TECHNIQUE FOR INTER-PROCEDURAL MEMORY ADDRESS SPACE OPTIMIZATION IN GPU COMPUTING COMPILER
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12567181
SYSTEMS AND METHODS FOR REAL-TIME PROCESSING OF MEDICAL IMAGING DATA UTILIZING AN EXTERNAL PROCESSING DEVICE
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12548431
CONTEXTUALIZED AUGMENTED REALITY DISPLAY SYSTEM
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 68%
With Interview: 84% (+15.8%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 345 resolved cases by this examiner. Grant probability derived from career allow rate.
