Prosecution Insights
Last updated: April 19, 2026
Application No. 18/416,341

GENERATING AN ALPHA IMAGE BASED ON A TEXT PROMPT

Final Rejection — §103, §112
Filed
Jan 18, 2024
Examiner
LIU, ZHENGXI
Art Unit
2611
Tech Center
2600 — Communications
Assignee
Adobe Inc.
OA Round
2 (Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 64% (grants 64% of resolved cases; 225 granted / 354 resolved; +1.6% vs TC avg)
Interview Lift: +40.1% on resolved cases with interview (strong)
Typical Timeline: 3y 4m average prosecution; 31 applications currently pending
Career History: 385 total applications across all art units

Statute-Specific Performance

§101: 13.2% (-26.8% vs TC avg)
§103: 61.3% (+21.3% vs TC avg)
§102: 5.1% (-34.9% vs TC avg)
§112: 15.7% (-24.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 354 resolved cases.

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1 and 3-20 are pending. Claims 1, 3-6, 8, and 10-19 have been amended. Claim 2 has been cancelled. No claim has been added. All claims have been rejected.

Response to Arguments

Applicant's arguments related to the 35 U.S.C. 103 rejections of Claims 1 and 3-20 are moot in view of the Examiner's new grounds of rejection.

Compact Prosecution

With respect to claim interpretation, the Examiner has provided notes regarding "[BRI on the record]" throughout the Office Action, so that the record is clear about the scope of the claimed invention and about the basis for the Examiner's analyses. A clear record of the claim interpretation can expedite examination by allowing it to focus on Applicant's inventive concept and its comparison with related prior art. If there are disagreements, Applicant may present an alternative interpretation based on MPEP 2111. The Examiner will adopt Applicant's interpretation on the record if it is reasonable and/or the arguments are persuasive. Applicant may amend claims relying on the Examiner's claim interpretation provided on the record.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 10-15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Amended Claim 10 recites "training an additional image generation model to generate alpha images with keyable backgrounds based on the training image and the training prompt." However, the specification provides contradictory disclosure, in which the additional image generation model is based on the training image but not on the training prompt. The specification states:

[0108] In some examples, training component 740 creates an additional training dataset using the trained image generation model, where the additional training dataset includes an alpha image having an alpha channel. In some aspects, creating the additional training dataset includes performing a matting algorithm to replace the keyable background with the alpha channel.

[0109] In some examples, training component 740 trains verification model 745 to label alpha images using the additional training dataset. In some examples, training component 740 trains additional image generation model 750 based on the additional training dataset.

Spec. ¶¶ 108-109.
Here, the specification discloses that the additional image generation model 750 is trained based on the additional training dataset (Spec. ¶ 109), which is based on an alpha image having an alpha channel, created by performing a matting algorithm to replace the keyable background with the alpha channel (Spec. ¶ 108). The specification does not disclose that the additional image generation model is trained based on the claimed "training prompt." Instead, the "training prompt" is used to train image generation model 715. The specification states:

[0107] According to some aspects, training component 740 creates a training dataset including a training image and a training prompt, where the training image depicts an object and a keyable background. According to some aspects, training component 740 is configured to train image generation model 715 using a training image including a training keyable background. In some examples, training component 740 trains image generation model 715 to generate images with keyable backgrounds based on the training image and the training prompt.

Spec. ¶ 107.

Here, the image generation model 715 and the additional image generation model 750 are distinct models, as shown in fig. 7.

[0084] According to some aspects, image generation model 715 comprises one or more ANNs trained to generate an image including an object and a keyable background based on a text prompt describing the object and the keyable background. For example, in some cases, image generation model 715 comprises a diffusion model. According to some aspects, the diffusion model implements a reverse diffusion process (such as the reverse diffusion process described with reference to FIGs. 9 and 14). In some cases, image generation model 715 includes a U-Net (such as a U-Net described with reference to FIG. 10). In some aspects, the image generation model 715 is trained using a training image including a training keyable background.

[0116] According to some aspects, additional image generation model 750 comprises additional image generation parameters (e.g., machine learning parameters) stored in memory unit 710 or the memory of the external apparatus. In some cases, additional image generation model 750 comprises one or more ANNs trained to generate an additional image based on the additional training dataset. For example, in some cases, additional image generation model 750 comprises an additional diffusion model. According to some aspects, the additional diffusion model implements a reverse diffusion process (such as the reverse diffusion process described with reference to FIGs. 9 and 14). In some cases, additional image generation model 750 includes a U-Net (such as a U-Net described with reference to FIG. 10).

Spec. ¶¶ 84, 116.

Therefore, Claim 10 is rejected for lack of written description support, and Claims 11-15 are rejected because they depend on Claim 10 and inherit its deficiency.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-7, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Hou et al. (CN 113409221 A) in view of Gowal et al. (WO 2024038114 A1) and Smith et al. ("Blue Screen Matting").

Regarding Claim 1, Hou teaches A method for image generation (Hou teaches generation of an image with a foreground object and a selected background color, stating "Therefore, the primary principle is that the foreground object cannot contain the background color selected. From the principle, as long as the background used color does not exist in the foreground picture, . . .." Hou p. 2.), comprising:

obtaining a preliminary image including an object (Hou states, "Therefore, the primary principle is that the foreground object cannot contain the background color selected. From the principle, as long as the background used color does not exist in the foreground picture, . . .." Hou p. 2. The preliminary image is mapped to the disclosed foreground image, or an image where the color of the background does not disturb the relevant color analysis of the foreground object. The object is mapped to the disclosed foreground object.);

determining a least common color included in the preliminary image (Hou's "color does not exist in the foreground picture" corresponds to the least common color included in the foreground image, mapped to the preliminary image.);

generating, using an image generation model, a synthetic image depicting the object and a background consisting of the least common color ("Therefore, the primary principle is that the foreground object cannot contain the background color selected. From the principle, as long as the background used color does not exist in the foreground picture, . . .." Hou p. 2. The synthetic image is mapped to an image comprising the foreground object and the selected background color that does not exist in the foreground image. The image generation model is mapped to Hou's algorithm that selects a background color to create a keyable image.)

The Specification discloses that the image generation model is a machine learning model. Spec. ¶ 84. Hou does not teach the image generation model as disclosed. Further, Hou does not teach generating an alpha image by replacing the background with an alpha channel.

Gowal teaches generating, using an image generation model, a synthetic image depicting the object and a specified background, wherein the image generation model is a machine learning model ("For example, the original input representation can be a text prompt reading 'a snow leopard' and the initial latent representation can be a text prompt reading 'with a green background'. The system can create a new input representation that is a concatenation of the original input representation and the initial latent representations reading 'a snow leopard with a green background'." Gowal p. 12 lines 25-33.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Gowal's machine-learning-based image generation with Hou. One of ordinary skill in the art would be motivated to quickly prototype images by using AI. Gowal teaches a text-to-image machine learning model, stating "By using the techniques described in this specification, e.g., by leveraging large-scale, text-to-image, generative models, it is much less difficult to obtain large and realistic datasets that can be reliably manipulated. The generative models are trained on web-scale datasets and can be re-used and have broad non-domain-specific coverage. They can generate large amounts of novel data and can realistically capture the essence of (most) subsets of inputs. This allows for automatic identification of a greater variety of realistic failure cases." Gowal p. 2 lines 29-33.
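For readers unfamiliar with the claim language, the "determining a least common color" step of Claim 1 can be sketched in a few lines. The following Python sketch is purely illustrative and is not from the application or the cited art; the function name and the coarse-histogram quantization are assumptions. It histograms the preliminary image over a quantized RGB space and returns a color from the least-populated (ideally empty) bin, so a background of that color is cleanly keyable:

```python
# Illustrative sketch (not from the record) of a least-common-color step:
# histogram the image over a coarsely quantized RGB space and pick the
# least-populated bin, so that color is rare or absent in the foreground.
import numpy as np

def least_common_color(image: np.ndarray, bits: int = 3) -> tuple:
    """Return an RGB color from the least-populated bin of a
    (2**bits)**3 histogram of `image` (H x W x 3, uint8)."""
    q = image >> (8 - bits)                       # quantize each channel
    flat = (q[..., 0].astype(np.int64) * (1 << (2 * bits))
            + q[..., 1] * (1 << bits)
            + q[..., 2])                          # one bin index per pixel
    counts = np.bincount(flat.ravel(), minlength=1 << (3 * bits))
    rarest = int(np.argmin(counts))               # least (or never) used bin
    step = 256 >> bits                            # map bin back to cell center
    return ((rarest >> (2 * bits)) * step + step // 2,
            ((rarest >> bits) & ((1 << bits) - 1)) * step + step // 2,
            (rarest & ((1 << bits) - 1)) * step + step // 2)

# A strongly red image yields a color from an unused region of RGB space.
img = np.zeros((64, 64, 3), dtype=np.uint8)
img[..., 0] = 200
print(least_common_color(img))                    # → (16, 16, 16)
```

A coarse histogram (8 bins per channel here) makes "least common" robust to sensor noise; a full 24-bit histogram would almost always find some exactly-absent color but would be noise-sensitive.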
However, Hou in view of Gowal does not explicitly teach generating an alpha image by replacing the keyable background with an alpha channel.

([BRI on the record] With respect to "alpha image," the Examiner is reading the limitation to mean: an image that includes an alpha channel. A definition is provided for the term from the specification: "[0047] As used herein, an 'alpha image' refers to an image that includes an alpha channel. In some cases, an alpha image also includes one or more color channels including corresponding color information of the image (such as a red channel, a blue channel, a green channel, or a combination thereof). In some cases, an alpha image refers to an RGBA image, where 'RGB' indicates respective color channels and 'A' indicates an alpha channel.")

Smith teaches generating an alpha image by replacing the keyable background with an alpha channel. Smith teaches a "matting" algorithm, stating "A classical problem of imaging—the matting problem—is separation of a non-rectangular foreground image from a (usually) rectangular background image—for example, in a film frame, extraction of an actor from a background scene to allow substitution of a different background. Of the several attacks on this difficult and persistent problem, we discuss here only the special case of separating a desired foreground image from a background of a constant, or almost constant, backing color. This backing color has often been blue, so the problem, and its solution, have been called blue screen matting. However, other backing colors, such as yellow or (increasingly) green, have also been used, so we often generalize to constant color matting." Smith Abstract.

Smith teaches that the matting algorithm generates an image with an alpha channel, stating "The use of an alpha channel to form arbitrary compositions of images is well-known in computer graphics [9]. An alpha channel gives shape and transparency to a color image. It is the digital equivalent of a holdout matte—a grayscale channel that has full value pixels (for opaque) at corresponding pixels in the color image that are to be seen, and zero valued pixels (for transparent) at corresponding color pixels not to be seen. We shall use 1 and 0 to represent these two alpha values, respectively, . . .. We shall use 'alpha channel' and 'matte' interchangeably, it being understood that it is really the holdout matte that is the analog of the alpha channel." Smith Definitions.

Here, the "keyable background," mapped to "a background of a constant, or almost constant, backing color," is replaced with a 1 (opaque) or 0 (transparent) "alpha channel." Smith teaches "constant color matting," stating "we discuss here only the special case of separating a desired foreground image from a background of a constant, or almost constant, backing color." Smith Abstract.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Smith's chroma keying algorithm with Hou in view of Gowal. One of ordinary skill in the art would be motivated to make the background of an image easily replaceable; for example, it could be further used in creative photography or filming. Smith states, "A classical problem of imaging—the matting problem—is separation of a non-rectangular foreground image from a (usually) rectangular background image—for example, in a film frame, extraction of an actor from a background scene to allow substitution of a different background. Of the several attacks on this difficult and persistent problem, we discuss here only the special case of separating a desired foreground image from a background of a constant, or almost constant, backing color." Smith Abstract.
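The constant-color matting mapped from Smith above can be sketched concretely. This Python sketch is a simplified hard-threshold illustration, not Smith's actual method (his paper derives fractional per-pixel alpha); the function name and tolerance are assumptions. It replaces a keyable backing color with an alpha channel, producing an RGBA "alpha image" in the sense of Spec. ¶ 47:

```python
# Simplified constant-color matting sketch: pixels near the keyable
# backing color receive alpha 0 (transparent), all others alpha 255
# (opaque), yielding an RGBA array. A hard threshold is an assumption;
# production matting algorithms compute fractional per-pixel alpha.
import numpy as np

def key_to_alpha(rgb: np.ndarray, backing: tuple, tol: int = 30) -> np.ndarray:
    """Replace the keyable background of `rgb` (H x W x 3, uint8) with
    an alpha channel, returning an H x W x 4 RGBA array."""
    diff = np.abs(rgb.astype(np.int16) - np.array(backing, dtype=np.int16))
    is_backing = diff.max(axis=-1) <= tol          # close to backing color?
    alpha = np.where(is_backing, 0, 255).astype(np.uint8)
    return np.dstack([rgb, alpha])

# Green backing around a gray square foreground object.
frame = np.zeros((8, 8, 3), dtype=np.uint8)
frame[:] = (0, 255, 0)                             # keyable green background
frame[2:6, 2:6] = (128, 128, 128)                  # foreground object
rgba = key_to_alpha(frame, backing=(0, 255, 0))
print(rgba.shape, rgba[0, 0, 3], rgba[3, 3, 3])    # → (8, 8, 4) 0 255
```

The per-channel tolerance is why an absent (least common) backing color matters: if the foreground contained colors within `tol` of the backing, those pixels would be wrongly keyed out.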
Regarding Claim 3, Hou in view of Gowal and Smith teaches The method of claim 1, further comprising: determining the least common color based on a color analysis of the preliminary image (Hou states, "Therefore, the primary principle is that the foreground object cannot contain the background color selected. From the principle, as long as the background used color does not exist in the foreground picture, using any color background can be, . . .." Hou p. 2.).

Regarding Claim 4, Hou in view of Gowal and Smith teaches The method of claim 1, further comprising: obtaining a preliminary text prompt describing the object ("For example, the original input representation can be a text prompt reading 'a snow leopard' and the initial latent representation can be a text prompt reading 'with a green background'. The system can create a new input representation that is a concatenation of the original input representation and the initial latent representations reading 'a snow leopard with a green background'." Gowal p. 12 lines 25-33.); and generating, by the image generation model, the preliminary image based on the preliminary text prompt (Gowal states, "For example, the original input representation can be a text prompt reading 'a snow leopard' and the initial latent representation can be a text prompt reading 'with a green background'. The system can create a new input representation that is a concatenation of the original input representation and the initial latent representations reading 'a snow leopard with a green background'." Gowal p. 12 lines 25-33. See Gowal fig. 4. Here, the system, as shown, could continue revising the text input for a machine learning model to generate a next image. Therefore, any text prompt is a preliminary text prompt with respect to the next related text prompt; any background is a preliminary background with respect to a next related background.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Gowal's machine-learning-based image generation with Hou. One of ordinary skill in the art would be motivated to quickly prototype images by using AI.

Regarding Claim 5, Hou in view of Gowal and Smith teaches The method of claim 1, wherein: the preliminary image includes a preliminary background comprising a neutral monochrome color ("For example, the original input representation can be a text prompt reading 'a snow leopard' and the initial latent representation can be a text prompt reading 'with a green background'. The system can create a new input representation that is a concatenation of the original input representation and the initial latent representations reading 'a snow leopard with a green background'." Gowal p. 12 lines 25-33. Smith teaches "constant color matting," stating "A classical problem of imaging—the matting problem—is separation of a non-rectangular foreground image from a (usually) rectangular background image—for example, in a film frame, extraction of an actor from a background scene to allow substitution of a different background. Of the several attacks on this difficult and persistent problem, we discuss here only the special case of separating a desired foreground image from a background of a constant, or almost constant, backing color. This backing color has often been blue, so the problem, and its solution, have been called blue screen matting. However, other backing colors, such as yellow or (increasingly) green, have also been used, so we often generalize to constant color matting." Smith Abstract.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Gowal's machine-learning-based image generation with Hou. One of ordinary skill in the art would be motivated to quickly prototype images by using AI.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Smith's chroma keying algorithm with Hou in view of Gowal. One of ordinary skill in the art would be motivated to make the background of an image easily replaceable; for example, it could be further used in creative photography.

Regarding Claim 6, Hou in view of Gowal and Smith teaches The method of claim 4, further comprising: modifying the preliminary text prompt with a description of the background to obtain a text prompt; and generating the synthetic image based on the text prompt (Gowal states, "For example, the original input representation can be a text prompt reading 'a snow leopard' and the initial latent representation can be a text prompt reading 'with a green background'. The system can create a new input representation that is a concatenation of the original input representation and the initial latent representations reading 'a snow leopard with a green background'." Gowal p. 12 lines 25-33. See Gowal fig. 4. Here, the system, as shown, could continue revising the text input for a machine learning model to generate a next image. Therefore, any text prompt is a preliminary text prompt with respect to the next related text prompt; any background is a background with respect to a next related background. The next text prompt could be: "a snow leopard with a red background.").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Gowal's machine-learning-based image generation with Hou. One of ordinary skill in the art would be motivated to quickly prototype images by using AI.
Gowal teaches a text-to-image machine learning model, stating "By using the techniques described in this specification, e.g., by leveraging large-scale, text-to-image, generative models, it is much less difficult to obtain large and realistic datasets that can be reliably manipulated. The generative models are trained on web-scale datasets and can be re-used and have broad non-domain-specific coverage. They can generate large amounts of novel data and can realistically capture the essence of (most) subsets of inputs. This allows for automatic identification of a greater variety of realistic failure cases." Gowal p. 2 lines 29-33.

Regarding Claim 7, Hou in view of Gowal and Smith teaches The method of claim 1, wherein generating the alpha image comprises: performing a matting algorithm (Smith Title: "Blue Screen Matting").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Smith's chroma keying algorithm with Hou in view of Gowal. One of ordinary skill in the art would be motivated to make the background of an image easily replaceable, which could be used in creative photography or filming.

Claim 16 is substantially similar to Claim 1, and Claim 1's rejection analyses are also applied to Claim 16. In addition, Claim 16 recites, "A system for image generation, comprising: a memory component; and a processor device coupled to the memory component, the processing device configured to perform operations . . ." (Hou recites, "The embodiment of the invention further claims a computer-readable storage medium, the computer-readable storage medium is stored with a computer program, the computer program is executed by a processor to realize the image color scratching method as described above." Gowal p. 20 discloses computer-readable media and a processor.).
Regarding Claim 17, Hou in view of Gowal and Smith teaches The system of claim 16, the system further comprising: a color analysis component configured to determine the least common color based on a color analysis of the preliminary image (Hou states, "Therefore, the primary principle is that the foreground object cannot contain the background color selected. From the principle, as long as the background used color does not exist in the foreground picture, . . .." Hou p. 2.).

Regarding Claim 18, Hou in view of Gowal and Smith teaches The system of claim 16, the system further comprising: a prompt generation component configured to modify a preliminary text prompt describing the object with a description of the least common color to obtain a text prompt, wherein the synthetic image is generated based on the text prompt (Gowal states, "For example, the original input representation can be a text prompt reading 'a snow leopard' and the initial latent representation can be a text prompt reading 'with a green background'. The system can create a new input representation that is a concatenation of the original input representation and the initial latent representations reading 'a snow leopard with a green background'." Gowal p. 12 lines 25-33. See Gowal fig. 4. Here, the system, as shown, could continue revising the text input for a machine learning model to generate a next image. Therefore, any text prompt is a preliminary text prompt with respect to the next related text prompt; any background is a preliminary background with respect to a next related background. The next text prompt could be: "a snow leopard with a red background." Hou already teaches identifying the least common color that could be used as a background. After Hou and Gowal are combined, Gowal's text input could be used to specify the identified background color to generate an image.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Gowal's machine-learning-based image generation with Hou. One of ordinary skill in the art would be motivated to quickly prototype images by using AI.

Claims 8 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Hou in view of Gowal and Smith as applied to Claims 1 and 16, and further in view of Han et al. (CN 108596913 A).

Regarding Claim 8, Hou in view of Gowal and Smith teaches The method of claim 1. Hou in view of Gowal and Smith does not explicitly disclose wherein generating the alpha image comprises: performing a plurality of matting algorithms; and selecting an output from one of the plurality of matting algorithms as the alpha image.

Han teaches wherein generating the alpha image comprises: performing a plurality of matting algorithms; and selecting an output from one of the plurality of matting algorithms as the alpha image (Han states, "by applying multiple preset matting algorithm for obtaining an assessment of the optimal algorithm, obtaining the foreground transparency of image to obtain the matting result." Han p. 2. After Hou in view of Gowal and Smith is combined with Han, the matting algorithms are applied to Hou in view of Gowal and Smith's keyable background.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Han's selection of an optimal matting algorithm with Hou in view of Gowal and Smith. One of ordinary skill in the art would be motivated to select an optimal matting algorithm to achieve better results. Han states, "by applying multiple preset matting algorithm for obtaining an assessment of the optimal algorithm, obtaining the foreground transparency of image to obtain the matting result." Han p. 2.

Claim 19 recites limitations that are substantially similar to Claim 8.
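The run-several-mattes-and-pick-one approach attributed to Han can be illustrated with a short sketch. Everything here is hypothetical: the two stand-in mattes and the "decisiveness" score are assumptions for illustration, not Han's actual algorithms or assessment:

```python
# Hypothetical illustration of selecting among multiple matting
# algorithms: apply each candidate, score every alpha map, keep the best.
import numpy as np

def hard_matte(rgb, backing, tol=30):
    diff = np.abs(rgb.astype(np.int16) - np.array(backing, dtype=np.int16))
    return (diff.max(axis=-1) > tol).astype(np.float64)   # binary 0/1 alpha

def soft_matte(rgb, backing, tol=60):
    diff = np.abs(rgb.astype(np.int16) - np.array(backing, dtype=np.int16))
    return np.clip(diff.max(axis=-1) / tol, 0.0, 1.0)     # fractional alpha

def pick_best(rgb, backing, mattes, score):
    """Apply each matting algorithm and select the highest-scoring alpha map."""
    return max((m(rgb, backing) for m in mattes), key=score)

def decisiveness(alpha):
    """Toy score: prefer alphas that commit to opaque or transparent."""
    return float(np.mean(np.abs(alpha - 0.5)))

frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:] = (0, 255, 0)                                    # keyable background
frame[1:3, 1:3] = (200, 50, 50)                           # foreground object
best = pick_best(frame, (0, 255, 0), [hard_matte, soft_matte], decisiveness)
print(best.shape, best[0, 0], best[1, 1])                 # → (4, 4) 0.0 1.0
```

The selection step is what distinguishes this from a single-matte pipeline: the alpha map actually used as "the alpha image" is chosen by an assessment over all candidates.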
Claim 8's rejection analyses based on Hou in view of Gowal, Smith, and Han are applied to Claim 19.

Claims 9 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hou in view of Gowal and Smith as applied to Claims 1 and 16, and further in view of Guo (CN 114037710 A).

Regarding Claim 9, Hou in view of Gowal and Smith teaches The method of claim 1. Hou in view of Gowal and Smith does not explicitly disclose wherein: the image generation model is trained using a training image including a training keyable background.

Guo teaches wherein: the image generation model is trained using a training image including a training keyable background ("In the embodiment of the present disclosure, the image segmentation model is pre-trained, for the training method of the model, as a realizing mode, obtaining the training sample image, background of the training sample image is a pure background of the set color, such as green background, blue background and so on,").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Guo's use of pure-color backgrounds with Hou in view of Gowal and Smith. One of ordinary skill in the art would be motivated to allow the machine learning model to provide better predictions. Supervised learning may involve machine learning model training based on labeled training data, which may include example input-output pairs.

Claim 20 recites limitations that are substantially similar to Claim 9. Claim 9's rejection analyses based on Hou in view of Gowal, Smith, and Guo are applied to Claim 20.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. BRANHAM (US 20100066753 A1): "The color of the embedded greenscreen marks would be selected to be clearly distinguishable from the background color, but would itself be keyable.
Typically, a green would be used for green greenscreens, blue in blue greenscreens, etc., as this would allow them to be easily removed from the initial image with the keying process." Branham ¶ 30.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHENGXI LIU, whose telephone number is (571) 270-7509. The examiner can normally be reached M-F 9 AM - 5 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZHENGXI LIU/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Jan 18, 2024
Application Filed
Aug 02, 2025
Non-Final Rejection — §103, §112
Oct 30, 2025
Examiner Interview Summary
Oct 30, 2025
Applicant Interview (Telephonic)
Nov 06, 2025
Response Filed
Jan 30, 2026
Final Rejection — §103, §112
Mar 18, 2026
Examiner Interview Summary
Mar 18, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602865: METHODS FOR DEPTH CONFLICT MITIGATION IN A THREE-DIMENSIONAL ENVIRONMENT
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12599463: COLOR MANAGEMENT PROCESS FOR CUSTOMIZED DENTAL RESTORATIONS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12597402: INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM FOR APPLICATION WINDOW HAVING FIRST DISPLAY MODE AND SECOND DISPLAY MODE
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12567193: PARTICLE RENDERING METHOD AND APPARATUS
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12561929: METHOD AND ELECTRONIC DEVICE FOR PROVIDING INFORMATION RELATED TO PLACING OBJECT IN SPACE
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 64% (99% with interview, a +40.1% lift)
Median Time to Grant: 3y 4m
PTA Risk: Moderate

Based on 354 resolved cases by this examiner. Grant probability is derived from the career allow rate.
