Prosecution Insights
Last updated: April 19, 2026
Application No. 18/169,015

Artificial Intelligence (AI)-Based Generation of In-App Asset Variations

Non-Final OA: §101, §102, §103
Filed: Feb 14, 2023
Examiner: CHEN, KUANG FU
Art Unit: 2143
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Sony Interactive Entertainment Inc.
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (203 granted / 252 resolved; +25.6% vs Tech Center average; above average)
Interview Lift: +67.0% (resolved cases with interview vs without)
Avg Prosecution: 2y 11m; 37 applications currently pending
Career History: 289 total applications across all art units

Statute-Specific Performance

§101: 18.4% (-21.6% vs TC avg)
§103: 47.4% (+7.4% vs TC avg)
§102: 11.5% (-28.5% vs TC avg)
§112: 14.0% (-26.0% vs TC avg)
Deltas are vs a Tech Center average estimate. Based on career data from 252 resolved cases.
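The headline allow-rate figure is a simple ratio over the examiner's resolved cases. A quick sketch, using only the counts shown in this report (203 granted of 252 resolved), reproduces the rounding:

```python
# Reproduce the career allow rate above from the raw counts in this
# report: 203 granted out of 252 resolved cases.
granted = 203
resolved = 252

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # 80.6%, shown rounded as 81%
```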

Office Action

Rejections under §101, §102, and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the claims elected 12/29/2025. Claims 1-11 and 17-20 are presented for examination.

Election/Restrictions

Claims 12-16 are withdrawn as they are directed to a nonelected invention, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 12/29/2025.

Information Disclosure Statement

The information disclosure statement (IDS) submitted 9/23/2024 has been considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-11 and 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 ("2019 PEG").

Claim 1

Step 1: The claim recites "A method…comprising"; therefore, it is directed to the statutory category of a process.
Step 2A Prong 1: The claim recites, inter alia: "A method for automatically generating a variation of an in-app asset, comprising: automatically generate a variation of the in-app asset based on the contextual feature specified by the contextual communication; and conveying the variation of the in-app asset for human assessment." These limitations recite a process performable in the human mind with the aid of pen and paper: using judgment to independently, without another decision trigger (automatically), determine a variation of an observed in-app asset based on the contextual feature specified by the observed contextual communication, and, by writing the variation out on the pen and paper, convey the determined variation of the in-app asset for further human assessment.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: "providing a description of a reference version of an in-app asset to an artificial intelligence model; providing a contextual communication to the artificial intelligence model, the contextual communication specifying a contextual feature for generation of variations of the reference version of the in-app asset." These additional elements merely recite insignificant extra-solution activity: mere data gathering, e.g. providing a description of a reference version of an in-app asset and a contextual communication to the artificial intelligence model, and selecting a particular type of data to be manipulated, e.g. the contextual communication specifying a contextual feature for generation of variations of the reference version of the in-app asset, as all uses of the judicial exception of automatically generating a variation of an in-app asset require the provided description of a reference version of an in-app asset and the contextual communication. See MPEP 2106.05(g).
"executing the artificial intelligence model to": This additional element is recited at a high level of generality and merely amounts to invoking computers or other machinery merely as a tool to apply the underlying judicial exception of generating a variation of an in-app asset. See MPEP 2106.05(f).

Step 2B: The additional elements from Step 2A Prong 2 include insignificant extra-solution activity of data gathering recited by "providing a description of a reference version of an in-app asset to an artificial intelligence model; providing a contextual communication to the artificial intelligence model, the contextual communication specifying a contextual feature for generation of variations of the reference version of the in-app asset". These insignificant extra-solution activities are well-understood, routine, and conventional activities similar to presenting offers and gathering statistics; see MPEP 2106.05(d)(II). Further, the additional elements include invoking computer machinery to apply the underlying judicial exception. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 2

Step 1: a process, as in claim 1.

Step 2A Prong 1: The claim recites the abstract ideas of claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: wherein the in-app asset is defined by multiple layers, wherein each of the multiple layers defines a different aspect of the in-app asset. These additional elements are recited at a high level of generality and represent nothing more than an attempt to generally link the use of the underlying judicial exception to a field of use or a technological environment defining the in-app asset. See MPEP 2106.05(h).
Step 2B: The additional elements from Step 2A Prong 2 include generally linking the use of the judicial exception to a particular field of use or technological environment. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 3

Step 1: a process, as in claim 1.

Step 2A Prong 1: The claim recites the abstract ideas of claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: wherein the variation of the in-app asset includes a same set of layers as the reference version of the in-app asset, and wherein a layer of the variation of the in-app asset is defined differently than a corresponding layer of the reference version of the in-app asset. These additional elements are recited at a high level of generality and represent nothing more than an attempt to generally link the use of the underlying judicial exception to a field of use or a technological environment defining the in-app asset. See MPEP 2106.05(h).

Step 2B: The additional elements from Step 2A Prong 2 include generally linking the use of the judicial exception to a particular field of use or technological environment. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 4

Step 1: a process, as in claim 1.

Step 2A Prong 1: The claim recites the abstract ideas of claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The additional elements of the claim are as follows: wherein the variation of the in-app asset includes a different set of layers as compared to a reference set of layers that define the reference version of the in-app asset. These additional elements are recited at a high level of generality and represent nothing more than an attempt to generally link the use of the underlying judicial exception to a field of use or a technological environment defining the in-app asset. See MPEP 2106.05(h).

Step 2B: The additional elements from Step 2A Prong 2 include generally linking the use of the judicial exception to a particular field of use or technological environment. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 5

Step 1: a process, as in claim 4.

Step 2A Prong 1: The claim recites the abstract ideas of claim 4.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: wherein the different set of layers includes more layers than the reference set of layers, or wherein the different set of layers includes less layers than the reference set of layers, or wherein the different set of layers includes one or more layers not present in the reference set of layers. These additional elements are recited at a high level of generality and represent nothing more than an attempt to generally link the use of the underlying judicial exception to a field of use or a technological environment defining the in-app asset. See MPEP 2106.05(h).

Step 2B: The additional elements from Step 2A Prong 2 include generally linking the use of the judicial exception to a particular field of use or technological environment.
Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 6

Step 1: a process, as in claim 5.

Step 2A Prong 1: The claim recites the abstract ideas of claim 5.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: wherein at least one layer in the different set of layers is defined differently than an equivalent layer in the reference set of layers. These additional elements are recited at a high level of generality and represent nothing more than an attempt to generally link the use of the underlying judicial exception to a field of use or a technological environment defining the in-app asset. See MPEP 2106.05(h).

Step 2B: The additional elements from Step 2A Prong 2 include generally linking the use of the judicial exception to a particular field of use or technological environment. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 7

Step 1: a process, as in claim 1.

Step 2A Prong 1: The claim recites the abstract ideas of claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: wherein the contextual communication is one or more of a text input to the artificial intelligence model and a graphical input to the artificial intelligence model. These additional elements are recited at a high level of generality and represent nothing more than an attempt to generally link the use of the underlying judicial exception to a field of use or a technological environment in which the in-app asset is defined. See MPEP 2106.05(h).
Step 2B: The additional elements from Step 2A Prong 2 include generally linking the use of the judicial exception to a particular field of use or technological environment. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 8

Step 1: a process, as in claim 1.

Step 2A Prong 1: The claim recites the abstract ideas of claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: wherein conveying the variation of the in-app asset for human assessment includes rendering of the variation of the in-app asset through a graphical user interface. These additional elements are recited at a high level of generality and represent nothing more than an attempt to generally link the use of the underlying judicial exception to a field of use or a technological environment in which the in-app asset is conveyed. See MPEP 2106.05(h).

Step 2B: The additional elements from Step 2A Prong 2 include generally linking the use of the judicial exception to a particular field of use or technological environment. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 9

Step 1: a process, as in claim 1.

Step 2A Prong 1: The claim recites the abstract ideas of claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: wherein the in-app asset is an audio asset. These additional elements are recited at a high level of generality and represent nothing more than an attempt to generally link the use of the underlying judicial exception to a field of use or a technological environment defining the in-app asset.
See MPEP 2106.05(h).

Step 2B: The additional elements from Step 2A Prong 2 include generally linking the use of the judicial exception to a particular field of use or technological environment. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 10

Step 1: a process, as in claim 1.

Step 2A Prong 1: The claim recites the abstract ideas of claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: wherein the in-app asset is a graphical asset. These additional elements are recited at a high level of generality and represent nothing more than an attempt to generally link the use of the underlying judicial exception to a field of use or a technological environment defining the in-app asset. See MPEP 2106.05(h).

Step 2B: The additional elements from Step 2A Prong 2 include generally linking the use of the judicial exception to a particular field of use or technological environment. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 11

Step 1: a process, as in claim 1.
Step 2A Prong 1: The claim recites, inter alia: "further comprising: automatically culling at least one variation of the in-app asset as generated by determining that at least one feature of the at least one variation of the in-app asset does not satisfy acceptance criteria for the in-app asset." These limitations recite further processes performable in the human mind with the aid of pen and paper: using judgment to independently, without another decision trigger (automatically), cull at least one determined variation of the in-app asset based on a judgment that at least one feature of the in-app asset does not satisfy acceptance criteria for the in-app asset.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional element of the claim is as follows: "by the artificial intelligence model to". This additional element is recited at a high level of generality and merely amounts to invoking computers or other machinery merely as a tool to apply the underlying judicial exception of culling at least one variation of the in-app asset. See MPEP 2106.05(f).

Step 2B: The additional elements from Step 2A Prong 2 include invoking computer machinery to apply the underlying judicial exception. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 17

Step 1: The claim recites "A system…comprising"; therefore, it is directed to the statutory category of a machine.
Step 2A Prong 1: The claim recites, inter alia: "A system for automatically generating and auditioning variations of an in-app asset, comprising: automatically generate a variation of the in-app asset based on the contextual feature specified by the contextual communication; and convey the variation of the in-app asset." These limitations recite a process performable in the human mind with the aid of pen and paper: using judgment to independently, without another decision trigger (automatically), determine a variation of an observed in-app asset based on the contextual feature specified by the observed contextual communication, and, by writing the variation out on the pen and paper, convey the determined variation of the in-app asset.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows.

"an input processor configured to" and "an artificial intelligence model configured to": These additional elements are recited at a high level of generality and merely amount to invoking computers or other machinery merely as a tool to apply the underlying judicial exception of generating a variation of an in-app asset. See MPEP 2106.05(f).

"receive a reference version of an in-app asset and a contextual communication, the contextual communication specifying a contextual feature for generation of variations of the reference version of the in-app asset; receive the reference version of the in-app asset and the contextual communication as input; and an output processor configured to convey…to a client computing system": These additional elements merely recite insignificant extra-solution activity: mere data gathering, e.g. receiving a reference version of an in-app asset and a contextual communication, and receiving the reference version of the in-app asset and the contextual communication as input; data outputting, e.g. an output processor configured to convey…to a client computing system; and selecting a particular type of data to be manipulated, e.g. the contextual communication specifying a contextual feature for generation of variations of the reference version of the in-app asset, as all uses of the judicial exception of automatically generating a variation of an in-app asset require the provided description of a reference version of an in-app asset and the contextual communication. See MPEP 2106.05(g).

Step 2B: The additional elements from Step 2A Prong 2 include invoking computer machinery to apply the underlying judicial exception, and insignificant extra-solution activity of data gathering and data outputting recited by "receive a reference version of an in-app asset and a contextual communication, the contextual communication specifying a contextual feature for generation of variations of the reference version of the in-app asset; receive the reference version of the in-app asset and the contextual communication as input; and an output processor configured to convey…to a client computing system". These insignificant extra-solution activities are well-understood, routine, and conventional activities similar to presenting offers and gathering statistics and receiving or transmitting data over a network; see MPEP 2106.05(d)(II). Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 18

Step 1: a machine, as in claim 17.

Step 2A Prong 1: The claim recites the abstract ideas of claim 17.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The additional elements of the claim are as follows: wherein the in-app asset is defined by multiple layers, wherein each of the multiple layers defines a different aspect of the in-app asset, wherein the variation of the in-app asset includes a different set of layers as compared to a reference set of layers that define the reference version of the in-app asset and/or at least one different parameter setting within a layer common to both the reference version of the in-app asset and the variation of the in-app asset. These additional elements are recited at a high level of generality and represent nothing more than an attempt to generally link the use of the underlying judicial exception to a field of use or a technological environment defining the in-app asset. See MPEP 2106.05(h).

Step 2B: The additional elements from Step 2A Prong 2 include generally linking the use of the judicial exception to a particular field of use or technological environment. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 19

Step 1: a machine, as in claim 17.

Step 2A Prong 1: The claim recites the abstract ideas of claim 17.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: further comprising: a graphical user interface executed at the client computing system to provide for rendering and assessment of the variation of the in-app asset. These additional elements are recited at a high level of generality and represent nothing more than an attempt to generally link the use of the underlying judicial exception to a field of use or a technological environment involving the in-app asset. See MPEP 2106.05(h).
Step 2B: The additional elements from Step 2A Prong 2 include generally linking the use of the judicial exception to a particular field of use or technological environment. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 20

Step 1: a machine, as in claim 17.

Step 2A Prong 1: The claim recites the abstract ideas of claim 17.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows: wherein the in-app asset is either an audio asset or a graphical asset. These additional elements are recited at a high level of generality and represent nothing more than an attempt to generally link the use of the underlying judicial exception to a field of use or a technological environment defining the in-app asset. See MPEP 2106.05(h).

Step 2B: The additional elements from Step 2A Prong 2 include generally linking the use of the judicial exception to a particular field of use or technological environment. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 7, 10, 17 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gonzalez et al. (hereinafter Gonzalez), "Generating Gameplay-Relevant Art Assets with Transfer Learning" (2020). Gonzalez was disclosed in an IDS dated 9/23/2024.

Regarding independent claim 1, Gonzalez discloses a method for automatically generating a variation of an in-app asset (page 1 Abstract and Introduction and page 2 System Overview disclose a system that creates, for a user, variations of Pokemon sprites/images from the Pokemon game app (a variation of an in-app asset)), comprising: providing a description of a reference version of an in-app asset to an artificial intelligence model (page 3 Dataset Collection and Figure 1 disclose Pokemon image retrieval, wherein the Figure 1 "Input reshape: 32*32*3" block represents the pixel data of the Pokemon image in Hue, Saturation, Value (HSV) format (providing a description of a reference version of an in-app asset) that is provided to the proposed convolutional VAE model (to an artificial intelligence model)); providing a contextual communication to the artificial intelligence model (page 2 Pokemon, page 3 Dataset Collection and Figure 1 disclose type information retrieved, e.g. fire-type Pokemon…grass-type ones, shown in the Figure 1 "Type info 1*18" block provided separately to the convolutional VAE model), the contextual communication specifying a contextual feature for generation of variations of the reference version of the in-app asset (page 2 Pokemon Type and pages 3-4 disclose that the retrieved Pokemon type information, e.g. fire-type/grass-type, in Figure 2 specifies a contextual feature for generation of variations of the referenced Pokemon); executing the artificial intelligence model to automatically generate a variation of the in-app asset based on the contextual feature specified by the contextual communication (page 2 System Overview and page 4 Evaluation disclose that executing the proposed convolutional VAE system outputs, e.g. in Figure 2, a variation of the input Pokemon based on the contextual feature, e.g. grass/fire types, specified by the retrieved Pokemon type information); and conveying the variation of the in-app asset for human assessment (page 4 Figure 2: the variation, e.g. a fire-type/grass-type Pokemon based on the reference Pokemon asset, is output by the system for visual quality evaluation).

Regarding dependent claim 7, Gonzalez discloses the method as recited in claim 1, wherein the contextual communication is one or more of a text input to the artificial intelligence model and a graphical input to the artificial intelligence model (page 2 Pokemon, page 3 Dataset Collection: the type information retrieved from Subbiah 2018 indicates type in a Kaggle 'pokemon.csv' file, wherein the types are textual values, e.g. 'Fire/Grass', represented as "Type info 1*18" into the CVAE per Figure 1).

Regarding dependent claim 10, Gonzalez discloses the method as recited in claim 1, wherein the in-app asset is a graphical asset (page 1 Abstract, page 4: the Pokemon game visual is a graphical asset as shown in Figure 2).
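The Gonzalez system mapped to claims 1, 7, and 10 above is a conditional variational autoencoder: a 32x32x3 HSV sprite plus an 18-dimensional type vector go in, and a type-conditioned variation comes out. The toy NumPy sketch below illustrates only the conditioning mechanism (concatenating the type vector with the input and latent code). The shapes mirror Figure 1 of the paper as cited in the Office Action, but the weights, layer count, and function names are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes echoing the cited Figure 1: a 32x32x3 HSV sprite flattened to a
# vector, plus an 18-dim one-hot "type info" vector (fire, grass, ...).
IMG_DIM, TYPE_DIM, LATENT_DIM = 32 * 32 * 3, 18, 64

# Illustrative (untrained) linear weights; a real CVAE learns conv layers.
W_enc = rng.normal(0, 0.01, (IMG_DIM + TYPE_DIM, 2 * LATENT_DIM))
W_dec = rng.normal(0, 0.01, (LATENT_DIM + TYPE_DIM, IMG_DIM))

def encode(img, type_onehot):
    """Encoder: image + type condition -> Gaussian latent parameters."""
    h = np.concatenate([img, type_onehot]) @ W_enc
    return h[:LATENT_DIM], h[LATENT_DIM:]  # mu, log_var

def decode(z, type_onehot):
    """Decoder: latent + type condition -> reconstructed/varied sprite."""
    out = np.concatenate([z, type_onehot]) @ W_dec
    return 1.0 / (1.0 + np.exp(-out))  # pixel values in (0, 1)

# "Generate a variation": encode a reference sprite, then decode with a
# different type condition (e.g. swap grass-type for fire-type).
ref_sprite = rng.random(IMG_DIM)
grass, fire = np.eye(TYPE_DIM)[0], np.eye(TYPE_DIM)[1]

mu, log_var = encode(ref_sprite, grass)
z = mu + np.exp(0.5 * log_var) * rng.normal(size=LATENT_DIM)  # reparameterize
variation = decode(z, fire)
print(variation.shape)  # flat (3072,) vector; reshape to 32x32x3 to display
```

Swapping the type vector at decode time is what makes the output a type-conditioned variation rather than a plain reconstruction, which is the behavior the examiner reads onto the "contextual communication" limitation.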
Regarding independent claim 17, Gonzalez discloses a system for automatically generating and auditioning variations of an in-app asset (page 2 System Overview and page 4 Evaluation disclose a system outputting visual images of Pokemon image variations for evaluation of Pokemon game assets), comprising: an input processor configured to receive a reference version of an in-app asset and a contextual communication (page 3 Figure 1: the proposed convolutional variational autoencoder acts as an input processor configured to receive "Input reshape: 32*32*3" as a reference version of the retrieved Pokemon image and "Type info 1*18" representing the retrieved type information), the contextual communication specifying a contextual feature for generation of variations of the reference version of the in-app asset (page 2 Pokemon Type and pages 3-4 disclose that the retrieved Pokemon type information, e.g. fire-type/grass-type, in Figure 2 specifies a contextual feature for generation of variations of the referenced Pokemon); an artificial intelligence model configured to receive the reference version of the in-app asset and the contextual communication as input (page 3 Figure 1: the proposed convolutional variational autoencoder is configured to receive "Input reshape: 32*32*3" as the reference version of the retrieved Pokemon image and "Type info 1*18" representing the retrieved type information) and automatically generate a variation of the in-app asset based on the reference version of the in-app asset and the contextual communication (page 4: the system outputs variations of the Pokemon image based on the referenced Pokemon input and the type information, e.g. fire, grass, water, and fairy); and an output processor configured to convey the variation of the in-app asset to a client computing system (page 4 Figure 2: the variation, e.g. a fire-type/grass-type Pokemon based on the reference Pokemon asset, is output by the system (an output processor configured to convey the variation of the in-app asset) for visual quality evaluation, necessarily to a client workstation displaying the visual graphic output shown in Figure 2 (to a client computing system)).

Regarding dependent claim 20, Gonzalez discloses the system as recited in claim 17, wherein the in-app asset is either an audio asset or a graphical asset (page 1 Abstract, page 4: the Pokemon game visual is a graphical asset as shown in Figure 2).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Gonzalez in view of Smirnov et al. (hereinafter Smirnov), "MarioNette: Self-Supervised Sprite Learning" (2021).

Regarding dependent claim 2, Gonzalez teaches all the elements of claim 1.
Gonzalez does not expressly teach wherein the in-app asset is defined by multiple layers, wherein each of the multiple layers defines a different aspect of the in-app asset.

However, Smirnov teaches an in-app asset defined by multiple layers (page 1 describes an input frame (in-app asset) from frames of a sprite-based video game, and Section 2 Method describes that each input frame is decomposed into depth layers (is defined by multiple layers)), wherein each of the multiple layers defines a different aspect of the in-app asset (Section 2 Method describes that the organized depth layers of each frame represent a set of sprites at different depths, with the background as a special learnable sprite of the input game frame (each layer defines a different aspect of the in-app asset)).

Because Gonzalez and Smirnov both address in-game sprites/images, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of an in-app asset being defined by multiple layers, wherein each of the multiple layers defines a different aspect of the in-app asset, as suggested by Smirnov, into Gonzalez's method, with a reasonable expectation of success, such that the system additionally handles various depth layers and the Pokemon sprite asset is further defined with a background-layer special sprite, to teach wherein the in-app asset is defined by multiple layers, wherein each of the multiple layers defines a different aspect of the in-app asset. This modification would have been motivated by the desire to enable decomposing visual content into semantically meaningful parts for analysis, synthesis, and editing, addressing a long-standing problem (Smirnov Section 1 Related Work).

Regarding dependent claim 3, Gonzalez teaches all the elements of claim 1.
Gonzalez does not expressly teach wherein the variation of the in-app asset includes a same set of layers as the reference version of the in-app asset, and wherein a layer of the variation of the in-app asset is defined differently than a corresponding layer of the reference version of the in-app asset. However, Smirnov teaches wherein a variation of an in-app asset includes a same set of layers as a reference version of the in-app asset (Sections 2.2-2.5 and Figure 2 teach applying different local spatial transformations around their anchors (wherein variation) to the same frame containing the sprite layers (of in-app asset includes a same set of layers) to move a character sprite of the frame (as a reference version of the in-app asset)), and wherein a layer of the variation of the in-app asset is defined differently than a corresponding layer of the reference version of the in-app asset (Sections 2.2-2.5 and Figure 2, wherein layer 2 (wherein a layer) of the applied local transform of the frame (of the variation of the in-app asset) has differently defined local transforms than layer 1 (is defined differently than a corresponding layer) of the character sprite of the frame (of the reference version of the in-app asset)).
Because Gonzalez and Smirnov both address in-game sprites/images, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings wherein a variation of an in-app asset includes a same set of layers as a reference version of the in-app asset, and wherein a layer of the variation of the in-app asset is defined differently than a corresponding layer of the reference version of the in-app asset, as suggested by Smirnov, into Gonzalez’s method, with a reasonable expectation of success, such that the system additionally handles various depth layers and the Pokémon sprite asset’s frame further includes local spatial transforms around assigned anchors, with another layer having differently defined local transforms than the corresponding layer containing the Pokémon sprite asset of the frame, thereby teaching wherein the variation of the in-app asset includes a same set of layers as the reference version of the in-app asset, and wherein a layer of the variation of the in-app asset is defined differently than a corresponding layer of the reference version of the in-app asset. This modification would have been motivated by the desire to enable decomposing visual content into semantically meaningful parts for analysis, synthesis, and editing, addressing a long-standing problem (Smirnov, Section 1 Related Work).

Regarding dependent claim 4, Gonzalez teaches all the elements of claim 1. Gonzalez does not expressly teach wherein the variation of the in-app asset includes a different set of layers as compared to a reference set of layers that define the reference version of the in-app asset.
However, Smirnov teaches wherein a variation of an in-app asset includes a different set of layers as compared to a reference set of layers that define a reference version of the in-app asset (Sections 2.2-2.5 and Figure 2 teach applying local transformations to an input frame (wherein a variation of an in-app asset) that includes a layer 2 set of layer interactions, including placing sprites at active anchors, applying local transforms, and applying random z-orders (includes a different set of layers), as compared with the layer 1 set of layer interactions with a reference sprite character of the input frame (as compared to a reference set of layers that define a reference version of the in-app asset)).

Because Gonzalez and Smirnov both address in-game sprites/images, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings wherein a variation of an in-app asset includes a different set of layers as compared to a reference set of layers that define a reference version of the in-app asset, as suggested by Smirnov, into Gonzalez’s method, with a reasonable expectation of success, such that the system additionally applies local transformations to an input frame containing the Pokémon to include a layer 2 set of layer interactions (placing sprites at active anchors, applying local transforms, and applying random z-orders) as compared with the layer 1 set of layer interactions with the reference Pokémon of the input frame, thereby teaching wherein the variation of the in-app asset includes a different set of layers as compared to a reference set of layers that define the reference version of the in-app asset. This modification would have been motivated by the desire to enable decomposing visual content into semantically meaningful parts for analysis, synthesis, and editing, addressing a long-standing problem (Smirnov, Section 1 Related Work).
Regarding dependent claim 5, Gonzalez in view of Smirnov teaches the method as recited in claim 4, wherein the different set of layers includes more layers than the reference set of layers, or wherein the different set of layers includes less layers than the reference set of layers, or wherein the different set of layers includes one or more layers not present in the reference set of layers (see Smirnov Figure 2, wherein the layer 2 set of interactions is different from layer 1 (wherein the different set of layers), including a layer 2 apply-local-transform interaction layer instance not present in the set of interaction layers in layer 1 containing the reference character sprite).

Regarding dependent claim 6, Gonzalez in view of Smirnov teaches the method as recited in claim 5, wherein at least one layer in the different set of layers is defined differently than an equivalent layer in the reference set of layers (see Smirnov Figure 2, wherein layer 2’s set of interactions includes applying local transforms around the anchor point, wherein the transforms are offset in different directions than an equivalent apply-local-transform shown for layer 1).

Regarding dependent claim 18, Gonzalez teaches all the elements of claim 17. Gonzalez does not expressly teach wherein the in-app asset is defined by multiple layers, wherein each of the multiple layers defines a different aspect of the in-app asset, wherein the variation of the in-app asset includes a different set of layers as compared to a reference set of layers that define the reference version of the in-app asset and/or at least one different parameter setting within a layer common to both the reference version of the in-app asset and the variation of the in-app asset.
However, Smirnov teaches wherein an in-app asset is defined by multiple layers (Sections 2.2-2.5 and Figure 2 teach applying local transformations to an input frame (wherein an in-app asset) that includes, at layers 1 and 2, sets of layer interactions, including placing sprites at active anchors, applying local transforms, and applying random z-orders (is defined by multiple layers), as compared with the layer 1 set of layer interactions with a reference sprite character of the input frame (as compared to a reference set of layers that define a reference version of the in-app asset)), wherein each of the multiple layers defines a different aspect of the in-app asset (Sections 2.2-2.5 and Figure 2 teach that each of layer 1 and layer 2 defines different aspects of different sprite objects of the input frame), wherein a variation of the in-app asset includes a different set of layers as compared to a reference set of layers that define a reference version of the in-app asset (Sections 2.2-2.5 and Figure 2, wherein the applied local transforms of the input frame (wherein a variation of the in-app asset) include layer 2’s placing sprites at active anchors and then applying local transforms (includes a different set of layers), as compared to layer 1 containing the reference character sprite, wherein layer 1 also includes place-sprites-at-active-anchors and apply-local-transforms layer interaction instances (as compared to a reference set of layers that define a reference version of the in-app asset)).
Because Gonzalez and Smirnov both address in-game sprites/images, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings wherein an in-app asset is defined by multiple layers, wherein each of the multiple layers defines a different aspect of the in-app asset, wherein a variation of the in-app asset includes a different set of layers as compared to a reference set of layers that define a reference version of the in-app asset, as suggested by Smirnov, into Gonzalez’s method, with a reasonable expectation of success, such that the system additionally applies local transformations and associated interactions by layers to an input frame containing the Pokémon sprite image to include a layer 2 set of layer interactions (placing sprites at active anchors and applying local transforms), similarly to a different layer 1 set of layer interactions with the reference Pokémon sprite image of the input frame, thereby teaching wherein the in-app asset is defined by multiple layers, wherein each of the multiple layers defines a different aspect of the in-app asset, wherein the variation of the in-app asset includes a different set of layers as compared to a reference set of layers that define the reference version of the in-app asset and/or at least one different parameter setting within a layer common to both the reference version of the in-app asset and the variation of the in-app asset. This modification would have been motivated by the desire to enable decomposing visual content into semantically meaningful parts for analysis, synthesis, and editing, addressing a long-standing problem (Smirnov, Section 1 Related Work).

Claims 8 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Gonzalez.
Regarding dependent claim 8, Gonzalez discloses the method as recited in claim 1, wherein conveying the variation of the in-app asset for human assessment includes rendering of the variation of the in-app asset (page 4, the variation, e.g., a fire-type/grass-type Pokémon based on the reference Pokémon asset, is output by the system for visual quality evaluation, which necessarily requires rendering of the varied graphical Pokémon shown in Figure 2). Gonzalez does not expressly disclose conveying the variation through a graphical user interface. However, Gonzalez does suggest a possible embodiment of conveying variations through a graphical user interface (page 2, Related Work, Deep Generative Models, describing Artbreeder as a tool based on a deep convolutional GAN that allows its users to manipulate numerous parameters to adjust the creation of images). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the Artbreeder graphical user interface of Gonzalez’s related work such that conveying the variation of the in-app asset for human assessment includes rendering of the variation of the in-app asset through a graphical user interface. This modification would have been motivated by the desire to provide automated or assisted visual design generation (Gonzalez, page 1 Related Work).

Regarding dependent claim 19, Gonzalez discloses the system as recited in claim 17, including assessment of the variation of the in-app asset (page 4, the variation, e.g., a fire-type/grass-type Pokémon based on the reference Pokémon asset, is output by the system for visual quality evaluation). Gonzalez does not expressly disclose further comprising: a graphical user interface executed at the client computing system to provide for rendering and assessment of the variation.
However, Gonzalez does suggest a possible embodiment of a graphical user interface executed at a client computing system to provide for rendering and assessment of the variation (page 2, Related Work, Deep Generative Models, describing Artbreeder as a tool based on a deep convolutional GAN that allows its users to manipulate numerous parameters to adjust the creation of images). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the Artbreeder graphical user interface of Gonzalez’s related work to provide a graphical user interface executed at the client computing system for rendering and assessment of the variation of the in-app asset. This modification would have been motivated by the desire to provide automated or assisted visual design generation (Gonzalez, page 1 Related Work).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Gonzalez, as applied in the rejection of claim 1 above, in view of Barahona-Rios et al. (hereinafter Rios), “SpecSinGAN: Sound Effect Variation Synthesis Using Single-Image GANs” (2022).

Regarding dependent claim 9, Gonzalez teaches all the elements of claim 1. Gonzalez does not expressly teach wherein the in-app asset is an audio asset. However, Rios teaches wherein an in-app asset is an audio asset (page 1, an alternative way to generate sound assets for video games and related media).
Because Gonzalez and Rios both address generating video game assets, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching wherein an in-app asset is an audio asset, as suggested by Rios, into Gonzalez’s method, with a reasonable expectation of success, such that Gonzalez’s system for generating a variation of an in-app asset is further enhanced to provide a way to generate a sound asset for the Pokémon video game, thereby teaching wherein the in-app asset is an audio asset. This modification would have been motivated by the desire to address the challenge of balancing quality and quantity of audio content for the ever-growing complexity and length of video games and related media (Rios, Section 1).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Gonzalez, as applied in the rejection of claim 1 above, in view of Ramesh et al. (hereinafter Ramesh), “Zero-Shot Text-to-Image Generation” (2021).

Regarding dependent claim 11, Gonzalez teaches all the elements of claim 1. Gonzalez does not expressly teach further comprising: automatically culling at least one variation of the in-app asset as generated by the artificial intelligence model by determining that at least one feature of the at least one variation of the in-app asset does not satisfy acceptance criteria for the in-app asset.
However, Ramesh teaches automatically culling (Section 2.6, selecting only the “top k” or the best of the 512 image samples, thus discarding the non-top-k images) at least one variation of an in-app asset as generated by an artificial intelligence model (Section 2 Method and Section 2.6 Sample Generation describe a trained discrete variational autoencoder (dVAE) generating (generated by an artificial intelligence model) N=512 image samples (at least one variation) that match a caption (of an in-app asset)) by determining that at least one feature of the at least one variation of the in-app asset does not satisfy acceptance criteria for the in-app asset (Section 2.6, the sample images generated by the dVAE (of the at least one variation of the in-app asset) that are not assigned a score (by determining that at least one feature does not) sufficient to be selected among the top k of the sample images generated by the dVAE (satisfy acceptance criteria for the in-app asset)).

Because Gonzalez and Ramesh both address generation of images based on a reference via variational autoencoders, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of automatically culling at least one variation of an in-app asset as generated by an artificial intelligence model by determining that at least one feature of the at least one variation of the in-app asset does not satisfy acceptance criteria for the in-app asset, as suggested by Ramesh, into Gonzalez’s method, with a reasonable expectation of success, such that the system uses scoring evaluation criteria to automatically cull at least one generated variation of the in-app asset by determining that a score of a matching criterion is not satisfied for the at least one variation of the in-app asset, thereby teaching further comprising: automatically culling at least one variation of the in-app asset as generated by
the artificial intelligence model by determining that at least one feature of the at least one variation of the in-app asset does not satisfy acceptance criteria for the in-app asset. This modification would have been motivated by the desire to perform complex tasks such as image-to-image translation at a rudimentary level (Ramesh, Section 1, page 2).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Meadows et al., US 2023/0090253 A1 (Mar. 23, 2023) (Abstract: Systems and methods for digital avatars, specifically for fashion and consumer goods, are provided. This system is useful with an identified avatar, environment, and objects that a user may author, edit, and place. A user may deploy an avatar that resembles themselves via augmented reality, virtual reality, and other types of media. These systems include a user interface, administrative interface, economic systems, and means of managing assets and protecting users' data. The systems incorporate mechanisms of controlling the avatar, means of integrating physical sensor data that interoperates with the virtual, and means of predicting related trends, choices, and behavior. Various features are employed for increased efficiency, accuracy, and believability. These features include machine learning to produce avatar features, AR map directions to interact with avatars, computer vision to enable the real-time translation of physical to virtual, and social structures to enable groups of people to create and license digital assets).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KUANG FU CHEN, whose telephone number is (571) 272-1393. The examiner can normally be reached M-F, 9:00-5:30 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch, can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KC CHEN/Primary Patent Examiner, Art Unit 2143

Prosecution Timeline

Feb 14, 2023
Application Filed
Feb 06, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579425
PARAMETERIZED ACTIVATION FUNCTIONS TO ADJUST MODEL LINEARITY
2y 5m to grant Granted Mar 17, 2026
Patent 12566994
SYSTEMS AND METHODS TO CONFIGURE DEFAULTS BASED ON A MODEL
2y 5m to grant Granted Mar 03, 2026
Patent 12561593
METHOD FOR DETERMINING PRESENCE OF A SIGNATURE CONSISTENT WITH A PAIR OF MAJORANA ZERO MODES AND A QUANTUM COMPUTER
2y 5m to grant Granted Feb 24, 2026
Patent 12561561
Mapping User Vectors Between Embeddings For A Machine Learning Model for Authorizing Access to Resource
2y 5m to grant Granted Feb 24, 2026
Patent 12561497
AUTOMATED OPERATING MODE DETECTION FOR A MULTI-MODAL SYSTEM WITH MULTIVARIATE TIME-SERIES DATA
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+67.0%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 252 resolved cases by this examiner. Grant probability derived from career allow rate.
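As a rough illustration, the headline figures are consistent with simple arithmetic on the record shown above: 203 grants out of 252 resolved cases yields the 81% career allow rate, and applying the +67% interview lift reproduces the 99% with-interview figure. Note the multiplicative lift and the 99% cap below are assumptions made for this sketch, not the tool's documented methodology:

```python
# Sketch of how the projection figures plausibly derive from the examiner's record.
# The 203/252 counts and +67% lift come from the page; the capped multiplicative
# formula is an illustrative assumption only.
granted, resolved = 203, 252
career_allow_rate = granted / resolved              # ~0.806, displayed as 81%
interview_lift = 0.67                               # +67.0% lift with interview
with_interview = min(career_allow_rate * (1 + interview_lift), 0.99)
print(f"{career_allow_rate:.0%} base, {with_interview:.0%} with interview")
```

Under these assumptions the script prints the same 81%/99% pair shown in the projections panel.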
