Prosecution Insights
Last updated: April 19, 2026
Application No. 18/167,690

GENERATING COMPOSITE IMAGES USING USER INTERFACE FEATURES FOR AUTO-COMPOSITING AND COMPOSITE-AWARE SEARCH

Final Rejection — §103
Filed: Feb 10, 2023
Examiner: AHN, CHRISTINE YERA
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: Adobe Inc.
OA Round: 4 (Final)
Grant Probability: 69% (Favorable)
Predicted OA Rounds: 5-6
Predicted Time to Grant: 2y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 69% (11 granted / 16 resolved; +6.8% vs TC avg — above average)
Interview Lift: +37.5% — strong (allow rate among resolved cases with vs. without an interview)
Typical Timeline: 2y 7m average prosecution; 34 applications currently pending
Career History: 50 total applications across all art units
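
The headline tiles reduce to simple arithmetic over the examiner's resolved cases. In the sketch below, only the 11/16 career counts and the +37.5% lift come from the data above; the 8-and-8 interview split is a hypothetical reconstruction chosen to land exactly on the displayed lift, since the underlying split is not published here.

```python
# Reproducing the dashboard tiles above. Only the 11/16 career counts and
# the +37.5% lift are taken from the dashboard; the 8/8 interview split is
# a hypothetical reconstruction chosen to match the displayed lift.
granted, resolved = 11, 16
allow_rate = granted / resolved              # 0.6875 -> "69% Career Allow Rate"

with_n, with_granted = 8, 7                  # hypothetical: 7 of 8 with interview granted
without_n, without_granted = 8, 4            # hypothetical: 4 of 8 without interview granted
lift = with_granted / with_n - without_granted / without_n  # 0.875 - 0.500 = +0.375

print(f"allow rate {allow_rate:.0%}, interview lift {lift:+.1%}")
```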

Statute-Specific Performance

§101: 5.2% (-34.8% vs TC avg)
§103: 49.6% (+9.6% vs TC avg)
§102: 21.9% (-18.1% vs TC avg)
§112: 20.1% (-19.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 16 resolved cases
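
Each badge above is a plain difference between the examiner's statute-specific rate and the Tech Center baseline. Notably, all four displayed deltas back out to the same 40.0% baseline; the sketch below treats that as an inferred estimate, not a published figure.

```python
# Each badge is the examiner's rate minus the Tech Center average estimate
# (the chart's black line). The 40.0% baseline is inferred: all four
# displayed deltas back out to it exactly; it is not a published figure.
tc_avg = 40.0
examiner = {"§101": 5.2, "§103": 49.6, "§102": 21.9, "§112": 20.1}
for statute, rate in examiner.items():
    print(f"{statute}: {rate:.1f}% ({rate - tc_avg:+.1f}% vs TC avg)")
```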

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

2. The amendment filed December 19, 2025 has been entered. Claims 1-20 remain pending in the application. Applicant's amendments to the Claims have overcome each and every objection.

Response to Arguments

3. Applicant's arguments filed December 19, 2025 have been fully considered but they are not persuasive.

4. Applicant argues that Jonsson (United States Patent Application Publication No. 2014/0035950 A1) fails to disclose the combination of recited underlying models having neural networks and selectable options that cause their respective underlying models to analyze image features of the background image and modify a foreground object image based on the analysis. Examiner replies that Applicant's arguments with respect to claims 1, 10, and 17 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The new rejection uses Wang et al. ("Repopulating Street Scenes" – cited in Applicant's IDS) to teach the combination of recited underlying models having neural networks and selectable options that cause their respective underlying models to analyze image features of the background image and modify a foreground object image based on the analysis.

5. Applicant argues that dependent claims 3-9, 11-16, and 18-20 are now allowable due to the above arguments. Examiner replies that the dependent claims remain rejected in light of the new reference, Wang et al. ("Repopulating Street Scenes" – cited in IDS).

6. Conclusion: The rejections set forth in the previous Office Action were proper, and the claims are rejected below. New citations and parenthetical remarks can be considered new grounds of rejection, and such new grounds of rejection are necessitated by Applicant's amendments to the claims. Therefore, the present Office Action is made final.

Claim Rejections - 35 USC § 103

7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

8. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

9. Claims 1, 5, 7-10, 13-15, 17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Jonsson (United States Patent Application Publication No. 2014/0035950 A1) in view of Wang et al. ("Repopulating Street Scenes" – cited in IDS), hereinafter referred to as Wang.

10. Regarding claim 1, Jonsson teaches a computer-implemented method comprising: determining a background image and a foreground object image for use in generating a composite image (Paragraph 62, Figure 6B shows a user selecting a beach scene as the background and an image of a bird in the sky as the foreground image.
Then the user superimposes the foreground image onto the background to create a composite image); providing, for display within a graphical user interface of a client device (Paragraph 61, Figures 6A-6I teach a graphical user interface 600 of a display device), a plurality of selectable options for executing an auto-composite model for generating the composite image (Paragraph 61, Figures 6A-6I show, within the graphical user interface, multiple sliders and an 'add light' button which the user can select and which are a plurality of selectable options. The user is also able to select multiple different foreground image objects, shown on the left side of the graphical user interface 600, to execute the process of creating a composite image), wherein: the auto-composite model includes a plurality of underlying models (Paragraph 61, Figures 6A-6I teach that the image compositing application 600, or auto-composite model, has slider bars which can adjust the scale, color, contrast, and brightness to better merge the foreground image into the background. These are the underlying scale and harmonization models, which are a plurality of underlying models; Paragraph 66, Figure 6I shows a button 'add light' which allows for shadow generation, which is the underlying shadow generation model; Paragraph 87 mentions that any of the functions can be performed automatically), each underlying model from the plurality of underlying models corresponds to a selectable option from the one or more selectable options such that selection of the selectable option triggers execution of a backend workflow of the underlying model (Paragraph 24 teaches that the editing model can use artificial intelligence to refine any selections made by the user. Furthermore, the slider bars and manual inputs to indicate lighting conditions taught by Jonsson in Paragraphs 61 and 66 are selectable options. Thus, upon any of the user's selections, an artificial intelligence method may launch, which is a backend workflow for the selected model and is triggered by the user selection of one of the selectable options); detecting, via the graphical user interface of the client device, a user selection of at least one selectable option from the plurality of selectable options (Paragraphs 61 and 64, Figures 6A-6I show the user can select any of the sliders, which are the selectable options, to scale, color, or contrast the image for image composition purposes. The user's selections of the options are detected and the image editing tools are invoked; Paragraph 66, Figure 6I also shows the 'add light' button which the user can select for shadow generation; Paragraph 76 and Figure 8 teach the graphical user interface runs on a computer system or client device 800); and generating, using the background image and the foreground object image, the composite image by executing at least one underlying model of the auto-composite model that corresponds to the user selection of the at least one selectable option based on one or more corresponding image features of the background image, the one or more corresponding image features comprising at least one of the scale of the background image, the lighting of the background image determined using the lighting estimation model, or the tone of the background image (Paragraph 64, Figure 6E discloses the composite image with a scaled bird when the user selects and interacts with the scaling slider.
The bird is scaled based on image features in the rest of the image, on which an underlying model such as harmonization bases its execution; Paragraph 66, Figure 6I discloses the composite image with shadowing of the foreground object when the user selects the 'add light' button for shadow generation. The shadow model execution is based on the image feature specified by the user as the light source. The user's selection of a light source in the image, and a shadow then being generated based on it, can be considered part of the lighting estimation model that determines the lighting of the background image. The claim limitation only requires one of the image features to be taught). However, Jonsson fails to teach the method causing the underlying model to analyze one or more image features of the background image and modify the foreground object image based on the one or more image features; and the plurality of underlying models include a scale prediction model comprising a first neural network that determines a scale for the foreground object image based on a scale of the background image, a harmonization model comprising a second neural network that determines a lighting or tone for the foreground object image based on a lighting or tone of the background image, and a shadow generation model comprising a third neural network that generates a shadow for the foreground object image based on the lighting of the background image. Wang teaches the method causing the underlying model to analyze one or more image features of the background image and modify the foreground object image based on the one or more image features (Section 3, Paragraphs 1-2 teach using various networks, or underlying models, that analyze the background image features to adjust the lighting, scale, and placement of an inserted foreground object image; Section 3.3 teaches analyzing the background lighting to modify the objects added to the scene and synthesize shadows. This teaches analyzing image features like the lighting in the scene to modify the foreground object image, such as its shadow); and the plurality of underlying models include a scale prediction model comprising a first neural network that determines a scale for the foreground object image based on a scale of the background image (Section 3.4 teaches a scale prediction model that uses the neural network MiDaS to predict a depth map for the scene. This teaches determining a scale of the background image. Then, using the depth map, the inserted object is resized according to the calculated disparity), a harmonization model comprising a second neural network that determines a lighting or tone for the foreground object image based on a lighting or tone of the background image (Section 3.3 and Figure 4 teach the sun estimation network, which determines the lighting for an object based on the sun position and lighting detected in the scene. This teaches a neural network determining the lighting for the foreground object based on the lighting of the background image), and a shadow generation model comprising a third neural network that generates a shadow for the foreground object image based on the lighting of the background image (Section 3.5 teaches an insertion network that generates a shadow for the foreground object based on the sun lighting in the background image. The insertion network is the neural network that generates the shadow).
Jonsson and Wang are considered analogous to the claimed invention because both are in the same field of generating composite images. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the method of generating a composite image taught by Jonsson with the underlying models using neural networks taught by Wang in order to yield realistic images that improve upon prior work (Wang Section 1 Paragraph 4).

11. Regarding claim 5, Jonsson in view of Wang teaches the limitations of claim 1. Jonsson further teaches the computer-implemented method wherein: detecting the user selection of the at least one selectable option comprises detecting a user selection of a selectable option for executing the scale prediction model (Paragraph 64, Figure 6E shows the user can select the scaling slider to scale the object); and executing the auto-composite model using the background image and the foreground object image (Paragraph 64, Figure 6E shows the foreground object scaled to be more realistic in the scene; Paragraph 87 mentions any of the operations, including scaling, can be done automatically). However, Jonsson fails to clearly teach modifying, utilizing the scale prediction model, the scale of the foreground object image within the composite image based on the scale of the background image. Wang teaches modifying, utilizing the scale prediction model, the scale of the foreground object image within the composite image based on the scale of the background image (Section 3.4 teaches a scale prediction model that uses the neural network MiDaS to predict a depth map for the scene. This teaches determining a scale of the background image. Then, using the depth map, the inserted object is resized according to the calculated disparity). Jonsson and Wang are considered analogous to the claimed invention because both are in the same field of generating composite images. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the method of generating a composite image taught by Jonsson with the underlying models using the scale prediction model taught by Wang in order to yield realistic images that improve upon prior work (Wang Section 1 Paragraph 4).

12. Regarding claim 7, Jonsson in view of Wang teaches the limitations of claim 1. Jonsson further teaches the computer-implemented method wherein: detecting the user selection of the at least one selectable option comprises detecting a user selection of a selectable option for executing the shadow generation model (Paragraph 66, Figure 6I shows a button called 'add light' which allows for adding shadows to the image); and executing the auto-composite model using the background image and the foreground object image comprises generating, utilizing the shadow generation model, the shadow associated with the foreground object image within the composite image based on the lighting of the background image (Paragraph 66, Figure 6I shows that when the 'add light' button is selected, the foreground object will be rendered with a shadow in the composite image. The user's selection of a light source in the image, and a shadow then being generated based on it, can be considered part of the lighting estimation model that determines the lighting of the background image).
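The architecture at issue in claims 1, 5, and 7 — selectable options that each map to one underlying model whose backend workflow runs on selection — is easy to picture in code. The sketch below is a hypothetical illustration only: the option names, the toy image operations, and the dispatch table are stand-ins, not Jonsson's application, Wang's networks, or the claimed neural networks.

```python
from typing import Callable, Dict
import numpy as np

# Hypothetical stand-ins for the claimed underlying models. Each analyzes
# image features of the background and modifies the foreground accordingly
# (trivially here, rather than with the claimed neural networks).
def scale_model(bg: np.ndarray, fg: np.ndarray) -> np.ndarray:
    # Pretend a scale-prediction net read the background and chose 2x down.
    return fg[::2, ::2]

def harmonization_model(bg: np.ndarray, fg: np.ndarray) -> np.ndarray:
    # Pull the foreground's tone toward the background's mean color.
    tone = bg.mean(axis=(0, 1))
    return (0.8 * fg + 0.2 * tone).astype(fg.dtype)

def shadow_model(bg: np.ndarray, fg: np.ndarray) -> np.ndarray:
    # Darken a band at the object's base as a stand-in for a shadow net.
    out = fg.copy()
    out[-4:] = (out[-4:] * 0.5).astype(fg.dtype)
    return out

# The claimed mapping: each selectable option corresponds to one underlying
# model, and selecting it triggers that model's backend workflow.
UNDERLYING_MODELS: Dict[str, Callable[[np.ndarray, np.ndarray], np.ndarray]] = {
    "auto_scale": scale_model,
    "auto_harmonize": harmonization_model,
    "auto_shadow": shadow_model,
}

def on_option_selected(option: str, bg: np.ndarray, fg: np.ndarray) -> np.ndarray:
    return UNDERLYING_MODELS[option](bg, fg)  # dispatch on the user's selection

bg = np.zeros((64, 64, 3), dtype=np.uint8)
fg = np.full((16, 16, 3), 200, dtype=np.uint8)
fg = on_option_selected("auto_harmonize", bg, fg)
```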
13. Regarding claim 8, Jonsson in view of Wang teaches the limitations of claim 1. Jonsson further teaches the computer-implemented method further comprising: generating an initial composite image utilizing the background image and the foreground object image (Paragraph 61, Figure 6B shows an initial composite image generated with a foreground and background image); and providing the initial composite image for display within the graphical user interface of the client device (Paragraph 61, Figure 6B shows the image displayed in the graphical user interface 600), wherein generating the composite image by executing the auto-composite model comprises generating the composite image by modifying the initial composite image within the graphical user interface of the client device via the auto-composite model (Paragraph 61, Figures 6A-6I show the user can use the graphical user interface 600 to complete the scene by using the slider options to adjust the scale and color and the 'add light' button for shadows).

14. Regarding claim 9, Jonsson in view of Wang teaches the limitations of claim 1. Jonsson further teaches the computer-implemented method further comprising: providing, for display within the graphical user interface of the client device along with the plurality of selectable options before generating the composite image, the background image, search results comprising a plurality of foreground object images, and the foreground object image selected from the search results, the foreground object image being displayed separately from the search results (Figures 3A-3E teach search results comprising a plurality of foreground object images for selection in the large central frame of the window. When a foreground object image is selected from the search results, it appears on the left side of the window, which discloses displaying it separately from the search results); and providing, for display within the graphical user interface of the client device after generating the composite image, the background image, the foreground object image, the search results, the plurality of selectable options, and the composite image generated from the background image and the foreground object image (Paragraph 35 and Figures 3A-3E teach displaying within the graphical user interface the search results; Figures 6A-6I and Paragraph 61 teach displaying within the graphical user interface the composite image, which contains the background image and foreground object images. It also shows the slider bars on the right side of the window, which are the plurality of selectable options).

15. Regarding claim 10, Jonsson teaches a non-transitory computer-readable medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform operations comprising (Paragraph 79, Figure 8 shows a processor 830 which executes instructions from the memory 810): providing, for display within a graphical user interface of a client device, at least one interactive element for providing search input (Paragraph 35, Figures 3A-3E show an interactive image search which allows for search input within the graphical user interface 300) and a plurality of additional interactive elements for executing an auto-composite model (Paragraph 61, Figures 6A-6I show within the graphical user interface multiple sliders and an 'add light' button which the user can select and which are selectable options that are a plurality of additional interactive elements.
The user is also able to select multiple different foreground image objects, shown on the left side of the graphical user interface 600, to execute the process of creating a composite image. This selection is also an interactive element), wherein: the auto-composite model includes a plurality of underlying models (Paragraph 61, Figures 6A-6I show that the image compositing application 600, or auto-composite model, has slider bars which can adjust the scale, color, contrast, and brightness to better merge the foreground image into the background. These are the underlying scale and harmonization models; Paragraph 66, Figure 6I shows a button 'add light' which allows for shadow generation, which is the underlying shadow generation model; Paragraph 87 mentions that any of the functions can be performed automatically), each underlying model from the plurality of underlying models corresponds to an additional interactive element from the plurality of additional interactive elements such that selection of the additional interactive element triggers execution of a backend workflow of the underlying model (Paragraph 24 teaches that the editing model can use artificial intelligence to refine any selections made by the user. Furthermore, the slider bars and manual inputs to indicate lighting conditions taught by Jonsson in Paragraphs 61 and 66 are selectable options or interactive elements. Thus, upon any of the user's selections of the interactive elements, an artificial intelligence method may launch, which is a backend workflow for the selected model and is triggered by the user selection of one of the interactive elements); receiving, via the graphical user interface of the client device, a user interaction with the at least one interactive element and an additional user interaction providing a user selection of at least one additional interactive element from the plurality of additional interactive elements (Paragraph 36, Figure 3A shows the graphical user interface 300 receiving the user interaction to search a foreground object, with the search results shown in Figure 3B, which is the at least one interactive element; Paragraph 61, Figures 6A-6I show the graphical user interface 600 receiving an additional user interaction and selection through the sliders and foreground object image selection, which are the additional interactive elements, to create the composite image); retrieving at least one foreground object image for use in generating a composite image with at least one background image in accordance with the user interaction with the at least one interactive element (Paragraph 39, Figure 3D shows the user selecting a foreground image from the image search results to use with a background image); generating the composite image utilizing the at least one foreground object image and the at least one background image by executing at least one underlying model of the auto-composite model in accordance with the additional user interaction with the at least one additional interactive element based on one or more corresponding image features of the at least one background image, the one or more corresponding image features comprising at least one of (Paragraph 64, Figure 6E shows the composite image with a scaled bird when the user selects and interacts with the scaling slider.
The bird is scaled based on image features in the rest of the image, on which an underlying model such as harmonization bases its execution; Paragraph 66, Figure 6I shows the composite image with shadowing of the foreground object when the user selects the 'add light' button for shadow generation. The shadow model execution is based on the image feature specified by the user as the light source. The user's selection of a light source in the image, and a shadow then being generated based on it, can be considered part of the lighting estimation model that determines the lighting of the background image. The claim limitation only requires one of the image features to be taught); and providing the composite image for display within the graphical user interface of the client device (Paragraph 66, Figure 6I shows the composite image after all the user interactions in the graphical user interface 600). However, Jonsson fails to teach the operations causing the underlying model to analyze one or more image features of the background image and modify the foreground object image based on the one or more image features; and the plurality of underlying models include a scale prediction model comprising a first neural network that determines a scale for the foreground object image based on a scale of the background image, a harmonization model comprising a second neural network that determines a lighting or tone for the foreground object image based on a lighting or tone of the background image, and a shadow generation model comprising a third neural network that generates a shadow for the foreground object image based on the lighting of the background image. Wang teaches the operations causing the underlying model to analyze one or more image features of the background image and modify the foreground object image based on the one or more image features (Section 3, Paragraphs 1-2 teach using various networks, or underlying models, that analyze the background image features to adjust the lighting, scale, and placement of an inserted foreground object image; Section 3.3 teaches analyzing the background lighting to modify the objects added to the scene and synthesize shadows. This teaches analyzing image features like the lighting in the scene to modify the foreground object image, such as its shadow); and the plurality of underlying models include a scale prediction model comprising a first neural network that determines a scale for the foreground object image based on a scale of the background image (Section 3.4 teaches a scale prediction model that uses the neural network MiDaS to predict a depth map for the scene. This teaches determining a scale of the background image. Then, using the depth map, the inserted object is resized according to the calculated disparity), a harmonization model comprising a second neural network that determines a lighting or tone for the foreground object image based on a lighting or tone of the background image (Section 3.3 and Figure 4 teach the sun estimation network, which determines the lighting for an object based on the sun position and lighting detected in the scene.
This teaches a neural network determining the lighting for the foreground object based on the lighting of the background image), and a shadow generation model comprising a third neural network that generates a shadow for the foreground object image based on the lighting of the background image (Section 3.5 teaches an insertion network that generates a shadow for the foreground object based on the sun lighting in the background image. The insertion network is the neural network that generates the shadow). Jonsson and Wang are considered analogous to the claimed invention because both are in the same field of generating composite images. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the non-transitory computer-readable medium storing instructions of generating a composite image taught by Jonsson with the underlying models using neural networks taught by Wang in order to yield realistic images that improve upon prior work (Wang Section 1 Paragraph 4).

16. Regarding claim 13, Jonsson in view of Wang teaches the limitations of claim 10. Jonsson further teaches the non-transitory computer-readable medium wherein retrieving the at least one foreground object image for use in generating the composite image comprises retrieving the at least one foreground object image utilizing a composite object search engine that includes one or more of a compositing-aware search engine, a text engine, or an image search engine (Paragraphs 31-32, Figures 3A-3E teach that, when using the image search module, the user can input keywords or a reference image to retrieve possible foreground objects that match the colors, lighting, content, and more for the user to select. This allows the user to find a foreground object that is compatible with the background image by matching the attributes listed above. Thus, Jonsson teaches a text search, image search, and composite-aware search engine).

17. Regarding claim 14, Jonsson in view of Wang teaches the limitations of claim 10. Jonsson further teaches the non-transitory computer-readable medium further comprising instructions that, when executed by the at least one processor, cause the at least one processor to perform operations comprising: performing a composite-aware search to retrieve an additional foreground object image based on a compatibility of the additional foreground object image with the composite image (Paragraphs 31-32 mention that, when using the search engine, the user can input keywords or a reference image to find an image that matches the colors, lighting, content, and more. This allows the user to find a foreground object that is compatible with the background image by matching the attributes listed above. Since there are no limits listed, the user can use the search engine more than once to find an additional foreground object image); and modifying the composite image to include the additional foreground object image (Paragraph 66, Figures 6H-6I teach modifying the existing composite image to include another foreground object).

18. Regarding claim 15, Jonsson in view of Wang teaches the limitations of claim 10.
Jonsson further teaches the non-transitory computer-readable medium wherein: receiving the additional user interaction with the at least one additional interactive element for executing the auto-composite model comprises receiving a plurality of user interactions for executing the scale prediction model, the harmonization model, and the shadow generation model (Paragraph 61, Figures 6A-6I show a graphical user interface 600 which has multiple sliders, including for adjusting scale, color, contrast, and brightness, and a button 'add light' to add a shadow. The figures show the user's plurality of user interactions with the sliders and button); and generating the composite image utilizing the at least one foreground object image and the at least one background image by executing the auto-composite model in accordance with the additional user interaction with the at least one additional interactive element comprises generating the composite image by executing the scale prediction model, the harmonization model, and the shadow generation model utilizing the at least one foreground object image and the at least one background image (Paragraph 66, Figures 6A-6I show that after all the user interactions, the scene is rendered and generated in the graphical user interface 600). However, Jonsson fails to clearly teach the scale prediction model and harmonization model operating in an automated manner. Wang teaches the scale prediction model and harmonization model operating in an automated manner (Section 3.4 teaches a scale prediction model that uses the neural network MiDaS to predict a depth map for the scene. This teaches determining a scale of the background image. Then, using the depth map, the inserted object is resized according to the calculated disparity; Section 3.3 and Figure 4 teach the sun estimation network, which determines the lighting for an object based on the sun position and lighting detected in the scene. This teaches a neural network determining the lighting for the foreground object based on the lighting of the background image). Jonsson and Wang are considered analogous to the claimed invention because both are in the same field of generating composite images. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the non-transitory computer-readable medium storing instructions of generating a composite image taught by Jonsson with the underlying models using neural networks taught by Wang in order to yield realistic images that improve upon prior work (Wang Section 1 Paragraph 4).

19. Regarding claim 17, Jonsson teaches a system comprising: one or more memory devices (Paragraph 79, Figure 8 shows a memory 810); and one or more processors configured to cause the system to (Paragraph 79, Figure 8 shows a processor 830): provide, for display within a graphical user interface of a client device, a background image and a plurality of selectable options for executing an auto-composite model (Paragraph 61, Figures 6A-6I show a graphical user interface 600 that has the background image displayed for the composite image; Paragraph 61, Figures 6A-6I show, within the graphical user interface, multiple sliders and an 'add light' button which the user can select and which are a plurality of selectable options.
The user is also able to select multiple different foreground image objects, shown on the left side of the graphical user interface 600, to execute the process of creating a composite image); receive, via the graphical user interface of the client device, user input selecting a location within the background image to position a foreground object image for the composite image (Paragraph 62, Figures 6A-6B teach the user can select a location, position the foreground object, and superimpose it onto the background image of the beach scene); determine, utilizing a composite object search engine, the foreground object image for use in generating the composite image based on the location within the background image selected via the user input (Paragraph 32 mentions that, when using the search engine, the user can input keywords or a reference image to match the color, color distribution, and content of the image. This allows the user to use a portion of the background image as a reference image; Paragraph 39, Figures 3A-3E teach the user can select one of the foreground objects and then, per Paragraph 62, select a position/location for the searched image, which is shown in Figures 6A-6B); receive, via the graphical user interface of the client device, additional user input selecting at least one selectable option from the plurality of selectable options for executing an auto-composite model for the composite image based on the background image and the foreground object image (Paragraph 61, Figures 6A-6I show a graphical user interface 600 which is able to receive input from user interaction with multiple sliders. The sliders, which are the plurality of selectable options, include adjusting the scale, color, contrast, and brightness to merge the foreground image and background image; Paragraph 66, Figure 6I shows a button 'add light' in the graphical user interface 600 which the user can interact with for shadow generation), wherein: the auto-composite model includes a plurality of underlying models (Paragraph 61, Figures 6A-6I show that the image compositing application 600, or auto-composite model, has slider bars which can adjust the scale, color, contrast, and brightness to better merge the foreground image into the background. These are the underlying scale and harmonization models; Paragraph 66, Figure 6I shows a button 'add light' which allows for shadow generation, which is the underlying shadow generation model; Paragraph 87 mentions that any of the functions can be performed automatically), each underlying model from the plurality of underlying models corresponds to a selectable option from the plurality of selectable options such that selection of the selectable option triggers execution of a backend workflow of the underlying model (Paragraph 24 teaches that the editing model can use artificial intelligence to refine any selections made by the user. Furthermore, the slider bars and manual inputs to indicate lighting conditions taught by Jonsson in Paragraphs 61 and 66 are selectable options.
Thus, upon any of the user's selections, an artificial intelligence method may launch, which is a backend workflow for the selected model and is triggered by the user selection of one of the selectable options); and executing the auto-composite model comprises executing at least one underlying model of the auto-composite model in accordance with the additional user input and based on one or more corresponding image features of the background image, the one or more corresponding image features comprising at least one of (Paragraph 64, Figure 6E shows the composite image with a scaled bird when the user selects and interacts with the scaling slider. The bird is scaled based on image features in the rest of the image, on which an underlying model such as harmonization bases its execution; Paragraph 66, Figure 6I shows the composite image with shadowing of the foreground object when the user selects the 'add light' button for shadow generation. The shadow model execution is based on the image feature specified by the user as the light source. The user's selection of a light source in the image, and a shadow then being generated based on it, can be considered part of the lighting estimation model that determines the lighting of the background image based on the direction of the light from the light source. The claim limitation only requires one of the image features to be taught); and generate the composite image using the background image and the foreground object image in accordance with the user input and the additional user input (Paragraph 66, Figure 6I shows the completed image displayed in the graphical user interface after the additional user inputs with the sliders and 'add light' button interaction). However, Jonsson fails to teach the system causing the underlying model to analyze one or more image features of the background image and modify the foreground object image based on the one or more image features; and the plurality of underlying models include a scale prediction model comprising a first neural network that determines a scale for the foreground object image based on a scale of the background image, a harmonization model comprising a second neural network that determines a lighting or tone for the foreground object image based on a lighting or tone of the background image, and a shadow generation model comprising a third neural network that generates a shadow for the foreground object image based on the lighting of the background image. Wang teaches the system causing the underlying model to analyze one or more image features of the background image and modify the foreground object image based on the one or more image features (Section 3, Paragraphs 1-2 teach using various networks, or underlying models, that analyze the background image features to adjust the lighting, scale, and placement of an inserted foreground object image; Section 3.3 teaches analyzing the background lighting to modify the objects added to the scene and synthesize shadows. This teaches analyzing image features like the lighting in the scene to modify the foreground object image, such as its shadow); and the plurality of underlying models include a scale prediction model comprising a first neural network that determines a scale for the foreground object image based on a scale of the background image (Section 3.4 teaches a scale prediction model that uses the neural network MiDaS to predict a depth map for the scene. This teaches determining a scale of the background image.
Then, using the depth map, the inserted object is resized according to the calculated disparity), a harmonization model comprising a second neural network that determines a lighting or tone for the foreground object image based on a lighting or tone of the background image (Section 3.3 and Figure 4 teach the sun estimation network, which determines the lighting for an object based on the sun position and lighting detected in the scene. This teaches a neural network determining the lighting for the foreground object based on the lighting of the background image), and a shadow generation model comprising a third neural network that generates a shadow for the foreground object image based on the lighting of the background image (Section 3.5 teaches an insertion network that generates a shadow for the foreground object based on the sun lighting in the background image. The insertion network is the neural network that generates the shadow). Jonsson and Wang are considered analogous to the claimed invention because both are in the same field of generating composite images. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the system of generating a composite image taught by Jonsson with the underlying models using neural networks taught by Wang in order to yield realistic images that improve upon prior work (Wang Section 1 Paragraph 4).

20. Regarding claim 19, Jonsson in view of Wang teaches the limitations of claim 17. Jonsson further teaches the system wherein the one or more processors are further configured to cause the system to: receive, via the graphical user interface of the client device, further user input selecting an additional location within the composite image to position an additional foreground object image (Paragraph 65, Figures 6F-6G show the user adding an additional foreground object and selecting a location to place it); determine, utilizing the composite object search engine, the additional foreground object image for use in modifying the composite image based on the additional location (Paragraph 32 mentions that, when using the search engine, the user can input keywords or a reference image to match the color, color distribution, and content of the image; Paragraph 39, Figures 3A-3E teach the user can select one of the foreground objects and then, per Paragraph 62, select a position/location for the searched image, which is shown in Figures 6A-6B); and modify the composite image utilizing the additional foreground object image based on the additional location (Paragraph 66, Figures 6H-6I teach the user can modify the additional foreground object at its location by adding a shadow through the 'add light' button).
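The scale-prediction mapping the rejection leans on (Wang Section 3.4, as characterized above) amounts to: predict a disparity map with a monocular depth network such as MiDaS, then resize the inserted object in proportion to the disparity at the insertion point, since disparity is inversely proportional to depth. A minimal sketch under that reading, assuming the disparity map is already computed and using a nearest-neighbor resize; Wang's exact formulation is not reproduced.

```python
import numpy as np

def rescale_for_location(fg: np.ndarray, disparity: np.ndarray,
                         y: int, x: int, ref_disparity: float,
                         ref_height_px: int) -> np.ndarray:
    """Resize a foreground object for insertion at (y, x).

    Monocular disparity is inversely proportional to depth, so an object
    placed where disparity is half the reference value should appear
    roughly half as tall. ref_disparity/ref_height_px calibrate the
    object's known pixel size at a known disparity.
    """
    factor = disparity[y, x] / ref_disparity
    new_h = max(1, int(ref_height_px * factor))
    new_w = max(1, int(fg.shape[1] * new_h / fg.shape[0]))  # keep aspect ratio
    # Nearest-neighbor resize via index sampling (avoids extra dependencies).
    rows = np.linspace(0, fg.shape[0] - 1, new_h).astype(int)
    cols = np.linspace(0, fg.shape[1] - 1, new_w).astype(int)
    return fg[rows][:, cols]

# Toy disparity map: near (high disparity) at the top, far at the bottom.
disparity = np.linspace(1.0, 0.1, 100).reshape(100, 1).repeat(100, axis=1)
fg = np.full((40, 20, 3), 255, dtype=np.uint8)
resized = rescale_for_location(fg, disparity, y=80, x=50,
                               ref_disparity=1.0, ref_height_px=40)
```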
21. Regarding claim 20, Jonsson in view of Wang teaches the limitations of claim 17. Jonsson further teaches the system wherein the one or more processors are configured to cause the system to: receive the additional user input for executing the auto-composite model by receiving user selections for executing the scale prediction model, the harmonization model, and the shadow generation model (Paragraph 61, Figures 6A-6I show the sliders in the graphical user interface 600 which allow for user input to adjust the scale, color, contrast, and brightness, and a button 'add light' to add a shadow); and generate the composite image using the background image and the foreground object image in accordance with the additional user input by: modifying, utilizing the scale prediction model, a scale of the foreground object image within the composite image based on a scale of the background image (Paragraph 64, Figure 6E shows the user can select the scaling slider to scale the object and the generated composite image with the scaled foreground object; Paragraph 87 mentions that any of the operations, including harmonization, can be performed automatically); modifying, utilizing the harmonization model, a lighting of the foreground object image within the composite image based on a lighting of the background image (Paragraph 64, Figures 6A-6I show sliders for brightness, color, and contrast for harmonizing the image. It teaches that when the user selects those sliders, the foreground object image lighting can be adjusted accordingly to match the background; Paragraph 87 mentions that any of the operations, including harmonization, can be performed automatically); and generating, utilizing the shadow generation model, a shadow associated with the foreground object image within the composite image (Paragraph 66, Figure 6I shows a button called 'add light' which allows for adding shadows to the image. When selected, a shadow is generated for the foreground object). However, Jonsson fails to teach the scale prediction model and harmonization model operating in an automated manner. Wang teaches the scale prediction model (Page 4, Section 2C mentions that various models exist that use deep learning techniques to predict the placement and size of the object) and the harmonization model (Pages 6-7, Section 4C mentions that various harmonization models exist which automatically execute image harmonization). Jonsson and Wang are considered analogous to the claimed invention because both are in the same field of generating composite images. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the system of generating a composite image taught by Jonsson with the underlying models using neural networks taught by Wang in order to yield realistic images that improve upon prior work (Wang Section 1 Paragraph 4).
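Claim 20's shadow generation model is, in Wang, a learned insertion network and, in Jonsson, driven by a user-placed light source. The sketch below is a purely geometric stand-in for either — an assumed light direction shears the object silhouette and darkens the covered pixels — and does not reproduce the method of any cited reference.

```python
import numpy as np

def add_shadow(canvas: np.ndarray, mask: np.ndarray, top: int, left: int,
               light_dir: tuple[float, float] = (0.5, 1.0),
               strength: float = 0.5) -> np.ndarray:
    """Darken canvas pixels where the object mask, sheared along the light
    direction, lands. A geometric stand-in for a learned shadow model."""
    out = canvas.astype(float)
    dy, dx = light_dir
    h = mask.shape[0]
    ys, xs = np.nonzero(mask)
    # Cast each masked pixel away from the light; lower rows cast shorter
    # shadows so the shadow stays attached at the object's base.
    sy = (top + ys + (h - ys) * dy).astype(int)
    sx = (left + xs + (h - ys) * dx).astype(int)
    ok = (sy >= 0) & (sy < out.shape[0]) & (sx >= 0) & (sx < out.shape[1])
    out[sy[ok], sx[ok]] *= (1.0 - strength)
    return out.astype(canvas.dtype)

canvas = np.full((64, 64, 3), 230, dtype=np.uint8)
mask = np.ones((10, 6), dtype=bool)   # hypothetical object silhouette
shadowed = add_shadow(canvas, mask, top=30, left=20)
```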
22. Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Jonsson (United States Patent Application Publication No. 2014/0035950 A1) in view of Wang et al. ("Repopulating Street Scenes" – cited in IDS), hereinafter referred to as Wang, as applied to claim 1 above, and further in view of Neural Filters (https://web.archive.org/web/20201205000839/https://helpx.adobe.com/photoshop/using/neural-filters.html). Regarding claim 2, Jonsson in view of Wang teaches the limitations of claim 1. However, Jonsson and Wang fail to teach the method wherein providing the plurality of selectable options for executing the auto-composite model comprises providing a plurality of toggles for executing the auto-composite model, each toggle corresponding to an underlying model of the auto-composite model; and detecting the user selection of the at least one selectable option comprises detecting a user interaction switching a toggle of the plurality of toggles from an inactive position to an active position. Neural Filters teaches the method wherein providing the plurality of selectable options for executing the auto-composite model comprises providing a plurality of toggles for executing the auto-composite model, each toggle corresponding to an underlying model of the auto-composite model; and detecting the user selection of the at least one selectable option comprises detecting a user interaction switching a toggle of the plurality of toggles from an inactive position to an active position (Page 4 teaches the user can select one of the toggles and move it to an active position, as with the 'Style Transfer' filter, or move it back to inactive, as for 'Skin Smoothing'). Jonsson, Wang, and Neural Filters are considered analogous to the claimed invention because all are in the same field of image editing. Jonsson teaches sliders used as selectable options to execute the underlying models of scale determination, harmonization, and shadow generation. Neural Filters teaches using toggles to execute any underlying model offered in the GUI. A person having ordinary skill in the art before the effective filing date would have recognized that the sliders taught in Jonsson could be substituted with the toggles taught in Neural Filters because both sliders and toggles serve the purpose of executing an underlying model and adjusting a composite image. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the method of creating composite images by Jonsson in view of Wang with the toggles taught by Neural Filters in order to help reduce difficult workflows to a few clicks (Neural Filters Page 1).

23. Claims 3-4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Jonsson (United States Patent Application Publication No. 2014/0035950 A1) in view of Wang et al. ("Repopulating Street Scenes" – cited in IDS), hereinafter referred to as Wang, as applied to claims 1 and 10 above, and further in view of Zhao ("Unconstrained Foreground Object Search" -- IDS).

24. Regarding claim 3, Jonsson in view of Wang teaches the limitations of claim 1. However, Jonsson fails to teach the method further comprising providing the background image for display via the graphical user interface of the client device; and receiving, via the graphical user interface of the client device, search input including an additional user selection of a location within the background image for positioning the foreground object image. Zhao teaches the method further comprising providing the background image for display via the graphical user interface of the client device; and receiving, via the graphical user interface of the client device, search input including an additional user selection of a location within the background image for positioning the foreground object image (Page 1, Figure 1 shows the background image is analyzed with a hole in the image which indicates the desired positioning of the foreground object.
From that analysis, the search engine is able to provide compatible foreground objects for the user selection of a location in the background image for positioning the foreground object). Jonsson, Wang, and Zhao are considered analogous to the claimed invention because all are in the same field of compositing images. It would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the method with the image search module taught by Jonsson in view of Wang with the addition of allowing the search module to receive input for a location in the background as taught by Zhao in order to retrieve foreground objects that are semantically compatible with the background image (Zhao Page 1, Column 2, Paragraph 2).

25. Regarding claim 4, Jonsson, Wang, and Zhao together teach the limitations of claim 3. However, Jonsson fails to teach the method wherein receiving, via the graphical user interface of the client device, the search input further comprises receiving, via the graphical user interface of the client device, user input within the background image indicating an intended scale for the foreground object image, the user selection of the location and the user input indicating the intended scale corresponding to a query bounding box received within the background image via one or more user interactions. Zhao teaches the method wherein receiving, via the graphical user interface, the search input further comprises receiving, via the graphical user interface of the client device, user input within the background image indicating an intended scale for the foreground object image, the user selection of the location and the user input indicating the intended scale corresponding to a query bounding box received within the background image via one or more user interactions (Page 1, Figure 1 shows the background image is analyzed with a hole/bounding box in the image which indicates the desired positioning of the foreground object; Page 5, Column 1, Baselines section mentions that the search must follow the aspect ratio of the hole. This indicates the user is able to indicate an intended scale through the hole in the background image, which can be considered a bounding box). Jonsson, Wang, and Zhao are considered analogous to the claimed invention because all are in the same field of compositing images. It would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the method with the image search module taught by Jonsson in view of Wang with the addition of allowing the search module to receive input for a location and scale through a bounding box in the background as taught by Zhao in order to retrieve foreground objects that are semantically compatible with the background image (Zhao Page 1, Column 2, Paragraph 2).

26. Regarding claim 12, Jonsson in view of Wang teaches the limitations of claim 10.
However, Jonsson fails to teach the non-transitory computer-readable medium wherein: providing, for display within the graphical user interface of the client device, the at least one interactive element for providing the search input comprises providing the at least one background image for display within the graphical user interface of the client device; and receiving, via the graphical user interface of the client device, the user interaction with the at least one interactive element comprises receiving, within the background image displayed on the graphical user interface of the client device, a bounding box indicating a scale of foreground object images to be retrieved and a portion of the background image for which the foreground object images are to be compatible. Zhao teaches the non-transitory computer-readable medium wherein: providing, for display within the graphical user interface of the client device, the at least one interactive element for providing the search input comprises providing the at least one background image for display within the graphical user interface of the client device; and receiving, via the graphical user interface of the client device, the user interaction with the at least one interactive element comprises receiving, within the background image displayed on the graphical user interface of the client device, a bounding box indicating a scale of foreground object images to be retrieved and a portion of the background image for which the foreground object images are to be compatible (Page 1, Figure 1 shows the background image is analyzed with a hole/bounding box in the image which indicates the desired positioning of the foreground object; Page 5, Column 1, Baselines section mentions under the Shape baseline that the search must follow the aspect ratio of the hole to find a foreground object compatible with the background. This indicates the user is able to indicate a scale through the hole in the background image, which can be considered a bounding box). Jonsson, Wang, and Zhao are considered analogous to the claimed invention because all are in the same field of compositing images. It would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the method with the image search module taught by Jonsson in view of Wang with the addition of allowing the search module to receive input for a location and scale through a bounding box in the background as taught by Zhao in order to retrieve foreground objects that are semantically compatible with the background image (Zhao Page 1, Column 2, Paragraph 2).
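The Zhao mapping for claims 4 and 12 turns on the query bounding box constraining both location and scale: under the Shape baseline as characterized above, retrieved foregrounds must follow the hole's aspect ratio. A minimal sketch of just that filtering step, with the semantic-compatibility scoring of a real compositing-aware search omitted:

```python
import numpy as np

def search_by_query_box(box_h: int, box_w: int,
                        candidates: list[np.ndarray],
                        tol: float = 0.25) -> list[int]:
    """Return indices of candidate foreground images whose aspect ratio is
    within tol (relative) of the query bounding box's aspect ratio — the
    shape-compatibility idea attributed to Zhao above."""
    target = box_w / box_h
    hits = []
    for i, img in enumerate(candidates):
        h, w = img.shape[:2]
        if abs(w / h - target) / target <= tol:
            hits.append(i)
    return hits

# Three 2:1 candidates match a 100x50 query box; the tall one is filtered.
candidates = [np.zeros((s, 2 * s, 3), np.uint8) for s in (10, 20, 40)]
candidates.append(np.zeros((40, 10, 3), np.uint8))
print(search_by_query_box(box_h=50, box_w=100, candidates=candidates))  # [0, 1, 2]
```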
27. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Jonsson (United States Patent Application Publication No. 2014/0035950 A1) in view of Wang et al. ("Repopulating Street Scenes" – cited in IDS), hereinafter referred to as Wang, as applied to claim 1 above, and further in view of Guo et al. ("Intrinsic Image Harmonization"), hereinafter referred to as Guo. Regarding claim 6, Jonsson in view of Wang teaches the limitations of claim 1. Jonsson further teaches the computer-implemented method wherein: detecting the user selection of the at least one selectable option comprises detecting a user selection of a selectable option for executing the harmonization model (Paragraph 64, Figures 6A-6I show sliders for brightness, color, and contrast, which are selectable options for harmonizing the image); and executing the auto-composite model using the background image and the foreground object image (Paragraph 64, Figures 6A-6I teach that when the user selects those sliders, the foreground object image lighting will be adjusted accordingly to match the background; Paragraph 87 mentions that any of the operations, including harmonization, can be performed automatically). However, Jonsson fails to teach modifying, utilizing the harmonization model, the lighting of the foreground object image within the composite image based on the lighting of the background image determined using the lighting estimation model. Guo teaches modifying, utilizing the harmonization model, the lighting of the foreground object image within the composite image based on the lighting of the background image (Section 3.1, Paragraph 2 teaches extracting the lighting of the background image through R̃ and Ĩ, which represent the reflectance and illumination respectively. Section 3.1, 'Reflectance Harmonization' subsection teaches using the extracted reflectance and illumination to harmonize the foreground object so that the reflectance or lighting will be harmonized to be compatible with the background image. The extraction process of the reflectance and illumination from the background image can be considered as using a lighting estimation model, which then affects the lighting of the foreground object image). Jonsson, Wang, and Guo are considered analogous to the claimed invention because all are in the same field of harmonizing composite images. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the method of creating composite images taught by Jonsson in view of Wang with modifying the lighting of the foreground object image as taught by Guo in order to solve the inharmony in composite images that the human visual system is sensitive to (Guo, Introduction, Paragraphs 1 and 2).

28. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Jonsson (United States Patent Application Publication No. 2014/0035950 A1) in view of Wang et al. ("Repopulating Street Scenes" – cited in IDS), hereinafter referred to as Wang, as applied to claim 10 above, and further in view of Barber (United States Patent No. 5,579,471 A). Regarding claim 11, Jonsson in view of Wang teaches the limitations of claim 10. Jonsson further teaches the non-transitory computer-readable medium wherein: providing, for display within the graphical user interface of the client device, the at least one interactive element for providing the search input comprises providing the at least one background image for display within the graphical user interface of the client device (Paragraph 62, Figures 6A-6I show the background image displayed in the graphical user interface 600).
However, Jonsson fails to teach wherein receiving, via the graphical user interface of the client device, the user interaction with the at least one interactive element comprises receiving, within the at least one background image displayed on the graphical user interface of the client device, a sketch input indicating a category of foreground object images to be retrieved. Barber teaches receiving, via the graphical user interface of the client device, the user interaction with the at least one interactive element comprising receiving, within the at least one background image displayed on the graphical user interface of the client device, a sketch input indicating a category of foreground object images to be retrieved (Column 14, lines 15-41 mention allowing the user to draw on the image and indicate which regions are of interest; Column 14, lines 46-67 mention using the query input and retrieving images that are most similar to the regions of interest). Jonsson, Wang, and Barber are considered analogous to the claimed invention because all are in the same field of creating composite images. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the non-transitory computer-readable medium with instructions to create a composite image in Jonsson in view of Wang with the interactive element of using a sketch input to retrieve foreground objects as taught in Barber to find foreground images that are most similar to the user's query (Barber Column 14, lines 46-67).

29. Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Jonsson (United States Patent Application Publication No. 2014/0035950 A1) in view of Wang et al. ("Repopulating Street Scenes" – cited in IDS), hereinafter referred to as Wang, as applied to claim 10 above, and further in view of Zhang ("Learning Object Placement by Inpainting for Compositional Data Augmentation" – IDS). Regarding claim 16, Jonsson in view of Wang teaches the limitations of claim 10. However, Jonsson fails to teach the non-transitory computer-readable medium further comprising instructions that, when executed by the at least one processor, cause the at least one processor to perform operations comprising determining a recommended location and a recommended scale for the at least one foreground object image within the composite image, wherein generating the composite image utilizing the at least one foreground object image and the at least one background image comprises inserting the at least one foreground object image into the at least one background image at the recommended location using the recommended scale.
29. Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jonsson (United States Patent Application Publication No. 2014/0035950 A1) in view of Wang et al. ("Repopulating Street Scenes" – cited in IDS), hereinafter referred to as Wang, as applied to claim 10 above, and further in view of Zhang ("Learning Object Placement by Inpainting for Compositional Data Augmentation" – cited in IDS).

Regarding claim 16, Jonsson in view of Wang teaches the limitations of claim 10. However, Jonsson fails to teach the non-transitory computer-readable medium further comprising instructions that, when executed by the at least one processor, cause the at least one processor to perform operations comprising determining a recommended location and a recommended scale for the at least one foreground object image within the composite image, wherein generating the composite image utilizing the at least one foreground object image and the at least one background image comprises inserting the at least one foreground object image into the at least one background image at the recommended location using the recommended scale.

Zhang teaches the non-transitory computer-readable medium further comprising instructions that, when executed by the at least one processor, cause the at least one processor to perform operations comprising determining a recommended location and a recommended scale for the at least one foreground object image within the composite image (Page 2, Paragraph 4 describes the 'PlaceNet' algorithm, which determines both a location and a scale for the object to be placed in the background image), wherein generating the composite image utilizing the at least one foreground object image and the at least one background image comprises inserting the at least one foreground object image into the at least one background image at the recommended location using the recommended scale (Page 8, Figure 4 shows that the composite image is generated using the retrieved foregrounds placed at the predicted locations and scales).

Jonsson, Wang, and Zhang are considered analogous to the claimed invention because all are in the same field of merging foreground objects with a background to create a composite image. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the non-transitory computer-readable medium comprising instructions for placing and scaling a foreground object in Jonsson in view of Wang with the instructions to recommend the location and scale in Zhang, in order to create natural and more varied scenes (Zhang, Page 7, Section 3.3 Data Augmentation, Paragraph 1).
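Editor's note: the claim-16 limitation reduces to a simple compositing step once a placement model has produced a location and scale. The sketch below shows only that final paste step; the placement prediction itself (PlaceNet-like, as the rejection characterizes Zhang) is stubbed out, and the placement_model API shown in the usage comment is an assumption.

    # Compositing at a recommended location/scale (the placement model is assumed).
    # background and foreground are assumed to be PIL.Image objects; foreground
    # may carry an alpha channel (RGBA) for clean blending.
    from PIL import Image

    def composite_at(background, foreground, location, scale):
        # Rescale the foreground by the recommended scale, then paste it onto a
        # copy of the background at the recommended (x, y) location.
        w, h = foreground.size
        fg = foreground.resize((max(1, int(w * scale)), max(1, int(h * scale))))
        out = background.copy()
        out.paste(fg, location, fg if fg.mode == "RGBA" else None)  # use alpha if present
        return out

    # Hypothetical usage, assuming a trained placement model:
    # location, scale = placement_model.predict(background, foreground)
    # result = composite_at(background, foreground, location, scale)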
30. Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jonsson (United States Patent Application Publication No. 2014/0035950 A1) in view of Wang et al. ("Repopulating Street Scenes" – cited in IDS), hereinafter referred to as Wang, as applied to claim 17 above, and further in view of Niu ("Making Images Real Again: A Comprehensive Survey on Deep Image Composition" – cited in IDS).

Regarding claim 18, Jonsson in view of Wang teaches the limitations of claim 17. However, Jonsson fails to teach the system wherein the one or more processors are further configured to cause the system to: determine a recommended scale for the foreground object image within the composite image based on the location selected by the user input; and generate the composite image using the background image and the foreground object image by positioning the foreground object image within the composite image at the location selected by the user input and using the recommended scale.

Niu teaches the system wherein the one or more processors are further configured to cause the system to: determine a recommended scale for the foreground object image within the composite image based on the location selected by the user input; and generate the composite image using the background image and the foreground object image by positioning the foreground object image within the composite image at the location selected by the user input and using the recommended scale (Page 4, Section 2C, Paragraph 1 mentions various models that use deep learning techniques to predict the size and placement of the object and generate the composite image automatically. It further teaches a model that analyzes the background and determines the size of the object from the depth at the selected position, which corresponds to the recommended scale).

Jonsson, Wang, and Niu are considered analogous to the claimed invention because all are in the same field of creating and editing composite images to better combine the foreground and background. Thus, it would have been obvious to a person having ordinary skill in the art before the effective filing date to modify the system of using scaling sliders in Jonsson in view of Wang with the models in Niu that automatically recommend a scale for the foreground image depending on a selected position, in order to prevent geometric inconsistencies between the foreground and background that would make the image look unrealistic and of poor quality (Niu, Page 1, Introduction, Paragraph 2).
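Editor's note: the depth-dependent scaling idea the rejection attributes to Niu's survey can be illustrated with pinhole-camera intuition — apparent size falls off roughly as 1/depth at the clicked position. The sketch below is an assumption-laden toy, not any model from Niu; the reference depth and the source of the depth map are both hypothetical.

    # Toy depth-based scale recommendation (illustrative only; not Niu's models).
    import numpy as np

    def recommend_scale(depth_map, click_xy, reference_depth=1.0):
        # Pinhole intuition: an object at twice the reference depth should be
        # rendered at half the scale. depth_map is assumed to be (H, W) meters.
        x, y = click_xy
        depth = float(depth_map[y, x])
        return reference_depth / max(depth, 1e-6)

    # Example: a position at depth 2.0 with reference depth 1.0 yields scale 0.5.
    depth_map = np.full((480, 640), 2.0)
    print(recommend_scale(depth_map, (320, 240)))  # -> 0.5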
Conclusion

31. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

32. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTINE Y AHN, whose telephone number is (571) 272-0672. The examiner can normally be reached M-F, 8am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTINE YERA AHN/
Examiner, Art Unit 2615

/ALICIA M HARRINGTON/
Supervisory Patent Examiner, Art Unit 2615

Prosecution Timeline

Feb 10, 2023
Application Filed
Feb 19, 2025
Non-Final Rejection — §103
Apr 25, 2025
Interview Requested
May 01, 2025
Applicant Interview (Telephonic)
May 01, 2025
Examiner Interview Summary
May 15, 2025
Response Filed
Jun 16, 2025
Final Rejection — §103
Aug 20, 2025
Interview Requested
Aug 27, 2025
Examiner Interview Summary
Aug 27, 2025
Applicant Interview (Telephonic)
Aug 29, 2025
Request for Continued Examination
Sep 03, 2025
Response after Non-Final Action
Sep 22, 2025
Non-Final Rejection — §103
Dec 02, 2025
Interview Requested
Dec 09, 2025
Applicant Interview (Telephonic)
Dec 09, 2025
Examiner Interview Summary
Dec 19, 2025
Response Filed
Feb 11, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602877
BODY MODEL PROCESSING METHODS AND APPARATUSES, ELECTRONIC DEVICES AND STORAGE MEDIA
2y 5m to grant Granted Apr 14, 2026
Patent 12548187
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Feb 10, 2026
Patent 12456274
FACIAL EXPRESSION AND POSE TRANSFER UTILIZING AN END-TO-END MACHINE LEARNING MODEL
2y 5m to grant Granted Oct 28, 2025
Patent 12450810
ANIMATED FACIAL EXPRESSION AND POSE TRANSFER UTILIZING AN END-TO-END MACHINE LEARNING MODEL
2y 5m to grant Granted Oct 21, 2025
Patent 12439025
APPARATUS, SYSTEM, METHOD, STORAGE MEDIUM, AND FILE FORMAT
2y 5m to grant Granted Oct 07, 2025
Based on the examiner's 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
69%
Grant Probability
99%
With Interview (+37.5%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
