Prosecution Insights
Last updated: April 19, 2026
Application No. 18/240,285

GENERATING ADDITIONAL IMAGES FROM PREVIOUSLY GENERATED IMAGES

Status: Non-Final OA (§103)
Filed: Aug 30, 2023
Examiner: LIU, GORDON G
Art Unit: 2618
Tech Center: 2600 — Communications
Assignee: The Toronto-Dominion Bank
OA Round: 3 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 4m
Grant Probability with Interview: 98%

Examiner Intelligence

Career Allow Rate: 83%, above average (556 granted / 673 resolved; +20.6% vs TC avg)
Interview Lift: +15.1%, a strong lift (allow rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 2y 4m average prosecution (29 applications currently pending)
Career History: 702 total applications across all art units
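
To make the arithmetic behind these cards concrete, here is a minimal Python sketch. Only the granted/resolved totals come from this report; the with/without-interview split below is a hypothetical placeholder chosen so the lift formula has numbers to work on, not data from the page.

    # Reproduce the examiner stat cards. Only `granted` and `resolved`
    # come from this report; the interview split is an assumed placeholder.
    granted, resolved = 556, 673
    allow_rate = granted / resolved
    print(f"career allow rate: {allow_rate:.1%}")   # 82.6%, shown as 83%

    # Interview lift = allow rate with an interview minus allow rate without.
    # Hypothetical split (chosen to sum to the totals above):
    granted_int, resolved_int = 178, 190            # assumed interviewed cases
    granted_no, resolved_no = granted - granted_int, resolved - resolved_int
    lift = granted_int / resolved_int - granted_no / resolved_no
    print(f"interview lift: {lift:+.1%}")           # about +15% with this split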

Statute-Specific Performance

Statute   Rate    vs TC avg
§101      6.7%    -33.3%
§103      73.3%   +33.3%
§102      3.0%    -37.0%
§112      5.7%    -34.3%

Tech Center average (the chart's black line) is an estimate. Based on career data from 673 resolved cases.
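
Each delta appears to be the examiner's rate minus a single flat Tech Center estimate; under that reading, all four rows imply the same estimate of about 40%. A quick check, assuming delta = rate minus TC average:

    # Back out the implied TC-average estimate from each row,
    # assuming delta = examiner_rate - tc_avg (one flat estimate per chart).
    rows = {"§101": (6.7, -33.3), "§103": (73.3, +33.3),
            "§102": (3.0, -37.0), "§112": (5.7, -34.3)}
    for statute, (rate, delta) in rows.items():
        print(f"{statute}: implied TC avg = {rate - delta:.1f}%")  # 40.0% each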

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending in this Office action.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Mital et al. (US 20190354599 A2) in view of Daha et al. (US 20240296595 A2), further in view of Lyman et al. (US 20200357117 A1) and Vasu (US 20220014554 A1).

Regarding claim 1, Mital teaches a computing system (See Mital: Figs. 1-24, and [0038], "Referring now to FIGS. 1 through 24, examples are illustrated showing various functional features of the AI canvas. FIG. 1 illustrates a user interface 102, which in this example is a graphical user interface graphically displaying the AI canvas 100. In the AI canvas 100, the user can add data, add intelligence (e.g., AI models), perform queries on outputs from the intelligence, view results from the queries, create new datasets from the results, and share data from the AI canvas 102 with other users") comprising: a memory (See Mital: Figs. 1-24, and [0049], "In some embodiments, a computing entity may be a processor, memory, and/or other computer hardware that are configured with computer executable instructions such that the computer hardware is configured to apply AI models to input datasets to obtain output AI model data"); a display (See Mital: Figs. 1-24, and [0065], "For example, in the example illustrated, the user selects the cluster 128-1 causing the AI canvas 100 to display, as illustrated in FIG. 23, the information box 130-1 showing a visualization of query results that are based on selection of a particular cluster"); and a processor coupled to the memory and the display, the processor (See Mital: Figs. 1-24, and [0076], "Further, the methods may be practiced by a computer system including one or more processors and computer-readable media such as computer memory. In particular, the computer memory may store computer-executable instructions that when executed by one or more processors cause various functions to be performed, such as the acts recited in the embodiments") configured to: train a generative artificial intelligence (GenAI) model to generate images based on user data using a dataset of images, execute the GenAI model based on input data from a user interface of a software application stored in the memory and running on the computing system to generate an image corresponding to the input data (See Mital: Figs. 1-24, and [0041], "Adding a source of data to the user interface as an input dataset connects the source of data to the AI canvas 100 and allows the data in the source of data to be visualized in the AI canvas 100". Note that visualization of the new dataset in the AI canvas may correspond to "generate an image corresponding to the input data"), and display the image on the display via the user interface of the software application (See Mital: Figs. 1-24, and [0042], "Reference is now made to FIG. 4 which illustrates the creatives' dataset as an input dataset 112-1. Embodiments may be implemented where any action by a user causes a reaction by the AI canvas where the reaction is a sort of history of one or more previous actions by the user. The example illustrated in FIG. 4 is one such example. In particular, the user adding the input dataset 112-1 causes suggested queries 114 to be displayed. In this example, the suggested queries 114 are queries that are relevant to the data in the input dataset 112-1. In particular, one of the suggested queries suggests that the user can search for 'art directors at Publicis'. Another suggested query illustrated in FIG. 4 is 'designers at Publicis'. Note that the user does not need to select one of these suggested queries, but rather could input their own query in the search box 116"), receive feedback about the image via the user interface (See Mital: Figs. 1-24, and [0066], "In some embodiments, feedback is provided to the user is based on new semantics added into a semantic space. In particular, the AI canvas 100, which is a computer implemented processor that includes data processors and data analyzers, along with a graphical user interface, is able to identify what words are added to a new or existing semantic space. These may have been added as the result of the user adding new data sources to the AI canvas and/or the result of adding new AI models to the AI canvas 100"), and retrain the GenAI model based on the image and the feedback about the image by an input from a user via the user interface to generate a retrained GenAI model.

However, Mital fails to explicitly disclose training a generative artificial intelligence (GenAI) model to generate images based on user data using a dataset of images, and retraining the GenAI model based on the image and the feedback about the image by an input from a user via the user interface to generate a retrained GenAI model.

However, Daha teaches training a generative artificial intelligence (GenAI) model to generate images based on user data using a dataset of images (See Daha: Figs. 1A-C, and [0028], "During training, the model is shown a large dataset of textual descriptions and the corresponding images, and it learns to predict the next token in the sequence given the previous tokens. By repeatedly predicting the next token and updating the model's parameters based on the accuracy of its predictions, DALL-E becomes better at generating images that match the textual descriptions. In essence, DALL-E is trained to maximize the likelihood that the generated tokens match the ground-truth tokens in the training data, and thus generate images that are consistent with the textual descriptions. This training procedure allows DALL-E to not only generate an image from scratch, but also to regenerate any rectangular region of an existing image that extends to the bottom-right corner, in a way that is consistent with the text prompt"; and [0040], "The fine-tuning mechanism 114 is thus transmitted to the image generating AI engine 112 and implemented, as described above. The command 106 is then executed by the image generating AI engine 112". Note that training the first model and adding the fine-tuning mechanism is also a kind of training).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mital to train a generative artificial intelligence (GenAI) model to generate images based on user data using a dataset of images, as taught by Daha, in order to provide a technical solution for generating imagery with elements that are more specific or customized for the user than are available with the vast training set underlying the machine learning model of the image-generating artificial intelligence engine (See Daha: Figs. 1A-C, and [0032], "Hence, there is a need for improved systems and methods that provide a technical solution for generating imagery with elements that are more customized or personalized for the user than are available with the vast training set typically underlying an image-generating AI engine"). Mital teaches a method and system that may generate an updated visualization of user data inputs and give feedback to the users based on the artificial intelligence models, while Daha teaches a system and method that may generate customized images for the users based on the user input by training and re-training the artificial intelligence models. Therefore, it would have been obvious to one of ordinary skill in the art to modify Mital in view of Daha to generate visualization representations or images of the user inputs and give feedback to the users by training and re-training the AI models. The motivation to modify Mital in view of Daha is "Use of known technique to improve similar devices (methods, or products) in the same way".

However, Mital, as modified by Daha, fails to explicitly disclose retraining the GenAI model based on the image and the feedback about the image by an input from a user via the user interface to generate a retrained GenAI model.

However, Lyman teaches retraining the GenAI model based on the image and the feedback about the image (See Lyman: Figs. 1 and 6A-B, and [0115], "The remediation step 1140 can include automatically updating an identified medical inference function 1105. This can include automatically retraining identified medical inference function 1105 on the same training set or on a new training set that includes new data, data with higher corresponding confidence scores, or data selected based on new training set criteria. The identified medical inference function 1105 can also be updated and/or changed based on the review data received from the client device. For example, the medical scan and expert feedback data can be added to the training set of the medical scan inference function 1105, and the medical scan inference function 1105 can be retrained on the updated training set. Alternatively or in addition, the expert user can identify additional parameters and/or rules in the expert feedback data based on the errors made by the inference function in generating the inference data 1110 for the medical scan, and these parameters and/or rules can be applied to update the medical scan inference function, for example, by updating the model type data 622 and/or model parameter data 623". Note that the model is retrained based on the medical scan and the feedback about the medical scan; the medical scans may be the images) by an input from a user via the user interface to generate a retrained GenAI model.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mital to retrain the GenAI model based on the image and the feedback about the image, as taught by Lyman, in order to configure for secure and authenticated communication between the medical scan subsystems, the client devices, and the database storage system to protect the data stored in the database storage system and the data communicated between the medical scan subsystems (See Lyman: Fig. 1, and [0032], "Some or all of the web sites presented can correspond to multiple subsystems, for example, where the multiple subsystems share the server presenting the web site. Furthermore, the network 150 can be configured for secure and/or authenticated communications between the medical scan subsystems 101, the client devices 120 and the database storage system 140 to protect the data stored in the database storage system and the data communicated between the medical scan subsystems 101, the client devices 120 and the database storage system 140 from unauthorized access"). Mital teaches a method and system that may generate an updated visualization of user data inputs and give feedback to the users based on the artificial intelligence models, while Lyman teaches a system and method that may generate the retrained generative AI model to mitigate the artifacts in the AI-model-generated heat map by retraining the model using the user feedback inputs and the image. Therefore, it would have been obvious to one of ordinary skill in the art to modify Mital in view of Lyman to generate the retrained AI image generation model by the retraining process using the user feedback input and the images. The motivation to modify Mital in view of Lyman is "Use of known technique to improve similar devices (methods, or products) in the same way".

However, Mital, as modified by Daha and Lyman, fails to explicitly disclose that the feedback about the image is provided by an input from a user via the user interface to generate a retrained GenAI model.

However, Vasu teaches that the feedback about the image is provided by an input from a user via the user interface to generate a retrained GenAI model (See Vasu: Fig. 1, and [0032], "In an embodiment, program 150 logs the one or more analyzed network inputs into corpus 124. In an example embodiment, program 150 may receive user feedback through a graphical user interface (not depicted) on a computing device (not depicted). For example, after program 150 analyzes the network input, the user can provide feedback for the generated image on the user interface. In an embodiment, feedback may include a simple positive or negative response. For example, if program 150 incorrectly identifies one or more anomalies and associated generated images, the user can provide negative feedback and correct the image (e.g., before transmission). In an embodiment, program 150 feeds the user feedback and the corrected image into network input image model 152 allowing the adjustment of said model. In another embodiment, program 150 may use one or more techniques of NLP to log whether the response of the user is positive or negative. In various embodiments, program 150 combines the deep learning process explained above with traditional network intrusion network signature analysis to bolster security profile predictions. In this embodiment, program 150 updates a network input signature, a unique identifier for a network pattern or sequence of network inputs associated with a network description (e.g., malicious, authorized, etc.). In a further embodiment, program 150 applies the updated network input signatures to one or more downstream or upstream network devices such as an IDS. In another embodiment, program 150 retrains a plurality of associated models and networks with the calculated prediction and associated feedback". Note that the feedback about the generated images is input by the user via a graphical user interface).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Mital so that the feedback about the image is provided by an input from a user via the user interface to generate a retrained GenAI model, as taught by Vasu, in order to allow the adjustment of said model (See Vasu: Fig. 1, and [0032], "In an embodiment, program 150 feeds the user feedback and the corrected image into network input image model 152 allowing the adjustment of said model"). Mital teaches a method and system that may generate an updated visualization of user data inputs and give feedback to the users based on the artificial intelligence models, while Vasu teaches a system and method that may receive user feedback via a user interface to adjust or retrain the image-generating model. Therefore, it would have been obvious to one of ordinary skill in the art to modify Mital in view of Vasu to obtain user feedback about the generated images via a graphical user interface to retrain the AI model. The motivation to modify Mital in view of Vasu is "Use of known technique to improve similar devices (methods, or products) in the same way".

Regarding claim 2, Mital, Daha, Lyman, and Vasu teach all the features with respect to claim 1 as outlined above. Further, Daha teaches the computing system of claim 1, wherein the processor is further configured to: receive an identifier of a product via the user interface (See Daha: Fig. 6, and [0080], "In some examples, the communication components 664 may detect identifiers or include components adapted to detect identifiers. For example, the communication components 864 may include Radio Frequency Identification (RFID) tag readers, NFC detectors, optical sensors (for example, one- or multi-dimensional bar codes, or other optical codes), and/or acoustic detectors (for example, microphones to identify tagged audio signals). In some examples, location information may be determined based on information from the communication components 664, such as, but not limited to, geo-location via Internet Protocol (IP) address, location via Wi-Fi, cellular, NFC, Bluetooth, or other wireless station identification and/or signal triangulation"), and generate the image via the GenAI model based on the identifier of the product (See Daha: Fig. 1, and [0040], "The fine-tuning mechanism 114 is thus transmitted to the image generating AI engine 112 and implemented, as described above. The command 106 is then executed by the image generating AI engine 112. In executing the command, the image generating AI engine 112 can use the tokens 108 and NLP layer 118 of the fine-tuning mechanism to personalize or customize the output image 116 without compromising the ability learned from its underlying training set that informs how an image should appear based on the command, e.g., how a person looks holding a dog").

Regarding claim 3, Mital, Daha, Lyman, and Vasu teach all the features with respect to claim 1 as outlined above. Further, Daha teaches the computing system of claim 1, wherein the processor is further configured to: receive an identifier of a goal of the user via the user interface (See Daha: Fig. 6, and [0080], "In some examples, the communication components 664 may detect identifiers or include components adapted to detect identifiers. For example, the communication components 864 may include Radio Frequency Identification (RFID) tag readers, NFC detectors, optical sensors (for example, one- or multi-dimensional bar codes, or other optical codes), and/or acoustic detectors (for example, microphones to identify tagged audio signals). In some examples, location information may be determined based on information from the communication components 664, such as, but not limited to, geo-location via Internet Protocol (IP) address, location via Wi-Fi, cellular, NFC, Bluetooth, or other wireless station identification and/or signal triangulation"), and generate the image via the GenAI model based on the identifier of the goal of the user (See Daha: Fig. 1, and [0040], "The fine-tuning mechanism 114 is thus transmitted to the image generating AI engine 112 and implemented, as described above. The command 106 is then executed by the image generating AI engine 112. In executing the command, the image generating AI engine 112 can use the tokens 108 and NLP layer 118 of the fine-tuning mechanism to personalize or customize the output image 116 without compromising the ability learned from its underlying training set that informs how an image should appear based on the command, e.g., how a person looks holding a dog").

Regarding claim 4, Mital, Daha, Lyman, and Vasu teach all the features with respect to claim 1 as outlined above. Further, Mital teaches the computing system of claim 1, wherein the processor is further configured to: analyze an account history of the user (See Mital: Fig. 1, and [0037], "As a result, the AI canvas 100 will provide various suggested queries 114-4 to the user, where the suggested queries are dependent on the history of actions performed by the user"), generate a plurality of prompts based on the analyzed account history of the user (See Mital: Fig. 1, and [0037], "For example, the AI canvas 100 provides the suggested queries 'bright portfolios with upbeat music', 'creatives with intense motion graphics', 'creatives with light and bright videos', and 'creatives with cinematic music'. The first suggested query, i.e. 'bright portfolios with upbeat music' is a result of the addition of the 'style recognition' and 'music analysis' AI models 120-1 and 120-3. The fourth suggestion, i.e. 'creatives with cinematic music' is provided based only on the addition of the 'music analysis' AI model 120-2. Thus, some suggestions provided by the AI canvas may be based only on the last action performed by the user on the AI canvas". Note that the suggested queries may be the prompts), and display the plurality of prompts on the user interface (See Mital: Fig. 4, and [0042], "Reference is now made to FIG. 4 which illustrates the creatives' dataset as an input dataset 112-1. Embodiments may be implemented where any action by a user causes a reaction by the AI canvas where the reaction is a sort of history of one or more previous actions by the user. The example illustrated in FIG. 4 is one such example. In particular, the user adding the input dataset 112-1 causes suggested queries 114 to be displayed. In this example, the suggested queries 114 are queries that are relevant to the data in the input dataset 112-1").

Regarding claim 5, Mital, Daha, Lyman, and Vasu teach all the features with respect to claim 4 as outlined above. Further, Daha teaches the computing system of claim 4, wherein the processor is further configured to: receive the user data via the plurality of prompts displayed on the user interface (See Daha: Figs. 1A-C, and [0034], "Wanting the customized image described, the user may input the textual prompt 106 of "Username holding his dog" or "Me holding my dog." Without the technical solution described herein, however, the image generating AI engine 112 will be unable to correctly interpret "Username," "his dog," "me" or "my dog." With the general training of the typical AI engine, these terms have no connection to the specific appearance of the user or his dog").

Regarding claim 7, Mital, Daha, Lyman, and Vasu teach all the features with respect to claim 1 as outlined above. Further, Lyman teaches the computing system of claim 1, wherein the processor is further configured to: receive additional user data collected from the user interface (See Lyman: Figs. 6A-B, and [0115], "Alternatively or in addition, the expert user can identify additional parameters and/or rules in the expert feedback data based on the errors made by the inference function in generating the inference data 1110 for the medical scan, and these parameters and/or rules can be applied to update the medical scan inference function, for example, by updating the model type data 622 and/or model parameter data 623"), execute the retrained GenAI model based on the additional user data to generate a second image (See Lyman: Figs. 6A-B, and [0167], "The image type parameters can be determined by the central server system to dictate characteristics of the set of de-identified medical scans to be received to train and/or retrain the model. For example, the image type parameters can correspond to one or more scan categories, can indicate scan classifier data 420, can indicate one or more scan modalities, one or more anatomical regions, a date range, and/or other parameters"; and Figs. 8-9, and [0185], "FIG. 9 presents a flowchart illustrating a method for execution by a medical picture archive integration system 2600 that includes a first memory and a second memory that store executional instructions that, when executed by at least one first processor and at least one second processor, respectfully, cause the medical picture archive integration system to perform the steps below"), and display the second image via the user interface of the software application (See Lyman: Figs. 12A-C, and [0277], "Post-process heat maps displayed to users can improve the technology of medical image reviewing tools by more quickly drawing the user's attention to the regions of interest and/or emphasizing areas of most concern to the patient. In particular, post-processing techniques can target the user's attention to the right part of the scan, while maintaining the user's confidence in the underlying AI model and not confusing users based on extraneous findings that might be included in the preliminary heat map visualization data. Furthermore, post-processing can facilitate a better user experience or convey certain information instead of linking the level-of-heat directly to the model's indication of probability").

Regarding claim 8, Mital, Daha, Lyman, and Vasu teach all the features with respect to claim 7 as outlined above. Further, Mital and Lyman teach the computing system of claim 7, wherein the processor is further configured to: receive new feedback about the second image via the user interface (See Mital: Fig. 1, and [0066], "In some embodiments, feedback is provided to the user is based on new semantics added into a semantic space. In particular, the AI canvas 100, which is a computer implemented processor that includes data processors and data analyzers, along with a graphical user interface, is able to identify what words are added to a new or existing semantic space. These may have been added as the result of the user adding new data sources to the AI canvas and/or the result of adding new AI models to the AI canvas 100"), and retrain the retrained GenAI model based on the new feedback about the second image (See Lyman: Figs. 6A-B, and [0115], "The remediation step 1140 can include automatically updating an identified medical inference function 1105. This can include automatically retraining identified medical inference function 1105 on the same training set or on a new training set that includes new data, data with higher corresponding confidence scores, or data selected based on new training set criteria. The identified medical inference function 1105 can also be updated and/or changed based on the review data received from the client device. For example, the medical scan and expert feedback data can be added to the training set of the medical scan inference function 1105, and the medical scan inference function 1105 can be retrained on the updated training set. Alternatively or in addition, the expert user can identify additional parameters and/or rules in the expert feedback data based on the errors made by the inference function in generating the inference data 1110 for the medical scan, and these parameters and/or rules can be applied to update the medical scan inference function, for example, by updating the model type data 622 and/or model parameter data 623").

Regarding claim 9, Mital, Daha, Lyman, and Vasu teach all the features with respect to claim 1 as outlined above. Further, Mital, Daha, Lyman, and Vasu teach a method (See Mital: Figs. 1-24, and [0038], "Referring now to FIGS. 1 through 24, examples are illustrated showing various functional features of the AI canvas. FIG. 1 illustrates a user interface 102, which in this example is a graphical user interface graphically displaying the AI canvas 100. In the AI canvas 100, the user can add data, add intelligence (e.g., AI models), perform queries on outputs from the intelligence, view results from the queries, create new datasets from the results, and share data from the AI canvas 102 with other users") comprising: training a generative artificial intelligence (GenAI) model to generate images (See Daha: Figs. 1A-C, and [0028], "During training, the model is shown a large dataset of textual descriptions and the corresponding images, and it learns to predict the next token in the sequence given the previous tokens. By repeatedly predicting the next token and updating the model's parameters based on the accuracy of its predictions, DALL-E becomes better at generating images that match the textual descriptions. In essence, DALL-E is trained to maximize the likelihood that the generated tokens match the ground-truth tokens in the training data, and thus generate images that are consistent with the textual descriptions. This training procedure allows DALL-E to not only generate an image from scratch, but also to regenerate any rectangular region of an existing image that extends to the bottom-right corner, in a way that is consistent with the text prompt"; and [0040], "The fine-tuning mechanism 114 is thus transmitted to the image generating AI engine 112 and implemented, as described above. The command 106 is then executed by the image generating AI engine 112". Note that training the first model and adding the fine-tuning mechanism is also a kind of training); executing the GenAI model based on input data from a user interface of a software application to generate an image corresponding to the input data (See Mital: Figs. 1-24, and [0041], "Adding a source of data to the user interface as an input dataset connects the source of data to the AI canvas 100 and allows the data in the source of data to be visualized in the AI canvas 100". Note that visualization of the new dataset in the AI canvas may correspond to "generate an image corresponding to the input data"), and displaying the image via the user interface of the software application (See Mital: Figs. 1-24, and [0042], "Reference is now made to FIG. 4 which illustrates the creatives' dataset as an input dataset 112-1. Embodiments may be implemented where any action by a user causes a reaction by the AI canvas where the reaction is a sort of history of one or more previous actions by the user. The example illustrated in FIG. 4 is one such example. In particular, the user adding the input dataset 112-1 causes suggested queries 114 to be displayed. In this example, the suggested queries 114 are queries that are relevant to the data in the input dataset 112-1. In particular, one of the suggested queries suggests that the user can search for 'art directors at Publicis'. Another suggested query illustrated in FIG. 4 is 'designers at Publicis'. Note that the user does not need to select one of these suggested queries, but rather could input their own query in the search box 116"); receiving feedback about the image via the user interface (See Mital: Figs. 1-24, and [0066], "In some embodiments, feedback is provided to the user is based on new semantics added into a semantic space. In particular, the AI canvas 100, which is a computer implemented processor that includes data processors and data analyzers, along with a graphical user interface, is able to identify what words are added to a new or existing semantic space. These may have been added as the result of the user adding new data sources to the AI canvas and/or the result of adding new AI models to the AI canvas 100"); and retraining the GenAI model based on the image and the feedback about the image (See Lyman: Figs. 1 and 6A-B, and [0115], "The remediation step 1140 can include automatically updating an identified medical inference function 1105. This can include automatically retraining identified medical inference function 1105 on the same training set or on a new training set that includes new data, data with higher corresponding confidence scores, or data selected based on new training set criteria. The identified medical inference function 1105 can also be updated and/or changed based on the review data received from the client device. For example, the medical scan and expert feedback data can be added to the training set of the medical scan inference function 1105, and the medical scan inference function 1105 can be retrained on the updated training set. Alternatively or in addition, the expert user can identify additional parameters and/or rules in the expert feedback data based on the errors made by the inference function in generating the inference data 1110 for the medical scan, and these parameters and/or rules can be applied to update the medical scan inference function, for example, by updating the model type data 622 and/or model parameter data 623". Note that the model is retrained based on the medical scan and the feedback about the medical scan; the medical scans may be the images) by an input from a user via the user interface to generate a retrained GenAI model (See Vasu: Fig. 1, and [0032], "In an embodiment, program 150 logs the one or more analyzed network inputs into corpus 124. In an example embodiment, program 150 may receive user feedback through a graphical user interface (not depicted) on a computing device (not depicted). For example, after program 150 analyzes the network input, the user can provide feedback for the generated image on the user interface. In an embodiment, feedback may include a simple positive or negative response. For example, if program 150 incorrectly identifies one or more anomalies and associated generated images, the user can provide negative feedback and correct the image (e.g., before transmission). In an embodiment, program 150 feeds the user feedback and the corrected image into network input image model 152 allowing the adjustment of said model. In another embodiment, program 150 may use one or more techniques of NLP to log whether the response of the user is positive or negative. In various embodiments, program 150 combines the deep learning process explained above with traditional network intrusion network signature analysis to bolster security profile predictions. In this embodiment, program 150 updates a network input signature, a unique identifier for a network pattern or sequence of network inputs associated with a network description (e.g., malicious, authorized, etc.). In a further embodiment, program 150 applies the updated network input signatures to one or more downstream or upstream network devices such as an IDS. In another embodiment, program 150 retrains a plurality of associated models and networks with the calculated prediction and associated feedback". Note that the feedback about the generated images is input by the user via a graphical user interface).

Regarding claim 10, Mital, Daha, Lyman, and Vasu teach all the features with respect to claim 9 as outlined above. Further, Daha teaches the method of claim 9, wherein the executing further comprises: receiving an identifier of a product via the user interface (See Daha: Fig. 6, and [0080], "In some examples, the communication components 664 may detect identifiers or include components adapted to detect identifiers. For example, the communication components 864 may include Radio Frequency Identification (RFID) tag readers, NFC detectors, optical sensors (for example, one- or multi-dimensional bar codes, or other optical codes), and/or acoustic detectors (for example, microphones to identify tagged audio signals). In some examples, location information may be determined based on information from the communication components 664, such as, but not limited to, geo-location via Internet Protocol (IP) address, location via Wi-Fi, cellular, NFC, Bluetooth, or other wireless station identification and/or signal triangulation"); and generating the image via the GenAI model based on the received identifier of the product (See Daha: Fig. 1, and [0040], "The fine-tuning mechanism 114 is thus transmitted to the image generating AI engine 112 and implemented, as described above. The command 106 is then executed by the image generating AI engine 112. In executing the command, the image generating AI engine 112 can use the tokens 108 and NLP layer 118 of the fine-tuning mechanism to personalize or customize the output image 116 without compromising the ability learned from its underlying training set that informs how an image should appear based on the command, e.g., how a person looks holding a dog").

Regarding claim 11, Mital, Daha, Lyman, and Vasu teach all the features with respect to claim 9 as outlined above. Further, Daha teaches the method of claim 9, wherein the executing further comprises: receiving an identifier of a goal of the user via the user interface (See Daha: Fig. 6, and [0080], "In some examples, the communication components 664 may detect identifiers or include components adapted to detect identifiers. For example, the communication components 864 may include Radio Frequency Identification (RFID) tag readers, NFC detectors, optical sensors (for example, one- or multi-dimensional bar codes, or other optical codes), and/or acoustic detectors (for example, microphones to identify tagged audio signals). In some examples, location information may be determined based on information from the communication components 664, such as, but not limited to, geo-location via Internet Protocol (IP) address, location via Wi-Fi, cellular, NFC, Bluetooth, or other wireless station identification and/or signal triangulation"); and generating the image via the GenAI model based on the received identifier of the goal of the user (See Daha: Fig. 1, and [0040], "The fine-tuning mechanism 114 is thus transmitted to the image generating AI engine 112 and implemented, as described above. The command 106 is then executed by the image generating AI engine 112. In executing the command, the image generating AI engine 112 can use the tokens 108 and NLP layer 118 of the fine-tuning mechanism to personalize or customize the output image 116 without compromising the ability learned from its underlying training set that informs how an image should appear based on the command, e.g., how a person looks holding a dog").

Regarding claim 12, Mital, Daha, Lyman, and Vasu teach all the features with respect to claim 9 as outlined above. Further, Mital teaches the method of claim 9, wherein the method further comprises: analyzing an account history of the user (See Mital: Fig. 1, and [0037], "As a result, the AI canvas 100 will provide various suggested queries 114-4 to the user, where the suggested queries are dependent on the history of actions performed by the user"); generating a plurality of prompts based on the analyzed account history of the user (See Mital: Fig. 1, and [0037], "For example, the AI canvas 100 provides the suggested queries 'bright portfolios with upbeat music', 'creatives with intense motion graphics', 'creatives with light and bright videos', and 'creatives with cinematic music'. The first suggested query, i.e. 'bright portfolios with upbeat music' is a result of the addition of the 'style recognition' and 'music analysis' AI models 120-1 and 120-3. The fourth suggestion, i.e. 'creatives with cinematic music' is provided based only on the addition of the 'music analysis' AI model 120-2. Thus, some suggestions provided by the AI canvas may be based only on the last action performed by the user on the AI canvas". Note that the suggested queries may be the prompts); and displaying the plurality of prompts on the user interface (See Mital: Fig. 4, and [0042], "Reference is now made to FIG. 4 which illustrates the creatives' dataset as an input dataset 112-1. Embodiments may be implemented where any action by a user causes a reaction by the AI canvas where the reaction is a sort of history of one or more previous actions by the user. The example illustrated in FIG. 4 is one such example. In particular, the user adding the input dataset 112-1 causes suggested queries 114 to be displayed. In this example, the suggested queries 114 are queries that are relevant to the data in the input dataset 112-1").

Regarding claim 13, Mital, Daha, Lyman, and Vasu teach all the features with respect to claim 12 as outlined above. Further, Daha teaches the method of claim 12, wherein the method further comprises: receiving the user data via the plurality of prompts displayed on the user interface (See Daha: Figs. 1A-C, and [0034], "Wanting the customized image described, the user may input the textual prompt 106 of "Username holding his dog" or "Me holding my dog." Without the technical solution described herein, however, the image generating AI engine 112 will be unable to correctly interpret "Username," "his dog," "me" or "my dog." With the general training of the typical AI engine, these terms have no connection to the specific appearance of the user or his dog").

Regarding claim 15, Mital, Daha, Lyman, and Vasu teach all the features with respect to claim 9 as outlined above. Further, Lyman teaches the method of claim 9, wherein the method further comprises: receiving additional user data collected from the user interface (See Lyman: Figs. 6A-B, and [0115], "Alternatively or in addition, the expert user can identify additional parameters and/or rules in the expert feedback data based on the errors made by the inference function in generating the inference data 1110 for the medical scan, and these parameters and/or rules can be applied to update the medical scan inference function, for example, by updating the model type data 622 and/or model parameter data 623"); executing the retrained GenAI model based on the additional user data to generate a second image (See Lyman: Figs. 6A-B, and [0167], "The image type parameters can be determined by the central server system to dictate characteristics of the set of de-identified medical scans to be received to train and/or retrain the model. For example, the image type parameters can correspond to one or more scan categories, can indicate scan classifier data 420, can indicate one or more scan modalities, one or more anatomical regions, a date range, and/or other parameters"; and Figs. 8-9, and [0185], "FIG. 9 presents a flowchart illustrating a method for execution by a medical picture archive integration system 2600 that includes a first memory and a second memory that store executional instructions that, when executed by at least one first processor and at least one second processor, respectfully, cause the medical picture archive integration system to perform the steps below"); and displaying the second image via the user interface of the software application (See Lyman: Figs. 12A-C, and [0277], "Post-process heat maps displayed to users can improve the technology of medical image reviewing tools by more quickly drawing the user's attention to the regions of interest and/or emphasizing areas of most concern to the patient. In particular, post-processing techniques can target the user's attention to the right part of the scan, while maintaining the user's confidence in the underlying AI model and not confusing users based on extraneous findings that might be included in the preliminary heat map visualization data. Furthermore, post-processing can facilitate a better user experience or convey certain information instead of linking the level-of-heat directly to the model's indication of probability").

Regarding claim 16, Mital, Daha, Lyman, and Vasu teach all the features with respect to claim 15 as outlined above. Further, Mital and Lyman teach the method of claim 15, wherein the method further comprises: receiving new feedback about the second image via the user interface (See Mital: Fig. 1, and [0066], "In some embodiments, feedback is provided to the user is based on new semantics added into a semantic space. In particular, the AI canvas 100, which is a computer implemented processor that includes data processors and data analyzers, along with a graphical user interface, is able to identify what words are added to a new or existing semantic space. These may have been added as the result of the user adding new data sources to the AI canvas and/or the result of adding new AI models to the AI canvas 100"); and retraining the retrained GenAI model based on the new feedback about the second image (See Lyman: Figs. 6A-B, and [0115], "The remediation step 1140 can include automatically updating an identified medical inference function 1105. This can include automatically retraining identified medical inference function 1105 on the same training set or on a new training set that includes new data, data with higher corresponding confidence scores, or data selected based on new training set criteria. The identified medical inference function 1105 can also be updated and/or changed based on the review data received from the client device. For example, the medical scan and expert feedback data can be added to the training set of the medical scan inference function 1105, and the medical scan inference function 1105 can be retrained on the updated training set. Alternatively or in addition, the expert user can identify additional parameters and/or rules in the expert feedback data based on the errors made by the inference function in generating the inference data 1110 for the medical scan, and these parameters and/or rules can be applied to update the medical scan inference function, for example, by updating the model type data 622 and/or model parameter data 623").

Regarding claim 17, Mital, Daha, Lyman, and Vasu teach all the features with respect to claim 1 as outlined above. Further, Mital, Daha, Lyman, and Vasu teach a non-transitory computer-readable medium comprising instructions stored therein which, when executed by a processor, cause the processor to perform (See Mital: Figs. 1-24, and [0038], "Referring now to FIGS. 1 through 24, examples are illustrated showing various functional features of the AI canvas. FIG. 1 illustrates a user interface 102, which in this example is a graphical user interface graphically displaying the AI canvas 100. In the AI canvas 100, the user can add data, add intelligence (e.g., AI models), perform queries on outputs from the intelligence, view results from the queries, create new datasets from the results, and share data from the AI canvas 102 with other users"): training a generative artificial intelligence (GenAI) model to generate images (See Daha: Figs. 1A-C, and [0028], "During training, the model is shown a large dataset of textual descriptions and the corresponding images, and it learns to predict the next token in the sequence given the previous tokens. By repeatedly predicting the next token and updating the model's parameters based on the accuracy of its predictions, DALL-E becomes better at generating images that match the textual descriptions. In essence, DALL-E is trained to maximize the likelihood that the generated tokens match the ground-truth tokens in the training data, and thus generate images that are consistent with the textual descriptions. This training procedure allows DALL-E to not only generate an image from scratch, but also to regenerate any rectangular region of an existing image that extends to the bottom-right corner, in a way that is consistent with the text prompt"; and [0040], "The fine-tuning mechanism 114 is thus transmitted to the image generating AI engine 112 and implemented, as described above. The command 106 is then executed by the image generating AI engine 112". Note that training the first model and adding the fine-tuning mechanism is also a kind of training); executing the GenAI model based on input data from a user interface of a software application to generate an image corresponding to the input data (See Mital: Figs. 1-24, and [0041], "Adding a source of data to the user interface as an input dataset connects the source of data to the AI canvas 100 and allows the data in the source of data to be visualized in the AI canvas 100". Note that visualization of the new dataset in the AI canvas may correspond to "generate an image corresponding to the input data"), and displaying the image via the user interface of the software application (See Mital: Figs. 1-24, and [0042], "Reference is now made to FIG. 4 which illustrates the creatives' dataset as an input dataset 112-1. Embodiments may be implemented where any action by a user causes a reaction by the AI canvas where the reaction is a sort of history of one or more previous actions by the user. The example illustrated in FIG. 4 is one such example. In particular, the user adding the input dataset 112-1 causes suggested queries 114 to be displayed. In this example, the suggested queries 114 are queries that are relevant to the data in the input dataset 112-1. In particular, one of the suggested queries suggests that the user can search for 'art directors at Publicis'. Another suggested query illustrated in FIG. 4 is 'designers at Publicis'. Note that the user does not need to select one of these suggested queries, but rather could input their own query in the search box 116"); receiving feedback about the image via the user interface (See Mital: Figs. 1-24, and [0066], "In some embodiments, feedback is provided to the user is based on new semantics added into a semantic space. In particular, the AI canvas 100, which is a computer implemented processor that includes data processors and data analyzers, along with a graphical user interface, is able to identify what words are added to a new or existing semantic space. These may have been added as the result of the user adding new data sources to the AI canvas and/or the result of adding new AI models to the AI canvas 100"); and retraining the GenAI model based on the image and the feedback about the image (See Lyman: Figs. 1 and 6A-B, and [0115], "The remediation step 1140 can include automatically updating an identified medical inference function 1105. This can include automatically retraining identified medical inference function 1105 on the same training set or on a new training set that includes new data, data with higher corresponding confidence scores, or data selected based on new training set criteria. The identified medical inference function 1105 can also be updated and/or changed based on the review data received from the client device. For example, the medical scan and expert feedback data can be added to the training set of the medical scan inference function 1105, and the medical scan inference function 1105 can be retrained on the updated training set. Alternatively or in addition, the expert user can identify additional parameters and/or rules in the expert feedback data based on the errors made by the inference function in generating the inference data 1110 for the medical scan, and these parameters and/or rules can be applied to update the medical scan inference function, for example, by updating the model type data 622 and/or ..."
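For orientation while reading the rejection, claim 1 recites a train, generate, display, collect-feedback, retrain loop. The Python sketch below is purely illustrative: every name in it (GenAIModel, claim1_loop, and so on) is invented here, and the placeholder "training" stands in for whatever the application actually claims.

    # Hypothetical sketch of the claim-1 loop (all names invented here):
    # train a GenAI model, generate an image from UI input, display it,
    # receive user feedback, and retrain on the image plus the feedback.
    from dataclasses import dataclass, field

    @dataclass
    class GenAIModel:
        examples: list = field(default_factory=list)  # stand-in for model weights

        def train(self, dataset):
            self.examples.extend(dataset)             # placeholder "training"
            return self

        def generate(self, prompt):
            return f"image<{prompt}|n={len(self.examples)}>"  # placeholder image

    def claim1_loop(image_dataset, ui_input, get_feedback):
        model = GenAIModel().train(image_dataset)   # train on a dataset of images
        image = model.generate(ui_input)            # execute model on UI input data
        print("display:", image)                    # display via the user interface
        feedback = get_feedback(image)              # receive feedback via the UI
        return model.train([(image, feedback)])     # the "retrained GenAI model"

    claim1_loop(["img-1", "img-2"], "prompt from UI", lambda img: "looks right")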

Prosecution Timeline

Aug 30, 2023: Application Filed
May 09, 2025: Non-Final Rejection — §103
Jun 11, 2025: Response Filed
Jul 28, 2025: Final Rejection — §103
Sep 10, 2025: Response after Non-Final Action
Oct 31, 2025: Response after Non-Final Action
Oct 31, 2025: Notice of Allowance
Nov 21, 2025: Response after Non-Final Action
Dec 04, 2025: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12602846: GENERATING REALISTIC MACHINE LEARNING-BASED PRODUCT IMAGES FOR ONLINE CATALOGS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602840: IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602871: MESH TOPOLOGY GENERATION USING PARALLEL PROCESSING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12592022: INTEGRATION CACHE FOR THREE-DIMENSIONAL (3D) RECONSTRUCTION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586330: DISPLAYING A VIRTUAL OBJECT IN A REAL-LIFE SCENE (granted Mar 24, 2026; 2y 5m to grant)

Study what changed to get these cases past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 98% (+15.1%)
Median Time to Grant: 2y 4m
PTA Risk: High

Based on 673 resolved cases by this examiner. Grant probability derived from career allow rate.
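
The with-interview figure is consistent with simply adding the interview lift to the base grant probability and capping at 100%. A one-line check, assuming that additive model (the model itself is an assumption, not something the report states):

    # Assumed model: with_interview = min(base + lift, 1.0)
    base, lift = 0.83, 0.151
    print(f"with interview: {min(base + lift, 1.0):.0%}")  # 98%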
