DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The following is a Final Office Action in response to communications received on 12/4/2025. Claims 1, 2, 4, 5, 7-10, 12-16, and 18-23 are currently pending and have been examined. Claims 1, 2, 4, 5, 7-10, 12-16, and 18-19 have been amended. Claims 3, 6, 11, and 17 are cancelled. Claims 21-23 are new.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Step 1: Claims 1, 2, 4, 5, 7, and 21 are directed to a system; claims 8, 9, 10, 12, 13, 14, and 22 are directed to a method; and claims 15, 16, 18, 19, 20, and 23 are directed to a computer readable medium. Thus, each claim, on its face, is directed to one of the statutory categories of 35 U.S.C. § 101. However, claims 1, 2, 4, 5, 7-10, 12-16, and 18-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 2A Prong 1: The independent claims (claims 1, 8, and 15, taking claim 1 as representative) recite:
A physical article modification system comprising:
a processor; a graphical user interface (GUI); and a memory comprising computer program code, the memory and the computer program code configured to cause the processor to:
display, on a display device of the GUI, an image that comprises an image of a physical article;
select an image style;
train an image generation model using the selected image style;
receive, from the GUI, an image request input that comprises a drawing input;
generate an image based on the received image request input using the trained image generation model, wherein the generated image is in the selected image style;
output the generated image to the GUI in response to the received image request input, including updating the image displayed on the display device with the generated image superimposed on the image of the physical article;
and cause an automated device to modify the physical article by applying the generated image to the physical article.
These limitations, except for the italicized portions, under their broadest reasonable interpretation, recite certain methods of organizing human activity: managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) as well as commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations). The claimed invention recites steps for receiving an image request, generating an image in a selected style, and outputting the generated image in response to the received request. The instant specification sets forth that the present disclosure is directed to systems and methods for personalizing articles or other items, such as those for sale in electronic commerce (e-commerce) (paragraphs 0013, 0020, 0021, 0029). The steps, under their broadest reasonable interpretation, specifically fall under sales activities.
The Examiner notes that although the claim limitations are summarized, the analysis regarding subject matter eligibility considers the entirety of the claim and all of the claim elements individually, as a whole, and in ordered combination.
Prong 2: This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of:
A physical article modification system comprising: a processor; and a memory comprising computer program code, the memory and the computer program code configured to cause the processor to: (claim 1)
A computer-implemented method comprising: (claim 8)
A computer storage medium having computer-executable instructions that, upon execution by a processor, cause the processor to at least: (claim 15)
display, on a display device of the GUI, an image that comprises an image of a physical article;
train an image generation model using the selected image style;
receive, from the GUI, an image request input that comprises a drawing input;
generate an image based on the received image request input using the trained image generation model, wherein the generated image is in the selected image style;
output the generated image to the GUI in response to the received image request input, including updating the image displayed on the display device with the generated image superimposed on the image of the physical article;
and cause an automated device to modify the physical article by applying the generated image to the physical article.
The additional elements emphasized above are recited at a high level of generality (i.e., as a generic processor performing the generic computer function of processing data) such that they amount to no more than mere instructions to apply the exception using a generic computer component. The limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application (MPEP 2106.05(f)). Additionally, the limitation "cause an automated device to modify the physical article by applying the generated image to the physical article" is post-solution activity, as it is determined to be insignificant extra-solution activity (insignificant application) (MPEP 2106.05(g)).
Accordingly, these additional elements when considered individually or as a whole do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The independent claims are directed to an abstract idea.
Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed with respect to Step 2A Prong Two, the additional elements in the claims amount to no more than mere instructions to apply the judicial exception using a generic computer component and generally link the judicial exception to a particular technological environment.
Even when considered as an ordered combination, the additional elements of claim 1, 8, and 15 do not add anything that is not already present when they are considered individually. Therefore, under Step 2B, there are no meaningful limitations in claims 1, 8, and 15 that transform the judicial exception into a patent eligible application such that the claims amount to significantly more than the judicial exception itself (see MPEP 2106.05).
The step of causing an automated device to modify the physical article by applying the generated image to the physical article, identified in Step 2A, has been re-evaluated in Step 2B and determined to be well-understood, routine, conventional activity in the field. The specification (paragraph 0050) does not provide any indication that the controlling device and the printing of the image to the physical article are anything more than generic device printing, and the Ameranth court decision (MPEP 2106.05(g)) indicates that mere printing of an output to materials is a well-understood, routine, and conventional function when it is claimed in a merely generic manner.
As such, independent claims 1, 8, and 15 are ineligible.
Dependent claims 2, 4, 5, 7, 9, 10, 12, 13, 14, 16, and 18-23, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. § 101 because the additional recited limitations fail to establish that the claims are not directed to the same abstract idea as independent claims 1, 8, and 15, without significantly more.
Claim 2 recites wherein the computer program code is configured to further cause the processor to: receive feedback associated with the generated image; and train the image generation model using the received feedback. The limitation merely further limits the abstract idea and the inputs fed into the training, and therefore does not integrate the judicial exception into a practical application or recite significantly more.
Claim 4 recites wherein the received image request input includes at least one of a text input or a voice input. The limitation merely further limits the abstract idea, and therefore does not integrate the judicial exception into a practical application or recite significantly more.
Claim 5 recites wherein the trained image generation model is a stable diffusion model, and the stable diffusion model is enhanced using a diffusion control model and a textual inversion model. The limitation merely further limits the abstract idea and the training model type, and therefore does not integrate the judicial exception into a practical application or recite significantly more.
Claim 7 recites wherein
Claim 21 recites wherein the memory and the computer program code are further configured to cause the processor to: obtain the drawing input from graphical input supplied by the GUI and generated from user interaction with the GUI. The limitation merely further limits the abstract idea and the inputs received on a GUI, and therefore does not integrate the judicial exception into a practical application or recite significantly more.
Claim 22 recites wherein the computer-implemented physical article modification method further comprises: obtaining the drawing input from graphical input supplied by the GUI and generated from user interaction with the GUI. The limitation merely further limits the abstract idea and the inputs received on a GUI, and therefore does not integrate the judicial exception into a practical application or recite significantly more.
Claim 23 recites wherein the computer-executable instructions further cause the processor to: obtain the drawing input from graphical input supplied by the GUI and generated from user interaction with the GUI. The limitation merely further limits the abstract idea and the inputs received on a GUI, and therefore does not integrate the judicial exception into a practical application or recite significantly more.
Claims 9, 10, 12, 13, 14, 16, 18, 19, and 20 recite parallel claim language and therefore are also rejected for the reasons set forth above. For these reasons, claims 1, 2, 4, 5, 7-10, 12-16, and 18-23 are rejected under 35 U.S.C. § 101.
Subject Matter Free of Prior Art
Claims 1, 8, and 15 are determined to have overcome the prior art rejection and are free of the prior art; however, the claims remain rejected under 35 U.S.C. § 101, as set forth above. All dependent claims are also free of the prior art by virtue of dependency, but likewise remain rejected under 35 U.S.C. § 101.
Taking amended claim 1 as a representative claim, the claims as amended are found to overcome the prior art rejection for the reasons set forth below.
Claim 1 now recites the additional claimed features of:
display, on a display device of the GUI, an image that comprises an image of a physical article; select an image style; train an image generation model using the selected image style; receive, from the GUI, an image request input that comprises a drawing input; generate an image based on the received image request input using the trained image generation model, wherein the generated image is in the selected image style; output the generated image to the GUI in response to the received image request input, including updating the image displayed on the display device with the generated image superimposed on the image of the physical article; and cause an automated device to modify the physical article by applying the generated image to the physical article.
The closest prior art was found to be as follows:
Karpman US 11995803 discloses Training Method M100 and step M110, "Access a set of training images for a text to image diffusion model from a network"; and see [Col. 3 lines 15-35]: "Text-to-image diffusion model 112 can execute the base image diffusion model 120 (and the high-resolution diffusion models 116) on the assembled training set (e.g., text-image pairs) to infer and/or encode custom parameters for iteratively transforming randomly sampled visual noise into a visually appealing synthetic image that aligns with visual concepts described by a text prompt. By analyzing a large (e.g., million-scale, billion-scale) set of image-text pairs that meet quality standards enforced by the set of visual classifiers and captioner and filter modules, and by training and/or conditioning on outputs of multiple (different) text encoders 118, the text-to-image diffusion model 112 can therefore develop superior vision-language understanding and generalizability during training. Additionally, the system 102 can fine-tune the text-to-image diffusion model 112 using outputs of a human visual preference model (e.g., a reward model 114) trained on human input judgments of aesthetic quality and/or text-image alignment, thereby incorporating (simulated) human feedback on images generated by the text-to-image diffusion model 112 to further improve the model's performance on image generation tasks during operation." (And see [Col. 10 lines 8-31].)
[Col. 20 lines 55-65] The generation interface 400 also includes a style menu 404 that enables the user to browse and select among pre-set image styles for the image generation request. In the example of FIG. 4A, the style menu 404 displays a set (e.g., array) of style option tiles, each style option tile including a text description of the image style (e.g., anime, Van Gogh, oil painting, line drawing, digital art, etc.) and a sample image in the corresponding style.
[Col. 22 lines 57-65]: "In response to detecting a user input (e.g., a touch input, a click) on the generate affordance, the software application layer 124 can then construct an image generation request based on the text prompt entered into the interactive text box, and, if applicable, the style selected by the user, a negative prompt entered within the advanced settings menu, and/or a creativity level designated by the user within the advanced settings menu." However, the reference does not disclose: display, on a display device of the GUI, an image that comprises an image of a physical article; select an image style; receive, from the GUI, an image request input that comprises a drawing input; output the generated image to the GUI in response to the received image request input, including updating the image displayed on the display device with the generated image superimposed on the image of the physical article; and cause an automated device to modify the physical article by applying the generated image to the physical article, as recited in the claimed invention.
Bowen US 20200160612 discloses, in Figure 3B-2, an item with an added image that can be added to a cart for purchase and, in Figure 3D, the item in a user interface with prompts to add elements to an item for sale. However, the reference does not disclose: select an image style; train an image generation model using the selected image style; receive, from the GUI, an image request input that comprises a drawing input; generate an image based on the received image request input using the trained image generation model, wherein the generated image is in the selected image style; output the generated image to the GUI in response to the received image request input, including updating the image displayed on the display device with the generated image superimposed on the image of the physical article; and cause an automated device to modify the physical article by applying the generated image to the physical article, as recited in the claimed invention.
Jeong US 20210007459 discloses in [0048]: "When a drawing button 427 is selected, a drawing editor that enables a user to directly draw an image is loaded, so the user may directly generate an image." However, the reference does not disclose: select an image style; train an image generation model using the selected image style; generate an image based on the received image request input using the trained image generation model, wherein the generated image is in the selected image style; output the generated image to the GUI in response to the received image request input, including updating the image displayed on the display device with the generated image superimposed on the image of the physical article; and cause an automated device to modify the physical article by applying the generated image to the physical article, as recited in the claimed invention.
Wang US 20140169683 discloses in [0034]: "FIG. 1 shows a flowchart of an image retrieval method according to an embodiment of the present disclosure. The image retrieval method provided by the present disclosure includes the following operations: S11, detect an outline of an image and obtain an outline feature of the image; S12, generate an index list in an image database according to the outline feature; and S13, obtain a sketch input by a user and retrieve images containing the sketch from the index list. The Content-Based Information Retrieval (CBIR) method provided by the present disclosure can improve retrieval efficiency and achieve a highly precise retrieval performance. Before the retrieval, some information can be added by using human interaction methods so that a terminal device can find desired image information in a highly precise way." However, the reference does not disclose: select an image style; train an image generation model using the selected image style; generate an image based on the received image request input using the trained image generation model, wherein the generated image is in the selected image style; output the generated image to the GUI in response to the received image request input, including updating the image displayed on the display device with the generated image superimposed on the image of the physical article; and cause an automated device to modify the physical article by applying the generated image to the physical article, as recited in the claimed invention.
Harvill US 20110292451 discloses [0032] In step 102, a Customer uploads a finished image to an automated system. In an embodiment, the automated system is embodied in one or more computer programs that are hosted at one or more server computers of an online service. Each server computer may be structured as described herein in connection with FIG. 6. The term "Customer," in this context, refers broadly to any individual, user, or system that communicates electronically with the automated system. Step 102 may include a first input data communication or Request comprising an input image or plurality of images; a selection for a number of inks or key colors; a selection for the area that a key color must have in the input image; a selection to use or not to use halftones in making the product; and an agreement to buy, or not to buy, the product. The Request may be provided as a result of a plurality of requests and responses or other interactions between a user computer and one or more server computers and logical elements of the server computers. In various embodiments, one or more data items identified in the preceding sentence may be omitted, and other data items may be provided, in the Request. As part of step 102, the image is checked to verify that it meets resolution requirements. A response is provided to the Customer if the image does not have enough resolution for screen printing. The response contains references on how to correct the resolution of the image.
[0033] Once an acceptable image is received, at step 104 the automated process uses default settings to adapt the Customer image for screen printing. In step 106, the automated process performs special image processing actions to prepare the image and the system for use in screen printing. In an embodiment, in step 108 the image is limited to a selected number of colors using a process that constrains colors based on closeness in color space and the resolution of a given color feature, as further described in connection with FIG. 2A, FIG. 2B, FIG. 2C and in the section herein titled "Finding Key Color Components." Further, in an embodiment the image colors are matched to a set of metrics based on the inks used for printing. However, the reference does not disclose: display, on a display device of the GUI, an image that comprises an image of a physical article; select an image style; train an image generation model using the selected image style; receive, from the GUI, an image request input that comprises a drawing input; generate an image based on the received image request input using the trained image generation model, wherein the generated image is in the selected image style; or output the generated image to the GUI in response to the received image request input, as recited in the claimed invention.
The closest NPL of record was found to be Zhang, which discloses: "This paper presents ControlNet, an end-to-end neural network architecture that learns conditional controls for large pretrained text-to-image diffusion models (Stable Diffusion in our implementation). ControlNet preserves the quality and capabilities of the large model by locking its parameters, and also making a trainable copy of its encoding layers." (page 3837), and "Textual Inversion [21] and DreamBooth [74] can personalize content in the generated image by finetuning the image diffusion model using a small set of user-provided example images" (page 3838). However, the reference does not disclose the invention as claimed.
It was found that no reference, alone or in combination, anticipates, reasonably teaches, or renders obvious the below-noted features of Applicant's invention. The features of claim 1 (and parallel claims 8 and 15) that, in combination, overcome the prior art are:
display, on a display device of the GUI, an image that comprises an image of a physical article; select an image style; train an image generation model using the selected image style; receive, from the GUI, an image request input that comprises a drawing input; generate an image based on the received image request input using the trained image generation model, wherein the generated image is in the selected image style; output the generated image to the GUI in response to the received image request input, including updating the image displayed on the display device with the generated image superimposed on the image of the physical article; and cause an automated device to modify the physical article by applying the generated image to the physical article.
Therefore, none of the cited references disclose or render obvious each and every feature of the claimed invention, and the claimed invention is determined to be free of the prior art. Although the claimed features could individually be taught, any combination of references teaching the claimed limitations would rely on a piecemeal analysis, since the references would only be combined and deemed obvious based on knowledge gleaned from the applicant's disclosure. Such a reconstruction is improper (i.e., hindsight reasoning). See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971). The examiner emphasizes that it is the interrelationship of the limitations that renders these claims free of the prior art/additional art.
Therefore, it is hereby asserted by the Examiner, in light of the above, that the claims are free of the prior art: the references do not anticipate the claims, and no further modification of the references would be obvious to a person of ordinary skill in the art.
Response to Arguments
Applicant's arguments filed 12/4/2025 have been fully considered but, except as noted below, they are not persuasive.
The rejection under 35 U.S.C. § 101 with respect to reciting "signal per se" language is withdrawn in view of the claim amendment.
The prior art rejection has been withdrawn in view of the claim amendments. Reasons are set forth above in the "Subject Matter Free of Prior Art" section.
With respect to the remarks directed to Step 2A Prong One, the examiner maintains that the claims recite an abstract idea. The instant specification sets forth that the present disclosure is directed to systems and methods for personalizing articles or other items, such as those for sale in electronic commerce (e-commerce) (paragraphs 0013, 0020, 0021, 0029). This is accomplished through the use of trained models to generate an image based on inputs into the model, which is then printed on an item for sale.
With respect to the remarks directed to Step 2A Prong Two, in particular the GUI display, the updating of the image is merely the presentation of updated information on the GUI. This differs from Example 37, where the additional element of the memory was involved in determining where the icons would be placed, going beyond merely organizing and presenting displayed information.
Further, the alleged improvements are as follows:
- The user is able to see an image of the physical article to be modified, supply, via the GUI, an image request input that comprises a drawing input, and then see, via the GUI, what the modified article will look like.
- The claimed embodiments use the trained image generation model to generate an image based on the received image request input. In this way, what may be the user's crude illustration of what they would like to see applied to the physical article is refined by the trained image generation model.
- Two aspects are clearly apparent: 1) each claimed embodiment is incapable of being performed by a human, since it is impractical for a person to act with the skill, speed, and efficiency of a trained image generation model; and 2) when arranged as defined in the claims, the role of the trained image generation model is merely one step in the chain of article-modifying actions, and in this way the ordered combination of actions is integrated into a practical application that improves the functioning of, for example, automated physical article modification technology.
- The claimed "GUI" provides a user-friendly interface for technology that "cause[s] an automated device to modify the physical article by applying the generated image to the physical article."
These alleged improvements merely improve the information provided or displayed to the user in the GUI, not the computer or the technology itself. The alleged improvements thus lie in the abstract idea and do not integrate the judicial exception into a practical application.
For the same reasons set forth above, the claims do not integrate the judicial exception into a practical application, as the additional elements are generic computer elements and post-solution activity. The examiner notes that more detail as to the nature of the modification of the physical article, in concert with the preceding image processing steps, may provide a potential path to subject matter eligibility; however, a review of the specification by the examiner did not find more than a high-level discussion of the modification of the article by a generic controlling device ([0050] of the instant specification). The examiner welcomes further discussion on the matter.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VICTORIA E. FRUNZI, whose telephone number is (571) 270-1031. The examiner can normally be reached Monday through Friday, 7:00 am to 4:00 pm (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marissa Thein can be reached at (571) 272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
VICTORIA E. FRUNZI
Primary Examiner
Art Unit 3689
/VICTORIA E. FRUNZI/Primary Examiner, Art Unit 3689 2/24/2026