Prosecution Insights
Last updated: April 19, 2026
Application No. 18/564,802

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, DATA PRODUCTION METHOD, AND PROGRAM

Final Rejection (§103)
Filed: Nov 28, 2023
Examiner: WANG, JIN CHENG
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: NEC Corporation
OA Round: 2 (Final)
Grant Probability: 59% (Moderate)
OA Rounds: 3-4
To Grant: 3y 7m
With Interview: 69%

Examiner Intelligence

Career Allow Rate: 59% of resolved cases (492 granted / 832 resolved; -2.9% vs TC avg)
Interview Lift: +10.3% across resolved cases with interview (moderate, ~+10% lift)
Avg Prosecution: 3y 7m (typical timeline); 40 applications currently pending
Total Applications: 872 across all art units (career history)

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§103: 62.7% (+22.7% vs TC avg)
§102: 7.6% (-32.4% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)

Tech Center average is an estimate • Based on career data from 832 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicant’s amendment filed 9/26/2025 has been entered. Claims 1, 17, and 19 have been amended. Claims 13-16, 18, and 20-21 have been cancelled. Claims 1-12, 17, and 19 are pending in the current application.

Response to Arguments

Applicant’s arguments with respect to newly amended claim 1, filed 9/26/2025, against the previously cited Ojima reference have been considered but are moot in view of the newly cited Bowen ‘845 reference.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Bowen et al., US-PGPUB No. 2022/0075845 (hereinafter Bowen ‘845), in view of Bowen, US-PGPUB No. 2020/0159871 (hereinafter Bowen ‘871).

Re Claim 1: Bowen ‘845 teaches an information processing apparatus comprising: at least one processor, the at least one processor configured to carry out (Bowen ‘845 teaches FIG.
1B and Paragraph 0146 that the CAD system 102 includes a processing unit 120B): an acquisition process of acquiring an original image of a product that belongs to any of a plurality of classes ( Bowen ‘845 teaches at Paragraph [0057] The user may utilize the CAD system to select an object to customize (e.g., t-shirts, hoodies, shirts, jackets, dresses, pants, glasses, phone cases, laptop skins, backpacks, laptop cases, tablet cases, hairbands, wristbands, jewelry, digital content, and the like) from an interactive catalog of objects, and may then customize the object using design elements/templates from a library of design elements/templates (e.g., tournament/match brackets, sport paraphernalia (e.g., sport clothing, sports equipment (e.g., basketball, baseball, football, soccer ball, hockey puck, basketball hoop, basketball net, football goal post, hockey goal, baseball bat, hockey stick, etc.), team names, team logos, league names, league logos, and/or the like), and/or user-provided content (e.g., uploaded images or text). Bowen ‘845 teaches at Paragraph [0058] Where the customized object is a digital object (e.g., a displayable electronic image customized by the user), the user-customized object may be transmitted to and displayed by a display (e.g., a large screen display at a venue during an event) and/or shared via social media or other communication channels (e.g., short messaging service messages, email, or otherwise). Where the digital object is an e-card the user may customize the e-card using images select from image galleries, from video frames, or uploaded by the user as similarly described elsewhere herein. In addition, the user can select text from a text gallery and/or enter text. By way of illustration, a text gallery may include text for common or uncommon events (e.g., relating to birthdays, anniversaries, new babies, graduations, holidays, etc.). 
The user may be enabled to manually enter or select from a contact database recipients to whom the e-card is to be delivered (e.g., via email, short messaging service, etc.). The e-card may be delivered as an image file and/or as a link to the e-card, where the link may be used (e.g., by clicking on or otherwise activating the link) to access the e-card from a networked site. Bowen ‘845 teaches at Paragraph [0059] Optionally, the CAD system may enable an item (e.g., a product) provider to submit (e.g., via an upload or by providing a link) one or more images of the item (e.g., a photograph or graphic image of the front, back, left side, right side, top view, bottom view, and/or interior view of the item) and/or portions of the item (e.g., left sleeve, right sleeve, shoe lace, strap, etc.) for posting to an online interactive catalog of one or more items. The CAD system may enable certain customization options to be enabled for users and may enable the definition of certain areas of the item which may or may not be customized by users. Optionally, the system may enable an item provider to specify a custom, product-specific, announcement bar for each item. The announcement bar may comprise text positioned at the top, bottom, or side of an item detail page. The announcement bar may include information regarding current availability, a promotion/sale, or other information. 
Bowen ‘845 teaches at Paragraph 0035 enabling a first user to select a product image…cause a menu of design elements associated with the first design area to be displayed on the first user device and at Paragraph 0025 that a definition of a first template configured to be used to customize a first product type….the first template comprising: a plurality of slots corresponding to respective locations of the first product type, the plurality of slots configured to be populated with respective design elements, the plurality of slots associated with respective height/width ratio and receive, from an end user device, a selection from an online catalog, of a product of a second type; access a mapping of locations of the slots of the first template to respective locations on the product of the second type; modify two dimensions of at least one design element of at least one slot of the first template while maintaining a respective height/width ratio of the slot; use the mappings of locations of the slots of the first template to respective locations on the product of the second type to cause design elements associated with respective slots of the first template to be displayed on a rendering of the product of the second type at corresponding slot locations using the modified dimensions for at least one design element; and enable design elements associated with respective slots of the first template to be printed or embroidered on a physical instantiation of the product of the second type at corresponding slot locations using the modified dimensions for at least one design element. 
Bowen ‘845 teaches at Paragraph [0119] Optionally, user interfaces may be configured to respond to a user swipe gesture (e.g., a left or a right swipe gesture using one or more fingers) by replacing a currently displayed design element (e.g., a template) on an item model with another design element (e.g., another template in a set of templates), sometimes referred to herein as performing a swapping operation. Optionally, if a user has edited a first design element and then used a swipe gesture to replace the design element with a second design element, some or all of the edits made to the first design element (e.g., height edit, width edit, color edit, or the like) may be automatically applied to the second design element. Bowen ‘845 teaches at Paragraph [0217] A menu of design elements included in the collection(s) assigned to the slot may be generated. A user selection of an available design element from the design element menu is detected. The user may make the selection by tapping the design element. The image of the item, with the selected design element in the selected slot, is rendered on the user device, where the selected design element replaces the previous (e.g., default) design element in that slot. Optionally, design element controls are provided which enable the user to edit the design element (e.g., rotate, resize, move, change color, other edits described herein, etc.). The foregoing process may be repeated for the previously selected slot or for other slots. Bowen ‘845 teaches at Paragraph [0463] At block 2008, using the result of the image and analyses and the recommendation criteria, a determination may be made as to whether a recommendation is to be generated (e.g., a recommendation of an item, template, and/or design element). If a determination is made that a recommendation is to be made, at block 2010, one or more recommendations may be generated. 
For example, using the object label generated by the image analyzer, items, templates, and/or design elements may be identified using corresponding labels/metadata. For example, a data store may store metadata/labels that describe the subject matter of a given item, template, and/or design element that are available via an online catalog and/or a dedicated application. The labels associated with the user-provided image may be used to generate a search query (e.g., an SQL query) to identify matching items, templates, and/or design elements, some or all of which may then be included in a corresponding recommendation);

a determination process of determining a parameter that defines an image generation method (Bowen ‘845 teaches at Paragraph 0035 enabling a first user to select a product image…cause a menu of design elements associated with the first design area to be displayed on the first user device and at Paragraph 0025 that a definition of a first template configured to be used to customize a first product type….the first template comprising: a plurality of slots corresponding to respective locations of the first product type, the plurality of slots configured to be populated with respective design elements, the plurality of slots associated with respective height/width ratio and receive, from an end user device, a selection from an online catalog, of a product of a second type; access a mapping of locations of the slots of the first template to respective locations on the product of the second type; modify two dimensions of at least one design element of at least one slot of the first template while maintaining a respective height/width ratio of the slot; use the mappings of locations of the slots of the first template to respective locations on the product of the second type to cause design elements associated with respective slots of the first template to be displayed on a rendering of the product of the second type at corresponding slot locations using the
modified dimensions for at least one design element; and enable design elements associated with respective slots of the first template to be printed or embroidered on a physical instantiation of the product of the second type at corresponding slot locations using the modified dimensions for at least one design element. Bowen ‘845 teaches at Paragraph [0119] Optionally, user interfaces may be configured to respond to a user swipe gesture (e.g., a left or a right swipe gesture using one or more fingers) by replacing a currently displayed design element (e.g., a template) on an item model with another design element (e.g., another template in a set of templates), sometimes referred to herein as performing a swapping operation. Optionally, if a user has edited a first design element and then used a swipe gesture to replace the design element with a second design element, some or all of the edits made to the first design element (e.g., height edit, width edit, color edit, or the like) may be automatically applied to the second design element. Bowen ‘845 teaches at Paragraph [0217] A menu of design elements included in the collection(s) assigned to the slot may be generated. A user selection of an available design element from the design element menu is detected. The user may make the selection by tapping the design element. The image of the item, with the selected design element in the selected slot, is rendered on the user device, where the selected design element replaces the previous (e.g., default) design element in that slot. Optionally, design element controls are provided which enable the user to edit the design element (e.g., rotate, resize, move, change color, other edits described herein, etc.). The foregoing process may be repeated for the previously selected slot or for other slots. 
Bowen ‘845 teaches at Paragraph [0463] At block 2008, using the result of the image and analyses and the recommendation criteria, a determination may be made as to whether a recommendation is to be generated (e.g., a recommendation of an item, template, and/or design element). If a determination is made that a recommendation is to be made, at block 2010, one or more recommendations may be generated. For example, using the object label generated by the image analyzer, items, templates, and/or design elements may be identified using corresponding labels/metadata. For example, a data store may store metadata/labels that describe the subject matter of a given item, template, and/or design element that are available via an online catalog and/or a dedicated application. The labels associated with the user-provided image may be used to generate a search query (e.g., an SQL query) to identify matching items, templates, and/or design elements, some or all of which may then be included in a corresponding recommendation);

an image generation process of generating, from the original image, a new image corresponding to a product of a new type or a product of a new package with use of the parameter determined in the determination process (Bowen ‘845 teaches at Paragraph 0035 enabling a first user to select a product image…cause a menu of design elements associated with the first design area to be displayed on the first user device and at Paragraph 0025 that a definition of a first template configured to be used to customize a first product type….the first template comprising: a plurality of slots corresponding to respective locations of the first product type, the plurality of slots configured to be populated with respective design elements, the plurality of slots associated with respective height/width ratio and receive, from an end user device, a selection from an online catalog, of a product of a second type; access a mapping of locations of the slots of the first template
to respective locations on the product of the second type; modify two dimensions of at least one design element of at least one slot of the first template while maintaining a respective height/width ratio of the slot; use the mappings of locations of the slots of the first template to respective locations on the product of the second type to cause design elements associated with respective slots of the first template to be displayed on a rendering of the product of the second type at corresponding slot locations using the modified dimensions for at least one design element; and enable design elements associated with respective slots of the first template to be printed or embroidered on a physical instantiation of the product of the second type at corresponding slot locations using the modified dimensions for at least one design element. Bowen ‘845 teaches at Paragraph [0119] Optionally, user interfaces may be configured to respond to a user swipe gesture (e.g., a left or a right swipe gesture using one or more fingers) by replacing a currently displayed design element (e.g., a template) on an item model with another design element (e.g., another template in a set of templates), sometimes referred to herein as performing a swapping operation. Optionally, if a user has edited a first design element and then used a swipe gesture to replace the design element with a second design element, some or all of the edits made to the first design element (e.g., height edit, width edit, color edit, or the like) may be automatically applied to the second design element. Bowen ‘845 teaches at Paragraph [0217] A menu of design elements included in the collection(s) assigned to the slot may be generated. A user selection of an available design element from the design element menu is detected. The user may make the selection by tapping the design element. 
The image of the item, with the selected design element in the selected slot, is rendered on the user device, where the selected design element replaces the previous (e.g., default) design element in that slot. Optionally, design element controls are provided which enable the user to edit the design element (e.g., rotate, resize, move, change color, other edits described herein, etc.). The foregoing process may be repeated for the previously selected slot or for other slots. Bowen ‘845 teaches at Paragraph [0463] At block 2008, using the result of the image and analyses and the recommendation criteria, a determination may be made as to whether a recommendation is to be generated (e.g., a recommendation of an item, template, and/or design element). If a determination is made that a recommendation is to be made, at block 2010, one or more recommendations may be generated. For example, using the object label generated by the image analyzer, items, templates, and/or design elements may be identified using corresponding labels/metadata. For example, a data store may store metadata/labels that describe the subject matter of a given item, template, and/or design element that are available via an online catalog and/or a dedicated application. The labels associated with the user-provided image may be used to generate a search query (e.g., an SQL query) to identify matching items, templates, and/or design elements, some or all of which may then be included in a corresponding recommendation.). 
Bowen ‘845 at least implicitly teaches the claim limitation: a data generation process of generating data, the data including the new image and a label that is assigned to the new image and that corresponds to a class differing from a class to which the original image belongs, wherein a class of the plurality of classes is set for each type of product and/or for each package of the product, and each of the plurality of classes is assigned, as a label, an identifier for identifying the product ( Bowen ‘845 teaches at Paragraph 0035 enabling a first user to select a product image…cause a menu of design elements associated with the first design area to be displayed on the first user device and at Paragraph 0025 that a definition of a first template configured to be used to customize a first product type….the first template comprising: a plurality of slots corresponding to respective locations of the first product type, the plurality of slots configured to be populated with respective design elements, the plurality of slots associated with respective height/width ratio and receive, from an end user device, a selection from an online catalog, of a product of a second type; access a mapping of locations of the slots of the first template to respective locations on the product of the second type; modify two dimensions of at least one design element of at least one slot of the first template while maintaining a respective height/width ratio of the slot; use the mappings of locations of the slots of the first template to respective locations on the product of the second type to cause design elements associated with respective slots of the first template to be displayed on a rendering of the product of the second type at corresponding slot locations using the modified dimensions for at least one design element; and enable design elements associated with respective slots of the first template to be printed or embroidered on a physical instantiation of the product of the 
second type at corresponding slot locations using the modified dimensions for at least one design element. Bowen ‘845 teaches at Paragraph [0119] Optionally, user interfaces may be configured to respond to a user swipe gesture (e.g., a left or a right swipe gesture using one or more fingers) by replacing a currently displayed design element (e.g., a template) on an item model with another design element (e.g., another template in a set of templates), sometimes referred to herein as performing a swapping operation. Optionally, if a user has edited a first design element and then used a swipe gesture to replace the design element with a second design element, some or all of the edits made to the first design element (e.g., height edit, width edit, color edit, or the like) may be automatically applied to the second design element. Bowen ‘845 teaches at Paragraph [0217] A menu of design elements included in the collection(s) assigned to the slot may be generated. A user selection of an available design element from the design element menu is detected. The user may make the selection by tapping the design element. The image of the item, with the selected design element in the selected slot, is rendered on the user device, where the selected design element replaces the previous (e.g., default) design element in that slot. Optionally, design element controls are provided which enable the user to edit the design element (e.g., rotate, resize, move, change color, other edits described herein, etc.). The foregoing process may be repeated for the previously selected slot or for other slots. Bowen ‘845 teaches at FIG. 4 and at Paragraph 0250 that once the user has indicated that the user has finished editing the template on the front of the t-shirt, the rear of the t-shirt is displayed via the user interface with a rear side template including default text and/or images at respective slots overlaying the t-shirt. 
If the user selects a different charity from the available charities, the rear template will be populated in substantially real time with the corresponding text and images. Bowen ‘845 teaches at Paragraph 0314 that the end user may assign one or more labels to the customized template such as a t-shirt may be used on different types of items such as a t-shirt, hoodie, or backpack and at Paragraph 0315 an end user may assign a label to a given end user-provided design element and an end user may assign a label basketball to a basketball design element. Bowen ‘845 teaches at Paragraph [0319] that, the end user-provided labels may be used to create predefined categories (directed to related subject matter) of an online catalog that an end user can access via a table contents menu (where a given category may be assigned a name that is the same or similar to the labels of the included end user customized templates or end user-provided design elements). For example, end user customized templates with the label #anniversary may be organized into an anniversary category, and end user customized templates with the label #birthday may be organized into a birthday category. The end user may select a desired template from a given category of end user designed templates and use it to customize an item (optionally with further customization of template slots or colors). Bowen ‘845 teaches at Paragraph 0026 receiving from an end user a selection of a product of a second type). 
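For readers parsing the quoted slot-mapping disclosure, the "modify two dimensions while maintaining a height/width ratio" behavior can be illustrated with a minimal sketch; the function name and the fit-to-slot rule are illustrative assumptions, not taken from Bowen ‘845:

```python
def resize_to_slot(elem_w: float, elem_h: float, slot_w: float, slot_h: float):
    """Scale a design element's two dimensions to fit a target slot while
    preserving an aspect ratio, as one plausible reading of the quoted
    mapping of template slots onto a product of a second type.
    """
    # Largest uniform scale factor that keeps the element inside the slot.
    scale = min(slot_w / elem_w, slot_h / elem_h)
    return elem_w * scale, elem_h * scale
```

For example, a 200×100 element mapped into a 100×100 slot would be rendered at 100×50, keeping its 2:1 ratio.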
Bowen ‘871 teaches the claim limitation of a data generation process of generating data, the data including the new image and a label that is assigned to the new image and that corresponds to a class differing from a class to which the original image belongs, wherein a class of the plurality of classes is set for each type of product and/or for each package of the product, and each of the plurality of classes is assigned, as a label, an identifier for identifying the product (Bowen ‘871 teaches at Paragraph [0578] that a logo detection interface enables the user to specify whether logos (e.g., brand logos) are to be detected in end user-provided content, and if logos are detected, how the logos are to be handled, e.g., blurred, deleted, replaced with another logo, replaced with an image, replaced with text, etc.).

It would have been obvious to one of ordinary skill in the art before the filing date of the instant application to have incorporated Bowen ‘871's replacement of a logo in an image with another logo into the image creation method of Bowen ‘845, which teaches changing a parameter of an original image, so as to create a new image with a replaced logo. One of ordinary skill in the art would have been motivated to replace a logo in the original image.

Re Claim 17: Claim 17 parallels claim 1 in method form and is subject to the same rationale of rejection as claim 1.

Re Claim 19: Claim 19 parallels claim 1 in the form of a computer program product and is subject to the same rationale of rejection as claim 1. Moreover, Bowen ‘845 further teaches a computer-readable non-transitory storage medium storing a program for causing a computer to function as an information processing apparatus, the program causing the computer to carry out [the processes of claim 1] (Bowen ‘845 teaches at FIG.
1B and Paragraph 0145-0147 that the memory 128B may contain computer program instructions that the processing unit 120B may execute in order to implement one or more aspects of the present disclosure. The memory 128B generally includes RAM, ROM (and variants thereof, such as EEPROM) and/or other persistent or non-transitory computer-readable storage media. The memory 128B may store an operating system 132B that provides computer program instructions for use by the processing unit 120B in the general administration and operation of the CAD application module 134B, including its components. The memory 128B may store user accounts, including copies of a user's intellectual property assets (e.g., logos, brand names, photographs, graphics, animations, videos, sound files, stickers, tag lines, etc.) and groupings thereof (with associated group names).).

Claims 2-12 are rejected under 35 U.S.C. 103 as being unpatentable over Bowen et al., US-PGPUB No. 2022/0075845 (hereinafter Bowen ‘845), in view of Bowen, US-PGPUB No. 2020/0159871 (hereinafter Bowen ‘871), and Ojima et al., US-PGPUB No. 2024/0167965 (hereinafter Ojima).

Re Claim 2: Claim 2 encompasses the same scope of invention as claim 1, except for the additional claim limitation that the at least one processor further carries out a degree-of-difference determination process of deriving a degree of difference between the original image and the new image and comparing the degree of difference with a first threshold value.
Bowen ‘845 at least implicitly teaches the claim limitation that the at least one processor further carries out a degree-of-difference determination process of deriving a degree of difference between the original image and the new image and comparing the degree of difference with a first threshold value (Bowen ‘845 teaches at Paragraph [0298] By way of yet further example, color histograms may be generated for each design element, and the design color histogram may be compared to that of item colors to determine if the color distance rule is satisfied. Bowen ‘845 teaches at Paragraph [0299] Thus, for example, a rule may specify that the color distance of a design element from an item color must be greater than a specified threshold value to be utilized.)

Ojima further teaches the claim limitation that the at least one processor further carries out a degree-of-difference determination process of deriving a degree of difference between the original image and the new image and comparing the degree of difference with a first threshold value (Ojima teaches at Paragraph [0068] that the condition determiner 16 determines the degree of similarity (the larger the degree of similarity, the smaller the degree of difference) between the standard value and each of a plurality of standard values and selects, when the plurality of standard values includes any particular value having a high degree of similarity with the standard value, a candidate model N1, associated with the particular value and belonging to the plurality of candidate models N1, as the predetermined image creation model M1. To be more specific, the condition determiner 16 compares, with a threshold value, the absolute value (|p−p′|) of the difference (degree of difference) between a first standard value P11 (painting parameter p) of a discharge rate of interest and each of a plurality of first standard values P11 (painting parameters p′) associated with a plurality of candidate models N1.
When finding any candidate model N1, of which the absolute value (|p−p′|) is less than the threshold value (a smaller degree of difference is equal to a larger degree of similarity), in the first storage device 5, the condition determiner 16 selects the candidate model N1 as the image creation model M1. Ojima teaches at Paragraph [0069] that this condition determination processing is preferably performed by the condition determiner 16 as a preparatory step for the sample making work. When finding no candidate models N1, of which the absolute value (|p−p′|) is less than the threshold value, in the first storage device 5, the server 200 notifies the telecommunications device 9 to that effect. In that case, the maker H1 makes a new image creation model M1 and enters its information into the inspection assistance system 1 via the operating interface 3).

It would have been obvious to one of ordinary skill in the art before the filing date of the instant application to have incorporated Ojima's condition determination processing, which determines the difference between the new image and the old image, into the image creation method of Bowen ‘845 and Bowen ‘871, which teaches changing a parameter (e.g., a label/logo) of an original image, so as to create a new image with the changed parameter. One of ordinary skill in the art would have been motivated to change a parameter of the original image.

Re Claim 3: Claim 3 encompasses the same scope of invention as claim 2, except for the additional claim limitation that, in a case where the degree of difference derived in the degree-of-difference determination process is smaller than the first threshold value, in the determination process, the at least one processor changes the parameter.
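The |p − p′| threshold test attributed to Ojima's condition determiner 16 can be illustrated with a minimal sketch; the function name and the mapping-of-floats representation of candidate models are illustrative assumptions, not taken from the reference:

```python
def select_candidate_model(p: float, candidates: dict, threshold: float):
    """Return the name of a stored candidate model whose painting parameter
    p' satisfies |p - p'| < threshold, or None if no candidate qualifies
    (per the quoted passage, a new image creation model must then be made).

    Hypothetical sketch of the comparison quoted from Ojima [0068]-[0069].
    """
    for name, p_prime in candidates.items():
        # A smaller absolute difference corresponds to a larger similarity.
        if abs(p - p_prime) < threshold:
            return name
    return None
```

For example, `select_candidate_model(0.52, {"N1_a": 0.30, "N1_b": 0.50}, 0.05)` returns `"N1_b"`, since only that candidate's parameter lies within the threshold of the value of interest.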
Bowen ‘845 at least implicitly teaches the claim limitation that in a case where the degree of difference derived in the degree-of-difference determination process is smaller than the first threshold value, in the determination process, the at least one processor changes the parameter ( Bowen ‘845 teaches at [0293] Optionally, if a user is enabled to customize a slot with a user-provided image, the image color palette may be analyzed to determine whether the color of the user-provided image will be visible against the color of the item being customized. Bowen ‘845 teaches at Paragraph [0298] By way of yet further example, color histograms may be generated for each design element, and the design color histogram may be compared to that of item colors to determine if the color distance rule is satisfied. Bowen ‘845 teaches at Paragraph [0299] Thus, for example, a rule may specify that the color distance of a design element from an item color must be greater than a specified threshold value to be utilized. Bowen ‘845 teaches at Paragraph [0300] By way of illustration, when a user selects a template slot to customize via a computer aided design customization user interface, the corresponding color rules associated with the item color may be accessed from memory. The associated design element collections associated with the slot may be accessed from memory and presented via the user interface, with an indication as to which design elements may not be used with the current item color (e.g., where the indication may be provided by greying out or fading the prohibited design elements, where the prohibited design elements may be crossed out, and/or otherwise). If the user selects a different item color from an item color menu presented via the user interface, the corresponding color rules may be accessed and an indication as to which design elements may not be used with the new item color may be provided in real time. 
The resulting populated template may be printed or embroidered on an item (e.g., a clothing item, backpack, or other item).) Ojima further teaches the claim limitation that in a case where the degree of difference derived in the degree-of-difference determination process is smaller than the first threshold value, in the determination process, the at least one processor changes the parameter ( Ojima teaches at Paragraph [0068] that the condition determiner 16 determines the degree of similarity (the larger the degree of similarity, the smaller the degree of difference) between the standard value and each of a plurality of standard values and selects, when the plurality of standard values includes any particular value having a high degree of similarity with the standard value, a candidate model N1, associated with the particular value and belonging to the plurality of candidate models N1, as the predetermined image creation model M1. To be more specific, the condition determiner 16 compares, with a threshold value, the absolute value (|p−p′|) of the difference (degree of difference) between a first standard value P11 (painting parameter p) of a discharge rate of interest and each of a plurality of first standard values P11 (painting parameters p′) associated with a plurality of candidate models N1. When finding any candidate model N1, of which the absolute value (|p−p′|) is less than the threshold value (smaller degree of difference is equal to larger degree of similarity), in the first storage device 5, the condition determiner 16 selects the candidate model N1 as the image creation model M1. Ojima teaches at Paragraph [0069] that this condition determination processing is preferably performed by the condition determiner 16 as a preparatory step for the sample making work.
When finding no candidate models N1, of which the absolute value (|p−p′|) is less than the threshold value, in the first storage device 5, the server 200 notifies the telecommunications device 9 to that effect. In that case, the maker H1 makes a new image creation model M1 and enters its information into the inspection assistance system 1 via the operating interface 3. Ojima teaches at Paragraph [0071] that the learner 7 generates a learned model M2 (refer to FIG. 1) by using, as learning data, image data, to which a label is attached. The label indicates whether the surface condition (painting condition in this case) is good or bad and meets the inspection criteria set up by the criteria setter 14. Ojima teaches at Paragraph [0056] that when Equation (1) is adopted, the image creator 12 determines the respective RGB color densities (pixel values) of the first standard image A11 and the second standard image A12 to calculate the difference ΔI. The image creator 12 changes α, multiplies α by the difference ΔI every time α is changed, and adds the product to the image data I1 (color density) of the first standard image A11 that forms the basis, thereby creating a plurality of evaluation images B1 with respect to the target T1. FIG. 3A shows five evaluation images B1 created simply by Equation (1). Therefore, the respective color densities of the five evaluation images B1 increase progressively and linearly (proportionally) as the painting parameter p increases. Ojima teaches at Paragraph [0083] that, the painting system 300 also performs painting on another target T1 (which is provided separately from the target T1 in Step S1) under a second painting condition (including the second standard value P12) to make a real product (in Step S3). The image capture device 2 shoots the real product made under the second painting condition (in Step S4: generate second standard image A12). 
Then, the image capture device 2 transmits the second standard image A12 to the server 200 of the inspection assistance system 1. As a result, the image acquirer 11 of the processor 10 acquires the second standard image A12 about the target T1 for which the second standard value P12 has been set (image acquisition processing). Ojima teaches at Paragraph [0084] that the inspection assistance system 1 compares, with a threshold value, the absolute value (|p−p′|) of the difference between the first standard value P11 (painting parameter p) of interest and each of a plurality of first standard values P11 (painting parameters p′) associated with a plurality of candidate models N1. When finding any candidate model N1, of which the absolute value (|p−p′|) is less than the threshold value, in the first storage device 5 (if the answer is YES in Step S5), the inspection assistance system 1 selects the candidate model N1 as the image creation model M1 (in Step S6). Ojima teaches at Paragraph [0085] that, when finding no candidate models N1, of which the absolute value (|p−p′|) is less than the threshold value (if the answer is NO in Step S5), the inspection assistance system 1 notifies the maker H1 of the result. The maker H1 newly prepares an image creation model M1 and enters its information into the inspection assistance system 1. That is to say, the inspection assistance system 1 acquires the new image creation model M1 (in Step S7). Ojima teaches at Paragraph [0086] that the inspection assistance system 1 creates a plurality of evaluation images B1 about the target T1 by changing, by reference to the first and second standard values P11, P12, the condition parameter P1 based on the image creation model M1 (in Step S8: image creation processing).
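For reference, the cited Ojima flow can be sketched in a few lines: the |p−p′| threshold test that selects a stored candidate model (Steps S5-S7) and the Equation (1) creation of evaluation images by sweeping the parameter α. This is an editor's minimal illustration, not Ojima's implementation; the function names and the flat per-pixel representation are hypothetical.

```python
def select_model(p, candidates, threshold):
    """Ojima's condition determination (Steps S5-S7, paraphrased):
    return the first candidate model whose stored painting parameter
    p' satisfies |p - p'| < threshold (a smaller difference means a
    larger degree of similarity); return None when no candidate
    qualifies, i.e., when a new image creation model is needed."""
    for p_prime, model in candidates:
        if abs(p - p_prime) < threshold:
            return model
    return None


def create_evaluation_images(i1, delta_i, alphas):
    """Equation (1), paraphrased: each evaluation image B1 is the first
    standard image I1 plus alpha times the pixel-wise difference dI,
    so color density grows linearly as the painting parameter grows."""
    return [[pix + a * d for pix, d in zip(i1, delta_i)] for a in alphas]
```

For example, with a threshold of 0.5, a stored model at p′ = 5.2 would be selected for p = 5.0, while a lone model at p′ = 9.0 would fall through to the notification path (Step S7).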
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the instant application, to have incorporated Ojima’s condition determination processing, which determines the degree of difference between the new image and the original image, into the image creation method of Bowen ‘845, and to have further incorporated the teaching of Bowen ‘871 of changing a parameter (e.g., a label/logo) of an original image, so as to create a new image with the new parameter (e.g., a new label/logo) relative to the parameter of the original image. One of ordinary skill in the art would have been motivated to make the combination in order to create a new image by changing a parameter of the original image. Re Claim 4: Claim 4 encompasses the same scope of invention as that of claim 3 except for the additional claim limitation that, in a case where the degree of difference is smaller than the first threshold value, in the determination process, the at least one processor changes the parameter so that the degree of difference increases. Bowen ‘845 at least implicitly teaches the claim limitation that, in a case where the degree of difference is smaller than the first threshold value, in the determination process, the at least one processor changes the parameter so that the degree of difference increases ( Bowen ‘845 teaches at Paragraph [0293] Optionally, if a user is enabled to customize a slot with a user-provided image, the image color palette may be analyzed to determine whether the color of the user-provided image will be visible against the color of the item being customized. Bowen ‘845 teaches at Paragraph [0298] By way of yet further example, color histograms may be generated for each design element, and the design color histogram may be compared to that of item colors to determine if the color distance rule is satisfied.
Bowen ‘845 teaches at Paragraph [0299] Thus, for example, a rule may specify that the color distance of a design element from an item color must be greater than a specified threshold value to be utilized. Bowen ‘845 teaches at Paragraph [0300] By way of illustration, when a user selects a template slot to customize via a computer aided design customization user interface, the corresponding color rules associated with the item color may be accessed from memory. The associated design element collections associated with the slot may be accessed from memory and presented via the user interface, with an indication as to which design elements may not be used with the current item color (e.g., where the indication may be provided by greying out or fading the prohibited design elements, where the prohibited design elements may be crossed out, and/or otherwise). If the user selects a different item color from an item color menu presented via the user interface, the corresponding color rules may be accessed and an indication as to which design elements may not be used with the new item color may be provided in real time. The resulting populated template may be printed or embroidered on an item (e.g., a clothing item, backpack, or other item).) Ojima further teaches the claim limitation that, in a case where the degree of difference is smaller than the first threshold value, in the determination process, the at least one processor changes the parameter so that the degree of difference increases ( Ojima teaches at Paragraph [0084] that the inspection assistance system 1 compares, with a threshold value, the absolute value (|p−p′|) of the difference between the first standard value P11 (painting parameter p) of interest and each of a plurality of first standard values P11 (painting parameters p′) associated with a plurality of candidate models N1. 
When finding any candidate model N1, of which the absolute value (|p−p′|) is less than the threshold value, in the first storage device 5 (if the answer is YES in Step S5), the inspection assistance system 1 selects the candidate model N1 as the image creation model M1 (in Step S6). Ojima teaches at Paragraph [0085] that, when finding no candidate models N1, of which the absolute value (|p−p′|) is less than the threshold value (if the answer is NO in Step S5), the inspection assistance system 1 notifies the maker H1 of the result. The maker H1 newly prepares an image creation model M1 and enters its information into the inspection assistance system 1. Ojima teaches at Paragraph [0086] that the inspection assistance system 1 creates a plurality of evaluation images B1 about the target T1 by changing, by reference to the first and second standard values P11, P12, the condition parameter P1 based on the image creation model M1 (in Step S8: image creation processing). Ojima teaches at Paragraph [0130] that the inspection assistance system 1 creates the evaluation images B1 by changing the condition parameter P1 in an increasing direction from the origin). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the instant application, to have incorporated Ojima’s condition determination processing, which determines the degree of difference between the new image and the original image, into the image creation method of Bowen ‘845, and to have further incorporated the teaching of Bowen ‘871 of changing a parameter (e.g., a label/logo) of an original image, so as to create a new image with the new parameter (e.g., a new label/logo) relative to the parameter of the original image. One of ordinary skill in the art would have been motivated to make the combination in order to create a new image by changing a parameter of the original image.
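The color distance rule quoted from Bowen ‘845 (Paragraphs [0298]-[0299]) can be illustrated with a small sketch. The reference does not specify a binning scheme or a distance metric, so the coarse histogram, the L1 distance, and all names below are this editor's assumptions for illustration only.

```python
def color_histogram(pixels, bins=4):
    """Coarsely quantized RGB histogram: counts pixels per (r, g, b) cell."""
    hist = {}
    for r, g, b in pixels:
        key = (r * bins // 256, g * bins // 256, b * bins // 256)
        hist[key] = hist.get(key, 0) + 1
    return hist


def color_distance(h1, h2):
    """L1 distance between two histograms (one plausible reading of
    comparing 'the design color histogram ... to that of item colors')."""
    return sum(abs(h1.get(k, 0) - h2.get(k, 0)) for k in set(h1) | set(h2))


def design_element_allowed(design_pixels, item_pixels, threshold):
    """Paraphrase of the quoted rule ([0299]): the color distance of a
    design element from the item color must exceed the threshold for
    the element to be usable (i.e., visible against the item)."""
    return color_distance(color_histogram(design_pixels),
                          color_histogram(item_pixels)) > threshold
```

Under this sketch, a near-white design on a near-black item passes the rule, while a design whose palette matches the item color fails it (and would be greyed out per [0300]).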
Re Claim 5: Claim 5 encompasses the same scope of invention as that of claim 3 except for the additional claim limitation that, in a case where the degree of difference is smaller than the first threshold value, in the determination process, the at least one processor changes the parameter in a random manner. Bowen ‘845 at least implicitly teaches the claim limitation that, in a case where the degree of difference is smaller than the first threshold value, in the determination process, the at least one processor changes the parameter in a random manner ( Bowen ‘845 teaches at Paragraph [0293] Optionally, if a user is enabled to customize a slot with a user-provided image, the image color palette may be analyzed to determine whether the color of the user-provided image will be visible against the color of the item being customized. Bowen ‘845 teaches at Paragraph [0298] By way of yet further example, color histograms may be generated for each design element, and the design color histogram may be compared to that of item colors to determine if the color distance rule is satisfied. Bowen ‘845 teaches at Paragraph [0299] Thus, for example, a rule may specify that the color distance of a design element from an item color must be greater than a specified threshold value to be utilized. Bowen ‘845 teaches at Paragraph [0300] By way of illustration, when a user selects a template slot to customize via a computer aided design customization user interface, the corresponding color rules associated with the item color may be accessed from memory. The associated design element collections associated with the slot may be accessed from memory and presented via the user interface, with an indication as to which design elements may not be used with the current item color (e.g., where the indication may be provided by greying out or fading the prohibited design elements, where the prohibited design elements may be crossed out, and/or otherwise).
If the user selects a different item color from an item color menu presented via the user interface, the corresponding color rules may be accessed and an indication as to which design elements may not be used with the new item color may be provided in real time. The resulting populated template may be printed or embroidered on an item (e.g., a clothing item, backpack, or other item).) Ojima further teaches the claim limitation that, in a case where the degree of difference is smaller than the first threshold value, in the determination process, the at least one processor changes the parameter in a random manner ( Ojima teaches at Paragraph [0084] that the inspection assistance system 1 compares, with a threshold value, the absolute value (|p−p′|) of the difference between the first standard value P11 (painting parameter p) of interest and each of a plurality of first standard values P11 (painting parameters p′) associated with a plurality of candidate models N1. When finding any candidate model N1, of which the absolute value (|p−p′|) is less than the threshold value, in the first storage device 5 (if the answer is YES in Step S5), the inspection assistance system 1 selects the candidate model N1 as the image creation model M1 (in Step S6). Ojima teaches at Paragraph [0085] that, when finding no candidate models N1, of which the absolute value (|p−p′|) is less than the threshold value (if the answer is NO in Step S5), the inspection assistance system 1 notifies the maker H1 of the result. The maker H1 newly prepares an image creation model M1 and enters its information into the inspection assistance system 1. Ojima teaches at Paragraph [0086] that the inspection assistance system 1 creates a plurality of evaluation images B1 about the target T1 by changing, by reference to the first and second standard values P11, P12, the condition parameter P1 based on the image creation model M1 (in Step S8: image creation processing).
Ojima teaches at Paragraph [0130] that the inspection assistance system 1 creates the evaluation images B1 by changing the condition parameter P1 in an increasing direction from the origin. However, this is only an example and should not be construed as limiting. Alternatively, the inspection assistance system 1 may also create the evaluation images B1 by changing the condition parameter P1 in a decreasing direction from either the first standard value P11 or the second standard value P12, for example). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the instant application, to have incorporated Ojima’s condition determination processing, which determines the degree of difference between the new image and the original image, into the image creation method of Bowen ‘845, and to have further incorporated the teaching of Bowen ‘871 of changing a parameter (e.g., a label/logo) of an original image, so as to create a new image with the new parameter (e.g., a new label/logo) relative to the parameter of the original image. One of ordinary skill in the art would have been motivated to make the combination in order to create a new image by changing a parameter of the original image. Re Claim 6: Claim 6 encompasses the same scope of invention as that of claim 1 except for the additional claim limitation that the at least one processor further carries out a first identification process of deriving an identification result by inputting the new image into a model that identifies an image.
Bowen ‘845 at least implicitly teaches the claim limitation that, the at least one processor further carries out a first identification process of deriving an identification result by inputting the new image into a model that identifies an image ( Bowen ‘845 teaches at Paragraph [0077] At block 2104, a determination is made as to whether a candidate image of the prohibited subject matter is already present in a data store (e.g., a subject matter image data store). For example, images (which may be the actual images, and/or “facial fingerprints” of people or “fingerprints” of objects in the images, generated as similarly discussed elsewhere herein) in a data store may be stored in association with metadata that identifies one or more subject matters in the images (e.g., a name and/or related identifying data). The subject matter identifier and associated information may be compared to that stored in the data store to determine if a match is found. If the user submitted an image of the subject matter, a fingerprint of the subject matter in the user-provided image may be generated (as similarly discussed elsewhere herein) and compared against subject matter fingerprints stored in the data store. Bowen ‘845 teaches at Paragraph [0078] If a determination is made that one or more candidate images of the identified subject matter are in the data store (e.g., with a certain level of confidence as determined based on a similarity score generated at block 2104), at block 2110 one or more of the images may be accessed and provided for display on the user device via a user interface. The candidate image(s) may be displayed in association with a corresponding verification control, via which the user can indicate whether or not the image is an image of the subject matter the user intends to add to the blacklist. 
Bowen ‘845 teaches at Paragraph 0073 that if the user-provided image is to be used to customize a slot that prohibits the inclusion of certain faces or categories of faces, the facial fingerprint of the user-provided image may be compared to the facial fingerprints of prohibited specified people or categories of people. The closest matching face may be identified as a match. If the closest matching face is that of a prohibited specified person or prohibited category of people, a warning or rejection indication may be generated, inhibiting the use of the user-provided image in customizing the slot. Otherwise, the user-provided image may be used to customize the slot. Bowen ‘845 teaches at Paragraph [0327] The template may include a slot for a team logo, a slot for a player name, and a slot for a player number. The school name designated via the school field name may be utilized to select and access a school record from a database, and in turn, a team logo may be identified in the accessed school record. The identified team logo may be used to populate the team logo slot). Ojima further teaches the claim limitation that the at least one processor further carries out a first identification process of deriving an identification result by inputting the new image into a model that identifies an image (Ojima teaches at Paragraph [0071] that the learner 7 generates a learned model M2 (refer to FIG. 1) by using, as learning data, image data, to which a label is attached. The label indicates whether the surface condition (painting condition in this case) is good or bad and meets the inspection criteria set up by the criteria setter 14. [0074] The plurality of pieces of learning data are generated by labeling the plurality of evaluation images B1, which have been created under various painting conditions, as either OK or NG indicating the result of evaluation in accordance with the inspection criteria that has been set up by the criteria setter 14. 
That is to say, in the example shown in FIG. 4, learning data is generated by labeling the image data of the evaluation images B11-B13 as OK. In addition, learning data is also generated by labeling the image data of the evaluation images B14, B15 as NG. Optionally, learning data may also be generated by labeling the image data of the first standard image A11 as OK. In addition, learning data may be further generated by labeling the image data of the second standard image A12 as NG. [0075] That is to say, it can be said that if the plurality of evaluation images B1 are adopted as the learning data, the labeling work has already been done automatically at a point in time when the inspection criteria are set up. This reduces the chances of causing the user the trouble of newly generating or labeling learning data with respect to the inspection assistance system 1 via a user interface such as the operating interface 3. The learner 7 generates the learned model M2 by making, using a plurality of pieces of labeled learning data, machine learning about good and bad painting conditions of the target T1. The learned model M2 thus generated by the learner 7 is stored in the second storage device 6. [0076] The learner 7 may contribute to improving the performance of the learned model M2 by making re-learning using newly acquired labeled learning data (evaluation images B1). For example, if any evaluation image B1 is created under a new painting condition, then the learner 7 may be made to make re-learning about the new evaluation image B1). 
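Ojima's automatic labeling of evaluation images (Paragraph [0074]) reduces to a simple rule applied over the images and the painting conditions under which they were created. A minimal sketch, assuming, purely hypothetically, that the inspection criterion is a single upper limit on the condition parameter, mirroring the FIG. 4 example where B11-B13 are labeled OK and B14-B15 are labeled NG:

```python
def label_learning_data(evaluation_images, parameters, ok_limit):
    """Automatic OK/NG labeling of evaluation images (Ojima [0074],
    paraphrased). The inspection criterion is modeled, hypothetically,
    as a single upper limit on the condition parameter: images created
    at or below the limit are labeled OK, the rest NG. The resulting
    (image, label) pairs serve as learning data for the learner 7."""
    return [(img, "OK" if p <= ok_limit else "NG")
            for img, p in zip(evaluation_images, parameters)]
```

Because the labels follow mechanically from the criterion, no manual labeling pass is needed, which is the point the quoted Paragraph [0075] makes.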
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the instant application, to have incorporated Ojima’s condition determination processing, which determines the degree of difference between the new image and the original image, into the image creation method of Bowen ‘845, and to have further incorporated the teaching of Bowen ‘871 of changing a parameter (e.g., a label/logo) of an original image, so as to create a new image with the new parameter (e.g., a new label/logo) relative to the parameter of the original image. One of ordinary skill in the art would have been motivated to make the combination in order to create a new image by changing a parameter of the original image. Re Claim 7: Claim 7 encompasses the same scope of invention as that of claim 6 except for the additional claim limitation that, in a case where the identification result is a result such that a degree of similarity between the new image and the original image is smaller than a second threshold value, in the determination process, the at least one processor changes the parameter so that the similarity between the new image and the original image increases. Bowen ‘845 at least implicitly teaches the claim limitation that, in a case where the identification result is a result such that a degree of similarity between the new image and the original image is smaller than a second threshold value, in the determination process, the at least one processor changes the parameter so that the similarity between the new image and the original image increases ( Bowen ‘845 teaches at Paragraph [0077] At block 2104, a determination is made as to whether a candidate image of the prohibited subject matter is already present in a data store (e.g., a subject matter image data store).
For example, images (which may be the actual images, and/or “facial fingerprints” of people or “fingerprints” of objects in the images, generated as similarly discussed elsewhere herein) in a data store may be stored in association with metadata that identifies one or more subject matters in the images (e.g., a name and/or related identifying data). The subject matter identifier and associated information may be compared to that stored in the data store to determine if a match is found. If the user submitted an image of the subject matter, a fingerprint of the subject matter in the user-provided image may be generated (as similarly discussed elsewhere herein) and compared against subject matter fingerprints stored in the data store. Bowen ‘845 teaches at Paragraph [0078] If a determination is made that one or more candidate images of the identified subject matter are in the data store (e.g., with a certain level of confidence as determined based on a similarity score generated at block 2104), at block 2110 one or more of the images may be accessed and provided for display on the user device via a user interface. The candidate image(s) may be displayed in association with a corresponding verification control, via which the user can indicate whether or not the image is an image of the subject matter the user intends to add to the blacklist. Bowen ‘845 teaches at Paragraph 0073 that if the user-provided image is to be used to customize a slot that prohibits the inclusion of certain faces or categories of faces, the facial fingerprint of the user-provided image may be compared to the facial fingerprints of prohibited specified people or categories of people. The closest matching face may be identified as a match. If the closest matching face is that of a prohibited specified person or prohibited category of people, a warning or rejection indication may be generated, inhibiting the use of the user-provided image in customizing the slot. 
Otherwise, the user-provided image may be used to customize the slot. Bowen ‘845 teaches at Paragraph [0327] The template may include a slot for a team logo, a slot for a player name, and a slot for a player number. The school name designated via the school field name may be utilized to select and access a school record from a database, and in turn, a team logo may be identified in the accessed school record. The identified team logo may be used to populate the team logo slot). Ojima further teaches the claim limitation that, in a case where the identification result is a result such that a degree of similarity between the new image and the original image is smaller than a second threshold value, in the determination process, the at least one processor changes the parameter so that the similarity between the new image and the original image increases ( Ojima teaches at Paragraph [0084] that the inspection assistance system 1 compares, with a threshold value, the absolute value (|p−p′|) of the difference between the first standard value P11 (painting parameter p) of interest and each of a plurality of first standard values P11 (painting parameters p′) associated with a plurality of candidate models N1. When finding any candidate model N1, of which the absolute value (|p−p′|) is less than the threshold value, in the first storage device 5 (if the answer is YES in Step S5), the inspection assistance system 1 selects the candidate model N1 as the image creation model M1 (in Step S6). Ojima teaches at Paragraph [0085] that, when finding no candidate models N1, of which the absolute value (|p−p′|) is less than the threshold value (if the answer is NO in Step S5), the inspection assistance system 1 notifies the maker H1 of the result. The maker H1 newly prepares an image creation model M1 and enters its information into the inspection assistance system 1. 
Ojima teaches at Paragraph [0086] that the inspection assistance system 1 creates a plurality of evaluation images B1 about the target T1 by changing, by reference to the first and second standard values P11, P12, the condition parameter P1 based on the image creation model M1 (in Step S8: image creation processing). Ojima teaches at Paragraph [0130] that the inspection assistance system 1 creates the evaluation images B1 by changing the condition parameter P1 in an increasing direction from the origin). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the instant application, to have incorporated Ojima’s condition determination processing, which determines the degree of difference between the new image and the original image, into the image creation method of Bowen ‘845, and to have further incorporated the teaching of Bowen ‘871 of changing a parameter (e.g., a label/logo) of an original image, so as to create a new image with the new parameter (e.g., a new label/logo) relative to the parameter of the original image. One of ordinary skill in the art would have been motivated to make the combination in order to create a new image by changing a parameter of the original image. Re Claim 8: Claim 8 encompasses the same scope of invention as that of claim 6 except for the additional claim limitation that the identification result includes a class into which the new image is classified, and in a case where the identification result is a result such that the new image is classified into a class differing from the class to which the original image belongs, in the determination process, the at least one processor changes the parameter so that the similarity between the new image and the original image increases.
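The claim 7 limitation being mapped, that a degree of similarity below a second threshold value triggers a parameter change that increases similarity, can be sketched as a simple feedback loop. This is an editor's illustration of the claim language only, not a disclosure of either reference; the similarity function, the step size, and the assumption that decreasing the parameter increases similarity are all hypothetical.

```python
def adjust_parameter(param, similarity_fn, second_threshold,
                     step=1, max_iters=100):
    """Feedback-loop sketch of the claim 7 limitation: while the
    identification result reports a degree of similarity below the
    second threshold, change the parameter in the direction assumed
    (hypothetically) to increase similarity to the original image."""
    for _ in range(max_iters):
        if similarity_fn(param) >= second_threshold:
            break  # new image is similar enough to the original
        param -= step  # hypothetical: smaller parameter => more similar
    return param
```

For instance, with a toy similarity function that decreases linearly with the magnitude of the parameter, the loop steps the parameter down until the similarity reading first meets the threshold.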
Bowen ‘845 at least implicitly teaches the claim limitation that, the identification result includes a class into which the new image is classified, and in a case where the identification result is a result such that the new image is classified into a class differing from the class to which the original image belongs, in the determination process, the at least one processor changes the parameter so that the similarity between the new image and the original image increases ( Bowen ‘845 teaches at Paragraph [0077] At block 2104, a determination is made as to whether a candidate image of the prohibited subject matter is already present in a data store (e.g., a subject matter image data store). For example, images (which may be the actual images, and/or “facial fingerprints” of people or “fingerprints” of objects in the images, generated as similarly discussed elsewhere herein) in a data store may be stored in association with metadata that identifies one or more subject matters in the images (e.g., a name and/or related identifying data). The subject matter identifier and associated information may be compared to that stored in the data store to determine if a match is found. If the user submitted an image of the subject matter, a fingerprint of the subject matter in the user-provided image may be generated (as similarly discussed elsewhere herein) and compared against subject matter fingerprints stored in the data store. Bowen ‘845 teaches at Paragraph [0078] If a determination is made that one or more candidate images of the identified subject matter are in the data store (e.g., with a certain level of confidence as determined based on a similarity score generated at block 2104), at block 2110 one or more of the images may be accessed and provided for display on the user device via a user interface. 
The candidate image(s) may be displayed in association with a corresponding verification control, via which the user can indicate whether or not the image is an image of the subject matter the user intends to add to the blacklist. Bowen ‘845 teaches at Paragraph [0073] that if the user-provided image is to be used to customize a slot that prohibits the inclusion of certain faces or categories of faces, the facial fingerprint of the user-provided image may be compared to the facial fingerprints of prohibited specified people or categories of people. The closest matching face may be identified as a match. If the closest matching face is that of a prohibited specified person or prohibited category of people, a warning or rejection indication may be generated, inhibiting the use of the user-provided image in customizing the slot. Otherwise, the user-provided image may be used to customize the slot. Bowen ‘845 teaches at Paragraph [0327] The template may include a slot for a team logo, a slot for a player name, and a slot for a player number. The school name designated via the school field name may be utilized to select and access a school record from a database, and in turn, a team logo may be identified in the accessed school record. The identified team logo may be used to populate the team logo slot). Ojima further teaches the claim limitation that the identification result includes a class into which the new image is classified, and that, in a case where the new image is classified into a class differing from the class to which the original image belongs, the at least one processor changes the parameter so that the similarity between the new image and the original image increases (Ojima teaches at Paragraph [0071] that the learner 7 generates a learned model M2 (refer to FIG. 1) by using, as learning data, image data, to which a label is attached. The label indicates whether the surface condition (painting condition in this case) is good or bad and meets the inspection criteria set up by the criteria setter 14.
[0074] The plurality of pieces of learning data are generated by labeling the plurality of evaluation images B1, which have been created under various painting conditions, as either OK or NG indicating the result of evaluation in accordance with the inspection criteria that has been set up by the criteria setter 14. That is to say, in the example shown in FIG. 4, learning data is generated by labeling the image data of the evaluation images B11-B13 as OK. In addition, learning data is also generated by labeling the image data of the evaluation images B14, B15 as NG. Optionally, learning data may also be generated by labeling the image data of the first standard image A11 as OK. In addition, learning data may be further generated by labeling the image data of the second standard image A12 as NG. [0075] That is to say, it can be said that if the plurality of evaluation images B1 are adopted as the learning data, the labeling work has already been done automatically at a point in time when the inspection criteria are set up. This reduces the chances of causing the user the trouble of newly generating or labeling learning data with respect to the inspection assistance system 1 via a user interface such as the operating interface 3. The learner 7 generates the learned model M2 by making, using a plurality of pieces of labeled learning data, machine learning about good and bad painting conditions of the target T1. The learned model M2 thus generated by the learner 7 is stored in the second storage device 6. [0076] The learner 7 may contribute to improving the performance of the learned model M2 by making re-learning using newly acquired labeled learning data (evaluation images B1). For example, if any evaluation image B1 is created under a new painting condition, then the learner 7 may be made to make re-learning about the new evaluation image B1. 
Ojima teaches at Paragraph [0084] that the inspection assistance system 1 compares, with a threshold value, the absolute value (|p−p′|) of the difference between the first standard value P11 (painting parameter p) of interest and each of a plurality of first standard values P11 (painting parameters p′) associated with a plurality of candidate models N1. When finding any candidate model N1, of which the absolute value (|p−p′|) is less than the threshold value, in the first storage device 5 (if the answer is YES in Step S5), the inspection assistance system 1 selects the candidate model N1 as the image creation model M1 (in Step S6). Ojima teaches at Paragraph [0085] that, when finding no candidate models N1, of which the absolute value (|p−p′|) is less than the threshold value (if the answer is NO in Step S5), the inspection assistance system 1 notifies the maker H1 of the result. The maker H1 newly prepares an image creation model M1 and enters its information into the inspection assistance system 1. Ojima teaches at Paragraph [0086] that the inspection assistance system 1 creates a plurality of evaluation images B1 about the target T1 by changing, by reference to the first and second standard values P11, P12, the condition parameter P1 based on the image creation model M1 in Step S8: image creation processing. Ojima teaches at Paragraph [0130] that the inspection assistance system 1 creates the evaluation images B1 by changing the condition parameter P1 in an increasing direction from the origin).
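The candidate-model selection Ojima describes at paragraphs [0084]-[0085] can be sketched as follows. This is an illustrative reconstruction only: the function name, the stored parameter values, and the choice to prefer the closest candidate under the threshold are assumptions, not part of Ojima's disclosure.

```python
# Illustrative sketch of Ojima's selection step ([0084]-[0085]): compare
# |p - p'| against a threshold for each stored candidate model N1, select
# the closest candidate under the threshold (Step S5 = YES -> Step S6),
# and otherwise return None (Step S5 = NO, so a new image creation model
# must be prepared). All names and values are hypothetical.

def select_candidate_model(p, candidates, threshold):
    """Return the candidate id whose stored parameter p' is closest to p,
    provided |p - p'| < threshold; otherwise return None."""
    best, best_diff = None, None
    for model_id, p_prime in candidates.items():
        diff = abs(p - p_prime)
        if diff < threshold and (best_diff is None or diff < best_diff):
            best, best_diff = model_id, diff
    return best

# Hypothetical usage: three stored candidate models keyed by identifier.
candidates = {"N1a": 0.90, "N1b": 1.45, "N1c": 2.10}
print(select_candidate_model(1.50, candidates, threshold=0.10))  # N1b
print(select_candidate_model(3.00, candidates, threshold=0.10))  # None
```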
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to have incorporated Ojima’s condition determination processing, which determines the difference between the new image and the original image, into the image creation method of Bowen ‘845. Bowen ‘871 teaches changing a parameter (e.g., a label/logo) of an original image, and it would have been obvious to have further incorporated the feature of creating a new image with a new parameter (e.g., a new label/logo) relative to the parameter of the original image, thereby creating a new image with the changed parameter. One of ordinary skill in the art would have been motivated to have changed a parameter of the original image. Re Claim 9: Claim 9 encompasses the same scope of invention as claim 8 except for the additional claim limitation that the identification result includes a class into which the new image is classified and a degree of reliability related to the classification into the class, and in a case where the identification result is a result such that the new image is classified into a class differing from the class to which the original image belongs and that the degree of reliability related to the classification into the class is larger than a third threshold value, in the determination process, the at least one processor changes the parameter so that the similarity between the new image and the original image increases.
Bowen ‘845 at least implicitly teaches the claim limitation that the identification result includes a class into which the new image is classified and a degree of reliability related to the classification into the class, and in a case where the identification result is a result such that the new image is classified into a class differing from the class to which the original image belongs and that the degree of reliability related to the classification into the class is larger than a third threshold value, in the determination process, the at least one processor changes the parameter so that the similarity between the new image and the original image increases ( Bowen ‘845 teaches at Paragraph [0077] At block 2104, a determination is made as to whether a candidate image of the prohibited subject matter is already present in a data store (e.g., a subject matter image data store). For example, images (which may be the actual images, and/or “facial fingerprints” of people or “fingerprints” of objects in the images, generated as similarly discussed elsewhere herein) in a data store may be stored in association with metadata that identifies one or more subject matters in the images (e.g., a name and/or related identifying data). The subject matter identifier and associated information may be compared to that stored in the data store to determine if a match is found. If the user submitted an image of the subject matter, a fingerprint of the subject matter in the user-provided image may be generated (as similarly discussed elsewhere herein) and compared against subject matter fingerprints stored in the data store. 
Bowen ‘845 teaches at Paragraph [0078] If a determination is made that one or more candidate images of the identified subject matter are in the data store (e.g., with a certain level of confidence as determined based on a similarity score generated at block 2104), at block 2110 one or more of the images may be accessed and provided for display on the user device via a user interface. The candidate image(s) may be displayed in association with a corresponding verification control, via which the user can indicate whether or not the image is an image of the subject matter the user intends to add to the blacklist. Bowen ‘845 teaches at Paragraph [0073] that if the user-provided image is to be used to customize a slot that prohibits the inclusion of certain faces or categories of faces, the facial fingerprint of the user-provided image may be compared to the facial fingerprints of prohibited specified people or categories of people. The closest matching face may be identified as a match. If the closest matching face is that of a prohibited specified person or prohibited category of people, a warning or rejection indication may be generated, inhibiting the use of the user-provided image in customizing the slot. Otherwise, the user-provided image may be used to customize the slot. Bowen ‘845 teaches at Paragraph [0327] The template may include a slot for a team logo, a slot for a player name, and a slot for a player number. The school name designated via the school field name may be utilized to select and access a school record from a database, and in turn, a team logo may be identified in the accessed school record. The identified team logo may be used to populate the team logo slot).
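The fingerprint comparison Bowen ‘845 describes at paragraphs [0073] and [0077]-[0078] — generating a fingerprint for the user-provided image, finding the closest stored match, and treating it as a match only at a certain level of confidence — can be sketched as follows. Cosine similarity over fixed-length fingerprint vectors, the 0.9 threshold, and all names are illustrative assumptions, not Bowen ‘845's disclosure.

```python
import math

# Minimal sketch of closest-match fingerprint comparison against a store of
# prohibited-subject fingerprints (cf. Bowen '845 [0073], [0077]-[0078]):
# take the closest match and report it only when the similarity score clears
# a confidence threshold. Representation and threshold are assumptions.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def closest_prohibited_match(query, store, threshold=0.9):
    """Return (subject_id, score) of the closest match above threshold,
    else None (i.e., the image may be used to customize the slot)."""
    best_id, best_score = None, -1.0
    for subject_id, fingerprint in store.items():
        score = cosine(query, fingerprint)
        if score > best_score:
            best_id, best_score = subject_id, score
    if best_score >= threshold:
        return best_id, best_score  # confident match -> warn/reject
    return None

# Hypothetical store of two prohibited-subject fingerprints.
store = {"person_a": [1.0, 0.0, 0.0], "person_b": [0.0, 1.0, 0.0]}
print(closest_prohibited_match([0.99, 0.05, 0.0], store))
print(closest_prohibited_match([0.5, 0.5, 0.7], store))  # None
```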
Ojima further teaches the claim limitation that the identification result includes a class into which the new image is classified and a degree of reliability related to the classification into the class, and in a case where the identification result is a result such that the new image is classified into a class differing from the class to which the original image belongs and that the degree of reliability related to the classification into the class is larger than a third threshold value, in the determination process, the at least one processor changes the parameter so that the similarity between the new image and the original image increases ( Ojima teaches at Paragraph [0071] that the learner 7 generates a learned model M2 (refer to FIG. 1) by using, as learning data, image data, to which a label is attached. The label indicates whether the surface condition (painting condition in this case) is good or bad and meets the inspection criteria set up by the criteria setter 14. [0074] The plurality of pieces of learning data are generated by labeling the plurality of evaluation images B1, which have been created under various painting conditions, as either OK or NG indicating the result of evaluation in accordance with the inspection criteria that has been set up by the criteria setter 14. That is to say, in the example shown in FIG. 4, learning data is generated by labeling the image data of the evaluation images B11-B13 as OK. In addition, learning data is also generated by labeling the image data of the evaluation images B14, B15 as NG. Optionally, learning data may also be generated by labeling the image data of the first standard image A11 as OK. In addition, learning data may be further generated by labeling the image data of the second standard image A12 as NG. 
[0075] That is to say, it can be said that if the plurality of evaluation images B1 are adopted as the learning data, the labeling work has already been done automatically at a point in time when the inspection criteria are set up. This reduces the chances of causing the user the trouble of newly generating or labeling learning data with respect to the inspection assistance system 1 via a user interface such as the operating interface 3. The learner 7 generates the learned model M2 by making, using a plurality of pieces of labeled learning data, machine learning about good and bad painting conditions of the target T1. The learned model M2 thus generated by the learner 7 is stored in the second storage device 6. [0076] The learner 7 may contribute to improving the performance of the learned model M2 by making re-learning using newly acquired labeled learning data (evaluation images B1). For example, if any evaluation image B1 is created under a new painting condition, then the learner 7 may be made to make re-learning about the new evaluation image B1. Ojima teaches at Paragraph [0084] that the inspection assistance system 1 compares, with a threshold value, the absolute value (|p−p′|) of the difference between the first standard value P11 (painting parameter p) of interest and each of a plurality of first standard values P11 (painting parameters p′) associated with a plurality of candidate models N1. When finding any candidate model N1, of which the absolute value (|p−p′|) is less than the threshold value, in the first storage device 5 (if the answer is YES in Step S5), the inspection assistance system 1 selects the candidate model N1 as the image creation model M1 (in Step S6). Ojima teaches at Paragraph [0085] that, when finding no candidate models N1, of which the absolute value (|p−p′|) is less than the threshold value (if the answer is NO in Step S5), the inspection assistance system 1 notifies the maker H1 of the result. 
The maker H1 newly prepares an image creation model M1 and enters its information into the inspection assistance system 1. Ojima teaches at Paragraph [0086] that the inspection assistance system 1 creates a plurality of evaluation images B1 about the target T1 by changing, by reference to the first and second standard values P11, P12, the condition parameter P1 based on the image creation model M1 in Step S8: image creation processing. Ojima teaches at Paragraph [0130] that the inspection assistance system 1 creates the evaluation images B1 by changing the condition parameter P1 in an increasing direction from the origin). It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to have incorporated Ojima’s condition determination processing, which determines the difference between the new image and the original image, into the image creation method of Bowen ‘845. Bowen ‘871 teaches changing a parameter (e.g., a label/logo) of an original image, and it would have been obvious to have further incorporated the feature of creating a new image with a new parameter (e.g., a new label/logo) relative to the parameter of the original image, thereby creating a new image with the changed parameter. One of ordinary skill in the art would have been motivated to have changed a parameter of the original image. Re Claim 10: Claim 10 encompasses the same scope of invention as claim 1 except for the additional claim limitation that, in the image generation process, the at least one processor generates the new image with use of at least one selected from the group consisting of conversion of at least one or some of colors, replacement of at least one or some of characters, style conversion, interpolation by an image generation model, replacement or superimposition of a portion of an image.
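One of the operations claim 10 recites — conversion of at least one or some of the colors of the original image — can be illustrated with a minimal sketch. The RGB-triple pixel representation, the function name, and the color mapping are assumptions for illustration only.

```python
# Minimal sketch of one claim 10 operation (conversion of some colors):
# produce a new image by mapping selected source colors of the original
# image to replacement colors. Pixels are represented as RGB triples;
# the mapping and sample image are hypothetical.

def convert_colors(image, mapping):
    """Return a new image with each pixel replaced per `mapping`;
    pixels not in the mapping are kept unchanged."""
    return [[mapping.get(px, px) for px in row] for row in image]

RED, BLUE, WHITE = (255, 0, 0), (0, 0, 255), (255, 255, 255)
original = [[RED, WHITE],
            [WHITE, RED]]
new_image = convert_colors(original, {RED: BLUE})
print(new_image)  # red pixels become blue; white pixels are unchanged
```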
Bowen ‘845 at least implicitly teaches the claim limitation that, in the image generation process, the at least one processor generates the new image with use of at least one selected from the group consisting of conversion of at least one or some of colors, replacement of at least one or some of characters, style conversion, interpolation by an image generation model, replacement or superimposition of a portion of an image (Bowen ‘845 teaches at Paragraph [0077] At block 2104, a determination is made as to whether a candidate image of the prohibited subject matter is already present in a data store (e.g., a subject matter image data store). For example, images (which may be the actual images, and/or “facial fingerprints” of people or “fingerprints” of objects in the images, generated as similarly discussed elsewhere herein) in a data store may be stored in association with metadata that identifies one or more subject matters in the images (e.g., a name and/or related identifying data). The subject matter identifier and associated information may be compared to that stored in the data store to determine if a match is found. If the user submitted an image of the subject matter, a fingerprint of the subject matter in the user-provided image may be generated (as similarly discussed elsewhere herein) and compared against subject matter fingerprints stored in the data store.
Bowen ‘845 teaches at Paragraph [0078] If a determination is made that one or more candidate images of the identified subject matter are in the data store (e.g., with a certain level of confidence as determined based on a similarity score generated at block 2104), at block 2110 one or more of the images may be accessed and provided for display on the user device via a user interface. The candidate image(s) may be displayed in association with a corresponding verification control, via which the user can indicate whether or not the image is an image of the subject matter the user intends to add to the blacklist. Bowen ‘845 teaches at Paragraph [0119] Optionally, user interfaces may be configured to respond to a user swipe gesture (e.g., a left or a right swipe gesture using one or more fingers) by replacing a currently displayed design element (e.g., a template) on an item model with another design element (e.g., another template in a set of templates), sometimes referred to herein as performing a swapping operation. Optionally, if a user has edited a first design element and then used a swipe gesture to replace the design element with a second design element, some or all of the edits made to the first design element (e.g., height edit, width edit, color edit, or the like) may be automatically applied to the second design element. Bowen ‘845 teaches at Paragraph [0217] A menu of design elements included in the collection(s) assigned to the slot may be generated. A user selection of an available design element from the design element menu is detected. The user may make the selection by tapping the design element. The image of the item, with the selected design element in the selected slot, is rendered on the user device, where the selected design element replaces the previous (e.g., default) design element in that slot. 
Optionally, design element controls are provided which enable the user to edit the design element (e.g., rotate, resize, move, change color, other edits described herein, etc.). The foregoing process may be repeated for the previously selected slot or for other slots). Ojima further teaches the claim limitation that in the image generation process, the at least one processor generates the new image with use of at least one selected from the group consisting of conversion of at least one or some of colors, replacement of at least one or some of characters, style conversion, interpolation by an image generation model, replacement or superimposition of a portion of an image ( Ojima teaches at FIG. 3A and Paragraph [0062] that the criteria setter 14 locates a boundary where the results of evaluations with respect to the plurality of evaluation images B1 that are arranged in line change from OK into NG. Specifically, the criteria setter 14 sets the inspection criteria at an evaluation image B1, of which the result of evaluation is OK but is closest to NG (i.e., the evaluation image B13 in the example shown in FIG. 4). The processor 10 stores, in the storage device (such as the first storage device 5), information about the image data 13 (color density) and inspection criteria value P13 (third painting condition) of the evaluation image B13 that has turned out to be the inspection criteria. Ojima teaches at Paragraph [0050] The image creation model M1 is a function model that uses the condition parameter P1 as a variable. In this case, supposing a painting parameter as the condition parameter P1 is p (variable), the image data I (color density) of the evaluation image B1 is determined by a function f(p) (image creation model M1). That is to say, the function f(p) (approximately) defines the characteristic of a variation in RGB color density with respect to the discharge rate (condition parameter P1) for one layer (e.g., the third layer). 
The function f(p) is obtained by either verification by measurement or simulation, for example. Information about the image creation model M1 is stored in advance in the first storage device 5. Ojima teaches at Paragraph [0051] To make the following description easily understandable, the evaluation images B1 are supposed to be created with only the discharge rate (condition parameter P1) for the third layer changed as a condition parameter P1 of interest and with the condition parameters P1 for the other layers, such as discharge rates, the number of times of overcoating, and the atomization pressure, fixed at standard values, as far as the painting condition is concerned. However, this is only an example and should not be construed as limiting. Alternatively, the evaluation images B1 may also be created with two or more condition parameters P1 changed in parallel. For example, if the discharge rate and the number of times of overcoating are changed in parallel, then a function f(p) defining the characteristic of a variation in color density with respect to the discharge rate and the number of times of overcoating may be prepared for the image data I (color density) of the evaluation images B1. Ojima teaches at Paragraph [0086] that the inspection assistance system 1 creates a plurality of evaluation images B1 about the target T1 by changing, by reference to the first and second standard values P11, P12, the condition parameter P1 based on the image creation model M1 in Step S8: image creation processing). 
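Ojima's image-creation processing — a function model f(p) mapping the condition parameter P1 (e.g., discharge rate) to image data I (color density) at [0050], used in Step S8 at [0086] to create evaluation images B1 between the standard values P11 and P12 — can be sketched as follows. The linear form of f, the coefficients, and the step count are illustrative assumptions; Ojima obtains f(p) by measurement or simulation.

```python
# Sketch of creating evaluation images by sweeping a condition parameter p
# through a function model f(p) between two standard values (cf. Ojima
# [0050], [0086], Step S8). The linear f and all values are hypothetical.

def f(p):
    # Hypothetical characteristic of color density vs. discharge rate.
    return 10.0 + 42.5 * p

def create_evaluation_images(p11, p12, steps):
    """Return (p, f(p)) pairs for `steps` evenly spaced condition values
    from p11 to p12 inclusive (steps >= 2)."""
    images = []
    for i in range(steps):
        p = p11 + (p12 - p11) * i / (steps - 1)
        images.append((round(p, 3), round(f(p), 3)))
    return images

# Five evaluation images between the two standard painting conditions.
for p, density in create_evaluation_images(0.8, 1.2, 5):
    print(f"p={p}: density={density}")
```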
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to have incorporated Ojima’s condition determination processing, which determines the difference between the new image and the original image, into the image creation method of Bowen ‘845. Bowen ‘871 teaches changing a parameter (e.g., a label/logo) of an original image, and it would have been obvious to have further incorporated the feature of creating a new image with a new parameter (e.g., a new label/logo) relative to the parameter of the original image, thereby creating a new image with the changed parameter. One of ordinary skill in the art would have been motivated to have changed a parameter of the original image. Re Claim 11: Claim 11 encompasses the same scope of invention as claim 1 except for the additional claim limitation that the at least one processor further carries out a training process of means for training a target model with reference to data generated in the data generation process. Bowen ‘845 at least implicitly teaches the claim limitation that the at least one processor further carries out a training process of means for training a target model with reference to data generated in the data generation process (Bowen ‘845 teaches at Paragraph [0082] that at block 2106, the user may be prompted to upload a specified number of images of the subject matter. For example, the number of images may be determined based on the number of images needed to train a facial or object recognizer (e.g., ten images, a hundred images, a thousand images) such as those described elsewhere herein (e.g., convolutional neural networks), to recognize the subject matter in new images (e.g., submitted by end users who are using templates to customize items as described elsewhere herein). The number of needed images may be reduced if the neural network is pre-trained.
Bowen ‘845 teaches at Paragraph [0136] For example, a deep convolutional neural network (CNN) model may be trained to identify matching faces from different photographs. The deep neural network may include an input layer, an output layer, and one or more levels of hidden layers between the input and output layers. The deep neural network may be configured as a feed forward network. The convolutional deep neural network may be configured with a shared-weights architecture and with translation invariance characteristics. The hidden layers may be configured as convolutional layers, pooling layers, fully connected layers and/or normalization layers. The convolutional deep neural network may be configured with pooling layers that combine outputs of neuron clusters at one layer into a single neuron in the next layer. Max pooling and/or average pooling may be utilized. Max pooling may utilize the maximum value from each of a cluster of neurons at the prior layer. Average pooling may utilize the average value from each of a cluster of neurons at the prior layer. Bowen ‘845 teaches at Paragraph [0226] Optionally, the colors may be manually set. Optionally, each entity may be associated with a unique identifier and a table may be generated and accessed that associates a given entity unique identifier with one or more colors (e.g., a primary color and a secondary color, which may be identified by name, RGB values, YUV values, and/or the like) and the accessed colors may be used to color design elements (e.g., text, graphics, images, etc.) in corresponding slots. Optionally, a learning engine (e.g., a neural network) may be trained to identify colors associated with an entity (e.g., by examining images tagged with the entities name and a key phrase such as “jersey” or “school colors”), and the identified colors may be used to color design elements (e.g., text, graphics, images, etc.) in corresponding slots). 
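The pooling operations Bowen ‘845 describes at [0136] — combining the outputs of a cluster of neurons at one layer into a single neuron in the next layer, using the maximum or the average of the cluster — can be sketched in a few lines. The 2×2 window and the sample feature map are illustrative assumptions.

```python
# Sketch of max/average pooling over a small feature map (cf. Bowen '845
# [0136]): each non-overlapping 2x2 window of activations is reduced to a
# single value in the next layer. Input values are hypothetical.

def pool2x2(feature_map, mode="max"):
    rows, cols = len(feature_map), len(feature_map[0])
    out = []
    for r in range(0, rows, 2):
        out_row = []
        for c in range(0, cols, 2):
            window = [feature_map[r + dr][c + dc]
                      for dr in (0, 1) for dc in (0, 1)]
            out_row.append(max(window) if mode == "max"
                           else sum(window) / len(window))
        out.append(out_row)
    return out

fm = [[1, 3, 2, 0],
      [4, 2, 1, 1],
      [0, 1, 5, 6],
      [2, 2, 7, 8]]
print(pool2x2(fm, "max"))      # [[4, 2], [2, 8]]
print(pool2x2(fm, "average"))  # [[2.5, 1.0], [1.25, 6.5]]
```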
Ojima further teaches the claim limitation that the at least one processor further carries out a training process of means for training a target model with reference to data generated in the data generation process ( Ojima teaches at Paragraph [0071] that the learner 7 generates a learned model M2 (refer to FIG. 1) by using, as learning data, image data, to which a label is attached. The label indicates whether the surface condition (painting condition in this case) is good or bad and meets the inspection criteria set up by the criteria setter 14. [0074] The plurality of pieces of learning data are generated by labeling the plurality of evaluation images B1, which have been created under various painting conditions, as either OK or NG indicating the result of evaluation in accordance with the inspection criteria that has been set up by the criteria setter 14. That is to say, in the example shown in FIG. 4, learning data is generated by labeling the image data of the evaluation images B11-B13 as OK. In addition, learning data is also generated by labeling the image data of the evaluation images B14, B15 as NG. Optionally, learning data may also be generated by labeling the image data of the first standard image A11 as OK. In addition, learning data may be further generated by labeling the image data of the second standard image A12 as NG. [0075] That is to say, it can be said that if the plurality of evaluation images B1 are adopted as the learning data, the labeling work has already been done automatically at a point in time when the inspection criteria are set up. This reduces the chances of causing the user the trouble of newly generating or labeling learning data with respect to the inspection assistance system 1 via a user interface such as the operating interface 3. The learner 7 generates the learned model M2 by making, using a plurality of pieces of labeled learning data, machine learning about good and bad painting conditions of the target T1. 
The learned model M2 thus generated by the learner 7 is stored in the second storage device 6. [0076] The learner 7 may contribute to improving the performance of the learned model M2 by making re-learning using newly acquired labeled learning data (evaluation images B1). For example, if any evaluation image B1 is created under a new painting condition, then the learner 7 may be made to make re-learning about the new evaluation image B1. Ojima teaches at Paragraph [0084] that the inspection assistance system 1 compares, with a threshold value, the absolute value (|p−p′|) of the difference between the first standard value P11 (painting parameter p) of interest and each of a plurality of first standard values P11 (painting parameters p′) associated with a plurality of candidate models N1. When finding any candidate model N1, of which the absolute value (|p−p′|) is less than the threshold value, in the first storage device 5 (if the answer is YES in Step S5), the inspection assistance system 1 selects the candidate model N1 as the image creation model M1 (in Step S6). Ojima teaches at Paragraph [0085] that, when finding no candidate models N1, of which the absolute value (|p−p′|) is less than the threshold value (if the answer is NO in Step S5), the inspection assistance system 1 notifies the maker H1 of the result. The maker H1 newly prepares an image creation model M1 and enters its information into the inspection assistance system 1. Ojima teaches at Paragraph [0086] that the inspection assistance system 1 creates a plurality of evaluation images B1 about the target T1 by changing, by reference to the first and second standard values P11, P12, the condition parameter P1 based on the image creation model M1 in Step S8: image creation processing. Ojima teaches at Paragraph [0130] that the inspection assistance system 1 creates the evaluation images B1 by changing the condition parameter P1 in an increasing direction from the origin).
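The automatic labeling Ojima describes at [0074] — evaluation images created under known painting conditions labeled OK or NG according to the inspection criterion, so that B11-B13 are OK and B14-B15 are NG in the FIG. 4 example — can be sketched as follows. The direction of the comparison and the parameter values are illustrative assumptions.

```python
# Sketch of labeling evaluation images as learning data (cf. Ojima [0074]):
# each image's condition parameter is compared against the inspection
# criterion value (P13 in Ojima's example) to assign an OK/NG label.
# The comparison direction and all values are hypothetical.

def label_learning_data(evaluation_images, criterion):
    """Map image id -> 'OK' if its condition parameter is within the
    inspection criterion, else 'NG'."""
    return {image_id: ("OK" if p <= criterion else "NG")
            for image_id, p in evaluation_images.items()}

images = {"B11": 0.9, "B12": 1.0, "B13": 1.1, "B14": 1.2, "B15": 1.3}
print(label_learning_data(images, criterion=1.1))
# {'B11': 'OK', 'B12': 'OK', 'B13': 'OK', 'B14': 'NG', 'B15': 'NG'}
```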
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to have incorporated Ojima’s condition determination processing, which determines the difference between the new image and the original image, into the image creation method of Bowen ‘845. Bowen ‘871 teaches changing a parameter (e.g., a label/logo) of an original image, and it would have been obvious to have further incorporated the feature of creating a new image with a new parameter (e.g., a new label/logo) relative to the parameter of the original image, thereby creating a new image with the changed parameter. One of ordinary skill in the art would have been motivated to have changed a parameter of the original image. Re Claim 12: Claim 12 encompasses the same scope of invention as claim 11 except for the additional claim limitation that the at least one processor further carries out an identification target image acquisition process of acquiring an identification target image; and a second identification process of means for inputting the identification target image acquired in the identification target image acquisition process into the target model trained in the training process to thereby carry out an identification process involving the identification target image.
Bowen ‘845 at least implicitly teaches the claim limitation that the at least one processor further carries out an identification target image acquisition process of acquiring an identification target image; and a second identification process of means for inputting the identification target image acquired in the identification target image acquisition process into the target model trained in the training process to thereby carry out an identification process involving the identification target image ( Bowen ‘845 teaches at Paragraph [0057] The user may utilize the CAD system to select an object to customize (e.g., t-shirts, hoodies, shirts, jackets, dresses, pants, glasses, phone cases, laptop skins, backpacks, laptop cases, tablet cases, hairbands, wristbands, jewelry, digital content, and the like) from an interactive catalog of objects, and may then customize the object using design elements/templates from a library of design elements/templates (e.g., tournament/match brackets, sport paraphernalia (e.g., sport clothing, sports equipment (e.g., basketball, baseball, football, soccer ball, hockey puck, basketball hoop, basketball net, football goal post, hockey goal, baseball bat, hockey stick, etc.), team names, team logos, league names, league logos, and/or the like), and/or user-provided content (e.g., uploaded images or text). Bowen ‘845 teaches at Paragraph [0058] Where the customized object is a digital object (e.g., a displayable electronic image customized by the user), the user-customized object may be transmitted to and displayed by a display (e.g., a large screen display at a venue during an event) and/or shared via social media or other communication channels (e.g., short messaging service messages, email, or otherwise). Where the digital object is an e-card the user may customize the e-card using images select from image galleries, from video frames, or uploaded by the user as similarly described elsewhere herein. 
In addition, the user can select text from a text gallery and/or enter text. By way of illustration, a text gallery may include text for common or uncommon events (e.g., relating to birthdays, anniversaries, new babies, graduations, holidays, etc.). The user may be enabled to manually enter or select from a contact database recipients to whom the e-card is to be delivered (e.g., via email, short messaging service, etc.). The e-card may be delivered as an image file and/or as a link to the e-card, where the link may be used (e.g., by clicking on or otherwise activating the link) to access the e-card from a networked site. Bowen ‘845 teaches at Paragraph [0082] that at block 2106, the user may be prompted to upload a specified number of images of the subject matter. For example, the number of images may be determined based on the number of images needed to train a facial or object recognizer (e.g., ten images, a hundred images, a thousand images) such as those described elsewhere herein (e.g., convolutional neural networks), to recognize the subject matter in new images (e.g., submitted by end users who are using templates to customize items as described elsewhere herein). The number of needed images may be reduced if the neural network is pre-trained. Bowen ‘845 teaches at Paragraph [0136] For example, a deep convolutional neural network (CNN) model may be trained to identify matching faces from different photographs. The deep neural network may include an input layer, an output layer, and one or more levels of hidden layers between the input and output layers. The deep neural network may be configured as a feed forward network. The convolutional deep neural network may be configured with a shared-weights architecture and with translation invariance characteristics. The hidden layers may be configured as convolutional layers, pooling layers, fully connected layers and/or normalization layers.
The convolutional deep neural network may be configured with pooling layers that combine outputs of neuron clusters at one layer into a single neuron in the next layer. Max pooling and/or average pooling may be utilized. Max pooling may utilize the maximum value from each of a cluster of neurons at the prior layer. Average pooling may utilize the average value from each of a cluster of neurons at the prior layer.

Bowen ‘845 teaches at Paragraph [0226] that, optionally, the colors may be manually set. Optionally, each entity may be associated with a unique identifier, and a table may be generated and accessed that associates a given entity’s unique identifier with one or more colors (e.g., a primary color and a secondary color, which may be identified by name, RGB values, YUV values, and/or the like), and the accessed colors may be used to color design elements (e.g., text, graphics, images, etc.) in corresponding slots. Optionally, a learning engine (e.g., a neural network) may be trained to identify colors associated with an entity (e.g., by examining images tagged with the entity’s name and a key phrase such as “jersey” or “school colors”), and the identified colors may be used to color design elements (e.g., text, graphics, images, etc.) in corresponding slots).

Ojima further teaches the claim limitation that the at least one processor further carries out an identification target image acquisition process of acquiring an identification target image; and a second identification process of inputting the identification target image acquired in the identification target image acquisition process into the target model trained in the training process to thereby carry out an identification process involving the identification target image (Ojima teaches at Paragraph [0071] that the learner 7 generates a learned model M2 (refer to FIG. 1) by using, as learning data, image data to which a label is attached.
The label indicates whether the surface condition (painting condition in this case) is good or bad and meets the inspection criteria set up by the criteria setter 14.

Ojima teaches at Paragraph [0074] that the plurality of pieces of learning data are generated by labeling the plurality of evaluation images B1, which have been created under various painting conditions, as either OK or NG, indicating the result of evaluation in accordance with the inspection criteria that have been set up by the criteria setter 14. That is to say, in the example shown in FIG. 4, learning data is generated by labeling the image data of the evaluation images B11-B13 as OK. In addition, learning data is also generated by labeling the image data of the evaluation images B14, B15 as NG. Optionally, learning data may also be generated by labeling the image data of the first standard image A11 as OK. In addition, learning data may be further generated by labeling the image data of the second standard image A12 as NG.

Ojima teaches at Paragraph [0075] that if the plurality of evaluation images B1 are adopted as the learning data, the labeling work has already been done automatically at the point in time when the inspection criteria are set up. This reduces the chances of causing the user the trouble of newly generating or labeling learning data with respect to the inspection assistance system 1 via a user interface such as the operating interface 3. The learner 7 generates the learned model M2 by performing machine learning, using a plurality of pieces of labeled learning data, about good and bad painting conditions of the target T1. The learned model M2 thus generated by the learner 7 is stored in the second storage device 6.

Ojima teaches at Paragraph [0076] that the learner 7 may contribute to improving the performance of the learned model M2 by re-learning using newly acquired labeled learning data (evaluation images B1).
For example, if any evaluation image B1 is created under a new painting condition, then the learner 7 may be made to re-learn the new evaluation image B1.

Ojima teaches at Paragraph [0084] that the inspection assistance system 1 compares, with a threshold value, the absolute value (|p−p′|) of the difference between the first standard value P11 (painting parameter p) of interest and each of a plurality of first standard values P11 (painting parameters p′) associated with a plurality of candidate models N1. When finding any candidate model N1 of which the absolute value (|p−p′|) is less than the threshold value in the first storage device 5 (if the answer is YES in Step S5), the inspection assistance system 1 selects that candidate model N1 as the image creation model M1 (in Step S6).

Ojima teaches at Paragraph [0085] that, when finding no candidate models N1 of which the absolute value (|p−p′|) is less than the threshold value (if the answer is NO in Step S5), the inspection assistance system 1 notifies the maker H1 of the result. The maker H1 newly prepares an image creation model M1 and enters its information into the inspection assistance system 1.

Ojima teaches at Paragraph [0086] that the inspection assistance system 1 creates a plurality of evaluation images B1 of the target T1 by changing, by reference to the first and second standard values P11, P12, the condition parameter P1 based on the image creation model M1 in Step S8: image creation processing.

Ojima teaches at Paragraph [0130] that the inspection assistance system 1 creates the evaluation images B1 by changing the condition parameter P1 in an increasing direction from the origin).
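The model-selection step the examiner cites from Ojima (Paragraphs [0084]-[0085]) amounts to scanning stored candidate models for one whose standard value falls within an absolute-difference threshold of the value of interest, and signaling that a new model is needed when none qualifies. A minimal sketch of that logic; the function name, candidate identifiers, and numeric values here are illustrative and not drawn from the record:

```python
def select_candidate_model(p, candidates, threshold):
    """Return the first candidate model N1 whose first standard value p'
    satisfies |p - p'| < threshold (cf. Ojima [0084], Steps S5-S6);
    return None to signal that the maker must newly prepare an image
    creation model (cf. Ojima [0085]). Names are illustrative only.
    """
    for model_id, p_prime in candidates.items():
        if abs(p - p_prime) < threshold:
            return model_id
    return None

# Hypothetical candidate models N1, keyed by id, valued by standard value P11.
candidates = {"N1-a": 0.42, "N1-b": 0.75, "N1-c": 0.90}

print(select_candidate_model(0.40, candidates, threshold=0.05))  # N1-a
print(select_candidate_model(0.60, candidates, threshold=0.05))  # None
```

The second call illustrates the NO branch of Step S5: no stored value is within 0.05 of 0.60, so the system would fall back to preparing a new model.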
It would have been obvious to one of ordinary skill in the art before the filing date of the instant application to have incorporated Ojima’s condition determination processing, which determines the difference between a new image and an old image, into the image creation method of Bowen ‘845. Bowen ‘871 teaches changing a parameter (e.g., a label/logo) of an original image; it would therefore have been obvious to have further incorporated the feature of creating a new image with a new parameter (e.g., a new label/logo) relative to the parameter of the original image, so as to create a new image with the changed parameter. One of ordinary skill in the art would have been motivated to change a parameter of the original image.

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIN CHENG WANG, whose telephone number is (571) 272-7665. The examiner can normally be reached Mon-Fri 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon, can be reached at 571-270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JIN CHENG WANG/
Primary Examiner, Art Unit 2617
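For readers parsing the examiner’s mapping of the CNN limitation, the max and average pooling operations quoted from Bowen ‘845 (Paragraph [0136]) each collapse a cluster of prior-layer outputs into a single value. A minimal sketch of that arithmetic, assuming a flat list of activations and a hypothetical helper name; this is illustrative only and not part of the record:

```python
def pool(values, cluster_size, mode="max"):
    """Combine each cluster of prior-layer outputs into one value:
    max pooling keeps the maximum of each cluster, average pooling
    keeps the mean (cf. Bowen '845 [0136]). Illustrative sketch only.
    """
    clusters = [values[i:i + cluster_size]
                for i in range(0, len(values), cluster_size)]
    if mode == "max":
        return [max(c) for c in clusters]
    return [sum(c) / len(c) for c in clusters]

activations = [1.0, 3.0, 2.0, 8.0, 5.0, 5.0]
print(pool(activations, 2, mode="max"))      # [3.0, 8.0, 5.0]
print(pool(activations, 2, mode="average"))  # [2.0, 5.0, 5.0]
```

Either variant reduces the layer width by the cluster size, which is the downsampling role the cited paragraph assigns to pooling layers.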

Prosecution Timeline

Nov 28, 2023
Application Filed
Jun 24, 2025
Non-Final Rejection — §103
Sep 26, 2025
Response Filed
Jan 13, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594883
DISPLAY DEVICE FOR DISPLAYING PATHS OF A VEHICLE
2y 5m to grant Granted Apr 07, 2026
Patent 12597086
Tile Region Protection in a Graphics Processing System
2y 5m to grant Granted Apr 07, 2026
Patent 12592012
METHOD, APPARATUS, ELECTRONIC DEVICE AND READABLE MEDIUM FOR COLLAGE MAKING
2y 5m to grant Granted Mar 31, 2026
Patent 12586270
GENERATING AND MODIFYING DIGITAL IMAGES USING A JOINT FEATURE STYLE LATENT SPACE OF A GENERATIVE NEURAL NETWORK
2y 5m to grant Granted Mar 24, 2026
Patent 12579709
IMAGE SPECIAL EFFECT PROCESSING METHOD AND APPARATUS
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
59%
Grant Probability
69%
With Interview (+10.3%)
3y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 832 resolved cases by this examiner. Grant probability derived from career allow rate.
