Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested: “INFORMATION PROCESSING APPARATUS, METHOD OF CONTROLLING THE SAME, AND STORAGE MEDIUM FOR IMAGE POSTER”.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-5, 8-12 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Furuya (Pub No. US 20170084066 A1) in view of Tsutaoka (Pub No. US 20190378262 A1).
As per claim 1, Furuya teaches the claimed:
1. An information processing apparatus comprising: one or more controllers including one or more processors and one or more memories, the one or more controllers configured to: (Furuya [0066]: “The image compositing server 20 includes a memory 22 for storing data temporarily, a compact-disc drive 24 for accessing a compact disc 23, and a printer interface 25 for connecting to the printer 29. The image compositing server 20 further includes a hard disk 28 and a hard-disk drive 27 for accessing the hard disk 28.”).
receive, from a user, a designation of a target impression of a poster to be generated; (Furuya [0059]: “In this embodiment, a case where a target image is combined with a template to thereby generate the composite image of a postcard will be described. However, the present invention is not limited to the generation of a postcard and can be applied to all systems of the kind that generate a composite image by combining a target image with a template as in the manner of an electronic album or other photo goods.” It would have been obvious to apply this generation of postcards and other digital graphic media to the generation of posters.
Furuya teaches selecting a target image that has a target impression value. Furuya [0017]: “A template selection system according to the present invention comprises: a template impression evaluation value storage unit for storing template impression evaluation values with regard to multiple templates; a target image impression evaluation value calculation unit for calculating an impression evaluation value of a target image to be combined with a template; and a template selection unit for selecting templates in order of increasing discrepancy (difference) between multiple template impression evaluation values stored in the template impression evaluation value storage unit and a target image impression evaluation value calculated by the target image impression evaluation value calculation unit.” The user's selection of an image with the target impression value constitutes the claimed designation by the user.).
Furuya alone does not explicitly teach the remaining claim limitations.
However, Furuya in combination with Tsutaoka teaches the claimed:
receive, from a user, a designation of a target quality of the poster to be generated; (Tsutaoka abstract: “There are provided an image evaluation apparatus, an image evaluation method, and an image evaluation program capable of highly evaluating an image having an impression of a user's preference. A plurality of images included in a first image group are input to an image evaluation apparatus (step 21). A plurality of representative image candidates are displayed (step 22), and the user selects a desired representative image from the representative image candidates (step 23). The difference between the impression value of the representative image and the impression value of the input image is calculated (step 24), and the image quality of the input image is determined (step 25). An image evaluation value is calculated so as to become higher as the difference between the impression values becomes smaller and the image quality becomes higher (step 26).” The selection of the image with the desired quality is the user input of the quality. Tsutaoka teaches generating postcards and other graphics based on an evaluated image and judging it based on quality. Tsutaoka [0057]: “Table 1 is an example of an image evaluation value table in which an image quality evaluation value, a distance from the impression value S0 of the representative image to the impression value of each of a plurality of images input to the image evaluation apparatus 1 (difference between the impression values), and an image evaluation value are stored for each image.” Tsutaoka [0059]: “In a case where the image evaluation value is obtained in this manner, a postcard, a photobook, and the like are created using images having high image evaluation values, so that it is possible to create a postcard, a photobook, and the like with high image quality and user's favorite impression.”).
generate one or more pieces of poster data based on at least the target impression; (Furuya [0019]: “The template selection unit may be adapted so as to select a template having an impression evaluation value, from among the multiple template impression evaluation values stored in the template impression evaluation value storage unit, for which the discrepancy with respect to the impression evaluation value of the target image calculated by the target image impression evaluation value calculation unit is less than a threshold value. The system may further comprise a target image combining unit for generating composite images by combining the target image with the templates selected by the template selection unit.”).
calculate an evaluation value based on first information indicating a difference between the target impression and a degree of impression of each of the one or more pieces of poster data (Furuya [0017]: “A template selection system according to the present invention comprises: a template impression evaluation value storage unit for storing template impression evaluation values with regard to multiple templates; a target image impression evaluation value calculation unit for calculating an impression evaluation value of a target image to be combined with a template; and a template selection unit for selecting templates in order of increasing discrepancy (difference) between multiple template impression evaluation values stored in the template impression evaluation value storage unit and a target image impression evaluation value calculated by the target image impression evaluation value calculation unit.” The selection of these templates is the information indicating the difference between the target and each of the templates. The templates are used to make the images.).
and second information indicating a difference between the target quality and a quality of each of the one or more pieces of poster data; (Tsutaoka [0006]: “An image evaluation apparatus according to the present invention comprises: a first image input device (a first image input device) to which a plurality of images included in a first image group are input: a representative image selection device (a representative image selection device) for selecting a representative image; and a first image evaluation value calculation device (a first image evaluation value calculation device) for calculating an image evaluation value from a difference between an impression value of the representative image selected by the representative image selection device and an impression value of each of the plurality of images input to the first image input device and an image quality of each of the plurality of images included in the first image group.” The image group represents the different images and them being evaluated based on quality in comparison to an impression value.).
and select, from the one or more pieces of poster data, a poster for which the evaluation value has an evaluation higher than a predetermined evaluation. (Furuya [0140]: “Further, impressions are not limited to gender, age, expression and face orientation; other impressions that can be utilized are cute, gentle, bracing and chic. Furthermore, in the foregoing embodiments, the template selected is one having an impression evaluation value for which the discrepancy with respect to the impression evaluation value of a target image is less than a threshold value. However, even if it is arranged so as to calculate the degree of approximation between the impression evaluation value of a target image and the impression evaluation value of a template based upon the inverse of discrepancy, for example, and select a template for which the calculated degree of approximation is equal to or greater than a threshold value, the processing would be essentially the same as that described above.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the determination of a target quality characteristic as taught by Tsutaoka with the system of Furuya in order to allow image quality to be one of the traits sought by the image content generation.
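The combined calculation that this rejection maps across Furuya and Tsutaoka can be summarized in a short sketch. This is illustrative only and is not code disclosed by either reference; the inverse-of-discrepancy scoring follows Furuya [0140], and all function names and values are hypothetical:

```python
import math

# Hypothetical sketch of the claimed evaluation: the first information is the
# impression discrepancy (Furuya), the second information is the quality
# discrepancy (Tsutaoka), and the score takes the inverse of the combined
# discrepancy so that a smaller difference yields a higher evaluation.
def evaluation_value(target_impression, impression, target_quality, quality):
    first_info = math.dist(target_impression, impression)  # impression difference
    second_info = abs(target_quality - quality)            # quality difference
    return 1.0 / (1.0 + first_info + second_info)

# Select the posters whose evaluation exceeds a predetermined evaluation.
def select_posters(posters, target_impression, target_quality, threshold):
    return [p for p in posters
            if evaluation_value(target_impression, p["impression"],
                                target_quality, p["quality"]) > threshold]
```

Under this sketch, a poster whose impression and quality both match the targets scores 1.0, the maximum, and falls to lower scores as either difference grows.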
As per claims 16 and 17, these claims are similar in scope to the limitations recited in claim 1, and thus are rejected under the same rationale. As per the non-transitory computer-readable medium of claim 17, Furuya teaches such a medium in its claim 21.
As per claim 3, Furuya teaches the claimed:
3. The information processing apparatus according to claim 1, wherein the difference is expressed as a Euclidean distance. (Furuya [0091]: “Further, since the impression evaluation values regarding templates contained in the table of template impression evaluation values shown in FIG. 5 and the impression evaluation values regarding target images contained in the table of target image impression evaluation values are all considered to be vectors, a discrepancy may be taken to be the vector-to-vector distance between the impression evaluation value of the target image and the impression evaluation value of the template. The discrepancy between the impression evaluation value of the template T1 and the impression evaluation value of the target image I1 in this case is discrepancy = √{(impression evaluation value for gender of template T1 − impression evaluation value for gender of target image I1)² + (impression evaluation value for age of template T1 − impression evaluation value for age of target image I1)² + (impression evaluation value for expression of template T1 − impression evaluation value for expression of target image I1)² + (impression evaluation value for face orientation of template T1 − impression evaluation value for face orientation of target image I1)²} = √{(L6 − L1)² + (L3 − L8)² + (L7 − L3)² + (L8 − L8)²} = L8. Discrepancies with respect to the impression evaluation values of the target image I1 can be calculated in a similar manner with regard to the other templates T2 to T6.” This vector-to-vector distance is the Euclidean distance.).
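The quoted vector-to-vector distance is the standard Euclidean distance over the impression axes (gender, age, expression, face orientation). A minimal sketch, with hypothetical values that are not taken from Furuya's tables:

```python
import math

# Euclidean (vector-to-vector) distance between a template's impression
# evaluation values and a target image's, one component per impression axis.
def discrepancy(template_values, image_values):
    return math.sqrt(sum((t - i) ** 2
                         for t, i in zip(template_values, image_values)))
```

A template whose impression vector equals the target image's has a discrepancy of zero, matching the "less than a threshold" selection in Furuya.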
As per claim 4, Furuya teaches the claimed:
4. The information processing apparatus according to claim 1, wherein in the generation of the one or more pieces of poster data, the one or more controllers generate the one or more pieces of poster data by arranging at least one of an image, text, and a graphic to be used in the poster, according to layout information. (Furuya fig. 9 shows output templates that are arrangements of images, such as the person and the flowers, and graphics like the boxes or clouds present in the outputs.).
As per claim 5, Furuya alone does not explicitly teach the claimed limitations.
However, Furuya in combination with Tsutaoka teaches the claimed:
5. The information processing apparatus according to claim 1, wherein in a case that respective designations of the target impression and the target quality are to be received from a user, the one or more controllers display a screen for receiving the respective designations from the user and receive the respective designations of the target impression and the target quality via the screen. (Furuya [0129]: “The user designates the order of priority of one or more target images from the multiple target images I2 and I3 selected (step 102). For example, the target images I2 and I3 are displayed on the display screen 60 of the smartphone 1 and the order of priority thereof is designated by touching the target images I2 and I3 in an order that is in accordance with the order of priority. Further, by touching only target image I2 or I3, the order of priority is decided such that the touched target image takes on a priority higher than that of the untouched target image. In response, image data representing the multiple images selected and data representing the order of priority are transmitted from the smartphone 1 to the image compositing server 20 (step 103).” The selection of the target image is the designation of the target impression value, and it is performed via a screen by touching the screen. The same would apply to the designation of quality as taught by Tsutaoka.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the display of a quality as taught by Tsutaoka with the system of Furuya in order to allow quality to be a trait prioritized in the generated images.
As per claim 8, Furuya alone does not explicitly teach the claimed limitations.
However, Furuya in combination with Tsutaoka teaches the claimed:
8. The information processing apparatus according to claim 1, wherein the one or more controllers are further configured to: receive, from a user, weighting information indicating weighting of the target impression and the target quality, and in the calculation of the evaluation value, (Furuya teaches weighting on the impression value. Furuya [0032]: “The system may further comprise a priority acceptance unit for accepting, from among multiple target images, order of priority of one or more target images to be combined with a template. In this case, by way of example, the overall target image impression evaluation value calculation unit calculates an overall target image impression evaluation value upon weighting an impression evaluation value, wherein the higher the priority of a target image the priority of which has been accepted by the priority acceptance unit, the greater the weighting applied.” Tsutaoka teaches weighting the quality. Tsutaoka [0086]: “Then, central values of the impression values of the plurality of images I21 to I25, I31 to I35, and I41 to I45 included in the plurality of clusters CL20, CL30, and CL40 are calculated by the CPU 7 (step 74 in FIG. 8). The central value of the impression values S21 to S25 of the plurality of images included in the cluster CL20 is expressed as S20, the central value of the impression values S31 to S35 of the plurality of images included in the cluster CL30 is expressed as S30, the central value of the impression values S41 to S45 of the plurality of images included in the cluster CL40 is expressed as S40. The central value can be calculated by weighted average of the impression values S21 to S25, S31 to S35, and S41 to S45 included in the respective clusters CL20, CL30, and CL40 (all weighting coefficients are 1, but weighting coefficients may be changed according to the impression value).” The impression value can include quality. 
Tsutaoka [0007]: “An image evaluation apparatus according to the present invention comprises: a first image input device (a first image input device) to which a plurality of images included in a first image group are input: a representative image selection device (a representative image selection device) for selecting a representative image; and a first image evaluation value calculation device (a first image evaluation value calculation device) for calculating an image evaluation value from a difference between an impression value of the representative image selected by the representative image selection device and an impression value of each of the plurality of images input to the first image input device and an image quality of each of the plurality of images included in the first image group.”).
the one or more controllers calculate the evaluation value based on the weighting information, information indicating a difference between the target impression and a degree of impression of the poster data, (Furuya [0130]: “When the image data representing the multiple images and the data representing the order of priority transmitted from the smartphone 1 are received by the image compositing server 20 (step 111), impression evaluation values are calculated with regard to respective ones of the images of the multiple target images received (step 112). From the impression evaluation values regarding the multiple target images, the CPU 21 (overall target image impression evaluation value calculation unit) calculates an overall impression evaluation value representing the overall evaluation of the multiple impression evaluation values regarding the multiple target images (step 116). The overall impression evaluation value is the sum, product, average, etc., of the multiple impression evaluation values regarding the multiple target images. The CPU 21 selects a template having an impression evaluation value for which the discrepancy with respect to the thus calculated overall impression evaluation value is less than a threshold value (step 117). In the calculation of the overall impression evaluation value, it may be arranged so as to calculate the overall target image impression evaluation value upon weighting an impression evaluation value, wherein the higher the priority of a target image the priority of which has been designated by the user, the greater the weighting applied.”).
and information indicating a difference between the target quality and the quality of each of the one or more pieces of poster data. (Tsutaoka [0008]: “An image evaluation apparatus that inputs a plurality of images included in a first image group, selects a representative image, and calculates an image evaluation value from a difference between an impression value of the selected representative image and an impression value of each of the plurality of images input to a first image input device and an image quality of each of the plurality of images included in the first image group may be provided by a processor.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the weighted impression values related to quality as taught by Tsutaoka with the system of Furuya in order to allow quality to be a trait targeted in the image generation.
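The weighting of claim 8 can be pictured as scaling each difference by its user-supplied weight before the combined score is taken. A hypothetical sketch; neither reference discloses this exact formula:

```python
# Hypothetical weighted evaluation: user-supplied weights scale the impression
# difference and the quality difference before they are combined, so a larger
# weight makes that trait count more against the final score.
def weighted_evaluation(w_impression, w_quality, impression_diff, quality_diff):
    combined = w_impression * impression_diff + w_quality * quality_diff
    return 1.0 / (1.0 + combined)
```

Setting a weight to zero removes that trait from the evaluation entirely, which is the limiting case of Furuya's "the higher the priority, the greater the weighting."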
As per claim 9, Furuya teaches the claimed:
9. The information processing apparatus according to claim 1, wherein in the selection of the poster data, when a plurality of pieces of poster data are generated, the one or more controllers select poster data for which the evaluation value has an evaluation higher than a predetermined evaluation from the plurality of pieces of poster data, (Furuya [0140]: “Further, impressions are not limited to gender, age, expression and face orientation; other impressions that can be utilized are cute, gentle, bracing and chic. Furthermore, in the foregoing embodiments, the template selected is one having an impression evaluation value for which the discrepancy with respect to the impression evaluation value of a target image is less than a threshold value. However, even if it is arranged so as to calculate the degree of approximation between the impression evaluation value of a target image and the impression evaluation value of a template based upon the inverse of discrepancy, for example, and select a template for which the calculated degree of approximation is equal to or greater than a threshold value, the processing would be essentially the same as that described above.” The selected template is one whose degree of approximation is equal to or greater than a threshold. The composite images generated are based on the selected templates.).
the number of pieces of selected poster data being at least a number obtained based on a number of posters to be created, which was designated by the user. (Furuya teaches that the number of selected pieces of poster data can vary, which implies that the user could designate the number. Furuya [0069]: “Although six templates T1 to T6 are illustrated as the templates in FIG. 4, it goes without saying that the number of templates may be more or less than six. Image data representing these templates T1 to T6 has been stored on the hard disk 28 of the image compositing server 20.” The templates are converted to composite images. Furuya [0137]: “The target images are combined with the selected template in accordance with the order of priority, whereby a composite image is generated (step 114). Composite image data representing the generated composite image is transmitted from the image compositing server 20 to the smartphone 1 (step 115).”).
As per claim 10, Furuya teaches the claimed:
10. The information processing apparatus according to claim 1, wherein the one or more controllers are further configured to: display a poster image based on the selected poster data. (Furuya abstract: “The target image is combined with the templates and image data representing the resulting composite images are transmitted to a smartphone. A desired composite image is selected by the user from among the composite images displayed on the smartphone.” The composite image is the poster.).
As per claim 11, Furuya teaches the claimed:
11. The information processing apparatus according to claim 9, wherein in the selection of the poster data, the one or more controllers select poster data for which the evaluation value has an evaluation higher than a predetermined evaluation, in descending order of the evaluation. (Furuya [0133]: “When discrepancies are calculated, templates are selected by the CPU 21 (template selection unit) in order of increasing discrepancy (step 53). For example, assume that the discrepancy between the impression evaluation values of template T5 and target image I1 is the smallest, the discrepancy between the impression evaluation values of template T3 and target image I1 is the next smallest, and the discrepancy between the impression evaluation values of template T4 and target image I1 is the next smallest. In this case, the CPU 21 (target image combining unit) combines the target image I1 with the combining areas 35, 33 and 34 of the selected templates T5, T3 and T4 to thereby generate composite images (step 54). The composite image data representing the generated composite images is transmitted from the image compositing server 20 to the smartphone 1 (step 55). Naturally, it may be arranged so that templates for which the discrepancies with regard to the impression evaluation value of the target image I1 are less than a threshold value are simply selected.” The higher the discrepancy, the lower the evaluation, so the selected templates are arranged in descending order of evaluation. The templates are the poster data because they are used to generate graphics, which could include posters.).
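The mapping from Furuya's "order of increasing discrepancy" to the claimed "descending order of the evaluation" can be made concrete with a short sketch (hypothetical template names and discrepancy values, not taken from Furuya):

```python
# Smaller discrepancy -> higher evaluation (inverse scoring, as in Furuya
# [0140]). Templates above the predetermined evaluation are kept, best first.
def select_descending(candidates, threshold):
    scored = [(name, 1.0 / (1.0 + d)) for name, d in candidates]
    kept = [(name, score) for name, score in scored if score > threshold]
    return sorted(kept, key=lambda pair: pair[1], reverse=True)
```

Sorting by increasing discrepancy and sorting by decreasing inverse-of-discrepancy produce the same ordering, which is the equivalence the rejection relies on.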
As per claim 12, Furuya alone does not explicitly teach the claimed limitations.
However, Furuya in combination with Tsutaoka teaches the claimed:
12. The information processing apparatus according to claim 10, wherein in the display, the one or more controllers display a degree of impression and a quality of the poster image. (Furuya [0138]: “When the composite image data transmitted from the image compositing server 20 is received by the smartphone 1 (step 43), composite images whose templates are utilizing an impression evaluation value having a small discrepancy with respect to the overall impression evaluation value are displayed on the display screen 60 of the smartphone 1 in regular order (step 121).” The composite image is the poster image. The display in order of the impression values implies that the degree is being shown. This would be combined with the quality trait taught by Tsutaoka above in the rejection of claim 1.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the target quality value as taught by Tsutaoka with the system of Furuya in order to allow quality to be displayed.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Furuya in view of Tsutaoka and further in view of Ogasawara (Pub No. US 20240020075 A1).
As per claim 2, Furuya alone does not explicitly teach the claimed limitations.
However, Furuya in combination with Ogasawara teaches the claimed:
The information processing apparatus according to claim 1, wherein in the calculation of the evaluation value, the one or more controllers calculate the evaluation value using the second information if the quality of each of the one or more pieces of poster data does not achieve the target quality, and the one or more controllers calculate the evaluation value using 0 as the second information if the quality of each of the one or more pieces of poster data achieves the target quality. (Ogasawara [0119]: “In S702, the CPU 101 performs factor analysis on the subjective evaluation result acquired by the subjective evaluation acquisition unit. If subjective evaluation results are directly used, the number of dimensions is given by the number of adjective pairs, which results in complicated control. Therefore, it is desirable to reduce the number of dimensions to a small value using an analysis technique such as principal component analysis, factor analysis, or the like such that efficient analysis becomes possible. In the following description of the present embodiment, it is assumed that the dimensions are reduced such that the number of factors is reduced to four as a result of the factor analysis. Note that the number of factors varies depending on the selection of adjective pairs in the subjective evaluation and the method of factor analysis. It is also assumed that the output of factor analysis is standardized. That is, each factor is scaled to have a mean of 0 and a variance of 1 in the posters used for analysis. As a result, −2, −1, 0, +1, and +2 of the impressions specified by the target impression specification unit 204 can be directly corresponded to −2σ, −1σ, mean value, +1σ, and +2σ in each impression, which makes it easy to calculate the distance (the difference) between the target impression and the estimated impression, as will be described in further detail later.
In the present embodiment, the four factors are luxury, familiarity, dynamism, and stateliness shown in FIG. 5. The names of these factors are given for convenience to convey impressions to the user via the user interface, and each factor is composed of a plurality of adjective pairs that influence each other.” The estimated impression is the impression of each generated image. The standardized scale described above expresses the distance between each data point and the target; thus, if the target were achieved, the output value would be 0.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the output of values representing the difference between a target impression value and a data point’s impression value as taught by Ogasawara with the system of Furuya in order to represent the accuracy of the image generator, including representing when the target is achieved.
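The conditional use of the second information in claim 2 reduces to a simple rule: the quality shortfall counts only when the target is not achieved. A hypothetical sketch, not disclosed verbatim by Ogasawara:

```python
# Claim 2 sketch: the second information is 0 when the poster's quality
# achieves the target quality; otherwise it is the shortfall from the target.
def second_information(target_quality, quality):
    if quality >= target_quality:
        return 0.0
    return target_quality - quality
```

Under this rule, exceeding the target quality contributes nothing further to the evaluation, so only the impression difference distinguishes posters that all achieve the target.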
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Furuya in view of Tsutaoka and further in view of Patel (Pub No. US 9292175 B).
As per claim 6, Furuya alone does not explicitly teach the claimed limitations.
However, Furuya in combination with Patel teaches the claimed:
6. The information processing apparatus according to claim 5, wherein the screen displays sliders as objects for respectively designating the target impression and the target quality, and the one or more controllers receive the respective designations of the target impression and the target quality according to an operation of the sliders performed by the user. (Patel teaches the customization of graphic images such as posters, based on selectable templates. Patel col. 2 lines 20-37: “(8) A graphical user interface (GUI) is used for selecting and customizing products that are decorated with an ornamental design. The GUI displays product items, with different candidate design templates, to a user such as a potential customer. The user uses the GUI to customize the designs of the displayed product items, to select one of the items, and to purchase a product with the customized design.
(9) The product items may be, for example, art prints, business cards, posters, flyers, brochures, stationery, calendars, event (e.g., wedding and party) invitations, personal journals (with a decorative cover and blank inner pages for writing in), and greeting cards. These design products typically comprise printed paper medium and have a utilitarian function, such as providing information (e.g., regarding a contact, a scheduled event and personal notes). In this example, the products are greeting cards.” Patel teaches a slider to control them. Patel col. 5 lines 43-55: “(26) The GUI includes an image selector 530, to be actuated with the input device 114 (FIG. 1) for transforming the images in the cards. In this example, the selector 530 includes a virtual slider button 531 that can be grabbed with the mouse or finger (if touch screen) and slid along a horizontal virtual track 532. The selector 530 has a finite number of button positions that equals the total number of available image sets, which itself equals the number of user-designated image sets (in this case two: 321 and 322 in FIG. 3) plus the stock image set that the cards 501 are initially displayed with. So, in this example, the selector 530 has three positions 541, 542, 543.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the slider as a tool to select images and indicate a target as taught by Patel with the system of Furuya in order to use that interface element as an easy way to make a selection of an image and a target.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Furuya in view of Tsutaoka and further in view of Ptucha (Pub No. US 8274523 B2).
As per claim 7, Furuya alone does not explicitly teach the claimed limitations.
However, Furuya in combination with Ptucha teaches the claimed:
7. The information processing apparatus according to claim 1, wherein the quality of the poster is expressed as a numeric value calculated based on at least one evaluation value among balance, noticeability, and visibility of a design of the poster. (Ptucha teaches designing aesthetic digital displays. Ptucha abstract: “A method for creating an artistically coordinated image display. A digital template is provided for said image display and it includes of openings for placing images each having at least one required attribute for an image. A programmed computer system automatically searches a database of images for images to be placed in the openings and each of the images satisfy the openings required image attributes. One or more vertical and horizontal lines are demarcated in the so that subjects in the image can be placed on the lines or their intersecting points by modifying and shifting the image appropriately. A subject of the image can also be measured and its size can be set as a reference measurement unit to assist in aesthetically placing subjects proportionally within the image.” Ptucha teaches generating a poster: Ptucha col. 25 lines 24-31: “(107) The first example can be used to create adaptive montages of either aesthetic or practical significance. Aesthetically, special effects can be created, for example, in an advertising poster to add an image or text in a sky or background area of an image. The second example can also be used to create artistic effects, but can also be used to replace faces that have inappropriate facial expressions, eye closers, head pose, etc, with faces that are more appropriate.” Ptucha teaches generating images that are edited to have a balanced look: col. 15 lines 55-col. 16 line 5: “(54) All entries, except the last one in the above example recipe segment, are used for scoring candidate images. 
The last entry, in this example it's "OffsetLeft", is used as a template/window specification, and not as an image selection guideline, and thus the last line does not have a paired weight line. The last entry specifies that the highest scoring image, or the image selected for a virtual template opening, should be modified in some respect. This can be referred to as a "post processing" step because it is used as a fine tuning step or as a post-selection layout step. In this example the specification states that the main subject be shifted left using the rule of thirds before virtual placement into the template opening. The rule of thirds is a photographic composition rule that has proven to yield well balanced or natural looking prints by the average observer, and can be programmed for use by computer system 26.” Ptucha teaches a fitness score related to an aesthetic element. Ptucha col. 1 line 60-col. 2 line 20: “Personalized image collages, clothing, albums and other image enhanced items are becoming increasingly more accessible as digital printing technologies improve. However, as personalized image bearing products have become more accessible, consumers have become more discriminating. In particular, consumers now seek methods and systems that produce customized image products in a more convenient, faster, seamless, automatic, and integrated manner. While becoming somewhat more common, many items for displaying and/or including embedded customized images are still considered novelties. Methods for recognizing image contents that fulfill a predescribed aesthetic appearance often fall short of expectations. For example, many products with customizable embedded images include photos of people. 
For this type of product, it would be desirable to identify images that satisfy preselected artistic criteria and/or image attributes, such as number of persons pictured, who is pictured, what zoom ratio, temporal aspects, clothing, background, season, facial expressions, hue, colorfulness, texture, and sharpness, etc. Because some artistic aesthetic elements work better for certain product formats, it would also be desirable if multiple image attributes including aesthetic criteria could be evaluated in parallel for a number of images, and the images with the highest fitness score be automatically determined by computer algorithm” This could include the balance of the print, which would be expressed as that score.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the balance evaluation taught by Ptucha with the system of Furuya in order to use balance as a target characteristic for graphic generation.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Furuya in view of Tsutaoka and further in view of Whitby (Pub No. US 20110029914 A1).
As per claim 13, Furuya alone does not explicitly teach the claimed limitations.
However, Furuya in combination with Tsutaoka and Whitby teaches the claimed:
13. The information processing apparatus according to claim 1, wherein in the generation of the one or more pieces of poster data, the one or more controllers re-generate poster data that is different from the poster data if the information indicating the difference between the target quality and the quality of each of the one or more pieces of generated poster data is larger than a predetermined threshold value. (Whitby concerns image generation based on templates with target characteristics. Whitby abstract: “A computerized apparatus for automatically generating an artist coordinated image display based upon a user's favorite image or images. A plurality of digital image templates are stored in a computer accessible memory and each have programmed characteristics associated therewith that are designed by an artist. The characteristics of the template can be programmed and stored, including number, location, shape, color, etc, of windows in the digital template. Color and texture characteristics of a body of the template can be selected to resemble, for example, a frame, which will eventually hold the printed images of the template for display. The windows of the template have stored image attribute requirements associated therewith that are designed by the artist are correlated to match image attributes of at least one user selected image that is placed in at least one other of the windows in the template.” Whitby teaches redoing its template generation if the resulting image scores do not reach a threshold value. Whitby [0048]: “Another common template recipe generation and fulfillment technique is image splitting. Image splitting is the process of taking a single customer image, and spanning that one image over 2 or more window openings. For example, the program can receive as input a single picture of a bride and groom. An application of a particular recipe will result in the bride being placed in the left window with the groom in the right window. 
An artist will be able to define and store recipe requirements for the size, location, and spacing of two people in a single photograph. The customer images in a product order can be analyzed using the aforementioned methods. An image with a highest score can be chosen to fulfill the product intent. If no images score above a defined threshold, the algorithm can be designed so that a different recipe is automatically chosen and the process repeats itself with the new recipe.” The product intent is the target quality. If it does not meet the required threshold, a new recipe is used. This is the regeneration of poster data. Whitby [0096]: “The first example can be used to create adaptive montages of either aesthetic or practical significance. Aesthetically, special effects can be created, for example, in an advertising poster to add an image or text in a sky or background area of an image. The second example can also be used to create artistic effects, but can also be used to replace faces that have inappropriate facial expressions, eye closers, head pose, etc, with faces that are more appropriate.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the regeneration of images from a digital template when the result falls short of a predetermined threshold, as taught by Whitby, with the system of Furuya in order to repeat the image generation process when the desired results are not attained.
Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Furuya in view of Tsutaoka and further in view of Whitby and further in view of Patel.
As per claim 14, Furuya alone does not explicitly teach the claimed limitations.
However, Furuya in combination with Tsutaoka, Whitby, and Patel teaches the claimed:
14. The information processing apparatus according to claim 13, wherein in the generation of the one or more pieces of poster data, the one or more controllers generate poster data different from the poster data by changing at least one of an arrangement, a color scheme, and a font in the poster data in the re-generation. (Patel col. 7 lines 48-56: “(36) A list of features (e.g., greeting, font, background, image) may appear beside the colored squares 560 for the user to select which of the cards' feature the selected square's color will be applied to. Alternatively, the server 101 (FIG. 1) may designate, individually and independently for each item 501, 511, which feature the user-selected color should be applied to, so that the color is applied to different features (e.g., greeting, background, image) in different cards.” Patel teaches changing the font. This could be applied to the images after they have been generated. The server is the controller, and its automatic change is the re-generation.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the re-generation of images that do not reach a threshold level of a certain characteristic as taught by Whitby with the system of Furuya in order to repeat the generation process until a desired goal is achieved.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the changing of a poster's features, such as color or font, to improve the poster's appearance in a specific way as taught by Patel with the system of Furuya in order to better match a user-specified overall trait.
As per claim 15, Furuya alone does not explicitly teach the claimed limitations.
However, Furuya in combination with Tsutaoka, Whitby, and Patel teaches the claimed:
15. The information processing apparatus according to claim 13, wherein in the generation of the one or more pieces of poster data, the one or more controllers determine whether or not to change the arrangement or the color scheme in the one or more pieces of poster data based on the target impression. (Patel col. 4 line 45-col. 5 line 3: “(21) The server 101 (FIG. 1) searches the stock templates in its database 102 for templates that match the user-designated search criteria and ranks the templates for closeness of match. The server also filters (narrows) the full database of templates down to a displayable number, in this case nine, of templates that best match the user's criteria. The displayable number may be designated by the user in the criteria designation window 400. Or the displayable number may be mathematically determined by the server based on data regarding the particular user and results of the search. The ranking and filtering may include preferring templates whose space for containing the greeting and message best matches the length of the user-designated greeting and user-designated message or whose style matches a mood indicated by wording of the user-designated greeting and user-designated message. The ranking and filtering may be further based on features of the user-designated image or images, such as the image's shape, size, aspect ratio (height to width), style, color range (e.g., large color variety, small color variety, just black-and-white), color palette (e.g., which colors included), art-type (e.g., whether line drawing, painting or photograph), how cluttered (e.g., finely detailed or bare), type of main feature (e.g., whether people, things or landscape), and number of selected photos to include in the card.” Patel teaches using the user-designated search criteria to search for templates. It then decides traits like color palette or color range based on the user-designated image. 
This would correspond to the target impression of Furuya, which signals that a trait of the image should be changed.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the selection of image templates by color scheme based on user input as taught by Patel with the system of Furuya in order to allow the color scheme to be influenced by a user-designated goal.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS JOHN FOSTER whose telephone number is (571)272-5053. The examiner can normally be reached Mon and Fri 8:30-6 and Tues-Thurs 7:30-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THOMAS JOHN FOSTER/Examiner, Art Unit 2616
/HAI TAO SUN/Primary Examiner, Art Unit 2616