DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The Amendment filed December 30, 2025 has been entered. Claims 1, 2, 5-10, and 13-19 are pending in the application. Claims 3, 4, 11, and 12 are cancelled. Applicant’s amendments to Claims 1, 9, and 17 have overcome the rejections previously set forth in the Final Office Action mailed June 5, 2025. A second search has been performed to address the material amended in the aforementioned claims.
Response to Arguments
Applicant’s arguments with respect to claims 1-17 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Newly found reference Furuya (JP 2015162850 A, hereinafter Furuya 2015-2) was used for the newly amended claim limitations.
Allowable Subject Matter
Claim 18 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Claim 18 recites: “in a case where a distance, to the aimed impression, of an average of the impression of the acquired image and the impression set for the selected template image is not less than a threshold, the determining unit determines that the acquired image be adjusted.”
Furuya 2015-2 discusses adjusting the acquired image when a sum of the impression of the acquired image and the impression set for the template image does not reach a threshold: “If the sum of evaluation values is not equal to or greater than the corresponding sensitivity threshold (if it is determined that it does not match or does not match) ( The parameter of the previous correction is changed in step 84 of FIG. 16 and the correction is performed on the target image” [0082]. However, Furuya 2015-2 fails to teach “a distance, to the aimed impression, of an average of the impression of the acquired image and the impression set for the selected template image”.
Noguchi and Furuya 2018 teach a “regression line […] expressed by the following expression (1).
(x−x0)/a=(y−y0)/b
Here, x0 represents an average value of impression values of respective images on the lateral axis” [0054-0055].
The expression contains the term (x − x0), which resembles a distance calculation. However, neither Noguchi nor Furuya 2018 utilizes the equation to determine whether to adjust the target image.
Further search did not reveal any reference that teaches the limitations of claim 18. Therefore, none of the other prior art searched or of record, alone or in combination, explicitly teaches the limitations of claim 18.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 5, 8, 9, 10, 13, 16, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Furuya (US 20160063746 A1; from applicant’s IDS; hereinafter Furuya 2016) in view of Bronstein (US 20040075669 A1), Noguchi (US 20190371027 A1), Kim (KR 20200080577 A), and Furuya (JP 2015162850 A, hereinafter Furuya 2015-2).
Regarding claim 1:
Furuya 2016 teaches:
An information processing apparatus (Furuya 2016: image combining apparatus [0003]), comprising:
at least one processor (Furuya 2016: The overall operation of the image combining server 20 is controlled by a CPU 21 [0054]); and
a memory that stores a program (Furuya 2016: The present disclosure also provides a non-transitory computer readable medium for storing an image combining program [0017]) which, when executed by the at least one processor, causes the at least one processor to function as:
an image acquisition unit configured to acquire an image (Furuya 2016: the user selects a target image by touching an image that is to be combined with a template image (step 31 in FIG. 4) [0062]);
a receiving unit configured to receive an input of aimed impression (Furuya 2016: a first target image impression determination device (first target image impression determination means) for determining one or multiple impressions given by the target image [0018]);
an image adjustment unit configured to, based on the aimed impression, adjust the image (Furuya 2016: a combining device (combining means) for combining the target image with the template image found by the template image detection device [0015]; Furuya 2016: For example, if the impression given by a template image to be utilized in a composite image selected by the user is “CUTE”, then the levels of such items as the brightness, contrast and saturation of the target image to be combined with this template image are each raised by one [0090]); and
a creation unit configured to create an image by using the adjusted image (Furuya 2016: Connected to the image combining server 20 is a printer 29 for printing a postcard from image data representing a composite image generated in the image combining server 20 [0049]; Furuya 2016: the present disclosure is not limited to the generation of a postcard and can be applied to all systems of the kind that generate a composite image [0048]).
Furuya 2016 fails to teach:
a receiving unit configured to receive an input of aimed impression from a user via a setting screen;
a selecting unit configured to, based on the aimed impression, select a template image from among a plurality of template images;
a determining unit configured to, based on an impression set for the selected template image, an impression of the acquired image, and the aimed impression, determine whether or not to adjust the acquired image;
an image adjustment unit configured to, based on the aimed impression, adjust the image in a case where the determining unit determines that the acquired image be adjusted;
a poster creation unit configured to create a poster by using the adjusted image,
a storage unit configured to store a table in which, for each of a plurality of different items of aimed impression, information for making an image adjustment is contained in an associated manner,
wherein based on the aimed impression and the table, the image adjustment unit adjusts the image, and
wherein the input of the aimed impression is received from the user by selecting one value from a range of values for at least one impression from a plurality of different items that comprise the aimed impression.
Bronstein teaches:
a poster creation unit configured to create a poster (Bronstein: in block 333 the print image is sent to a printer, such as the printer 120 (FIG. 1), for printing of the final poster [0039]) by using the adjusted image (Bronstein: As discussed in greater detail below, the image manipulation/selection interface 180c may be configured to allow the user of the client 109 to manipulate the images within the template, and/or to select a part of the poster for generation of a preview image [0035]; Bronstein: The print image is generated by the image generator 166, using the same information that is used to generate the printable preview image [0038]).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Bronstein with Furuya 2016. Having a poster creation unit configured to create a poster by using the adjusted image, as in Bronstein, would benefit the Furuya 2016 teachings by enabling the user to utilize images synthesized by the system on larger decorations and signs, increasing readability when viewed from farther away.
Furuya 2016 in view of Bronstein still fails to teach:
a receiving unit configured to receive an input of aimed impression from a user via a setting screen;
a selecting unit configured to, based on the aimed impression, select a template image from among a plurality of template images;
a determining unit configured to, based on an impression set for the selected template image, an impression of the acquired image, and the aimed impression, determine whether or not to adjust the acquired image;
an image adjustment unit configured to, based on the aimed impression, adjust the image in a case where the determining unit determines that the acquired image be adjusted;
a storage unit configured to store a table in which, for each of a plurality of different items of aimed impression, information for making an image adjustment is contained in an associated manner,
wherein based on the aimed impression and the table, the image adjustment unit adjusts the image, and
wherein the input of the aimed impression is received from the user by selecting one value from a range of values for at least one impression from a plurality of different items that comprise the aimed impression.
Noguchi teaches:
a receiving unit configured to receive an input of aimed impression from a user (Noguchi: A user moves a slider 63 of a first desired impression value designating slider axis 62 corresponding to the first impression axis to designate an unspecified desired impression value, Abstract) via a setting screen (Noguchi, Fig. 12);
wherein the input of the aimed impression is received from the user (Noguchi: In a case where the first composite image Sy1 does not match the user's impression, the user may move the slider 63 to change the mount image 51 to be combined with the composite target image 50 [0069]) by selecting one value from a range of values for at least one impression (Noguchi: by moving the slider 63 […] on the first desired impression value designating slider axis 62 using the mouse 9 […], a first desired impression value on the first impression axis 41 is designated [0069]) from a plurality of different items (Noguchi: Fig. 12 depicts slider axis 62 as one of a plurality of impression sliders) that comprise the aimed impression (Noguchi: a slider 63 that is moved on the first desired impression value designating slider axis 62 is also displayed [0069]).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Noguchi with Furuya 2016 in view of Bronstein. Having a receiving unit configured to receive an input of aimed impression from a user, wherein the input of the aimed impression is received from the user by selecting one value from a range of values for at least one impression from a plurality of different items that comprise the aimed impression, as in Noguchi, would benefit the Furuya 2016 in view of Bronstein teachings by ensuring the user has direct and precise control over the impression the composite image should have.
Furuya 2016 in view of Bronstein and Noguchi still fails to teach:
a selecting unit configured to, based on the aimed impression, select a template image from among a plurality of template images;
a determining unit configured to, based on an impression set for the selected template image, an impression of the acquired image, and the aimed impression, determine whether or not to adjust the acquired image;
an image adjustment unit configured to, based on the aimed impression, adjust the image in a case where the determining unit determines that the acquired image be adjusted;
a storage unit configured to store a table in which, for each of a plurality of different items of aimed impression, information for making an image adjustment is contained in an associated manner,
wherein based on the aimed impression and the table, the image adjustment unit adjusts the image.
Kim teaches:
a storage unit configured to store (Kim: the memory 20 may store data used for face modification, Pg. 2, par. 8) a table (Kim: a pre-stored impression-face table, Pg. 4, par. 5) in which, for each of a plurality of different items of aimed impression, information for making an image adjustment is contained in an associated manner (Kim: the impression-face table includes face models for each impression (or corresponding impression face models) and/or data associated with each impression, Pg. 4, par. 5; see Note 1A),
wherein based on the aimed impression (Kim: the image editing application providing device 1 first receives a user input for the impression mode to edit the impression of the target image through the menu tool 120, Pg. 8, par. 1) and the table, the image adjustment unit adjusts the image (Kim: The apparatus 1 for providing an image editing application may perform a face transformation operation in an impression mode using a pre-stored impression-face table, Pg. 4, par. 5),
Note 1A: Kim teaches that “the impression-face table includes one or more impression types, and a face model for each impression type.” As cited above from Pg. 8, par. 1 and Pg. 4, par. 5, Kim also teaches that the table may be used to create face modifications and that the user may select an impression with which to modify the face. Therefore, Kim teaches that the table stores “a plurality of different items of aimed impression”.
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Kim with Furuya 2016 in view of Bronstein and Noguchi. Storing a table which stores information for making image adjustments for each of a plurality of items of aimed impression and adjusting the image based on the aimed impression and the table, as in Kim, would benefit the Furuya 2016 in view of Bronstein and Noguchi teachings by enabling specific designs tailored for individual impressions: “The face model for each impression type may be specified by a designer, or may be determined based on evaluation results of multiple evaluators on multiple samples,” (Kim, Pg. 9, par. 8).
Furuya 2016 in view of Bronstein, Noguchi, and Kim still fails to teach:
a selecting unit configured to, based on the aimed impression, select a template image from among a plurality of template images;
a determining unit configured to, based on an impression set for the selected template image, an impression of the acquired image, and the aimed impression, determine whether or not to adjust the acquired image;
an image adjustment unit configured to, based on the aimed impression, adjust the image in a case where the determining unit determines that the acquired image be adjusted;
Furuya 2015-2 teaches:
a selecting unit configured to, based on the aimed impression, select a template image (Furuya 2015-2: When the user specifies that the finish impression of the photo book is "cool" or "sick", the template image T1 is determined to be used. [0039]) from among a plurality of template images (Furuya 2015-2: An evaluation value of sensitivity is stored in a template evaluation value table for each template image (previously stored in the hard disk 28 of the image combining server 20) [0039]);
a determining unit configured to, based on an impression set for the selected template image, an impression of the acquired image (Furuya 2015-2: The evaluation values corresponding to the sensibility (for example, “cute”) of the finish impression specified by the user are respectively a template image a, a character image b, a frame image c, target images d [0081], emphasis added), and the aimed impression (Furuya 2015-2: impression specified by the user [0082]), determine whether or not to adjust the acquired image (Furuya 2015-2: Therefore, the sum of the evaluation values of the finished sample image according to this example is Σp = a + b + c + d + e + f + g. […] If the sum of evaluation values is not equal to or greater than the corresponding sensitivity threshold (if it is determined that it does not match or does not match) ( The parameter of the previous correction is changed in step 84 of FIG. 16 and the correction is performed on the target image [0081-0082]; see also Note 1B);
an image adjustment unit configured to, based on the aimed impression, adjust the image in a case where the determining unit determines that the acquired image be adjusted (see Note 1C);
Note 1B: As cited above, Furuya 2015-2 teaches that the correction of the target image (i.e., the acquired image) is based on whether a sum of evaluation values is equal to or greater than a threshold. The sum is based on the “evaluation value” or impression of the target image (an impression of the acquired image) and the “evaluation value” or impression of the template image (an impression set for the selected template image).
Furuya 2015-2 teaches that when the sum is calculated, it is compared to the “impression specified by the user” or the aimed impression: “it is determined whether the sum of evaluation values is equal to or greater than a predetermined threshold value for each sensitivity (step 84 in FIG. 16). (Determining whether the impression of the finished sample image matches the impression specified by the user.” [0082], and that the correction is based on said comparison: “If the sum of evaluation values is not equal to or greater than the corresponding sensitivity threshold […] the correction is performed on the target image” [0082].
Therefore, the method of Furuya 2015-2 determines whether or not to adjust the acquired image based on an impression set for the selected template image, an impression of the acquired image, and the aimed impression.
Note 1C: In Claim 1, “the image” has antecedent basis in: “an image acquisition unit configured to acquire an image”. Therefore, as claimed, the “image” is the same as the “acquired image”. Therefore, when Furuya 2015-2 teaches that the target image may be corrected, it also teaches an “image adjustment unit configured to, based on the aimed impression, adjust the image”.
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Furuya 2015-2 with Furuya 2016 in view of Bronstein, Noguchi, and Kim. Selecting, based on the aimed impression, a template image from among a plurality of template images, and determining, based on an impression set for the selected template image, an impression of the acquired image, and the aimed impression, whether or not to adjust the acquired image, as in Furuya 2015-2, would benefit the Furuya 2016 in view of Bronstein, Noguchi, and Kim teachings by enabling specific adjustment to match the intended impression of the user: “An object of the present invention is to make an impression of a finish fit the user's preference when a target image is attached to a template image to generate a composite image.” (Furuya 2015-2, [0006])
Regarding claim 2:
Furuya 2016 in view of Bronstein, Noguchi, Kim, and Furuya 2015-2 teaches:
The information processing apparatus according to claim 1 (as shown above), wherein the at least one processor further functions as:
a display control unit configured to display a screen for receiving the input of the aimed impression (Noguchi: FIG. 8 is a diagram showing an example of a window displayed on the display screen of the display device 2 [0067]; Noguchi: A first desired impression value is designated by the slider 63 [0078]), wherein the receiving unit receives the input of the aimed impression via the screen (Noguchi: However, the composite image generating apparatus 1 may not be the dedicated device, but instead, may be configured by a personal computer, may be configured of a so-called smart device such as a smartphone or a tablet device, or may be configured of a mobile phone such as a feature phone [0128]; see fig. 1).
Regarding claim 5:
Furuya 2016 in view of Bronstein, Noguchi, Kim and Furuya 2015-2 teaches:
The information processing apparatus according to claim 1 (as shown above), wherein based on the adjusted image and the aimed impression, the poster creation unit creates the poster (Furuya 2016: When a target image is corrected, the composite image that includes the corrected target image is printed by the printer 29 (step 54 in FIG. 5). As a result, a postcard in which the target image has been combined with the template image is obtained [0091]; Furuya 2016: the present disclosure is not limited to the generation of a postcard and can be applied to all systems of the kind that generate a composite image [0048]; see Note 5A).
Note 5A: Furuya 2016 teaches that the template is selected based on the aimed impression: “the template image detection device would find, by way of example, a template image for which the first degree of resemblance is equal to or greater than the first threshold value, the second degree of resemblance is less than the second threshold value and, moreover, which gives the impression determined by the first target image impression determination device,” [0018]. Therefore, the printer creates the image of the poster based on the aimed impression and the adjusted image.
Regarding claim 8:
Furuya 2016 in view of Bronstein, Noguchi, Kim and Furuya 2015-2 teaches:
The information processing apparatus according to claim 5 (as shown above), wherein based on the aimed impression (Furuya 2016: the template image detection device would find, by way of example, a template image […] which gives the impression determined by the first target image impression determination device [0018]), the poster creation unit creates the poster (Bronstein: in block 333 the print image is sent to a printer, such as the printer 120 (FIG. 1), for printing of the final poster [0039]) by changing a color of any of an image included in the poster, characters included in the poster, or a graphic included in the poster (Furuya 2016: if the impression given by a template image to be utilized in a composite image selected by the user is “CUTE”, then the levels of such items as the brightness, contrast and saturation of the target image to be combined with this template image are each raised by one [0090]).
Furuya 2016 fails to teach:
wherein based on the aimed impression, the poster creation unit creates the poster by changing a layout of any of an image included in the poster, characters included in the poster, or a graphic included in the poster.
Noguchi teaches:
wherein based on the aimed impression (Noguchi: on the basis of the impression values [0018]), the image creation unit creates the first image by changing a layout of any of a second image included in the first image (Noguchi: The first mount image determining unit may include design parameter determining unit that determines a design parameter for setting disposition of the composite target image in the first mount image on the basis of the impression values [0018]), characters included in the first image, or a graphic included in the first image (see Note 8A).
Note 8A: Depicted in Fig. 8 and Fig. 9 of Noguchi are two instances of composite images. In Fig. 8, the aimed impression is closer to “elegant”, while in Fig. 9, the aimed impression is closer to “cute”. Note that in Fig. 9, four hearts have been added to the layout of the image to help match the aimed impression of the user.
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Noguchi with Furuya 2016 in view of Bronstein. Creating the poster, based on the aimed impression, by changing a layout of any of an image included in the poster, characters included in the poster, or a graphic included in the poster, as in Noguchi, would benefit the Furuya 2016 in view of Bronstein teachings by enabling the system to re-organize the image(s) and add new auxiliary content to best match the aimed impression of the user.
Regarding claim 9:
Claim 9 is substantially similar to Claim 1, and is therefore rejected for similar reasons. Claim 9 contains the following notable differences:
Claim 9 claims an information processing apparatus control method instead of an information processing apparatus. The combination of Furuya 2016, Bronstein, Noguchi, Kim, and Furuya 2015-2 teaches the information processing apparatus performing the claimed control method, and therefore said combination also teaches the claimed information processing apparatus control method.
Regarding claim 10:
Claim 10 is substantially similar to Claim 2, and is therefore rejected for similar reasons. Claim 10 contains the following notable differences:
Claim 10 claims an information processing apparatus control method instead of an information processing apparatus. The combination of Furuya 2016, Bronstein, Noguchi, Kim, and Furuya 2015-2 teaches the information processing apparatus performing the claimed control method, and therefore said combination also teaches the claimed information processing apparatus control method.
Regarding claim 13:
Claim 13 is substantially similar to Claim 5, and is therefore rejected for similar reasons. Claim 13 contains the following notable differences:
Claim 13 claims an information processing apparatus control method instead of an information processing apparatus. The combination of Furuya 2016, Bronstein, Noguchi, Kim, and Furuya 2015-2 teaches the information processing apparatus performing the claimed control method, and therefore said combination also teaches the claimed information processing apparatus control method.
Regarding claim 16:
Claim 16 is substantially similar to Claim 8, and is therefore rejected for similar reasons. Claim 16 contains the following notable differences:
Claim 16 claims an information processing apparatus control method instead of an information processing apparatus. The combination of Furuya 2016, Bronstein, Noguchi, Kim, and Furuya 2015-2 teaches the information processing apparatus performing the claimed control method, and therefore said combination also teaches the claimed information processing apparatus control method.
Regarding claim 17:
Claim 17 is substantially similar to Claim 1, and is therefore rejected for similar reasons. Claim 17 contains the following notable differences:
Claim 17 claims a non-transitory computer-readable storage medium instead of an information processing apparatus. Furuya 2016 teaches a non-transitory computer-readable storage medium (Furuya 2016: The present disclosure also provides a non-transitory computer readable medium for storing an image combining program [0017]).
Regarding claim 19:
Furuya 2016 in view of Bronstein, Noguchi, Kim and Furuya 2015-2 teaches:
The information processing apparatus according to claim 1 (as shown above), wherein the image adjustment unit adjusts any of lightness, chroma, hue, and an edge amount of the image (Furuya 2015-2: For example, the selected target image is corrected using a correction parameter that approximates the color, saturation, and lightness of the selected template image [0077]).
Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Furuya (US 20160063746 A1; from applicant’s IDS; hereinafter Furuya 2016) in view of Bronstein (US 20040075669 A1), Noguchi (US 20190371027 A1), Kim (KR 20200080577 A), Furuya (JP 2015162850 A, hereinafter Furuya 2015-2), and Furuya (US 20180268586 A1; hereinafter Furuya 2018).
Regarding claim 6:
Furuya 2016 in view of Bronstein, Noguchi, Kim, and Furuya 2015-2 teaches:
The information processing apparatus according to claim 5 (as shown above),
Furuya 2016 in view of Bronstein, Noguchi, Kim, and Furuya 2015-2 fails to teach:
wherein a difference between an impression given by the poster created by the poster creation unit and the aimed impression is not greater than a predetermined threshold.
Furuya 2018 teaches:
wherein a difference between an impression given by the poster created by the poster creation unit (Furuya 2018: With respect to each impression axis of the first impression axis Ax1 to the fifth impression axis Ax5, images having the same impression are synthesized [0066]; see Note 6A) and the aimed impression (Furuya 2018: the user designates a desired impression axis from the first impression axis Ax1 to the fifth impression axis Ax5 that are determined in advance [0083]; see Note 6A) is not greater than a predetermined threshold (Furuya 2018: The term “small difference” means that a value of the difference is equal to or smaller than a threshold value [0064]; see Note 6A).
Note 6A: Furuya 2018 teaches that the user designates a desired impression axis, thereby choosing an aimed impression for the system: “the user can designate an impression axis for which an impression value is to be determined. In the example shown in FIG. 16, the user designates a desired impression axis from the first impression axis Ax1 to the fifth impression axis Ax5 that are determined in advance.” [0083]. Furuya further teaches that the image synthesized by the system is created from a candidate image and a background image with respect to the aimed impression: (Furuya 2018: The synthesis candidate image I2 and the background image FR4 has small differences between respective impression values with respect to the first impression axis Ax1 to the fifth impression axis Ax5 [0066]). Both images have been determined to have an impression that has a small difference from the aimed impression. Furuya 2018 teaches that small difference indicates the difference is less than a threshold value: “The term “small difference” means that a value of the difference is equal to or smaller than a threshold value, or that combinations having small differences equal to or smaller than the threshold value are arranged in the order from a combination having the smallest difference, for example,” [0064]. Furuya 2018 further teaches that “With respect to each impression axis of the first impression axis Ax1 to the fifth impression axis Ax5, images having the same impression are synthesized. Thus, a user does not feel discomfort for an impression given by the synthetic image 30,” [0066] i.e., the impression of the output image should be the same as the aimed impression. 
Because the output image was synthesized from images having an impression within the range of the aimed impression, and because the output image will have the same impression value as the aimed impression, it would be obvious to one of ordinary skill in the art to synthesize an output image such that the difference between the impression given by said image and the aimed impression is not greater than a threshold value.
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Furuya 2018 with Furuya 2016 in view of Bronstein, Noguchi, Kim, and Furuya 2015-2. Having a difference between an impression given by the poster created by the poster creation unit and the aimed impression be not greater than a predetermined threshold, as in Furuya 2018, would benefit the Furuya 2016 in view of Bronstein, Noguchi, Kim, and Furuya 2015-2 teachings by ensuring that the user receives an impression from the generated image that matches the impression they desired.
Regarding claim 14:
Claim 14 is substantially similar to Claim 6, and is therefore rejected for similar reasons. Claim 14 contains the following notable differences:
Claim 14 claims an information processing apparatus control method instead of an information processing apparatus. The combination of Furuya 2016, Bronstein, Noguchi, Kim, and Furuya 2015-2 teaches the information processing apparatus performing the claimed control method, and therefore said combination also teaches the claimed information processing apparatus control method.
Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Furuya (US 20160063746 A1; from applicant’s IDS; hereinafter Furuya 2016) in view of Bronstein (US 20040075669 A1), Noguchi (US 20190371027 A1), Kim (KR 20200080577 A), Furuya (JP 2015162850 A, hereinafter Furuya 2015-2), and Furuya (WO 2015129328 A1; hereinafter Furuya 2015).
Regarding claim 7:
Furuya 2016 in view of Bronstein, Noguchi, Kim, and Furuya 2015-2 teaches:
The information processing apparatus according to claim 1 (as shown above), wherein the at least one processor further functions as: an information acquisition unit (Furuya 2016: target image analysis information acquisition device [0015]) configured to acquire information, and based on the adjusted image (Furuya 2016: Connected to the image combining server 20 is a printer 29 for printing a postcard from image data representing a composite image generated in the image combining server 20 [0049]; see Note 7A), the information (Furuya 2016: target image analysis information acquired by the target image analysis information acquisition device [0015]), and the aimed impression (Furuya 2016: the user selects a target image by touching an image that is to be combined with a template image (step 31 in FIG. 4) [0062]; Furuya 2016: the template image detection device would find, by way of example, a template image which gives the impression determined by the first target image impression determination device [0018]; see Note 7A), the poster creation unit creates the poster (Bronstein: As discussed in greater detail below, the image manipulation/selection interface 180c may be configured to allow the user of the client 109 to manipulate the images within the template, and/or to select a part of the poster for generation of a preview image [0035]; see Note 7A).
Note 7A: Furuya 2016 teaches that the impression is received from the user-selected target image, and then used to find a template to combine with said target image. Additionally, as shown previously, it would be obvious to one of ordinary skill in the art to utilize the method taught by Bronstein to create a poster.
Furuya 2016 in view of Bronstein, Noguchi, Kim, and Furuya 2015-2 fails to teach:
a character acquisition unit configured to acquire characters, and based on the adjusted image, the characters, and the aimed impression, the poster creation unit creates the poster.
Furuya 2015 teaches:
a character acquisition unit configured to acquire characters (Furuya 2015: The user selects one or more desired character images from the character selection window 131, Pg. 4, par. 3; See Fig. 19), and based on the adjusted image (Furuya 2015: corrected target image, Pg. 4, par. 13), the characters (Furuya 2015: character image, Pg. 4, par. 13; Furuya 2015: the shape (font) of the character image, and the target image may be changed to correct the impression specified by the user, Pg. 5, par. 3), and the aimed impression (Furuya 2015: the impression specified by the user, Pg. 5, par. 3), the image creation unit creates the composite image (Furuya 2015: When the target image is corrected, one or more types of photobook sample images (finished sample images, composite images) are generated using the selected template image, character image, frame image, and corrected target image, Pg. 4, par. 13).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Furuya 2015 with Furuya 2016 in view of Bronstein, Noguchi, Kim, and Furuya 2015-2. Having a character acquisition unit configured to acquire characters, and based on the adjusted image, the characters, and the aimed impression, the image creation unit creates the composite image, as in Furuya 2015, would benefit the Furuya 2016 in view of Bronstein, Noguchi, Kim, and Furuya 2015-2 teachings by enabling the system to additionally change text and select fonts to accurately match the desired impression of the user.
Regarding claim 15:
Claim 15 is substantially similar to Claim 7, and is therefore rejected for similar reasons. Claim 15 contains the following notable differences:
Claim 15 claims an information processing apparatus control method instead of an information processing apparatus. The combination of Furuya 2016, Bronstein, Noguchi, Kim, and Furuya 2015-2 teaches the information processing apparatus performing the claimed control method, and therefore said combination also teaches the claimed information processing apparatus control method.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT ALEXANDER PROVIDENCE whose telephone number is (571)270-5765. The examiner can normally be reached Monday-Thursday 8:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon can be reached on (571)270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VINCENT ALEXANDER PROVIDENCE/Examiner, Art Unit 2617 /KING Y POON/Supervisory Patent Examiner, Art Unit 2617