DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments filed 12/4/2025 with respect to claims 1-25 have been fully considered but are moot in view of the new ground(s) of rejection.
Claim Objections
Claim 4 is objected to because of the following informalities:
At line 3, the Examiner believes “a machine learning model” should be replaced with “a second machine learning model”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 2, 7, 20, 24 and 25 is/are rejected under 35 U.S.C. 103 as being unpatentable over Corazza et al. (PGPUB Document No. US 2016/0027200) in view of Zhang et al. (PGPUB Document No. US 2023/0230198), and further in view of Borovikov et al. (PGPUB Document No. US 2020/0312003).
Regarding claim 1, Corazza teaches an information processing system, comprising:
processing circuitry configured to acquire an input given by a user (input for searching parts in the search field 520 (Corazza: 0079, FIG.5));
acquire a specific image obtained by inputting the given input to the first machine learning model (the search result(s) as a result of interacting with the search field 520);
and associate the specific image with a specific item usable in a virtual space (the resulting customized character as taught by Corazza, wherein the Examiner submits the character is usable in virtual space (Corazza: 0079-0080, Abstract)).
However, Corazza does not expressly teach acquiring a 2D specific image newly generated using a machine learning model constructed by artificial intelligence.
Corazza’s device differed from the claimed process by the substitution of the step of using a text search to find parts for modifying/customizing a character.
Zhang teaches the concept of a generative neural network that performs text-to-image generation and text-guided image modification (Zhang: Abstract, 0052, 0021). Applying the teachings of Zhang to Corazza enables utilizing text-to-image generation when searching for parts (search field 520 of Corazza) to modify/create the character of Corazza. Further, note that the generated image is a newly generated 2D image based on user input (see acts 206 and 208 of Zhang as disclosed in para 0049-0051).
Therefore, Zhang teaches the substituted step of utilizing a generative neural network that performs text-to-image generation.
The functions of searching for and generating images, as taught by Corazza and Zhang, were known in the art to effectively produce images in response to user input.
Corazza’s steps of searching for parts using text could have been substituted with the steps of text-to-image generation taught by Zhang.
The results would have been predictable and resulted in equally searching for parts for modifying/creating a character. Therefore, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Further, the combined teachings above do not expressly teach, but Borovikov teaches,
associating the 2D specific image with a 3D specific item usable in virtual space by applying the 2D specific image as a texture to a surface of the 3D specific item (Borovikov teaches the concept of extracting custom texture data from an input image/video (step 406) and applying it to a model (step 408) (Borovikov: 0076, 0078). Applying the teachings of Borovikov to the combined teachings above enables the user to extract texture data from the user input and apply it to the character that is being customized).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above so as to customize the character in the manner taught by Borovikov, because this enables an added variety of effects when customizing characters.
Regarding claim 2, the combined teachings teach the information processing system according to Claim 1, wherein the 3D specific item includes at least one of an item associated with an avatar in the virtual space and an item placed in the virtual space (3D character (Corazza: 0076, FIG.5) utilized in 3D animated content (Corazza: 0005)).
Regarding claim 7, the combined teachings teach the information processing system according to Claim 1, wherein the given input includes at least one of a text (text input in the search field 520 (Corazza: 0079, FIG.5)), a symbol, a pictogram, a digit figure, a color, a texture, an image, a sound, a gesture, and a combination of any two or more of these.
Regarding claim 20, the combined teachings teach the information processing system according to Claim 1, wherein the processing circuitry is further configured to manage, in association with the given input, the 2D specific image or the 3D specific item with which the specific image is associated (the claim appears to recite a processor managing the specific image. The limitation “manage” is broad. Therefore, under the broadest reasonable interpretation, the Examiner construes the processors (Corazza: 0140) handling any data associated with the resulting image of Corazza (Corazza: 0079-0080, Abstract) as corresponding to managing said image.).
Claim(s) 24 is/are the corresponding method claim(s) of claim(s) 1. The limitations of claim(s) 24 are substantially similar to the limitations of claim(s) 1. Therefore, claim(s) 24 has been analyzed and rejected in a manner substantially similar to claim(s) 1.
Claim(s) 25 is/are the corresponding computer readable medium claim(s) of claim(s) 1. The limitations of claim(s) 25 are substantially similar to the limitations of claim(s) 1. Therefore, claim(s) 25 has been analyzed and rejected in a manner substantially similar to claim(s) 1. Note, the combined teachings teach a computer readable medium as presently claimed (Corazza: 0142).
Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Corazza in view of Zhang as applied to the claim(s) above, and further in view of Song et al. (US Patent No. 12198451).
Regarding claim 3, the combined teachings do not expressly teach but Song teaches the information processing system according to Claim 1, wherein the processing circuitry is further configured to acquire, using a second machine learning model constructed by artificial intelligence (generative adversarial network (GAN) (Song: col.5, line 42-53)), shape information of the 3D specific item obtained by inputting the given input to the second machine learning model, and shape the 3D specific item based on the shape information (identifying a shape of a one-piece dress from text “one-piece dress” (Song: col.4, line 50-58)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above so as to utilize the GAN of Song to further factor in shapes identified from the user input when generating images, because this enables an increased level of accuracy when generating images from text.
Claim(s) 4-6, 9-12 and 16-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Corazza in view of Zhang as applied to the claim(s) above, and further in view of Liew et al. (PGPUB Document No. US 2024/0144544).
Regarding claim 4, the combined teachings do not expressly teach but Liew teaches the information processing system according to Claim 1, wherein the processing circuitry is further configured to acquire, using a machine learning model, item surface information obtained by inputting the given input to the third machine learning model (the output of the image to text diffusion generative model comprising layout information such as texture (Liew: 0040)), and set or change at least one of a pattern, a fabric, a decoration, and a texture of the specific item based on the item surface information (an input texture setting that of another input (Liew: 0040)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above to utilize the teachings of Liew, because this enables an effective method for synthesizing multiple inputs.
Regarding claim 5, the combined teachings teach the information processing system according to Claim 3, wherein the 3D specific item includes at least one of an item associated with an avatar in the virtual space and an item placed in the virtual space (3D character (Corazza: 0076, FIG.5) utilized in 3D animated content (Corazza: 0005)), and the processing circuitry is further configured to edit the shape information based on information on the avatar to be associated with the 3D specific item or information on the space in which the 3D specific item is to be placed (as stated in the rejection to claim 4 above, utilizing the teachings of Liew enables applying the texture of another input to the character of Corazza (Liew: 0040)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above to utilize the teachings of Liew, because this enables an effective method for synthesizing multiple inputs.
Regarding claim 6, the combined teachings teach the information processing system according to Claim 4, wherein the 3D specific item includes an item associated with an avatar in the virtual space, and the processing circuitry is further configured to edit the item surface information based on information on the avatar with which the 3D specific item is to be associated (as implied by the object synthesis teaching of Liew in para 40, the teachings of Liew enable the texture of the character to be applied to the surface of another item).
Regarding claim 9, the combined teachings do not expressly teach but Liew teaches the information processing system according to Claim 1, wherein the processing circuitry simultaneously associates the 2D specific image or a derivative image obtained by changing a part of the 2D specific image with the 2D specific items related to a plurality of avatars (as stated in the rejection to claim 4 above, utilizing the teachings of Liew enables applying the texture of another input to the character of Corazza (Liew: 0040). Therefore, applying the texture of one character/avatar to another character corresponds to claim 9).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above to utilize the teachings of Liew, because this enables an effective method for synthesizing multiple inputs.
Regarding claim 10, the combined teachings do not expressly teach but Liew teaches the information processing system according to Claim 1, wherein the processing circuitry is further configured to calculate a value of a specific parameter related to similarity between the 3D specific item with which the 2D specific image is associated and another item that is usable in the virtual space (the value of k (“specific parameter”) corresponds to the number of denoising operations based on a similarity value, wherein the similarity value that is related to the number k corresponds to the similarity between a first input object and a second input object (Liew: 0053-0054)), and output the value of the specific parameter (the calculated value of k used (output) in determining the number of denoising operations (Liew: 0053)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above so as to utilize the similarity calculation of Liew, because this enables efficient operation iterations (Liew: 0053, 0055).
Regarding claim 11, the combined teachings teach the information processing system according to Claim 10, wherein the processing circuitry calculates the value of the specific parameter related to the one 3D specific item with which the 2D specific image is associated, based on the 2D specific image associated with the one specific item and the given input used to acquire the 2D specific image (the calculated value of k that is based on the similarity value, wherein the similarity value is based on the input (Liew: 0053-0054)).
Regarding claim 12, the combined teachings teach the information processing system according to Claim 11, wherein the processing circuitry calculates the value of the specific parameter related to the one 3D specific item with which the 2D specific image is associated further based on an attribute of the one 3D specific item (the example given by Liew in para 55 demonstrates the types (attribute) of objects determining the similarity value, wherein said similarity value determines the value of k).
Regarding claim 16, the combined teachings teach the information processing system according to Claim 10, wherein the processing circuitry is further configured to calculate a value of a second parameter related to similarity between the given input used to acquire the 2D specific image and another input corresponding to the given input and used to generate an image associated with the other item (the similarity value from which the value of k is based on corresponds to the second parameter (Liew: 0053)), and the value of the specific parameter includes the value of the second parameter or a value based on the value of the second parameter (k (specific parameter) is based on the similarity value (second parameter) (Liew: 0053)).
Regarding claim 17, the combined teachings teach the information processing system according to Claim 16, wherein the processing circuitry calculates the value of the second parameter based on a relationship between a text included in the given input and a text included in the other input (similarity value is based on the similarity between two text inputs (Liew: 0054)).
Regarding claim 18, the combined teachings teach the information processing system according to Claim 16, wherein the given input and the other input each further include a random seed value, and the processing circuitry calculates the value of the second parameter based on a relationship between the seed value included in the given input and the seed value included in the other input (k as described in the rejection above is further based on semantic similarity value N (Liew: 0055)).
Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Corazza in view of Zhang as applied to the claim(s) above, and further in view of Tanwer et al. (PGPUB Document No. US 2020/0402307).
Regarding claim 8, the combined teachings do not expressly teach but Tanwer teaches the information processing system according to Claim 1, wherein the processing circuitry is further configured to collect evaluation results from a plurality of users in the virtual space with respect to the 3D specific item with which the 2D specific image is associated, or an avatar with which the 3D specific item is associated (Tanwer teaches the concept of collecting votes on an image dataset from multiple users to determine fashionability (Tanwer: 0068). Note, pg.6 of the Applicant’s specification utilizes the collected evaluation results in an image ranking contest similar to the voting of Tanwer).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to apply the image voting teaching of Tanwer to the generated image of the combined teachings above, because this adds functionality to the image generating system of the combined teachings above.
Claim(s) 21 and 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Corazza in view of Zhang as applied to the claim(s) above, and further in view of Brager et al. (US Patent No. 11861528).
Regarding claim 21, the combined teachings do not expressly teach, but Brager teaches, the information processing system according to Claim 1, wherein the processing circuitry is further configured to determine whether the given input satisfies a predetermined condition, and in a case where the given input satisfies the predetermined condition, prohibit or restrict use or distribution in the virtual space of the 2D specific image acquired based on the given input (an infringement detection system by analyzing images (Brager: abstract, col.5, line 14-32)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above so as to detect potential infringement utilizing the teaching of Brager, because this enables generating images that are free of infringement.
Regarding claim 22, the combined teachings do not expressly teach but Brager teaches the information processing system according to Claim 21, wherein the predetermined condition is satisfied when a possibility of infringing another person's intellectual property right is equal to or higher than a predetermined threshold value or when a possibility of violating public order or morality is equal to or higher than a predetermined threshold value (determining a degree of similarity to determine a tight fit between two images (Brager: col.5, line 23-32)).
Claim(s) 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Corazza in view of Zhang as applied to the claim(s) above, and further in view of Goncalves (PGPUB Document No. US 2023/0077278).
Regarding claim 23, the combined teachings do not expressly teach but Goncalves teaches the information processing system according to Claim 1, wherein the processing circuitry is further configured to issue and manage a non-fungible token based on the 2D specific image or the 3D specific item with which the specific image is associated (Goncalves teaches the concept of generating NFTs from images (Goncalves: 0052)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above so as to add the ability to generate NFTs from the images of the combined teachings above, because this enables the added option to certify ownership of unique virtual items such as digital art.
Allowable Subject Matter
Claims 13-15 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to David H Chu whose telephone number is (571)272-8079. The examiner can normally be reached M-F: 9:30 - 1:30pm, 3:30-8:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel F Hajnik can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID H CHU/Primary Examiner, Art Unit 2616