DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to the Applicants’ communication filed on June 20, 2022. By virtue of this communication, claims 1-10 are currently presented in the instant application.
Drawings
The drawings were submitted on June 20, 2022. These drawings have been reviewed and accepted by the examiner.
Information Disclosure Statement
The Information Disclosure Statement (IDS) Form PTO-1449, filed on June 20, 2024, is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosed therein has been considered by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-5 and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Yasutake (US 20150371447 A1) in view of KEBIS et al. (US 20170363780 A1).
Regarding claim 1. Yasutake discloses an information processing system (Yasutake, see at least par. [0072] The user devices 12, 16 can be any type of computer device configured to function as a client-side device of the system 100.) comprising:
at least one processor; a memory operably connected to the at least one processor, wherein the at least one processor is configured to (Yasutake, see at least par. [0080] FIG. 4D is a block diagram illustrating functions performed by an AR application in connection with the schematic illustrations of FIGS. 4A-4C. Instructions for such an AR application can be stored in a memory of a computer device (e.g., a mobile device, a smart phone, etc.) of a user, and performed by a processor of that computer device.);
acquire an image of a user (Yasutake, see at least par. [0074], the user takes a picture of his/her own face using, for example, a front camera of a smart phone (as shown in FIG. 2A), or selects a still picture saved in a picture gallery of a smart phone, and the user then selects the face region, as shown in step 1 of FIG. 2A;);
generate a 3D avatar from the acquired image (Yasutake, see at least sections 2-3 of par. [0074], the application program automatically generates a picture file of the face region as a foreground with transparent background, as shown in step 2 of FIG. 2A and the left figure in FIG. 2B; (3) the picture file is mapped as a texture onto the surface of the 3D AR object, as shown in step 3 of FIG. 2A and the right figure in FIG. 2B.);
receive an instruction to generate content including the 3D avatar and a virtual space (Yasutake, see pars. [0076-0078], FIG. 2E is a schematic illustration of an AR scene including a real person and an animated AR object of FIG. 2D. Such an AR scene is a scene of a AR based environment that includes a first user as a real person (e.g., the male user on the left of the scene in FIG. 2E) and a virtual object related to a second user (e.g., the female user whose face is mapped onto the AR object on the right of the scene in FIG. 2E). Similarly stated, FIG. 2E depicts the AR picture generated in FIG. 2D that includes an AR body with human face, as well as a real person, in the real world. In some embodiments, the AR body in FIG. 2D can be modified to make, for example, a pre-defined animation. In such embodiments, a user can use an animated AR creature to generate an AR video clip. In some embodiments, such a still picture based texture mapping to 3D AR body can be expanded to a whole body of an AR creature.);
generate the content in response to the instruction (Yasutake, see at least par. [0078] FIG. 4A is a schematic illustration of collecting 3D depth data of a subject (i.e., a real person) in real time in accordance with some embodiments. To be specific, FIG. 4A depicts the determination of 3D coordinates of the subject by, for example, a mobile device with a 3D depth sensor. In some embodiments, using real time depth sensing data (i.e., Z axis data) and conventional 2D pixel data (i.e., X-Y axis data) collected from, for example, a video camera, the AR application can be developed to realize a real time interaction of the real person as the subject and a virtual AR object in scenes of the AR based environment (e.g., video camera scenes of the AR based environment). [0079] FIG. 4B is a schematic illustration of measuring a distance between the subject in FIG. 4A and a 3D AR object in a scene of an AR based environment in accordance with some embodiments. Specifically, FIG. 4B depicts a scene of the AR based environment captured in a computer device (e.g., a smart phone), where the scene displays the subject (i.e., the real person) and the 3D based AR object (i.e., a tiger). Furthermore, 3D coordinates of the real person's body and 3D coordinates of the AR tiger in the scene can be computed and compared with (predefined) threshold values to activate pre-defined animation behavior of the AR tiger. FIG. 4C is a schematic illustration of an interaction between the real person and the 3D AR object in the AR based environment of FIG. 4B. As shown in FIG. 4C, the AR tiger interacts with the real person by pre-defined animation when the distance between the real person and the AR tiger is less than a (predefined) threshold value.);
convert the content into data for use in the virtual space, and data for manufacturing products for use in a real space (Yasutake, see at least par. [0080] FIG. 4D is a block diagram illustrating functions performed by an AR application in connection with the schematic illustrations of FIGS. 4A-4C. Instructions for such an AR application can be stored in a memory of a computer device (e.g., a mobile device, a smart phone, etc.) of a user, and performed by a processor of that computer device. As shown in FIG. 4D, a 3D video camera installed at the computer device (e.g., at a rear side of a mobile device) can be used to capture the light from the subject (i.e., the real person), and convert, in a real-time manner, collected raw data into 3D location data in accordance with the coordinate system of set at the computer device. The AR application can also overlay the 3D AR creature (i.e., the AR tiger) in a scene of the AR based environment (e.g., a camera view scene). The AR application can compute an estimated distance between the real person's body and the AR creature, and then activate the pre-defined animation of the AR creature if the estimated distance is less than a threshold value. As a result of the pre-defined animation being activated, the still scene is changed to a moving scene as if the AR creature is interacting with the real person, as shown in FIG. 4C.);
output data for use of the content in the virtual space (Yasutake, see at least par. [0077] FIG. 3A is a schematic illustration of still pictures of a real person in accordance with some embodiments. That is, FIG. 3A depicts 2D still picture shots of both a front view and a back view of a real person as a subject. FIG. 3B is a schematic illustration of a 3D AR object in accordance with some embodiments. FIG. 3C is a schematic illustration of mapping the still pictures of FIG. 3A onto the 3D AR object of FIG. 3B. Overall, FIGS. 3A-3C depict a mapping of the front view picture of FIG. 3A onto a front surface of the 3D AR object of FIG. 3B, and a mapping of the back view picture of FIG. 3A onto a back surface of that 3D AR object of FIG. 3B. As a result of such mappings, a photo-realistic 3D avatar model of the subject, which can be used for photo AR applications, is shown in FIG. 3C. [0078] FIG. 4A is a schematic illustration of collecting 3D depth data of a subject (i.e., a real person) in real time in accordance with some embodiments. To be specific, FIG. 4A depicts the determination of 3D coordinates of the subject by, for example, a mobile device with a 3D depth sensor. In some embodiments, using real time depth sensing data (i.e., Z axis data) and conventional 2D pixel data (i.e., X-Y axis data) collected from, for example, a video camera, the AR application can be developed to realize a real time interaction of the real person as the subject and a virtual AR object in scenes of the AR based environment (e.g., video camera scenes of the AR based environment).);
Yasutake does not disclose converting the content into data for use in the virtual space and data for manufacturing products for use in a real space, or outputting data to a device that manufactures the products for use in the real space. However,
KEBIS discloses:
convert the content into data for use in the virtual space, and data for manufacturing products for use in a real space (KEBIS, see at least par. [0069] Furthermore, the resulting lenticular product created by the lenticular sheet and the backing layer can be, for example, a lenticular card, a lenticular pamphlet, a lenticular cover, a lenticular billboard, a lenticular screen, an advertisement, a lenticular manual, a lenticular object, and so forth. In some embodiments, the lenticular product can be an educational card providing instructions and animations of specific steps.);
output data to a device that manufactures the products for use in the real space (KEBIS, see at least par. [0026] Each of the images printed on the lenticular product 100 can represent a particular state of the image from the various states required to produce the desired lenticular effect. For example, if the desired lenticular effect is to show a person running, one of the images can show the person in the initial running stance, another image can show the person after taking a first running step, and one or more images can be used to show subsequent running steps. Thus, when printed in the lenticular product 100, the images, in combination with the lenticular sheet 102, can be shown to create the visual effect of the person running according to the various running positions displayed by the images. This effect can be generated by a light source 110 transmitting light to the lenticular product 100.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and apparatus of Yasutake to convert the content into data for use in the virtual space and data for manufacturing products for use in a real space, and to output data to a device that manufactures the products for use in the real space, as provided by KEBIS. The modification provides an improved system and method for generating an image including a three-dimensional avatar and a virtual space and outputting the image to a device that manufactures products for use in a real space, thereby providing details for performing each step of the action(s) depicted by the images. In some cases, the second set of images can include subsets of images displayed over different areas of the second surface of the backing layer. For example, the second surface of the backing layer can have areas depicting various steps or stages of an action based on a subset of images and/or instructions (KEBIS, see par. [0012]).
Regarding claim 2. Yasutake in view of KEBIS discloses the information processing system according to claim 1 (as rejected above), and further discloses wherein the instructions include an instruction to change a position, posture, or movement of the 3D avatar (Yasutake, see at least par. [0102] FIG. 8B depicts the AV based environment for a secondary actor and a secondary actress located at locations different from the physical stage in FIG. 8A. In FIG. 8B, a large PC screen for a secondary performer receives and displays the real-time streaming of AR video scenes generated by the PC at the stage in FIG. 8A through a server. The secondary performer can make his or her next gesture or movement while he or she watches the live steaming scenes of the AR stage via the large PC screen. A 3D depth sensor installed at the PC screen captures the 3D body movement of the secondary performer. The captured data includes change in 3D positions of captured body and skeleton parameters to control the bone based kinetics of a 3D avatar. The captured data is then sent to the stage through the server to display the 3D avatar of the secondary performer in pixel coordinates of the stage PC screen.).
Regarding claim 4. Yasutake in view of KEBIS discloses the information processing system according to claim 1 (as rejected above), and further discloses wherein the instructions include an instruction to include in the content, at least one content component that is additional to the 3D avatar and the virtual space (Yasutake, see at least par. [0103] FIGS. 8C and 8D depict how audience can watch the performance in the AR reality at the stage. On one hand, FIG. 8C illustrates the audience watching an actual scene of the stage when the audience does not have a computer device such as a smart phone or an AR glass. In this scenario, the audience can only see the performer physically at the stage, but not the performers at other locations. On the other hand, FIG. 8D illustrates the audience watching the AR performance using a computer device such as a smart phone or an AR glass. In this scenario, the stage PC can generate and upload a real-time video streaming of AR scenes in the stage through the server. Each audience can download and enjoy the live video scene of performance using, for example, an AR glass or a mobile device. The AR application program captures the AR markers and overlays the 3D AR avatars in the screen of the computer device.).
Regarding claim 5. Yasutake in view of KEBIS discloses the information processing system according to claim 4 (as rejected above), and further discloses wherein the additional content component includes background images, objects, or a 3D avatar of another user (Yasutake, see at least par. [0100] FIGS. 8A-8F are schematic illustrations of generating a hybrid reality environment for performance art in accordance with some embodiments. Such a hybrid reality environment provides audience with a mixed scene of real actors/actresses and AR avatars on an AR stage.).
Regarding claim 9. The information processing system of claim 1 performs the same steps as those recited in claim 9. Therefore, claim 9 is rejected based on the same rationale as claim 1 set forth above and incorporated herein.
Regarding claim 10. Yasutake discloses a computer-readable non-transitory storage medium for storing a program (Yasutake, see par. [0013] In some embodiments, a server device includes one or more processors and memory storing one or more programs for execution by the one or more processors. The one or more programs include instructions that cause the server device to perform the method for generating a hybrid reality environment of real and virtual objects as described above. In some embodiments, a non-transitory computer readable storage medium of a server device stores one or more programs including instructions for execution by one or more processors. The instructions, when executed by the one or more processors, cause the processors to perform the method of generating a hybrid reality environment of real and virtual objects as described above.) that causes a computer to execute a process comprising the same steps as those of claim 1. Therefore, claim 10 is rejected based on the same rationale as claim 1 set forth above and incorporated herein.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Yasutake (US 20150371447 A1) in view of KEBIS et al. (US 20170363780 A1) as applied to claim 1 above, and further in view of Baba et al. (US 20220323862 A1).
Regarding claim 3. Yasutake in view of KEBIS discloses the information processing system according to claim 1 (as rejected above), but Yasutake in view of KEBIS does not disclose wherein the instructions include an instruction to change a viewpoint in the virtual space. However,
Baba discloses:
wherein the instructions include an instruction to change a viewpoint in the virtual space (Baba, see at least par. [0286] Further, a method of changing the position, direction, and the like of the virtual camera 620B may be selected by the user. For example, a plurality of types of viewing modes are provided as viewing modes that can be selected by the user terminal 100. When a TV mode (third viewing mode) is selected in the user terminal 100, the position, direction, and the like of the virtual camera 620B may be changed in cooperation with the camera object 630 whose position, direction, and the like are changed in response to the operation of the switcher by the operator or the performer, when a normal mode (first viewing mode) is selected in the user terminal 100, the position, direction, and the like of the virtual camera 620B may be changed in response to the swipe operation by the user, and when an AR mode (second viewing mode) is selected, the space (hereinafter, also referred to as an augmented reality space) where the avatar object 610 is arranged with respect to the image acquired by the camera 17 is generated, the virtual camera 620B is arranged in the augmented reality space, and the position, direction, and the like of the virtual camera 620B may be changed in accordance with the change in the position, direction, and the like of the camera 17. Thereby, the viewpoint of the image displayed on the touch screen 15 can be changed depending on the user's preference.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system and method of Yasutake to have wherein the instructions include an instruction to change a viewpoint in the virtual space, as provided by Baba. The modification provides an improved system and method for generating an image including a three-dimensional avatar and a virtual space and outputting the image to a device that manufactures products for use in a real space, and allows the viewpoint of the displayed image to be changed depending on the user's preference (Baba, see par. [0286]).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Yasutake (US 20150371447 A1) in view of KEBIS et al. (US 20170363780 A1) as applied to claim 1 above, and further in view of Yumoto et al. (US 20140049829 A1).
Regarding claim 6. Yasutake in view of KEBIS discloses the information processing system according to claim 1 (as rejected above), but Yasutake in view of KEBIS does not disclose wherein the manufactured product includes a lenticular sheet on which an image showing the generated content is formed, and the at least one processor is further configured to output to the device the data for forming on the lenticular sheet the image showing the generated content. However,
Yumoto discloses:
wherein the manufactured product includes a lenticular sheet on which an image showing the generated content is formed, and the at least one processor is further configured to output to the device the data for forming on the lenticular sheet the image showing the generated content (Yumoto, see at least par. [0175] The lenticular sheet 61 is composed of a plurality of cylindrical lenses 61a arranged side by side. The image forming medium 62 is disposed to a side of the lenticular sheet 61 on which convex-shape of cylindrical lenses is not formed, and the image forming layer 63 is formed on the lenticular sheet 61 side. The image forming layer 63 is a layer in which images 63a of picture pattern or letter as images for observing a virtual image are printed or transferred. The image forming layer 63 is provided for the image forming medium 62 on the lenticular sheet 61 side.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and apparatus of Yasutake in view of KEBIS so that the manufactured product includes a lenticular sheet on which an image showing the generated content is formed and the data for forming that image on the lenticular sheet is output to the device, as provided by Yumoto. The modification provides an improved system and method for generating an image including a three-dimensional avatar and a virtual space and outputting the image to a device that manufactures products for use in a real space, thereby providing an image display sheet capable of realizing a smooth pseudo moving image (dynamic image) that can be observed with reduced discomfort (Yumoto, see par. [0013]).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Yasutake (US 20150371447 A1) in view of KEBIS et al. (US 20170363780 A1) as applied to claim 1 above, and further in view of BIN-NUN et al. (US 20150301234 A1).
Regarding claim 7. Yasutake in view of KEBIS discloses the information processing system according to claim 1 (as rejected above), but Yasutake in view of KEBIS does not disclose wherein the at least one processor is further configured to output to the device data to form on the manufactured product image code for accessing the content in the virtual space. However,
BIN-NUN discloses:
wherein the at least one processor is further configured to output to the device data to form on the manufactured product image code for accessing the content in the virtual space (BIN-NUN, see at least par. [0051] Reference is now made to FIG. 2, which is a flowchart 200 of a method for a production of a duplex lenticular article in a process in which a lenticular printing substrate is printed with a back image, a substantially nontransparent layer, and an interlaced color image, by a single impression operation of a printing blanket of a printing press, according to some embodiments of the present invention. 101-106 are optionally as described with reference to FIG. 1; however, in this FIG. 2 a process of loading a back image layer to the printing blanket 201 is described. The added back image is designed to be viewed from the back of the generated lenticular article (i.e. not the corrugated side). The back image may be a trademark, visual data, textual data, a barcode, such as a QR code, and/or the like. For example, in the case of a lenticular article which is used as business cards, the back image layer may include contact details.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system and method of Yasutake to have wherein the at least one processor is further configured to output to the device data to form on the manufactured product image code for accessing the content in the virtual space, as provided by BIN-NUN. The modification provides an improved system and method for generating an image including a three-dimensional avatar and a virtual space and outputting the image to a device that manufactures products for use in a real space, thereby enhancing lenticular printing processes and lenticular printing articles (BIN-NUN, see par. [0001]).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Yasutake (US 20150371447 A1) in view of KEBIS et al. (US 20170363780 A1) as applied to claim 1 above, and further in view of Nims et al. (US 20160227185 A1).
Regarding claim 8. Yasutake in view of KEBIS discloses the information processing system according to claim 1, but Yasutake in view of KEBIS does not disclose wherein the at least one processor is further configured to modify the content in accordance with error information for the manufactured product, the error information being input from the device. However,
Nims discloses:
wherein the at least one processor is further configured to modify the content in accordance with error information for the manufactured product, the error information being input from the device (Nims, see at least par. [0039] Yet another feature of the digital multi-dimensional photon image platform system and methods of use is to utilize a systematic approach for digital multi-dimensional image creation with inputs, calculations, and selections to simplify development of a high quality master digital multi-dimensional image, which controls manufacturing errors and reduces cross talk and distortion to provide a digital multi-dimensional image without jumping images or fuzzy features in the form of a printed hardcopy or as a viewed digital multi-dimensional image on an appropriate viewing device.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system and method of Yasutake to have wherein the at least one processor is further configured to modify the content in accordance with error information for the manufactured product, the error information being input from the device, as provided by Nims. The modification provides an improved system and method for generating an image including a three-dimensional avatar and a virtual space and outputting the image to a device that manufactures products for use in a real space, and produces consistently high quality three-dimensional and other multi-dimensional images (Nims, see par. [0002]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KIM THANH THI TRAN whose telephone number is (571)270-1408. The examiner can normally be reached Monday-Friday 8:00am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ALICIA HARRINGTON, can be reached at 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KIM THANH T TRAN/Examiner, Art Unit 2615
/JAMES A THOMPSON/Primary Examiner, Art Unit 2615