Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Response to Amendment
This is in response to applicant’s amendment/response filed on 11/28/2025, which has been entered and made of record. Claims 1 and 15 have been amended. No claims have been cancelled or added. Claims 1-20 are pending in the application.
Response to Arguments
Applicant's arguments filed on 11/28/2025 regarding the rejection of the claims under 35 U.S.C. § 103 have been fully considered, but they are not persuasive.
Applicant submits “the superimposed countdown of Sakakibara is different from the claimed "prompt information". Sakakibara discloses that the countdown is superimposed on the screen displayed on the VR goggles 21 worn by each performer to notify the performer before shooting begins. In contrast, the claimed "prompt information" represents at least an instruction for a user to perform. The countdown in Sakakibara merely provides a temporal notice and does not convey any instruction for the performer to act. Thus, Sakakibara does not disclose the feature "a first display device, configured to display prompt information and virtual interactive object information, wherein the prompt information represents at least an instruction for a user to perform" as recited in claim 1.” (Remarks, Page 8.)
The examiner respectfully disagrees with Applicant’s premises and conclusion. The claim term at issue is “the prompt information represents at least an instruction for a user to perform”. Applicant’s argument addresses only the countdown notice and overlooks Sakakibara’s further disclosures regarding prompt information, including “an instruction for a user to perform”. In ¶0054, Sakakibara teaches “the director device 10 transmits information on the player character associated with the respective performer devices 20 to the performer devices 20. The information transmitted to the performer devices 20 includes at least the name of the player character. In addition, when a script related to the player character is prepared in advance, information on the script may also be included.” In ¶0055, Sakakibara teaches “After that, when the director gives an instruction to start shooting by a predetermined input operation”. In ¶0080, Sakakibara teaches “the director causes the player character to act in the virtual space by having the actor act according to the preset script and selects a position”. In ¶0071, Sakakibara teaches “The director prompts the performers to act their characters according to the content of the setting for each scene. When the director and the performers are at the same place, the director notifies the performers of the start and end of shooting by uttering a voice for the start and end of shooting.” and “the performers may be notified by a voice emitted by the director for shooting start or shooting end through a microphone worn by the director, and the instruction may be heard through an earphone or the like worn by the performers.” These passages are sufficient to teach “wherein the prompt information represents at least an instruction for a user to perform” because any instruction from the director to a performer is “an instruction for a user to perform”, and the cited passages provide multiple examples of such instructions.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sakakibara et al. (US Pub 2025/0056101 A1) in view of Ishikawa (US Pub 2024/0406338 A1).
As to claim 1, Sakakibara discloses an image shooting system, comprising:
a first display device, configured to display prompt information and virtual interactive object information, wherein the prompt information represents at least an instruction for a user to perform (Fig. 1, Fig. 3-Fig. 4, ¶0004, “Using CG can set a virtual space simulating an internal space of any shooting studio. In addition, using CG allows any character to appear in the virtual space.” ¶0054, “the director device 10 transmits information on the player character associated with the respective performer devices 20 to the performer devices 20. The information transmitted to the performer devices 20 includes at least the name of the player character. In addition, when a script related to the player character is prepared in advance, information on the script may also be included.”, ¶0055, “After that, when the director gives an instruction to start shooting by a predetermined input operation”. ¶0080, “the director causes the player character to act in the virtual space by having the actor act according to the preset script and selects a position”. ¶0071, “The director prompts the performers to act their characters according to the content of the setting for each scene. When the director and the performers are at the same place, the director notifies the performers of the start and end of shooting by uttering a voice for the start and end of shooting. When the director and the performers are at different places, the performers may be notified by, for example, superimposing a countdown before shooting start and a sign of shooting end on a screen displayed on the VR goggles 21 worn by each performer in response to a predetermined input operation by the director. Alternatively, the performers may be notified by a voice emitted by the director for shooting start or shooting end through a microphone worn by the director, and the instruction may be heard through an earphone or the like worn by the performers.”);
an image shooting device, configured to shoot a user image (¶0059, “The positions at which the virtual cameras are disposed include a position (entire view position) at which the entire view of the virtual space is captured and a position (character-capturing position) at which an individual character (player character and non-player character) is captured in a close-up manner. As illustrated in FIG. 4, the character-capturing position is a position at which a target character is captured at a predetermined angle of view by a virtual camera 6, and the character-capturing position changes as the character moves.” ¶0060, “the following portions are displayed in the virtual camera image display section 42: an entire view image display portion 421 for displaying an image in which the entire view of the virtual space is captured with a wide-angle lens from the entire view position; a standard image display portion 422 for displaying an image in which the virtual space (for example, a space including a center position in the scene or a space in which the characters are mainly located in the scene, such as a sofa in the example of FIG. 3) is captured at a predetermined angle of view with a standard lens from the entire view position; character image display portions 423 to 427 (character image display portions 423 to 426 that display images of player characters a, b, c, and d, and a character image display portion 427 that displays an image of a non-player character A) for displaying images in which the characters (player character and non-player character) are captured from their respective character-capturing positions; and image shot display portions 428 and 429 (an insert image 428 and an insert video 429) for displaying image shots. Note that the number of the character image display portions 423 to 427 is appropriately changed according to the number of player characters and non-player characters appearing in the scene and the number of character-capturing positions of each character. In addition, the number of the image shot display portions 428 and 429 can also be appropriately changed according to the number of images and videos required for the scene. Although the virtual camera image display section 42 displays an original image acquired at each position, it may display, instead of displaying the original image, information (image information, text information, and the like) that can discriminate the shooting position and the shooting target at the respective shooting positions.”); and
a control device, coupled to the first display device and the image shooting device (Fig. 2, ¶0044, “a main configuration of the director device 10 and the performer device 20.”), wherein the control device is configured to:
provide the prompt information and the virtual interactive object information to the first display device (Fig. 1, Fig. 3-Fig. 4, ¶0004, “Using CG can set a virtual space simulating an internal space of any shooting studio. In addition, using CG allows any character to appear in the virtual space.” ¶0071, “The director prompts the performers to act their characters according to the content of the setting for each scene. When the director and the performers are at the same place, the director notifies the performers of the start and end of shooting by uttering a voice for the start and end of shooting. When the director and the performers are at different places, the performers may be notified by, for example, superimposing a countdown before shooting start and a sign of shooting end on a screen displayed on the VR goggles 21 worn by each performer in response to a predetermined input operation by the director. Alternatively, the performers may be notified by a voice emitted by the director for shooting start or shooting end through a microphone worn by the director, and the instruction may be heard through an earphone or the like worn by the performers.”);
perform a spatial positioning on a virtual space and a user space of a user to generate spatial positioning information (¶0042, “A theatrical-role-play-recording video creation system of the present embodiment is used to shoot a movie by causing a character imitating a real actor to act in a virtual space simulating a real space.” ¶0064, “The performers such as actors who move the player characters wear the VR goggles 21 and wear the sensors 221 at a plurality of predetermined positions of their body during shooting. During shooting, the character action controller 124 of the director device 10 transmits, to the performer devices 20, data of an image of the virtual space viewed from the viewpoint of the respective player characters associated with the corresponding performer devices 20 including the VR goggles 21 and causes the VR goggles 21 to display the image. Each performer can act with more immersed feelings by checking the virtual space through the VR goggles 21.” ¶0066, “During shooting, when the performer moves the body, the motion detector 222 detects the motion of each of the plurality of sensors 221 attached to the body of the performer, and transmits movement information of each sensor to the director device 10. In the director device 10, the character action controller 124 causes the player character to move in the virtual space on the basis of the received information. In addition, the character action controller 124 updates the information on the field of view of the player character on the basis of the position and the direction of the line of sight of the player character after the movement, and transmits the updated information to the performer device 20.”); and
combine a virtual image and the user image to generate an output image according to the spatial positioning information (¶0046, “The scene display data stored in the scene display data storage section 111 may include a plurality of pieces of display data related to the same space. For example, the scene display data may include display data of a temple during the day and a temple during the night, a classroom of a school in which a teacher and students (what is called supporting characters who are other than a character performed by a performer) are present and a classroom of a school in which no student is present, a landscape in which there is no person and a crowd scene (a landscape including supporting characters), and the like. Regarding the presence or absence of supporting characters, the display data of the virtual space including supporting characters and the display data of the virtual space not including supporting characters may be stored as independent display data, or the display data of a virtual space not including supporting characters and the display data of only the supporting characters may be stored and when the use of the latter is selected, both may be displayed in a superimposed manner. In addition, the display data of the supporting characters may be a still image or may have a predetermined motion (for example, students talking and laughing in a classroom, a large number of people walking on a crosswalk, and the like). In addition, the supporting characters in the present embodiment are not limited to humans or animals, and may include, for example, the sun, the stars, the moon, and the like moving at a constant speed in the sky.” Fig. 4, ¶0066, “In the director device 10, the character action controller 124 causes the player character to move in the virtual space on the basis of the received information. In addition, the character action controller 124 updates the information on the field of view of the player character on the basis of the position and the direction of the line of sight of the player character after the movement, and transmits the updated information to the performer device 20. Furthermore, in response to a predetermined input operation through the input unit 13, the character action controller 124 causes the non-player character to move in the virtual space.” ¶0068, “When each performer instructs the start of acting (action of the player character) by a predetermined input operation to the performer device 20, the character action controller 124 reads the display data of the player character from the character display data storage section 112 and causes the player character to be displayed at a predetermined position in the current scene. After that, when the performer starts acting, the motion detector 222 detects the motion of the sensors attached to the body of the performer and transmits motion data to the director device 10, and the character action controller 124 causes the character to move in the virtual space on the basis of the motion data. In addition, when the director or the assistant instructs the non-player character to appear by a predetermined input operation through the input unit 13, the character action controller 124 reads the display data of the non-player character from the character display data storage section 112, and causes the non-player character to appear in the current scene.”).
Examiner believes Sakakibara discloses every limitation of the claim under the broadest reasonable interpretation of the claim features. However, for the purpose of compact prosecution, Examiner also relies on Ishikawa to teach a user image which is a physical image of the user (this feature is not recited in the claim but is described in applicant’s specification).
Ishikawa teaches a user image which is a physical image of the user (Ishikawa, Fig. 1 to Fig. 3, ¶0071, “The camera 502 collectively captures the performer 510 in the performance area 501 and the video displayed on the LED wall 505.”).
Sakakibara and Ishikawa are considered to be analogous art because both pertain to video processing technology. It would have been obvious before the effective filing date of the claimed invention to have modified Sakakibara with the feature of “a user image which is a physical image of the user” as taught by Ishikawa. The suggestion/motivation would have been to perform at least processing of rendering the 3D model on the basis of the relative position information between the display device and the terminal device (Ishikawa, ¶0010). Moreover, the claim would have been obvious because a particular known technique was recognized as part of the ordinary capabilities of one skilled in the art.
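For illustration only, the following Python sketch shows one way the limitation “combine a virtual image and the user image to generate an output image according to the spatial positioning information” could be realized. Neither Sakakibara nor Ishikawa discloses source code; the function name, the RGBA/RGB image conventions, and the pixel-offset mapping derived from the spatial positioning information are assumptions introduced solely for this sketch.

```python
# Illustrative sketch only; not disclosed by either reference.
# composite_frame and its parameters are hypothetical names.
import numpy as np

def composite_frame(virtual_rgba: np.ndarray,
                    user_rgb: np.ndarray,
                    offset: tuple[int, int]) -> np.ndarray:
    """Alpha-blend a rendered virtual image (H x W x 4, RGBA) onto a captured
    user image (RGB) at a pixel offset assumed to be derived from the spatial
    positioning information relating the virtual space to the user space."""
    out = user_rgb.astype(np.float32).copy()
    h, w = virtual_rgba.shape[:2]
    y, x = offset  # placement of the virtual image within the user image
    alpha = virtual_rgba[:, :, 3:4].astype(np.float32) / 255.0
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = (alpha * virtual_rgba[:, :, :3].astype(np.float32)
                             + (1.0 - alpha) * region)
    return out.astype(np.uint8)
```

Under these assumptions, the output image is simply the user image with the positioned virtual image alpha-blended on top, consistent with the compositing described by the combination.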
As to claim 2, claim 1 is incorporated and the combination of Sakakibara and Ishikawa discloses the prompt information comprises at least one of body movement information of the user, lines of the user, and movement prompt information of the user (Sakakibara, ¶0071, “The director prompts the performers to act their characters according to the content of the setting for each scene. When the director and the performers are at the same place, the director notifies the performers of the start and end of shooting by uttering a voice for the start and end of shooting. When the director and the performers are at different places, the performers may be notified by, for example, superimposing a countdown before shooting start and a sign of shooting end on a screen displayed on the VR goggles 21 worn by each performer in response to a predetermined input operation by the director. Alternatively, the performers may be notified by a voice emitted by the director for shooting start or shooting end through a microphone worn by the director, and the instruction may be heard through an earphone or the like worn by the performers.”).
As to claim 3, claim 1 is incorporated and the combination of Sakakibara and Ishikawa discloses the control device comprises: a first host, coupled to the first display device, wherein the first host is configured to store the prompt information and the interactive object information, and provides the prompt information and the interactive object information to the first display device (Sakakibara, ¶0071, “The director prompts the performers to act their characters according to the content of the setting for each scene. When the director and the performers are at the same place, the director notifies the performers of the start and end of shooting by uttering a voice for the start and end of shooting. When the director and the performers are at different places, the performers may be notified by, for example, superimposing a countdown before shooting start and a sign of shooting end on a screen displayed on the VR goggles 21 worn by each performer in response to a predetermined input operation by the director. Alternatively, the performers may be notified by a voice emitted by the director for shooting start or shooting end through a microphone worn by the director, and the instruction may be heard through an earphone or the like worn by the performers.” ¶0068, “When each performer instructs the start of acting (action of the player character) by a predetermined input operation to the performer device 20, the character action controller 124 reads the display data of the player character from the character display data storage section 112 and causes the player character to be displayed at a predetermined position in the current scene. After that, when the performer starts acting, the motion detector 222 detects the motion of the sensors attached to the body of the performer and transmits motion data to the director device 10, and the character action controller 124 causes the character to move in the virtual space on the basis of the motion data. In addition, when the director or the assistant instructs the non-player character to appear by a predetermined input operation through the input unit 13, the character action controller 124 reads the display data of the non-player character from the character display data storage section 112, and causes the non-player character to appear in the current scene.”).
As to claim 4, claim 3 is incorporated and the combination of Sakakibara and Ishikawa discloses the control device further comprises: a second host, coupled to the first host, and configured to combine the virtual image and the user image to generate the output image through an image rendering operation (Sakakibara, Fig. 1 and Fig. 2, ¶0080, “In the video creation system 1 of the present embodiment, the director causes the player character to act in the virtual space by having the actor act according to the preset script and selects a position (entire view position or character-capturing position) at which the action of the character is most effectively captured at each point in time. By only doing this, the director can easily create a video in which an image acquired by the virtual camera disposed at the position is sequentially recorded.” ¶0086, “Some or all of the functions of the storage unit and the functional blocks included in the director device 10 of the above embodiment may be provided in another computer (for example, a cloud server) connectable to the director device 10 and the performer device 20.”); and
a central controller, coupled among the first host, the second host, and the image shooting device, and configured to control operations of each of the first host, the second host, and the image shooting device (Sakakibara, Fig. 1 and Fig. 2).
As to claim 5, claim 4 is incorporated and the combination of Sakakibara and Ishikawa discloses the second host further combines at least one virtual object, the virtual image, and the user image to generate the output image (Sakakibara, ¶0082, “an existing facility such as a school, a hospital, or a hotel is set as the virtual space, the persons involved in the facility are set as player characters and made to wear the performer devices 20, and students of the school, patients of the hospital, guests of the hotel, or the like are set as non-player characters. With these settings, it is possible to perform evacuation drill assuming a case where a fire, an earthquake, or the like occurs in the facility. In that case, the performers are not informed of the script or the scenario in advance. The director plays a sound effect or causes an unexpected event (explosion of combustible gas, spread of fire in the space) to occur in the virtual space at an appropriate timing, which allows the persons involved to perform the evacuation drill with realistic feeling or sense of urgency. Then, by selecting an image in which a player character performed by a person who has taken a marked action in the evacuation drill is captured as a shooting video, it is possible to confirm items to be noted at the time of the evacuation drill or use the shooting video as an educational video.” ¶0086, “Some or all of the functions of the storage unit and the functional blocks included in the director device 10 of the above embodiment may be provided in another computer (for example, a cloud server) connectable to the director device 10 and the performer device 20.”).
As to claim 6, claim 4 is incorporated and the combination of Sakakibara and Ishikawa discloses a monitoring-end device, coupled to the control device, wherein the monitoring-end device has a second display device configured to display the user image, the interactive object information, the virtual image, and the output image (Sakakibara, Fig. 3, ¶0032, “the video creation system, the video creation device, or the video creation program according to the present invention further includes a shooting image display unit configured to display an image of the virtual space captured by the virtual camera at the entire view position and an image of the virtual space captured by the virtual camera at the character-capturing position.” ¶0047, “a display unit 14 such as a liquid crystal display.” The use of multiple display devices would have been obvious to one of ordinary skill in the art.).
As to claim 7, claim 4 is incorporated and the combination of Sakakibara and Ishikawa discloses a monitoring-end device is coupled to the control device through a remote connection (Sakakibara, ¶0086, “Some or all of the functions of the storage unit and the functional blocks included in the director device 10 of the above embodiment may be provided in another computer (for example, a cloud server) connectable to the director device 10 and the performer device 20. For example, when the function of the director device 10 of the above embodiment is provided in a cloud server, it is possible to configure each functional block in the cloud server to operate in accordance with an input from the director device 10 and the performer device 20, and the created image file or video file to be downloaded from the storage unit of the cloud server to the director device 10.” Connection through a cloud server is a remote connection.).
As to claim 8, claim 4 is incorporated and the combination of Sakakibara and Ishikawa discloses a designer-end device, coupled to the control device and a monitoring-end device, wherein the designer-end device has a third display device configured to display the user image, the interactive object information, the virtual image, and the output image (Sakakibara, Fig. 3, ¶0032, “the video creation system, the video creation device, or the video creation program according to the present invention further includes a shooting image display unit configured to display an image of the virtual space captured by the virtual camera at the entire view position and an image of the virtual space captured by the virtual camera at the character-capturing position.” ¶0047, “a display unit 14 such as a liquid crystal display.” The use of multiple display devices would have been obvious to one of ordinary skill in the art.).
As to claim 9, claim 8 is incorporated and the combination of Sakakibara and Ishikawa discloses the designer-end device is coupled to the monitoring-end device through a remote connection (Ishikawa, Fig. 15; connection via a cloud server is a remote connection.).
As to claim 10, claim 8 is incorporated and the combination of Sakakibara and Ishikawa discloses the first display device, a second display device, and the third display device are head-mounted display devices (Sakakibara, ¶0035, “VR goggles can be suitably used for such a character viewpoint display unit.”).
As to claim 11, claim 4 is incorporated and the combination of Sakakibara and Ishikawa discloses the first host comprises a virtual image camera for obtaining the virtual image (Sakakibara, ¶0045, “a shooting screen storage section 116 in which a screen shot by a virtual camera”).
As to claim 12, claim 11 is incorporated and the combination of Sakakibara and Ishikawa discloses the first host further performs a special effect adjustment operation on the virtual image to generate an adjusted virtual image (Sakakibara, Fig. 3, item 46, ¶0045, “information of various video effects, which are called effects in the field of video creation.” ¶0056, “an effect selector section 46” ¶0061, “The effect selector section 46 displays a list of various video effects. The video effect is what is called effect in the field of video creation, and various types of video effects conventionally known in the field can be used.”).
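For illustration only, the following sketch shows one possible “special effect adjustment operation” of the kind Sakakibara’s effect selector section 46 might invoke. Sakakibara (¶0061) lists selectable video effects without disclosing their implementation; the sepia-tint transform and the apply_effect name are assumptions made solely for this sketch.

```python
# Illustrative sketch only; the effect implementation is an assumption,
# since Sakakibara does not disclose code for its video effects.
import numpy as np

def apply_effect(virtual_rgb: np.ndarray, strength: float = 0.5) -> np.ndarray:
    """Blend a simple sepia-style tint into a rendered virtual image to
    produce an adjusted virtual image."""
    sepia = np.array([[0.393, 0.769, 0.189],
                      [0.349, 0.686, 0.168],
                      [0.272, 0.534, 0.131]], dtype=np.float32)
    toned = virtual_rgb.astype(np.float32) @ sepia.T
    blended = (1.0 - strength) * virtual_rgb.astype(np.float32) + strength * toned
    return np.clip(blended, 0.0, 255.0).astype(np.uint8)
```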
As to claim 13, claim 1 is incorporated and the combination of Sakakibara and Ishikawa discloses a positioning device, configured to locate the user space of the user (Sakakibara, Fig. 2, sensor group 221, motion detector 222, ¶0064, “The performers such as actors who move the player characters wear the VR goggles 21 and wear the sensors 221 at a plurality of predetermined positions of their body during shooting.” ¶0028, “The motion sensor 22 includes a plurality of sensors 221 (sensor group) to be attached to the performer on the performer's predetermined positions and a motion detector 222 for detecting the motion of the plurality of sensors 221.”).
As to claim 14, claim 13 is incorporated and the combination of Sakakibara and Ishikawa discloses the positioning device has a plurality of locators, and the locators are disposed in a plurality of different positions of the user space (Sakakibara, ¶0066, “the motion detector 222 detects the motion of each of the plurality of sensors 221 attached to the body of the performer, and transmits movement information of each sensor to the director device 10.”).
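For illustration only, the following sketch shows the kind of per-sensor movement reporting described in Sakakibara ¶0066, in which the motion detector 222 transmits each body-worn sensor’s movement information to the director device 10. Sakakibara discloses no code; the data structures and names below are assumptions introduced solely for this sketch.

```python
# Illustrative sketch only; SensorReading and movement_info are hypothetical.
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor_id: int
    position: tuple[float, float, float]  # sensor position in the user space

def movement_info(prev: dict[int, SensorReading],
                  curr: dict[int, SensorReading]) -> dict[int, tuple[float, float, float]]:
    """Per-sensor displacement between two successive frames, as could be
    transmitted to a director device to drive the player character's motion."""
    deltas: dict[int, tuple[float, float, float]] = {}
    for sid, reading in curr.items():
        if sid in prev:
            p, c = prev[sid].position, reading.position
            deltas[sid] = (c[0] - p[0], c[1] - p[1], c[2] - p[2])
    return deltas
```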
As to claim 15, the combination of Sakakibara and Ishikawa discloses an image generating method, comprising: providing a first display device to display prompt information and virtual interactive object information, wherein the prompt information represents at least an instruction for a user to perform; providing an image shooting device to shoot a user image; and providing a control device configured to: provide the prompt information and the virtual interactive object information to the first display device; perform a spatial positioning on a virtual space and a user space of a user to generate spatial positioning information; and combine a virtual image and the user image to generate an output image according to the spatial positioning information (See claim 1 for detailed analysis.).
As to claim 16, claim 15 is incorporated and the combination of Sakakibara and Ishikawa discloses combining at least one virtual object, the virtual image, and the user image to generate the output image (See claim 5 for detailed analysis.).
As to claim 17, claim 15 is incorporated and the combination of Sakakibara and Ishikawa discloses providing a monitoring-end device so that a second display device of the monitoring-end device displays the user image, the interactive object information, the virtual image, and the output image (See claim 6 for detailed analysis.).
As to claim 18, claim 17 is incorporated and the combination of Sakakibara and Ishikawa discloses providing a designer-end device so that a third display device of the designer-end device displays the user image, the interactive object information, the virtual image, and the output image (See claim 8 for detailed analysis.).
As to claim 19, claim 15 is incorporated and the combination of Sakakibara and Ishikawa discloses performing a special effect adjustment operation on the virtual image to generate an adjusted virtual image (See claim 12 for detailed analysis.).
As to claim 20, claim 15 is incorporated and the combination of Sakakibara and Ishikawa discloses respectively disposing a plurality of locators in a plurality of different positions of the user space to locate the user space of the user (See claim 14 for detailed analysis.).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YU CHEN whose telephone number is (571)270-7951. The examiner can normally be reached M-F 8-5 PST, mid-day flex.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached on 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YU CHEN/
Primary Examiner, Art Unit 2613