DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
1. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/4/2026 has been entered.
Response to Amendment
2. This action is in response to the amendment filed on 2/4/2026. Claims 1, 3, 6, and 29 have been amended. Claims 2, 5, and 32 have been cancelled. Claims 1, 3-4, 6-17, 19-22, 24-31, and 33 remain rejected in the application.
Response to Arguments
3. Applicant’s arguments with respect to claim 1, and similarly claim 29, regarding the rejection under 35 U.S.C. 103, namely that the prior art does not teach the limitation(s): “generate an animation file such that at least one of a motion path or a motion teaching point related to the operations of the robot is superimposed and displayed in a region in which the virtual space displaying the operations of the robot is displayed; allow a user to individually set, for an animation in the animation file to be generated, whether to display or hide the motion path and whether to display or hide the motion teaching point,” have been fully considered. Applicant’s argument that the combination of Watanabe, S. and Siemens does not “allow a user to individually set” “displaying only the path or displaying only the locations” is persuasive. However, upon further consideration, a new ground(s) of rejection is made in view of Watanabe, S., Siemens, and Epson.
4. Regarding the arguments directed to claims 3-4, 6-17, 19-22, 24-28, 30-31, and 33, these claims depend from independent claims 1 and 29, respectively. Applicant presents no arguments other than those directed to independent claim 1, and similarly claim 29. The limitations in those dependent claims, in conjunction with the combination of references, have previously been established and explained.
Claim Rejections - 35 USC § 103
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 1, 3, 6-8, 10, 17, 22, 24, 26-31, and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Watanabe, S. (JP-2021024028-A, hereinafter "WatanabeS1") in view of Siemens (NPL: "Tecnomatix 15 – What’s New?," https://web.archive.org/web/20210804153316/https://blogs.sw.siemens.com/tecnomatix/tecnomatix-15-whats-new/, 2019; "Tecnomatix 15.0 Release Notes," https://roboticsbook.com/wp-content/uploads/2019/02/Tecnomatix_15.0_Release_Notes.pdf, 2019), and further in view of Seiko Epson Corporation (NPL: "EPSON RC+ 7.0 (Ver.7.3) User's Guide Project Management and Development Rev.4," https://files.support.epson.com/far/docs/epson_rc_pl_70_users_guide-rc700a_rc90_t(v73r4).pdf, 2017, hereinafter "Epson").
7. As per claim 1, WatanabeS1 discloses: An information processing apparatus comprising:
one or more processors configured to cause the information processing apparatus to: (WatanabeS1, Page 3, ¶ [0001], “The present invention relates to an information processing device for operating a virtual robot device ...”, and Page 12, ¶ [0066], “The present invention supplies a program that realizes ... one or more processors in the computer of the system or device reads and executes the program.”)
perform a simulation of operations of a robot in a virtual space; (WatanabeS1, Page 3, ¶ [0002], “In this type of simulator, a 3D model created based on the structure and dimensions of the actual machine is operated in a virtual space ...”)
generate an animation file such that [[at least one of a motion path or a motion teaching point]] related to the operations of the robot [[is superimposed and displayed in a region in which the virtual space displaying the operations of the robot is displayed;]] (WatanabeS1, Page 9-10, ¶ [0049], “Here, the motion information is, for example, a set of 3D models constituting the robot and coordinate information for moving them. This coordinate information is created for the frame until the robot operation program is completed. When reproducing the robot movement by animation or the like as described later, the movement corresponding to the model of the robot device displayed in the 3D model display unit 201 is moved according to the coordinate information created for each 3D model for each frame.” and Page 10, ¶ [0051], “Next, in step S7, an animation for checking the operation is created. The operation information created in step S6 is converted into an animation format that can start playback at high speed in response to the user's request. The animation format here is preferably expanded in the memory as an object that can be displayed in the software, but if the capacity is large, a general video format such as AVI or WMV is used, such as an HDD.”)
[[allow a user to individually set,]] for an animation in the animation file to be generated, [[whether to display or hide the motion path and whether to display or hide the motion teaching point;]] and (WatanabeS1, Page 9-10, ¶ [0049], “Here, the motion information is, for example, a set of 3D models constituting the robot and coordinate information for moving them. This coordinate information is created for the frame until the robot operation program is completed. When reproducing the robot movement by animation or the like as described later, the movement corresponding to the model of the robot device displayed in the 3D model display unit 201 is moved according to the coordinate information created for each 3D model for each frame.” and Page 10, ¶ [0051], “Next, in step S7, an animation for checking the operation is created. The operation information created in step S6 is converted into an animation format that can start playback at high speed in response to the user's request. The animation format here is preferably expanded in the memory as an object that can be displayed in the software, but if the capacity is large, a general video format such as AVI or WMV is used, such as an HDD.”)
output the animation file. (WatanabeS1, Page 9-10, ¶ [0049], “Here, the motion information is, for example, a set of 3D models constituting the robot and coordinate information for moving them. This coordinate information is created for the frame until the robot operation program is completed. When reproducing the robot movement by animation or the like as described later, the movement corresponding to the model of the robot device displayed in the 3D model display unit 201 is moved according to the coordinate information created for each 3D model for each frame.” and Page 10, ¶ [0051], “Next, in step S7, an animation for checking the operation is created. The operation information created in step S6 is converted into an animation format that can start playback at high speed in response to the user's request. The animation format here is preferably expanded in the memory as an object that can be displayed in the software, but if the capacity is large, a general video format such as AVI or WMV is used, such as an HDD.”)
8. WatanabeS1 doesn't explicitly disclose but Siemens discloses: [[generate an animation file such that]] at least one of a motion path or a motion teaching point [[related to the operations of the robot]] is superimposed and displayed in a region in which the virtual space displaying the operations of the robot is displayed; (Siemens, “Tecnomatix 15 – What’s New?,” Figures 1-2 below; Movie recorder and manager section, “The Movie Recorder Settings dialog contains conveniently placed options to display or hide various elements in the recorded movie without the need to open the Options dialog to access these items: Navigation cube, Working frame, Path/locations, 2D objects (line, curves, frames).” and "The Movie Manager uses Movie Recorder functionally to create high quality movies and associate them with their operation."; Examiner’s note: The “Path/locations” setting disclosed by Siemens provides the ability to superimpose motion paths/motion teaching points in a generated animation file.)
[Image: media_image1.png (Greyscale)]
Figure 1 (Siemens, NPL: “Tecnomatix 15 – What’s New?”)
[Image: media_image2.png (Greyscale)]
Figure 2 (Siemens, NPL: “Tecnomatix 15.0 Release Notes”)
Figure 1 above shows the Movie Recorder feature from the “Movie recorder and manager” section of the "Tecnomatix 15 – What’s New?" webpage. Figure 2, from the “Tecnomatix 15.0 Release Notes,” shows a more detailed view of the Movie Recorder settings, including the ability to set format type (MPEG-4 Video) and the option to enable “Path/Locations” under “Graphic Viewer options.”
allow a user to [[individually]] set, [[for an animation in the animation file to be generated, whether to]] display or hide the motion path [[and whether to]] display or hide the motion teaching point; (Siemens, “Tecnomatix 15 – What’s New?,” Figures 1-2 above; Movie recorder and manager section, “The Movie Recorder Settings dialog contains conveniently placed options to display or hide various elements in the recorded movie without the need to open the Options dialog to access these items: Navigation cube, Working frame, Path/locations, 2D objects (line, curves, frames).” and "The Movie Manager uses Movie Recorder functionally to create high quality movies and associate them with their operation."; Examiner’s note: The “Path/locations” setting disclosed by Siemens allows a user to display or hide both motion paths and motion teaching points in a generated animation file.)
9. Siemens is analogous art with respect to WatanabeS1 because they are from the same field of endeavor, namely robot simulation and animation generation. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to superimpose at least one of a motion path or a motion teaching point in a region of the virtual space in which the operations of the robot are displayed, and to allow a user the option to display the motion path/motion teaching point, as taught by Siemens, in the teaching of WatanabeS1. The suggestion for doing so would assist an operator by providing an easy way to view position coordinates showing how a robot moves in virtual space. Assuming the position coordinates are calibrated to real-life distances (such as centimeters or meters), this could give the operator an idea of how the robot’s real motions will impact its environment and possibly interfere or collide with any surrounding objects. In addition, the operator could decide to hide the superimposed motion path/motion teaching point information to produce a “clean” animation, say for more professional-looking animation renderings. Therefore, it would have been obvious to combine Siemens with WatanabeS1.
10. WatanabeS1 in view of Siemens doesn't explicitly disclose but Epson discloses: [[allow a user to]] individually [[set, for an animation in the animation file to be generated,]] whether to [[display or hide the motion path]] and whether to [[display or hide the motion teaching point;]] (Epson, Figures 3-6 below; page 280, “Teach Point: Displays the [Teach] dialog box. Current robot position can be registered as a point.” and page 284, “Points: Visible: Shows / Not show all points; Point: Visible: Shows / Not show a point” and page 301, “Render TCP Path: Displays the trajectory of the origin point on active Tool coordinate system for a fixed time. Style: Select line or dot to indicate the trajectories.” and page 214, “6.18.2 Point to point motion: Point to point (PTP) commands move the tool center point of the robot from its current position to a specified point. Motion of the tool center point may not be in a straight line.”; Examiner’s note: As disclosed on page 284 and shown in Figure 5, checkboxes determine the visibility of the teaching points. As disclosed on page 301 and Figure 6, the Simulator Settings display window has a checkbox to render a path trajectory. “TCP” is defined as “tool center point” as mentioned on page 214. Epson discloses the option to display either points or paths for animation.)
[Image: media_image3.png (Greyscale)]
Figure 3: Robot Simulator. (Epson, page 279; Examiner’s note: See arrow for Teach Point Button.)
[Image: media_image4.png (Greyscale)]
Figure 4: Teaching Point dialog to input points. (Epson, page 280)
[Image: media_image6.png (Greyscale)]
Figure 5: Checkboxes for determining visibility of teaching points. (Epson, page 284)
[Image: media_image5.png (Greyscale)]
Figure 6: Option to render TCP Path trajectory in Simulator Settings. (Epson, page 301)
11. Epson is analogous art with respect to WatanabeS1 in view of Siemens because they are from the same field of endeavor, namely robot simulation and animation generation. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include the ability for a user to individually set whether to display or hide the motion path and whether to display or hide the motion teaching point, as taught by Epson into the teaching of WatanabeS1 in view of Siemens. The suggestion for doing so would provide an operator with the freedom to display either a motion path or a motion teaching point individually or decide to display both or hide both. This gives the operator flexibility to determine what motion information is rendered into an animation file or generate a “clean” animation file without the information. Therefore, it would have been obvious to combine Epson with WatanabeS1 in view of Siemens.
12. As per claim 3, WatanabeS1 in view of Siemens, and further in view of Epson discloses: The information processing apparatus according to claim 1, wherein the one or more processors are configured to cause the information processing apparatus to change a display format of the motion teaching point in outputting the animation file from a display format of the motion teaching point in displaying results of the simulation. (WatanabeS1, Page 3, ¶ [0002], “Therefore, a simulator that can verify the operation of the robot in the virtual space may be used. In this type of simulator, a 3D model created based on the structure and dimensions of the actual machine is operated in a virtual space using the same teaching point data and robot program as the actual machine. Then, by displaying the state on a display in the form of 3D animation or the like, it is possible to verify the operation such as work transfer and assembly.”, and Page 7, ¶ [0034], “The graph display unit 202 of FIG. 3 is in a display state for displaying a graph of changes in the joint positions (joint angles) of the two joint axes (Joint1, 2) of the robot device (3D model: Robot1) along the time axis.”, and Page 9-10, ¶ [0049], “Here, the motion information is, for example, a set of 3D models constituting the robot and coordinate information for moving them. This coordinate information is created for the frame until the robot operation program is completed.”, and Page 10, ¶ [0051], “The animation format here is preferably expanded in the memory as an object that can be displayed in the software, but if the capacity is large, a general video format such as AVI or WMV is used, such as an HDD.”)
13. As per claim 6, WatanabeS1 in view of Siemens, and further in view of Epson discloses: The information processing apparatus according to claim 1, wherein it is possible to set whether to generate the motion path or the motion teaching point in the animation file (Siemens, “Tecnomatix 15 – What’s New?,” Figures 1-2 above; Movie recorder and manager section, “The Movie Recorder Settings dialog contains conveniently placed options to display or hide various elements in the recorded movie without the need to open the Options dialog to access these items: Navigation cube, Working frame, Path/locations, 2D objects (line, curves, frames).” and Siemens, "The Movie Manager uses Movie Recorder functionally to create high quality movies and associate them with their operation." (Examiner’s note: The “Path/locations” setting disclosed by Siemens allows a user to set whether to generate motion paths and motion teaching points in an animation file.) and WatanabeS1, Page 3, ¶ [0002], “Therefore, a simulator that can verify the operation of the robot in the virtual space may be used. In this type of simulator, a 3D model created based on the structure and dimensions of the actual machine is operated in a virtual space using the same teaching point data and robot program as the actual machine. Then, by displaying the state on a display in the form of 3D animation or the like, it is possible to verify the operation such as work transfer and assembly.”, and WatanabeS1, Page 7, ¶ [0034], “The graph display unit 202 of FIG. 3 is in a display state for displaying a graph of changes in the joint positions (joint angles) of the two joint axes (Joint1, 2) of the robot device (3D model: Robot1) along the time axis.”, and WatanabeS1, Page 9-10, ¶ [0049], “Here, the motion information is, for example, a set of 3D models constituting the robot and coordinate information for moving them. This coordinate information is created for the frame until the robot operation program is completed.” and WatanabeS1, Page 10, ¶ [0051], “The animation format here is preferably expanded in the memory as an object that can be displayed in the software, but if the capacity is large, a general video format such as AVI or WMV is used, such as an HDD.”) by differentiating a display format of at least one of the motion path or the motion teaching point in a state where the robot is interfering with a surrounding object from the display format of at least one of the motion path or the motion teaching point in a state where the robot is interfering with no surrounding object. (WatanabeS1, Page 9, ¶ [0046], “Next, the analysis conditions are set in step S2. For example, in the example of the user interface of FIG. 3, the user can select either interference (collision, contact) or thresholded interference by the analysis condition setting unit 203 using a so-called radio button type operation button. Here, "interference" means a state in which two or more 3D models are in contact with each other. Further, "interference with threshold value" means a state in which two or more 3D models are located within a distance within a value specified by the analysis condition setting unit 203. This is a function to prevent interference with the simulator due to differences in machining accuracy and robot machines.”, and WatanabeS1, Page 9, ¶ [0047], “At this time, the interference determination is performed under the conditions set in step S2, and three types of states of interference, thresholded interference, and no interference are determined.”, and WatanabeS1, Page 3, ¶ [0002], “Therefore, a simulator that can verify the operation of the robot in the virtual space may be used. In this type of simulator, a 3D model created based on the structure and dimensions of the actual machine is operated in a virtual space using the same teaching point data and robot program as the actual machine. Then, by displaying the state on a display in the form of 3D animation or the like, it is possible to verify the operation such as work transfer and assembly.”)
14. Siemens is analogous art with respect to WatanabeS1 in view of Epson because they are from the same field of endeavor, namely robot simulation and animation generation. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include the ability to set whether to generate the motion path or the motion teaching point in the animation file, as taught by Siemens, in the teaching of WatanabeS1 in view of Epson. The suggestion for doing so would assist an operator by providing an easy way to enable generation of motion paths/motion teaching points, making it easy to view positional information on how a robot moves in virtual space via an animation file. Therefore, it would have been obvious to combine Siemens with WatanabeS1 in view of Epson.
15. As per claim 7, WatanabeS1 in view of Siemens, and further in view of Epson discloses: The information processing apparatus according to claim 6, wherein the display format of at least one of the motion path or the motion teaching point in a state where the robot is interfering with the surrounding object is at least one of dotted-line display, blinking display, color display, or perspective display. (WatanabeS1, Page 3, ¶ [0004], “If the interference monitoring task detects interference (collision, contact), the interference detection status is displayed on the display displaying the 3D model and the user is notified. As a display method at this time, what is generally practiced is to change the display color of the entire 3D model in which interference (collision, contact) is occurring in the virtual space, or the relevant part (for example).”, and Page 10, ¶ [0053], “As a result, it is possible to highlight only the surrounding part related to the warning event such as interference so that the user can easily recognize it.”)
16. As per claim 8, WatanabeS1 in view of Siemens, and further in view of Epson discloses: The information processing apparatus according to claim 1, wherein the one or more processors are configured to cause the information processing apparatus to set a viewpoint in the animation file. (WatanabeS1, Page 7, ¶ [0032], “The 3D model display unit 201 displays a 3D model that reproduces the device. It is possible to change the viewpoint and the arrangement of the 3D model.”, and Page 10, ¶ [0051], “The animation format here is preferably expanded in the memory as an object that can be displayed in the software, but if the capacity is large, a general video format such as AVI or WMV is used, such as an HDD.”)
17. As per claim 10, WatanabeS1 in view of Siemens, and further in view of Epson discloses: The information processing apparatus according to claim 1, wherein the one or more processors are configured to cause the information processing apparatus to set a start time and a stop time of the animation file. (WatanabeS1, Page 7, ¶ [0035], “The unit of the time axis of the graph display unit 202 is the elapsed time (seconds, milliseconds) from the reference time (for example, the start time of the operation described by the robot control data such as the epoch of the OS, the robot program or the teaching point data). ... The scale of the time axis on the screen of the graph display unit 202 is changed (enlarged, enlarged) via an appropriate user interface such as selecting a time width corresponding to one screen of the graph display unit 202, for example.”, and Page 6-7, ¶ [0028], “When the 3D model display unit 201 performs animation display, the display of the event display unit 200 automatically scrolls so that the position on the time axis corresponding to the display of the 3D model display unit 201 fits in the display unit 102.” and Page 10, ¶ [0051], “The animation format here is preferably expanded in the memory as an object that can be displayed in the software, but if the capacity is large, a general video format such as AVI or WMV is used, such as an HDD.”)
18. As per claim 17, WatanabeS1 in view of Siemens, and further in view of Epson discloses: The information processing apparatus according to claim 1, wherein the one or more processors are configured to cause the information processing apparatus to set a resolution of the animation file or set a name of the animation file. (WatanabeS1, Page 10, ¶ [0051], “The animation format here is preferably expanded in the memory as an object that can be displayed in the software, but if the capacity is large, a general video format such as AVI or WMV is used, such as an HDD.”, and Siemens, “Tecnomatix 15.0 Release Notes,” See rejection for claim 1, Figure 2 that shows the Movie Recorder settings with the ability to set the resolution and file destination for an animation file.)
19. Siemens is analogous art with respect to WatanabeS1 in view of Epson because they are from the same field of endeavor, namely robot simulation and animation generation. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include adding the ability to set a resolution and file name of the animation file, as taught by Siemens into the teaching of WatanabeS1 in view of Epson. The suggestion for doing so would provide an ability to adjust the level of detail of the animation file as lower resolutions may provide faster rendering and smaller files while higher resolutions may provide higher quality details in presenting the animation. In addition, allowing a user to provide an animation file with a unique name related to the animation’s purpose would give the user the ability to quickly and easily find the animation for later viewing. Therefore, it would have been obvious to combine Siemens with WatanabeS1 in view of Epson.
20. As per claim 22, WatanabeS1 in view of Siemens, and further in view of Epson discloses: The information processing apparatus according to claim 1, wherein the animation file is independent of data for displaying the virtual space in the simulation. (WatanabeS1, Page 10, ¶ [0051], “Next, in step S7, an animation for checking the operation is created. The operation information created in step S6 is converted into an animation format that can start playback at high speed in response to the user's request. The animation format here is preferably expanded in the memory as an object that can be displayed in the software, but if the capacity is large, a general video format such as AVI or WMV is used, such as an HDD.”)
21. As per claim 24, WatanabeS1 in view of Siemens, and further in view of Epson discloses: The information processing apparatus according to claim 1, wherein the one or more processors are configured to cause the information processing apparatus to output the animation file to an operation terminal operated by a user. (WatanabeS1, Page 12, ¶ [0064], “The robot control data for the robot device 1001 can be programmed (taught) by an operation terminal 1204 (for example, a teaching pendant) connected to the robot control device 1200, or can be edited by making minor corrections.” and Page 10, ¶ [0051], “Next, in step S7, an animation for checking the operation is created. The operation information created in step S6 is converted into an animation format that can start playback at high speed in response to the user's request. The animation format here is preferably expanded in the memory as an object that can be displayed in the software, but if the capacity is large, a general video format such as AVI or WMV is used, such as an HDD.”)
22. As per claim 26, WatanabeS1 in view of Siemens, and further in view of Epson discloses: The information processing apparatus according to claim 24, wherein the operation terminal is a head-mounted display or a teaching pendant. (WatanabeS1, Page 12, ¶ [0064], “The robot control data for the robot device 1001 can be programmed (taught) by an operation terminal 1204 (for example, a teaching pendant) connected to the robot control device 1200, or can be edited by making minor corrections.”)
23. As per claim 27, WatanabeS1 in view of Siemens, and further in view of Epson discloses: A robot system comprising a robot of which operations are set by the information processing apparatus according to claim 1. (WatanabeS1, Page 5, ¶ [0013], “The information processing device of the present embodiment is configured as a robot simulator device that operates a virtual robot device corresponding to the hardware configuration of the robot device in a virtual environment.”)
24. As per claim 28, WatanabeS1 in view of Siemens, and further in view of Epson discloses: An article manufacturing method for manufacturing an article by using the robot system according to claim 27. (WatanabeS1, Page 12, ¶ [0065], “As a result, based on the robot control data optimized by the above processing, the robot device 1001 is operated as a production device that constitutes a production system (production line) and performs an article manufacturing operation, and the article is manufactured by the robot device 1001.”)
25. Claim 29, which is similar in scope to claim 1, is thus rejected under the same rationale as described above. In addition, the rationale for modifying is the same as for claim 1 above.
26. As per claim 30, WatanabeS1 in view of Siemens, and further in view of Epson discloses: A non-transitory computer-readable recording medium storing a program for executing the information processing method according to claim 29. (WatanabeS1, Page 8, ¶ [0037], “The storage means of the control program describing the control procedure according to the present invention constitutes the computer-readable recording medium of the present invention.”)
27. As per claim 31, WatanabeS1 in view of Siemens, and further in view of Epson discloses: The information processing apparatus according to claim 1, wherein the animation file is generated such that information related to interference is displayed in the region in which the virtual space is displayed in the animation file. (WatanabeS1, Page 3, ¶ [0004], “If the interference monitoring task detects interference (collision, contact), the interference detection status is displayed on the display displaying the 3D model and the user is notified. As a display method at this time, what is generally practiced is to change the display color of the entire 3D model in which interference (collision, contact) is occurring in the virtual space, or the relevant part (for example).”, and Page 10, ¶ [0053], “As a result, it is possible to highlight only the surrounding part related to the warning event such as interference so that the user can easily recognize it.” and Page 10, ¶ [0051], “The animation format here is preferably expanded in the memory as an object that can be displayed in the software, but if the capacity is large, a general video format such as AVI or WMV is used, such as an HDD.”)
28. As per claim 33, WatanabeS1 in view of Siemens, and further in view of Epson discloses: The information processing apparatus according to claim 1, wherein the animation file is output in a format that can be reproduced even on an apparatus that does not have a function of executing the simulation. (WatanabeS1, Page 10, ¶ [0051], “The animation format here is preferably expanded in the memory as an object that can be displayed in the software, but if the capacity is large, a general video format such as AVI or WMV is used, such as an HDD.”; Examiner’s note: A person of ordinary skill in the art would know that AVI and WMV are common file formats for video distribution. An animation distributed with one of these formats could easily be reproduced without the need to execute the simulation.)
29. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over WatanabeS1 (JP-2021024028-A) in view of Siemens (NPL: "Tecnomatix 15 – What’s New?," https://web.archive.org/web/20210804153316/https://blogs.sw.siemens.com/tecnomatix/tecnomatix-15-whats-new/, 2019; "Tecnomatix 15.0 Release Notes," https://roboticsbook.com/wp-content/uploads/2019/02/Tecnomatix_15.0_Release_Notes.pdf, 2019), further in view of Seiko Epson Corporation (NPL: "EPSON RC+ 7.0 (Ver.7.3) User's Guide Project Management and Development Rev.4," https://files.support.epson.com/far/docs/epson_rc_pl_70_users_guide-rc700a_rc90_t(v73r4).pdf, 2017, hereinafter "Epson"), and further in view of Hashimoto et al. (US-2021/0291369-A1, hereinafter "Hashimoto").
30. As per claim 4, WatanabeS1 in view of Siemens, and further in view of Epson discloses: The information processing apparatus according to claim 3, wherein the one or more processors are configured to cause the information processing apparatus to [[simplify the display format of the motion teaching point]] in outputting the animation file more than the display format of the motion teaching point in displaying results of the simulation. (See rejection of claim 3 above.)
31. WatanabeS1 in view of Siemens, and further in view of Epson doesn't explicitly disclose but Hashimoto discloses: simplify the display format of the motion teaching point (Hashimoto, Page 7, ¶ [0075], lines 1-7, “For example, when the simulation device 200 is the simulation computer 40, the operation data received from the simulation device 200 may be comprised of an image of the robot model, may be comprised of a simplified image indicative of the operation of the robot model, and may be numerical value data, such as a vector and coordinates, indicative of the operation of the robot model.”)
32. Hashimoto is analogous art with respect to WatanabeS1 in view of Siemens, and further in view of Epson because they are from the same field of endeavor, namely robot simulation and animation. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include simplifying the display format of the motion teaching point, as taught by Hashimoto into the teaching of WatanabeS1 in view of Siemens, and further in view of Epson. The suggestion for doing so would provide a user with an additional way to display and understand the motion teaching point data, such as displaying raw data numbers. Therefore, it would have been obvious to combine Hashimoto with WatanabeS1 in view of Siemens, and further in view of Epson.
33. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over WatanabeS1 (JP-2021024028-A) in view of Siemens (NPL: "Tecnomatix 15 – What’s New?," https://web.archive.org/web/20210804153316/https://blogs.sw.siemens.com/tecnomatix/tecnomatix-15-whats-new/, 2019; "Tecnomatix 15.0 Release Notes," https://roboticsbook.com/wp-content/uploads/2019/02/Tecnomatix_15.0_Release_Notes.pdf, 2019), further in view of Seiko Epson Corporation (NPL: "EPSON RC+ 7.0 (Ver.7.3) User's Guide Project Management and Development Rev.4," https://files.support.epson.com/far/docs/epson_rc_pl_70_users_guide-rc700a_rc90_t(v73r4).pdf, 2017, hereinafter "Epson"), and further in view of Kuwahara (US-10521522-B2).
34. As per claim 9, WatanabeS1 in view of Siemens, and further in view of Epson discloses: The information processing apparatus according to claim 8, wherein the one or more processors are configured to cause the information processing apparatus to [[provide a button for automatically]] adjusting the viewpoint [[to a viewpoint for viewing an entire image of the robot or to a viewpoint for viewing the robot in a non-hidden state in a case where a motion path or a motion teaching point in the operations of the robot is partly hidden.]] (See rejection of claim 8 above.)
35. WatanabeS1 in view of Siemens, and further in view of Epson doesn't explicitly disclose but Kuwahara discloses: provide a button for automatically [[adjusting the viewpoint]] to a viewpoint for viewing an entire image of the robot or to a viewpoint for viewing the robot in a non-hidden state in a case where a motion path or a motion teaching point in the operations of the robot is partly hidden. (Kuwahara, column 5, lines 31-34, “The robot simulator 10 can ... change a viewpoint to display a virtual image seen from a certain direction, and enlarge/reduce the display based on the operation instructed by the operator.” and Figs. 5F, 5I-5K, column 9, lines 4-18, “Each of the operation components G13 to G16 is a button for changing the viewpoint of the robot system 1V in the output area G1 to a predetermined direction and position. For example, the “Default” button of the operation component G13 is clicked for changing the viewpoint of the robot system 1V to a predetermined position in an oblique direction. For example, the “Top” button of the operation component G14 is clicked for changing the same viewpoint to a plane view. For example, the “Side” button of the operation component G15 is clicked for changing the same viewpoint to what is called a side view. For example, the “Front” button of the operation component G16 is clicked for changing the same viewpoint to what is called a front view.” and Figs. 5D-5E, column 10, line 65-column 11, line 9, “FIG. 5D illustrates a state where the tip of the welding torch in the robot 30V is hidden on the back side of the workpiece WV. In such a case, for example, as illustrated in FIG. 5E, the viewpoint position in the output area G1 can be optionally rotated and changed ... In this manner ... the part hidden from a certain viewpoint can be exposed and can be easily confirmed by a viewer.”)
36. Kuwahara is analogous art with respect to WatanabeS1 in view of Siemens, and further in view of Epson because they are from the same field of endeavor, namely robot simulation and animation. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include providing a button for automatically adjusting the viewpoint to a viewpoint for viewing an entire image of the robot or to a viewpoint for viewing the robot in a non-hidden state in a case where a motion path or a motion teaching point is partly hidden, as taught by Kuwahara, into the teaching of WatanabeS1 in view of Siemens, and further in view of Epson. The suggestion for doing so would assist a user in seeing different orientations and perspectives of the robot animation in order to have a better understanding of how the robot would operate from those viewpoints. Therefore, it would have been obvious to combine Kuwahara with WatanabeS1 in view of Siemens, and further in view of Epson.
37. Claims 11-12 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over WatanabeS1 (JP-2021024028-A) in view of Siemens (NPL: "Tecnomatix 15 – What’s New?," https://web.archive.org/web/20210804153316/https://blogs.sw.siemens.com/tecnomatix/tecnomatix-15-whats-new/, 2019; "Tecnomatix 15.0 Release Notes," https://roboticsbook.com/wp-content/uploads/2019/02/Tecnomatix_15.0_Release_Notes.pdf, 2019), further in view of Seiko Epson Corporation (NPL: "EPSON RC+ 7.0 (Ver.7.3) User's Guide Project Management and Development Rev.4," https://files.support.epson.com/far/docs/epson_rc_pl_70_users_guide-rc700a_rc90_t(v73r4).pdf, 2017, hereinafter "Epson"), and further in view of Watanabe, A. et al. (US-6853881-B2, hereinafter "WatanabeA2").
38. As per claim 11, WatanabeS1 in view of Siemens, and further in view of Epson discloses: The information processing apparatus according to claim 10, wherein, [[by selecting a first position and a second position related to the operations of the robot in a display unit displaying results of the simulation,]] the one or more processors are configured to cause the information processing apparatus to [[set a time when a predetermined portion of the robot is located at the first position as the start time, and a time when the predetermined portion is located at the second position as the stop time.]] (See rejection of claim 10 above.)
39. WatanabeS1 in view of Siemens, and further in view of Epson doesn't explicitly disclose but WatanabeA2 discloses: by selecting a first position and a second position related to the operations of the robot in a display unit displaying results of the simulation, [[the one or more processors are configured to cause the information processing apparatus to]] set a time when a predetermined portion of the robot is located at the first position as the start time, and a time when the predetermined portion is located at the second position as the stop time. (WatanabeA2, column 4, lines 12-26, “A display region of the animation and a time interval of the animation in the display section 2 a of the teaching pendant 2 which have been inputted and stored in advance are set (step 101).” and column 4, lines 33-39, “When the operation starting instruction is inputted, the execution of the robot operating program is started (step 105), a position and posture of the robot at the current time are calculated on the basis of the robot operating program and the servo delay model (step 106), and the calculated robot position and posture data (that is, operating position information) is sent to the PC 3 via the cable 5 (step 107).” and column 5, lines 28-38, “If the operation starting instruction has been inputted, the position and posture data of the robot at the current time, which is sent from the robot controller 1 through the processing in step 107, is received (step 204), a layout image of this work cell and the display information representing the position and posture of the robot at the current time are created on the basis of layout data of peripheral devices, accessories and the like in a work cell, set in advance, in which this robot is arranged, and the display information is displayed on the display section 3 a of the PC 3 on the basis of this display information (step 205).”)
40. WatanabeA2 is analogous art with respect to WatanabeS1 in view of Siemens, and further in view of Epson because they are from the same field of endeavor, namely robot simulation and animation. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include setting start and stop times for when predetermined portions of the robot are located at designated positions, as taught by WatanabeA2 into the teaching of WatanabeS1 in view of Siemens, and further in view of Epson. The suggestion for doing so would allow a user to have complete control over the location and timing of robot movement at specific locations and allow multiple timed actions to be performed at precise moments. Therefore, it would have been obvious to combine WatanabeA2 with WatanabeS1 in view of Siemens, and further in view of Epson.
41. As per claim 12, WatanabeS1 in view of Siemens, further in view of Epson, and further in view of WatanabeA2 discloses: The information processing apparatus according to claim 10, wherein, based on the simulation, a time when the robot started a predetermined state is set as the start time, and a time when the robot stopped the predetermined state is set as the stop time. (WatanabeA2, column 4, lines 12-26, “A display region of the animation and a time interval of the animation in the display section 2 a of the teaching pendant 2 which have been inputted and stored in advance are set (step 101).” and column 4, lines 33-39, “When the operation starting instruction is inputted, the execution of the robot operating program is started (step 105), a position and posture of the robot at the current time are calculated on the basis of the robot operating program and the servo delay model (step 106), and the calculated robot position and posture data (that is, operating position information) is sent to the PC 3 via the cable 5 (step 107).” and column 5, lines 28-38, “If the operation starting instruction has been inputted, the position and posture data of the robot at the current time, which is sent from the robot controller 1 through the processing in step 107, is received (step 204), a layout image of this work cell and the display information representing the position and posture of the robot at the current time are created on the basis of layout data of peripheral devices, accessories and the like in a work cell, set in advance, in which this robot is arranged, and the display information is displayed on the display section 3 a of the PC 3 on the basis of this display information (step 205).”)
42. WatanabeA2 is analogous art with respect to WatanabeS1 in view of Siemens, and further in view of Epson because they are from the same field of endeavor, namely robot simulation and animation. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include setting, based on the simulation, a time when the robot started a predetermined state as the start time and a time when the robot stopped the predetermined state as the stop time, as taught by WatanabeA2, into the teaching of WatanabeS1 in view of Siemens, and further in view of Epson. The suggestion for doing so would allow a user to have complete control over the timing of the robot's predetermined states and allow multiple timed actions to be performed at precise moments. Therefore, it would have been obvious to combine WatanabeA2 with WatanabeS1 in view of Siemens, and further in view of Epson.
43. As per claim 15, WatanabeS1 in view of Siemens, further in view of Epson, and further in view of WatanabeA2 discloses: The information processing apparatus according to claim 12, wherein, in a case where there is a plurality of the predetermined states, the animation file is acquired so that the predetermined states are reproduced in succession. (WatanabeA2, column 4, line 62-column 5, line 3, “After the animation image is displayed on the display section 2 a of the teaching pendant 2, a determination is made on whether or not an execution cancel instruction has been inputted (step 110). ... When the operating program has not been terminated, the process returns back to step 106, and the aforementioned processings are repeatedly performed.”, and WatanabeA2 column 5, lines 7-16, “Hereinafter, the aforementioned operations are repeatedly performed so that the simulation of the robot operation according to the operating program is displayed on the display section 2 a of the teaching pendant 2 as the animation of the robot operation. When the execution cancel instruction is inputted (step 110: Yes) or the operating program is terminated (step 111: Yes), the operating program execution is terminated, and an end signal of this program execution is sent to the PC 3 (step 112).”)
44. WatanabeA2 is analogous art with respect to WatanabeS1 in view of Siemens, and further in view of Epson because they are from the same field of endeavor, namely robot simulation and animation. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to acquire an animation file to reproduce a plurality of predetermined states in succession, as taught by WatanabeA2 into the teaching of WatanabeS1 in view of Siemens, and further in view of Epson. The suggestion for doing so would allow a user to choose to repeat certain robot operations and additionally, give the user an opportunity to review the repeated actions multiple times and determine if they are correct. Therefore, it would have been obvious to combine WatanabeA2 with WatanabeS1 in view of Siemens, and further in view of Epson.
45. As per claim 16, WatanabeS1 in view of Siemens, further in view of Epson, and further in view of WatanabeA2 discloses: The information processing apparatus according to claim 15, wherein the one or more processors are configured to cause the information processing apparatus to confirm a time when each of the plurality of the predetermined states is started and a time when each of the plurality of the predetermined states is stopped. (WatanabeA2, column 4, line 62-column 5, line 3, “After the animation image is displayed on the display section 2 a of the teaching pendant 2, a determination is made on whether or not an execution cancel instruction has been inputted (step 110). ... When the operating program has not been terminated, the process returns back to step 106, and the aforementioned processings are repeatedly performed.” and column 5, lines 7-16, “Hereinafter, the aforementioned operations are repeatedly performed so that the simulation of the robot operation according to the operating program is displayed on the display section 2 a of the teaching pendant 2 as the animation of the robot operation. When the execution cancel instruction is inputted (step 110: Yes) or the operating program is terminated (step 111: Yes), the operating program execution is terminated, and an end signal of this program execution is sent to the PC 3 (step 112).”)
46. WatanabeA2 is analogous art with respect to WatanabeS1 in view of Siemens, and further in view of Epson because they are from the same field of endeavor, namely robot simulation and animation. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include confirming start and stop times for a plurality of predetermined states, as taught by WatanabeA2 into the teaching of WatanabeS1 in view of Siemens, and further in view of Epson. The suggestion for doing so would allow a user to choose the timing of multiple predetermined states and determine how long each of the robot’s operations would start, end, and/or repeat. Therefore, it would have been obvious to combine WatanabeA2 with WatanabeS1 in view of Siemens, and further in view of Epson.
47. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over WatanabeS1 (JP-2021024028-A) in view of Siemens (NPL: "Tecnomatix 15 – What’s New?," https://web.archive.org/web/20210804153316/https://blogs.sw.siemens.com/tecnomatix/tecnomatix-15-whats-new/, 2019; "Tecnomatix 15.0 Release Notes," https://roboticsbook.com/wp-content/uploads/2019/02/Tecnomatix_15.0_Release_Notes.pdf, 2019), further in view of Seiko Epson Corporation (NPL: "EPSON RC+ 7.0 (Ver.7.3) User's Guide Project Management and Development Rev.4," https://files.support.epson.com/far/docs/epson_rc_pl_70_users_guide-rc700a_rc90_t(v73r4).pdf, 2017, hereinafter "Epson"), further in view of WatanabeA2 (US-6853881-B2), and further in view of Kawanishi et al. (JP-6632783-B1, hereinafter "Kawanishi").
48. As per claim 13, WatanabeS1 in view of Siemens, further in view of Epson, and further in view of WatanabeA2 discloses: The information processing apparatus according to claim 12, wherein the predetermined state includes at least one of a state where the robot is interfering with a surrounding object, a state where the robot is a singular point, [[and a state where a mechanical mechanism of the robot is out of an operation range.]] (WatanabeS1, Page 9, ¶ [0046], “Next, the analysis conditions are set in step S2. For example, in the example of the user interface of FIG. 3, the user can select either interference (collision, contact) or thresholded interference by the analysis condition setting unit 203 using a so-called radio button type operation button. Here, "interference" means a state in which two or more 3D models are in contact with each other. Further, "interference with threshold value" means a state in which two or more 3D models are located within a distance within a value specified by the analysis condition setting unit 203. This is a function to prevent interference with the simulator due to differences in machining accuracy and robot machines.”, and Page 9, ¶ [0047], “At this time, the interference determination is performed under the conditions set in step S2, and three types of states of interference, thresholded interference, and no interference are determined.”, and Page 3, ¶ [0002], “Therefore, a simulator that can verify the operation of the robot in the virtual space may be used. In this type of simulator, a 3D model created based on the structure and dimensions of the actual machine is operated in a virtual space using the same teaching point data and robot program as the actual machine. Then, by displaying the state on a display in the form of 3D animation or the like, it is possible to verify the operation such as work transfer and assembly.”)
49. WatanabeS1 in view of Siemens, further in view of Epson, and further in view of WatanabeA2 doesn't explicitly disclose but Kawanishi discloses: and a state where a mechanical mechanism of the robot is out of an operation range. (Kawanishi, Page 6, ¶ [0031]-[0033], “The motion limits may be the range of possible positions and orientations of the robot 30 in an arbitrary coordinate system, or the range of possible joint angles of each joint of the robot 30. Motion limits can be defined by the user setting ranges of translational position and orientation values in a Cartesian coordinate system. The motion limit may be the range of motion of the robot 30. The movable range of the robot 30 here refers to the range that the joints of the robot 30 can physically take. The movable range of the robot 30 may be a design value or a value determined by specifications. Alternatively, the user may specify a range within which the robot 30 and devices connected to the robot 30 will not collide with structures present around the robot 30, thereby determining the range of movement of the robot 30. ... In a simulation space that simulates the movement of the robot 30, the arrangement of structures around the robot 30 may be simulated, and the range in which the robot 30 can operate without colliding with surrounding structures may be calculated in the simulation, and the range of motion of the robot 30 may be set.”)
50. Kawanishi is analogous art with respect to WatanabeS1 in view of Siemens, further in view of Epson, and further in view of WatanabeA2 because they are from the same field of endeavor, namely robot simulation. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include a robot state for when it is out of operation range, as taught by Kawanishi into the teaching of WatanabeS1 in view of Siemens, further in view of Epson, and further in view of WatanabeA2. The suggestion for doing so would provide a user with an ability to see how the robot handles an out of operation range state, particularly for preventing errors. Therefore, it would have been obvious to combine Kawanishi with WatanabeS1 in view of Siemens, further in view of Epson, and further in view of WatanabeA2.
51. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over WatanabeS1 (JP-2021024028-A) in view of Siemens (NPL: "Tecnomatix 15 – What’s New?," https://web.archive.org/web/20210804153316/https://blogs.sw.siemens.com/tecnomatix/tecnomatix-15-whats-new/, 2019; "Tecnomatix 15.0 Release Notes," https://roboticsbook.com/wp-content/uploads/2019/02/Tecnomatix_15.0_Release_Notes.pdf, 2019), further in view of Seiko Epson Corporation (NPL: "EPSON RC+ 7.0 (Ver.7.3) User's Guide Project Management and Development Rev.4," https://files.support.epson.com/far/docs/epson_rc_pl_70_users_guide-rc700a_rc90_t(v73r4).pdf, 2017, hereinafter "Epson"), further in view of WatanabeA2 (US-6853881-B2), further in view of Kawanishi et al. (JP-6632783-B1), and further in view of Seki et al. (JP-2020185631-A, hereinafter "Seki").
52. As per claim 14, WatanabeS1 in view of Siemens, further in view of Epson, further in view of WatanabeA2, and further in view of Kawanishi discloses: The information processing apparatus according to claim 13, wherein the one or more processors are configured to cause the information processing apparatus to [[select the predetermined state from a pull-down menu.]] (See rejection of claim 13 above.)
53. WatanabeS1 in view of Siemens, further in view of Epson, further in view of WatanabeA2, and further in view of Kawanishi doesn't explicitly disclose but Seki discloses: select the predetermined state from a pull-down menu. (Seki, Page 9, ¶ [0038], “The work specification field 230 has a pull-down menu 231 for selecting a work location model as an area, a pull-down menu 232 for selecting work content, a pull-down menu 233 for selecting basic operation data as a sequence, and a button 234 for launching an evaluation calculation (S500) by the operation evaluation unit 33.” and Page 9, ¶ [0039], “As displayed in the pull-down menu 233, the state transitions from S001 (moving) in FIG. 13 to S002 (working) in FIG. As a result, the display field 210 simultaneously displays the robot model currently working and the model of the object of work.”)
54. Seki is analogous art with respect to WatanabeS1 in view of Siemens, further in view of Epson, further in view of WatanabeA2, and further in view of Kawanishi because they are from the same field of endeavor, namely robot simulation and animation. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include selecting predetermined states from a pull-down menu, as taught by Seki into the teaching of WatanabeS1 in view of Siemens, further in view of Epson, further in view of WatanabeA2, and further in view of Kawanishi. The suggestion for doing so would allow a user to easily select between various robot states and quickly change those states. This can be particularly useful for testing purposes. Therefore, it would have been obvious to combine Seki with WatanabeS1 in view of Siemens, further in view of Epson, further in view of WatanabeA2, and further in view of Kawanishi.
55. Claims 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over WatanabeS1 (JP-2021024028-A) in view of Siemens (NPL: "Tecnomatix 15 – What’s New?," https://web.archive.org/web/20210804153316/https://blogs.sw.siemens.com/tecnomatix/tecnomatix-15-whats-new/, 2019; "Tecnomatix 15.0 Release Notes," https://roboticsbook.com/wp-content/uploads/2019/02/Tecnomatix_15.0_Release_Notes.pdf, 2019), further in view of Seiko Epson Corporation (NPL: "EPSON RC+ 7.0 (Ver.7.3) User's Guide Project Management and Development Rev.4," https://files.support.epson.com/far/docs/epson_rc_pl_70_users_guide-rc700a_rc90_t(v73r4).pdf, 2017, hereinafter "Epson"), and further in view of Ghormley (NPL: "Tutorial: Creating 3D Animations in TNTmips® TNTedit™ TNTview®," 1-16, 25 April 2005).
56. As per claim 19, WatanabeS1 in view of Siemens, and further in view of Epson discloses: The information processing apparatus according to claim 1, wherein the one or more processors are configured to cause the information processing apparatus to [[preview the animation file.]] (See rejection of claim 1 above.)
57. WatanabeS1 in view of Siemens, and further in view of Epson doesn't explicitly disclose but Ghormley discloses: preview the animation file. (Ghormley, Page 2, ¶ 1, “After you define the 3D animation, you can view a wireframe preview, … or create an MPEG file for later viewing and wider distribution.”)
58. Ghormley is analogous art with respect to WatanabeS1 in view of Siemens, and further in view of Epson because they are from the same field of endeavor, namely animation generation. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include previewing the animation file, as taught by Ghormley, into the teaching of WatanabeS1 in view of Siemens, and further in view of Epson. The suggestion for doing so would allow a user to confirm that an animation sequence is correct before performing actions such as rendering at a higher resolution and/or exporting the animation as a video file. Therefore, it would have been obvious to combine Ghormley with WatanabeS1 in view of Siemens, and further in view of Epson.
59. As per claim 20, WatanabeS1 in view of Siemens, further in view of Epson, and further in view of Ghormley discloses: The information processing apparatus according to claim 19, wherein the animation file is previewed as a pop-up window, and wherein the pop-up window displays a preview screen for previewing the animation file, (Ghormley, Page 4, ¶ 2, “TNT opens three windows: an Overhead View window (a familiar 2D view), a Perspective View window (familiar from the 3D Perspective Visualization tutorial), and an Animation Controls window. The Perspective View window contains a wireframe preview ...”) a playback button for playing back the animation file, a pause button for pausing the animation file, a stop button for stopping the animation file, a fast-forward button for fast-forwarding the animation file, a fast-reverse button for fast-reversing the animation file, (Ghormley, Page 8, ¶ 2, “First click the Play button, which runs the animation from the first position to the last. ... Try the Fast Reverse and Fast Forward buttons which drop frames to render the animation at 4X speed. The Pause button stops the animation at its current position so than any of the Play or Fast buttons resume the animation from that position. The Stop button also stops the animation at its current position ...”) and a time display for displaying a playback time of the animation file. (Ghormley, Page 13, ¶ 2, “The Motion panel lets you specify a length of time for the animation; the process automatically adjusts the relative speed.”)
60. Ghormley is analogous art with respect to WatanabeS1 in view of Siemens, and further in view of Epson because they are from the same field of endeavor, namely animation generation. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include a preview pop-up window, animation playback and control buttons, and a playback time display, as taught by Ghormley into the teaching of WatanabeS1 in view of Siemens, and further in view of Epson. The suggestion for doing so would allow a user to quickly and conveniently interact with a graphical and timed display to play a preview animation at a desired playback method or speed. Therefore, it would have been obvious to combine Ghormley with WatanabeS1 in view of Siemens, and further in view of Epson.
61. Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over WatanabeS1 (JP-2021024028-A) in view of Siemens (NPL: "Tecnomatix 15 – What’s New?," https://web.archive.org/web/20210804153316/https://blogs.sw.siemens.com/tecnomatix/tecnomatix-15-whats-new/, 2019; "Tecnomatix 15.0 Release Notes," https://roboticsbook.com/wp-content/uploads/2019/02/Tecnomatix_15.0_Release_Notes.pdf, 2019), further in view of Seiko Epson Corporation (NPL: "EPSON RC+ 7.0 (Ver.7.3) User's Guide Project Management and Development Rev.4," https://files.support.epson.com/far/docs/epson_rc_pl_70_users_guide-rc700a_rc90_t(v73r4).pdf, 2017, hereinafter "Epson"), and further in view of Rowoldt (US-11762716-B2).
62. As per claim 21, WatanabeS1 in view of Siemens, and further in view of Epson discloses: The information processing apparatus according to claim 1, wherein the information processing apparatus is connected to a network, and wherein the one or more processors are configured to cause the information processing apparatus to [[upload the animation file to a moving image sharing service via the network.]] (See rejection of claim 1 above.)
63. WatanabeS1 in view of Siemens, and further in view of Epson doesn't explicitly disclose but Rowoldt discloses: upload the animation file to a moving image sharing service via the network. (Rowoldt, Abstract, “A system includes ... an animation project, ... render the animation project into a video, and store the video in a database and generate a uniform resource locator (URL) for the video.” and Rowoldt, column 7, line 67-column 8, line 6, “The video may be uploaded to a particular server or service and the system may send a message such as an email or a push notification with the URL. In addition, the system may transmit the video file to a streaming service, upload the video file to YOUTUBE, or distribute the video file in another way.”)
64. Rowoldt is analogous art with respect to WatanabeS1 in view of Siemens, and further in view of Epson because they are from the same field of endeavor, namely animation generation. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include the ability to upload animation files, such as videos, to "moving image sharing services" (video sharing services), as taught by Rowoldt into the teaching of WatanabeS1 in view of Siemens, and further in view of Epson. The suggestion for doing so would provide an ability to easily distribute and playback an animation with any network connected device outside of the "information processing apparatus" for demonstration and convenience. Therefore, it would have been obvious to combine Rowoldt with WatanabeS1 in view of Siemens, and further in view of Epson.
65. Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over WatanabeS1 (JP-2021024028-A) in view of Siemens (NPL: "Tecnomatix 15 – What’s New?," https://web.archive.org/web/20210804153316/https://blogs.sw.siemens.com/tecnomatix/tecnomatix-15-whats-new/, 2019; "Tecnomatix 15.0 Release Notes," https://roboticsbook.com/wp-content/uploads/2019/02/Tecnomatix_15.0_Release_Notes.pdf, 2019), further in view of Seiko Epson Corporation (NPL: "EPSON RC+ 7.0 (Ver.7.3) User's Guide Project Management and Development Rev.4," https://files.support.epson.com/far/docs/epson_rc_pl_70_users_guide-rc700a_rc90_t(v73r4).pdf, 2017, hereinafter "Epson"), and further in view of Nakazato (US-10076840-B2).
66. As per claim 25, WatanabeS1 in view of Siemens, and further in view of Epson discloses: The information processing apparatus according to claim 24, wherein the one or more processors are configured to cause the information processing apparatus to [[make settings regarding display]] of the positional information with the operation terminal. (See rejection of claim 24 above.)
67. WatanabeS1 in view of Siemens, and further in view of Epson does not explicitly disclose, but Nakazato discloses: make settings regarding display (Nakazato, column 10, lines 51-56, “The display apparatus 1051 can be the optical see-through type or the video see-through type. Further, the display apparatus 1051 can be an HMD, a display device, a projector, a tablet terminal, or a smartphone and can be a display device attached to a teaching pendant of the robot.” and column 10, lines 39-41, “Alternatively, the setting unit 101 can acquire the information about the display apparatus 1051 according to an operation of the observer 1001.” and column 5, line 63-column 6, line 1, “Further, the information processing apparatus 1 can sense a movement of the observer 1001. The setting unit 101 may set a parameter according to a gesture of the observer 1001. The parameters having been set can be displayed on the display device 15 or a display device of the input device 16 (e.g., the remote controller).”)
68. Nakazato is analogous art with respect to WatanabeS1 in view of Siemens, and further in view of Epson because they are from the same field of endeavor, namely robot simulation. At the time the application was filed, it would have been obvious to a person of ordinary skill in the art to include the ability to make settings regarding display on an operation terminal, as taught by Nakazato, into the teaching of WatanabeS1 in view of Siemens, and further in view of Epson. The suggestion for doing so would be to allow the user to adjust display settings depending on the terminal, such as a head-mounted display or teaching pendant, to appropriately account for the available screen space and user input. Therefore, it would have been obvious to combine Nakazato with WatanabeS1 in view of Siemens, and further in view of Epson.
Conclusion
69. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW CLOTHIER whose telephone number is (571)272-4667. The examiner can normally be reached Mon-Fri 8:00am-4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MATTHEW CLOTHIER/Examiner, Art Unit 2614
/KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614