DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 17, 2025 has been entered.
Response to Amendment
The amendment filed December 17, 2025 has been entered. Claims 1-3, 6-10, 13-17, and 20 remain pending in the application.
Response to Arguments
Applicant’s arguments, see pages 10-12 of Remarks, filed December 17, 2025, with respect to the rejection(s) of claim(s) 1-3, 6-10, 13-17, and 20 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of 3S Cloud Render Farm (3S Cloud Render Farm | How To Render - Fast & Easy-to-use - Speed up rendering).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 8-10, and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Delfino (US 20230185425 A1) in view of Mitsui et al. (US 20180150989 A1) and 3S Cloud Render Farm (3S Cloud Render Farm | How To Render - Fast & Easy-to-use - Speed up rendering), hereinafter Delfino, Mitsui, and 3S respectively.
Regarding claim 1, Delfino teaches a rendering as a Service (RaaS) platform for generating a rendering (Paragraph 0013, 0066 – “It is therefore provided a computer-implemented method for rendering at least two visualization modes of a 3D model…typical example of computer-implementation of a method is to perform the method with a system adapted for this purpose, e.g. a server…The example of a system showed in FIG. 22 may also be a server, e.g., a server hosting a database”; Note: the system is equivalent to the rendering as a service platform) of industrial equipment (Fig. 21 – The figure shows a rendered display including a 3D model of headphones, which is a type of industrial equipment used for noise protection; see screenshot of Fig. 21 below), the RaaS platform comprising:
Screenshot of Fig. 21 (taken from Delfino)
one or more memory devices having instructions stored thereon that, when executed by one or more processors, cause the one or more processors to perform operations (Paragraph 0066-0068 – “The system may comprise a processor coupled to a memory; the memory having recorded thereon a computer program comprising instructions for performing the method… A mass storage device controller 1020 manages accesses to a mass memory device, such as hard drive 1030…Method steps may be performed by a programmable processor executing a program of instructions to perform functions of the method”) comprising:
generating, at the RaaS platform (Paragraph 0068 – “Method steps may be performed by a programmable processor executing a program of instructions to perform functions of the method”; Note: the steps are performed at the system by the processor), an interface comprising (i) a graphical representation of the rendering of the industrial equipment based on a 3D model of the industrial equipment (Fig. 21, Paragraph 0056, 0060 – “the user may perform a selection of the category of visualization modes to render…The method renders S40 the 3D model in each split view 100 according to its corresponding visualization mode and determined rendering area”; Note: the rendering of the industrial equipment is shown on an interface in Fig. 21, where the headphone is industrial equipment for noise protection);
generating, at the RaaS platform (Paragraph 0068 – “Method steps may be performed by a programmable processor executing a program of instructions to perform functions of the method”; Note: the steps are performed at the system by the processor), and based on a user selection of a selectable icon, a graphical representation of a plurality of selectable rendering options (Paragraph 0055-0056, 0069, 0072 – “the main view 200 is pre-existent to our method. As an example, it is the view wherein the design of the 3D model is performed. The main view 200 may also render the 3D model based on a preferred visualization mode. The preferred visualization mode may be chosen by the user or automatically selected by the application…The number of visualization modes to render may be comprised between 2 to more than 100, and more preferably between 2 to 8. When the number of visualization modes is too large, for example more than 10, it is preferable to group the visualization modes by categories and to render only the visualization modes of a selected category. The grouping of the visualization may be provided or performed between steps S20 and S20 by the user with any standard user interface such as a multiple selection listbox. Then the user may perform a selection of the category of visualization modes to render… a default visualization mode may be preselected in order to ensure a real-time rendering. Then the user can just confirm or not this preselection… the receiving S50 may comprise any user interaction. For example, a mouse click on the split view 100 may be received”; Note: the multiple selection listbox is equivalent to the graphical representation of a plurality of selectable rendering options. It is implied to be generated because it could not be displayed to the user interface otherwise. 
While it is not explicitly stated that there is a selection of an icon, it would have been obvious to one of ordinary skill in the art to generate the rendering options based on a selection of an icon because the user must somehow indicate that they want to select their preferred visualization mode in order for the options of visualization modes to appear. Especially in the case where a default mode is preselected, the user must confirm it or not, which would require user selection);
transmitting the graphical representation of the plurality of selectable rendering options from the RaaS platform to the user device for presentation via the graphical user interface of the user device (Fig. 3, Paragraph 0055-0056, 0065, 0068 – “the user may perform a selection of the category of visualization modes to render…The grouping of the visualization may be provided or performed between steps S20 and S20 by the user with any standard user interface such as a multiple selection listbox. Then the user may perform a selection of the category of visualization modes to render…The display 1080 may be a monitor or the like as known in the art. The display 1080 may be touch-sensitive display 1080…The processor may thus be programmable and coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device”; Note: The display is equivalent to the user device. It is implied that the processor of the system must transmit the visualization modes to the display in order for the user to select them. An example interface showing selectable rendering options is shown in Fig. 3; see screenshot of Fig. 3 below);
Screenshot of Fig. 3 (taken from Delfino)
and receiving, at the RaaS platform (Paragraph 0068 – “Method steps may be performed by a programmable processor executing a program of instructions to perform functions of the method”; Note: the steps are performed at the system by the processor), a user selection of rendering options via a second interaction with the graphical user interface of the user device (Paragraph 0056 – “The grouping of the visualization may be provided or performed between steps S20 and S20 by the user with any standard user interface such as a multiple selection listbox. Then the user may perform a selection of the category of visualization modes to render. For example, the user can launch a command to choose a category of visualization modes grouped under ‘lightning’ for the current 3D model or the 3D scene and will get an immersive selector with viewers showing various lightning visualization modes. Next, the user can launch a rendering command and the viewers will be updated with rendering visualization options”; Note: it is implied that the user selection for the visualization mode is received by the system because the viewers would not be updated otherwise).
Delfino does not teach “a preview of the rendering” in the limitation: “generating, at the RaaS platform, an interface comprising (i) a graphical representation of a preview of the rendering of the industrial equipment based on a 3D model of the industrial equipment”. However, Mitsui teaches generating, at the RaaS platform, a graphical representation of a preview of the rendering of the industrial equipment based on a 3D model of the industrial equipment (Paragraph 0057, 0072 – “The section 20 displays a two dimensional image that is generated by performing a projective transformation to a partial region of a full-view spherical image, which corresponds to a three dimensional model…when the user selects the “preview” button displayed on the section 23 on the application screen illustrated in FIG. 7(b), an animation illustrated in FIG. 8 is played and displayed on the section 20. The user can check or confirm whether the animation of the full-view spherical image is generated in line with the user intention by using the preview playing function”; Note: the animation is a graphical representation of a preview of a rendering. The RaaS platform and industrial equipment were previously taught by Delfino above). A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the preview of the rendering of Mitsui could have been substituted for the rendering of Delfino because both the rendering and the preview of the rendering serve the purpose of showing a visualization of a subject. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of displaying a visualization of a subject with certain viewing settings.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the preview of the rendering of Mitsui for the rendering of Delfino according to known methods to yield the predictable result of displaying a visualization of a subject with certain viewing settings.
Moreover, Delfino does not teach the plurality of selectable rendering options comprising (i) a plurality of views of the 3D model from a plurality of different viewpoints of a virtual camera, (ii) a plurality of different center points of the 3D model, and (iii) a plurality of different movement paths of the virtual camera in a 3D space relative to the 3D model; nor user selection of the rendering options comprising (i) a viewpoint of the virtual camera from the plurality of different viewpoints, (ii) an identification of a center point of the 3D model from the plurality of different center points, and (iii) a movement path of the virtual camera from the plurality of different movement paths. However, Mitsui teaches the plurality of selectable rendering options comprising (i) a plurality of views of the 3D model from a plurality of different viewpoints of a virtual camera, (ii) a plurality of different center points of the 3D model (Fig. 27, Paragraph 0150-0152 – “the parameters include, for example, a viewpoint specified as a desired position in a partial image, and an angle of view “a” indicating a range of angle capturable by the camera 202. As illustrated in FIG. 23, the parameters further include camera coordinates indicating a position of the camera 202, camera UP indicating the upward direction of the camera 202 as a vector, camera gazing point coordinates indicating a point that the camera 202 is gazing (i.e., gazing point), and a radius of the sphere 200 that is a virtual three dimensional object used for mapping a full-view spherical image. Hereinafter, the camera gazing point may be also referred to as the gazing point… It should be noted that these parameters are parameters necessary for changing the projection type or projection mode”; Note: the parameters are rendering options, and the gazing point is equivalent to a center point. Fig.
27 shows graphical representations for choosing different types that correspond to different parameters; see screenshot of Fig. 27 below), and (iii) a plurality of different movement paths of the virtual camera in a 3D space relative to the 3D model (Paragraph 0062, 0119-120 – “When a first viewpoint and a second viewpoint next to the first viewpoint are specified or designated in the three dimensional model of the full-view spherical image, the viewpoint control unit 106 performs a transition of viewpoints from the first viewpoint to the second viewpoint along a transition path interpolating between the first viewpoint and the second viewpoint…when a user designates a viewpoint in a partial image displayed on the section 20, an application screen transits to a state illustrated in FIG. 16A(b), and an icon 29 indicating the registration completion of the viewpoint is displayed in the partial image… The icon 29 includes a direction instruction button having four arrows for specifying the vertical direction (i.e., tilt direction) and the lateral direction (i.e., pan direction) of the partial image. As illustrated in FIG. 16B(c), in response to a selection of any one of the four arrows of the direction instruction button by the user, a transition direction to the next to-be-registered viewpoint is set”; Note: the icon is a graphical representation of a rendering option corresponding to the transition between viewpoints, which is equivalent to the camera movement path. The spherical image represents a 3D space); and user selection of the rendering options comprising (i) a viewpoint of the virtual camera from the plurality of different viewpoints, (ii) an identification of a center point of the 3D model from the plurality of different center points (Fig.
27, Paragraph 0150-0152 – “the parameters include, for example, a viewpoint specified as a desired position in a partial image, and an angle of view “a” indicating a range of angle capturable by the camera 202. As illustrated in FIG. 23, the parameters further include camera coordinates indicating a position of the camera 202, camera UP indicating the upward direction of the camera 202 as a vector, camera gazing point coordinates indicating a point that the camera 202 is gazing (i.e., gazing point), and a radius of the sphere 200 that is a virtual three dimensional object used for mapping a full-view spherical image. Hereinafter, the camera gazing point may be also referred to as the gazing point… It should be noted that these parameters are parameters necessary for changing the projection type or projection mode”; Note: the parameters are rendering options, and the gazing point is equivalent to a center point. Fig. 27 shows graphical representations for choosing different types that correspond to different parameters; see screenshot of Fig. 27 below), and (iii) a movement path of the virtual camera from the plurality of different movement paths (Paragraph 0062, 0119-120 – “When a first viewpoint and a second viewpoint next to the first viewpoint are specified or designated in the three dimensional model of the full-view spherical image, the viewpoint control unit 106 performs a transition of viewpoints from the first viewpoint to the second viewpoint along a transition path interpolating between the first viewpoint and the second viewpoint…when a user designates a viewpoint in a partial image displayed on the section 20, an application screen transits to a state illustrated in FIG. 
16A(b), and an icon 29 indicating the registration completion of the viewpoint is displayed in the partial image… The icon 29 includes a direction instruction button having four arrows for specifying the vertical direction (i.e., tilt direction) and the lateral direction (i.e., pan direction) of the partial image. As illustrated in FIG. 16B(c), in response to a selection of any one of the four arrows of the direction instruction button by the user, a transition direction to the next to-be-registered viewpoint is set”; Note: the icon is a graphical representation of a rendering option corresponding to the transition between viewpoints, which is equivalent to the camera movement path. The spherical image represents a 3D space). Since Delfino already allows “a modification of the point of view and/or a modification of the position and/or orientation of the 3D model” (Delfino: Paragraph 0050), it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of Mitsui to have selectable rendering options include viewpoint, center point, and camera movement path because having selectable options for the point of view would enhance the user experience by making it easier for users to view the model as desired. Specifically, choosing the viewpoint and center point allows the user to easily see the desired part of the model, and choosing the camera movement path allows the user to dictate the order and direction of the viewing.
Screenshot of Fig. 27 (taken from Mitsui)
Lastly, Delfino does not teach generating, based on the 3D model and the user selection of the rendering options, the preview of the rendering of the industrial equipment, and transmitting the preview to the user device, wherein the preview of the rendering of the industrial equipment comprises an animation of the industrial equipment taken from the selected viewpoint and with movement of the virtual camera along the selected movement path relative to the selected center point. However, Mitsui teaches generating, based on the 3D model and the user selection of the rendering options, the preview of the rendering of the industrial equipment, and transmitting the preview to the user device (Paragraph 0057, 0072, 0135 – “The section 20 displays a two dimensional image that is generated by performing a projective transformation to a partial region of a full-view spherical image, which corresponds to a three dimensional model…when the user selects the “preview” button displayed on the section 23 on the application screen illustrated in FIG. 7(b), an animation illustrated in FIG. 8 is played and displayed on the section 20. The user can check or confirm whether the animation of the full-view spherical image is generated in line with the user intention by using the preview playing function… the user can variously change and set the above described each value (e.g., preset data, background audio, transition speed, crop region), and the preview can be played each time the value is changed and set”; Note: a preview is generated and displayed onto the user device.
The preview is based on a 3D model and values set by the user), wherein the preview of the rendering of the industrial equipment comprises an animation of the industrial equipment taken from the selected viewpoint and with movement of the virtual camera along the selected movement path relative to the selected center point (Paragraph 0143, 0156 – “When the user selects the “preview” button at this timing, the animation is generated based on the updated preset data 600. In this case, the viewpoint control unit 106 transits the viewpoints along a modified transition path interpolated based on the changed end viewpoint, the calculation unit 104 generates a plurality of partial images based on the viewpoints transiting along the modified transition path and the changed angle of view of the end viewpoint, and the control unit 103 connects the plurality of partial images generated in this way as an animation, and display the animation on the section 40… When the first projection type is applied, as illustrated in FIG. 24A, the gazing point is set at the origin 201 that is the center of the sphere 200, and the camera 202 is disposed at a position outside the radius of the sphere 200”; Note: the gazing point is equivalent to the center point, and the transition path is equivalent to movement path). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of Mitsui to generate and transmit the preview to the user device because it would allow the user to see the preview and determine their preferences based on the preview. Logically, there would be no purpose in having a preview if it could not be seen by the user.
Delfino modified by Mitsui still does not teach generating, at the RaaS platform, an interface comprising (ii) a preview of a cost of generating the rendering of the industrial equipment based on the 3D model and a number of views of the rendering to be generated, and (iii) a selectable icon indicating user approval of the preview of the rendering and the preview of the cost; transmitting the interface to a user device for presentation on a graphical user interface of the user device; receiving, at the RaaS platform, a user selection of the selectable icon via a first interaction with the graphical user interface on the user device. However, 3S teaches generating, at the RaaS platform, an interface comprising (ii) a preview of a cost of generating the rendering of the industrial equipment based on the 3D model and a number of views of the rendering to be generated (Screenshots 1-3 – There is a render preview option on the RaaS platform that shows a preview of the render and a cost estimation, which is equivalent to a preview of a cost of generating the rendering. The cost estimation is shown on screenshot 3. The cost estimation is based on the uploaded scene (screenshot 1) and the input parameters (screenshot 2). The input parameters include the camera, which is a view. Only one camera can be input, which means the cost estimation is based on one view. 
The 3D model was previously taught by Delfino, and is represented by the uploaded scene in the context of 3S), and (iii) a selectable icon indicating user approval of the preview of the rendering and the preview of the cost (Screenshot 3 – There is a “Review & Launch Render” button, which is a selectable icon indicating user approval of the preview and cost); transmitting the interface to a user device for presentation on a graphical user interface of the user device (Screenshot 3 – There is an interface shown to the user on a user device, which implies that it was transmitted to the user device); receiving, at the RaaS platform, a user selection of the selectable icon via a first interaction with the graphical user interface on the user device (Screenshot 4 – This screenshot shows the loading screen after the selectable icon, “Review & Launch Render”, was clicked on by the user).
Screenshot 1 (taken from 3S)
Screenshot 2 (taken from 3S)
Screenshot 3 (taken from 3S)
Screenshot 4 (taken from 3S)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of 3S to generate a preview of a cost to render based on the 3D model and number of views for the benefit of allowing the user to make a more informed decision on the rendering they want to generate. For instance, if the rendering would be costly to generate, the user is able to know ahead of time so that they can perfect their 3D model and edit the parameters to achieve the desired rendering at an appropriate cost. Furthermore, the 3D model file and number of views would affect how much computational resources or time is used and thus would affect the cost of the render. It also would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of 3S to have a selectable icon to approve the preview and cost because having buttons for users to click on before moving on to the next page is common in the art and known for being user-friendly. For example, in screenshots 1-4 of 3S, there are buttons on each page for users to click on to indicate approval, and in Delfino, there are options on screen for the user to select (Paragraph 0065 – “the pointer control device allows the user to select various commands, and input control signals”). Additionally, having a selectable icon specifically for approving the preview and cost helps ensure that the user does not waste money or time on a render that does not meet their needs.
Regarding claim 2, Delfino in view of Mitsui and 3S teaches the RaaS platform of claim 1. Delfino further teaches wherein the plurality of selectable rendering options comprise an identification of a light source used to illuminate the 3D model of industrial equipment when generating the rendering of the industrial equipment (Paragraph 0056 – “The grouping of the visualization may be provided or performed between steps S20 and S20 by the user with any standard user interface such as a multiple selection listbox. Then the user may perform a selection of the category of visualization modes to render…A visualization mode is defined by a technique of rendering and a set of parameters. As an example, a plurality of lightning visualization modes could comprise a parameter for setting the number of lights, the orientation and the position of each light, the type of each light (punctual, infinite or directional), the intensity and the color of each light”; Note: the selection of a visualization mode is equivalent to a selectable rendering option for setting a light. Additionally, the rendering of the industrial equipment was previously taught in the claim 1 rejection above).
Regarding claim 3, Delfino in view of Mitsui and 3S teaches the RaaS platform of claim 1. Delfino further teaches wherein the plurality of selectable rendering options comprise an identification of a placement of a light source used to illuminate the 3D model of industrial equipment when generating the rendering of the industrial equipment (Paragraph 0056 – “The grouping of the visualization may be provided or performed between steps S20 and S20 by the user with any standard user interface such as a multiple selection listbox. Then the user may perform a selection of the category of visualization modes to render…A visualization mode is defined by a technique of rendering and a set of parameters. As an example, a plurality of lightning visualization modes could comprise a parameter for setting the number of lights, the orientation and the position of each light, the type of each light (punctual, infinite or directional), the intensity and the color of each light”; Note: the selection of a visualization mode is equivalent to a selectable option for setting the position of a light. Additionally, the rendering of the industrial equipment was previously taught in the claim 1 rejection above).
Regarding claim 8, Delfino teaches a system for generating a rendering (Paragraph 0013, 0066 – “It is therefore provided a computer-implemented method for rendering at least two visualization modes of a 3D model…typical example of computer-implementation of a method is to perform the method with a system adapted for this purpose, e.g. a server…The example of a system showed in FIG. 22 may also be a server, e.g., a server hosting a database”) of industrial equipment (Fig. 21 – The figure shows a rendered display including a 3D model of headphones, which is a type of industrial equipment used for noise protection; see screenshot of Fig. 21 above), the system comprising:
one or more memory devices having instructions stored thereon that, when executed by one or more processors, cause the one or more processors to perform operations (Paragraph 0066-0068 – “The system may comprise a processor coupled to a memory; the memory having recorded thereon a computer program comprising instructions for performing the method… A mass storage device controller 1020 manages accesses to a mass memory device, such as hard drive 1030…Method steps may be performed by a programmable processor executing a program of instructions to perform functions of the method”) comprising:
generating an interface comprising (i) a graphical representation of the rendering of the industrial equipment based on a 3D model of the industrial equipment (Fig. 21, Paragraph 0056, 0060 – “the user may perform a selection of the category of visualization modes to render…The method renders S40 the 3D model in each split view 100 according to its corresponding visualization mode and determined rendering area”; Note: the rendering of the industrial equipment is shown on an interface in Fig. 21, where the headphone is industrial equipment for noise protection);
generating, based on a user selection of a selectable icon, a graphical representation of a plurality of selectable rendering options (Paragraph 0055-0056, 0069, 0072 – “the main view 200 is pre-existent to our method. As an example, it is the view wherein the design of the 3D model is performed. The main view 200 may also render the 3D model based on a preferred visualization mode. The preferred visualization mode may be chosen by the user or automatically selected by the application…The number of visualization modes to render may be comprised between 2 to more than 100, and more preferably between 2 to 8. When the number of visualization modes is too large, for example more than 10, it is preferable to group the visualization modes by categories and to render only the visualization modes of a selected category. The grouping of the visualization may be provided or performed between steps S20 and S20 by the user with any standard user interface such as a multiple selection listbox. Then the user may perform a selection of the category of visualization modes to render… a default visualization mode may be preselected in order to ensure a real-time rendering. Then the user can just confirm or not this preselection… the receiving S50 may comprise any user interaction. For example, a mouse click on the split view 100 may be received”; Note: the multiple selection listbox is equivalent to the graphical representation of a plurality of selectable rendering options. It is implied to be generated because it could not be displayed to the user interface otherwise. While it is not explicitly stated that there is a selection of an icon, it would have been obvious to one of ordinary skill in the art to generate the rendering options based on a selection of an icon because the user must somehow indicate that they want to select their preferred visualization mode in order for the options of visualization modes to appear.
Especially in the case where a default mode is preselected, the user must confirm it or not, which would require user selection);
transmitting the graphical representation of the plurality of selectable rendering options from the RaaS platform to the user device for presentation via the graphical user interface of the user device (Fig. 3, Paragraph 0055-0056, 0065, 0068 – “the user may perform a selection of the category of visualization modes to render…The grouping of the visualization may be provided or performed between steps S20 and S20 by the user with any standard user interface such as a multiple selection listbox. Then the user may perform a selection of the category of visualization modes to render…The display 1080 may be a monitor or the like as known in the art. The display 1080 may be touch-sensitive display 1080…The processor may thus be programmable and coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device”; Note: The display is equivalent to the user device. It is implied that the processor of the system must transmit the visualization modes to the display in order for the user to select them. An example interface showing selectable rendering options is shown in Fig. 3; see screenshot of Fig. 3 above);
and receiving a user selection of rendering options via a second interaction with the graphical user interface of the user device (Paragraph 0056 – “The grouping of the visualization may be provided or performed between steps S20 and S20 by the user with any standard user interface such as a multiple selection listbox. Then the user may perform a selection of the category of visualization modes to render. For example, the user can launch a command to choose a category of visualization modes grouped under ‘lightning’ for the current 3D model or the 3D scene and will get an immersive selector with viewers showing various lightning visualization modes. Next, the user can launch a rendering command and the viewers will be updated with rendering visualization options”; Note: it is implied that the user selection for the visualization mode is received by the system because the viewers would not be updated otherwise).
Delfino does not teach “a preview of the rendering” in the limitation: “generating an interface comprising (i) a graphical representation of a preview of the rendering of the industrial equipment based on a 3D model of the industrial equipment”. However, Mitsui teaches generating a graphical representation of a preview of the rendering of the industrial equipment based on a 3D model of the industrial equipment (Paragraph 0057, 0072 – “The section 20 displays a two dimensional image that is generated by performing a projective transformation to a partial region of a full-view spherical image, which corresponds to a three dimensional model…when the user selects the “preview” button displayed on the section 23 on the application screen illustrated in FIG. 7(b), an animation illustrated in FIG. 8 is played and displayed on the section 20. The user can check or confirm whether the animation of the full-view spherical image is generated in line with the user intention by using the preview playing function”; Note: the animation is a graphical representation of a preview of a rendering. The RaaS platform and industrial equipment were previously taught by Delfino above). A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the preview of the rendering of Mitsui could have been substituted for the rendering of Delfino because both the rendering and the preview of the rendering serve the purpose of showing a visualization of a subject. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of displaying a visualization of a subject with certain viewing settings.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the preview of the rendering of Mitsui for the rendering of Delfino according to known methods to yield the predictable result of displaying a visualization of a subject with certain viewing settings.
Moreover, Delfino does not teach the plurality of selectable rendering options comprising (i) a plurality of views of the 3D model from a plurality of different viewpoints of a virtual camera, (ii) a plurality of different center points of the 3D model, and (iii) a plurality of different movement paths of the virtual camera in a 3D space relative to the 3D model; nor the user selection of the rendering options comprising (i) a viewpoint of the virtual camera from the plurality of different viewpoints, (ii) an identification of a center point of the 3D model from the plurality of different center points, and (iii) a movement path of the virtual camera from the plurality of different movement paths. However, Mitsui teaches the plurality of selectable rendering options comprising (i) a plurality of views of the 3D model from a plurality of different viewpoints of a virtual camera, (ii) a plurality of different center points of the 3D model (Fig. 27, Paragraph 0150-0152 – “the parameters include, for example, a viewpoint specified as a desired position in a partial image, and an angle of view “a” indicating a range of angle capturable by the camera 202. As illustrated in FIG. 23, the parameters further include camera coordinates indicating a position of the camera 202, camera UP indicating the upward direction of the camera 202 as a vector, camera gazing point coordinates indicating a point that the camera 202 is gazing (i.e., gazing point), and a radius of the sphere 200 that is a virtual three dimensional object used for mapping a full-view spherical image. Hereinafter, the camera gazing point may be also referred to as the gazing point… It should be noted that these parameters are parameters necessary for changing the projection type or projection mode”; Note: the parameters are rendering options, and the gazing point is equivalent to a center point. Fig.
27 shows graphical representations for choosing different types that correspond to different parameters; see screenshot of Fig. 27 above), and (iii) a plurality of different movement paths of the virtual camera in a 3D space relative to the 3D model (Paragraph 0062, 0119-0120 – “When a first viewpoint and a second viewpoint next to the first viewpoint are specified or designated in the three dimensional model of the full-view spherical image, the viewpoint control unit 106 performs a transition of viewpoints from the first viewpoint to the second viewpoint along a transition path interpolating between the first viewpoint and the second viewpoint…when a user designates a viewpoint in a partial image displayed on the section 20, an application screen transits to a state illustrated in FIG. 16A(b), and an icon 29 indicating the registration completion of the viewpoint is displayed in the partial image… The icon 29 includes a direction instruction button having four arrows for specifying the vertical direction (i.e., tilt direction) and the lateral direction (i.e., pan direction) of the partial image. As illustrated in FIG. 16B(c), in response to a selection of any one of the four arrows of the direction instruction button by the user, a transition direction to the next to-be-registered viewpoint is set”; Note: the icon is a graphical representation of a rendering option corresponding to the transition between viewpoints, which is equivalent to the camera movement path. The spherical image represents a 3D space); and the user selection of the rendering options comprising (i) a viewpoint of the virtual camera from the plurality of different viewpoints, (ii) an identification of a center point of the 3D model from the plurality of different center points (Fig.
27, Paragraph 0150-0152 – “the parameters include, for example, a viewpoint specified as a desired position in a partial image, and an angle of view “a” indicating a range of angle capturable by the camera 202. As illustrated in FIG. 23, the parameters further include camera coordinates indicating a position of the camera 202, camera UP indicating the upward direction of the camera 202 as a vector, camera gazing point coordinates indicating a point that the camera 202 is gazing (i.e., gazing point), and a radius of the sphere 200 that is a virtual three dimensional object used for mapping a full-view spherical image. Hereinafter, the camera gazing point may be also referred to as the gazing point… It should be noted that these parameters are parameters necessary for changing the projection type or projection mode”; Note: the parameters are rendering options, and the gazing point is equivalent to a center point. Fig. 27 shows graphical representations for choosing different types that correspond to different parameters; see screenshot of Fig. 27 above), and (iii) a movement path of the virtual camera from the plurality of different movement paths (Paragraph 0062, 0119-120 – “When a first viewpoint and a second viewpoint next to the first viewpoint are specified or designated in the three dimensional model of the full-view spherical image, the viewpoint control unit 106 performs a transition of viewpoints from the first viewpoint to the second viewpoint along a transition path interpolating between the first viewpoint and the second viewpoint…when a user designates a viewpoint in a partial image displayed on the section 20, an application screen transits to a state illustrated in FIG. 
16A(b), and an icon 29 indicating the registration completion of the viewpoint is displayed in the partial image… The icon 29 includes a direction instruction button having four arrows for specifying the vertical direction (i.e., tilt direction) and the lateral direction (i.e., pan direction) of the partial image. As illustrated in FIG. 16B(c), in response to a selection of any one of the four arrows of the direction instruction button by the user, a transition direction to the next to-be-registered viewpoint is set”; Note: the icon is a graphical representation of a rendering option corresponding to the transition between viewpoints, which is equivalent to the camera movement path. The spherical image represents a 3D space). Since Delfino already allows “a modification of the point of view and/or a modification of the position and/or orientation of the 3D model” (Delfino: Paragraph 0050), it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of Mitsui to have selectable rendering options include viewpoint, center point, and camera movement path because having selectable options for the point of view would enhance the user experience by making it easier for users to view the model as desired. Specifically, choosing the viewpoint and center point allows the user to easily see the desired part of the model, and choosing the camera movement path allows the user to dictate the order and direction of the viewing.
Lastly, Delfino does not teach generating and transmitting, based on the 3D model and the user selection of the rendering options, to generate the preview of the rendering of the industrial equipment to the user device, wherein the preview of the rendering of the industrial equipment comprises an animation of the industrial equipment taken from the selected viewpoint and with movement of the virtual camera along the selected movement path relative to the selected center point. However, Mitsui teaches generating and transmitting, based on the 3D model and the user selection of the rendering options, to generate the preview of the rendering of the industrial equipment to the user device (Paragraph 0057, 0072, 0135 – “The section 20 displays a two dimensional image that is generated by performing a projective transformation to a partial region of a full-view spherical image, which corresponds to a three dimensional model…when the user selects the “preview” button displayed on the section 23 on the application screen illustrated in FIG. 7(b), an animation illustrated in FIG. 8 is played and displayed on the section 20. The user can check or confirm whether the animation of the full-view spherical image is generated in line with the user intention by using the preview playing function… the user can variously change and set the above described each value (e.g., preset data, background audio, transition speed, crop region), and the preview can be played each time the value is changed and set”; Note: a preview is generated and displayed onto the user device. 
The preview is based on a 3D model and values set by the user), wherein the preview of the rendering of the industrial equipment comprises an animation of the industrial equipment taken from the selected viewpoint and with movement of the virtual camera along the selected movement path relative to the selected center point (Paragraph 0143, 0156 – “When the user selects the “preview” button at this timing, the animation is generated based on the updated preset data 600. In this case, the viewpoint control unit 106 transits the viewpoints along a modified transition path interpolated based on the changed end viewpoint, the calculation unit 104 generates a plurality of partial images based on the viewpoints transiting along the modified transition path and the changed angle of view of the end viewpoint, and the control unit 103 connects the plurality of partial images generated in this way as an animation, and display the animation on the section 40… When the first projection type is applied, as illustrated in FIG. 24A, the gazing point is set at the origin 201 that is the center of the sphere 200, and the camera 202 is disposed at a position outside the radius of the sphere 200”; Note: the gazing point is equivalent to the center point, and the transition path is equivalent to movement path). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of Mitsui to generate and transmit the preview to the user device because it would allow the user to see the preview and determine their preferences based on the preview. Logically, there would be no purpose in having a preview if it could not be seen by the user.
Delfino modified by Mitsui still does not teach generating an interface comprising (ii) a preview of a cost of generating the rendering of the industrial equipment based on the 3D model and a number of views of the rendering to be generated, and (iii) a selectable icon indicating user approval of the preview of the rendering and the preview of the cost; transmitting the interface to a user device for presentation on a graphical user interface of the user device; receiving, at the RaaS platform, a user selection of the selectable icon via a first interaction with the graphical user interface on the user device. However, 3S teaches generating an interface comprising (ii) a preview of a cost of generating the rendering of the industrial equipment based on the 3D model and a number of views of the rendering to be generated (Screenshots 1-3 – There is a render preview option on the RaaS platform that shows a preview of the render and a cost estimation, which is equivalent to a preview of a cost of generating the rendering. The cost estimation is shown on screenshot 3. The cost estimation is based on the uploaded scene (screenshot 1) and the input parameters (screenshot 2). The input parameters include the camera, which is a view. Only one camera can be input, which means the cost estimation is based on one view. 
The 3D model was previously taught by Delfino, and is represented by the uploaded scene in the context of 3S), and (iii) a selectable icon indicating user approval of the preview of the rendering and the preview of the cost (Screenshot 3 – There is a “Review & Launch Render” button, which is a selectable icon indicating user approval of the preview and cost); transmitting the interface to a user device for presentation on a graphical user interface of the user device (Screenshot 3 – There is an interface shown to the user on a user device, which implies that it was transmitted to the user device); receiving, at the RaaS platform, a user selection of the selectable icon via a first interaction with the graphical user interface on the user device (Screenshot 4 – This screenshot shows the loading screen after the selectable icon, “Review & Launch Render”, was clicked on by the user). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of 3S to generate a preview of a cost to render based on the 3D model and number of views for the benefit of allowing the user to make a more informed decision on the rendering they want to generate. For instance, if it costs a lot to generate, the user is able to know ahead of time so that they can perfect their 3D model and edit the parameters to achieve the desired rendering at an appropriate cost. Furthermore, the 3D model file and number of views would affect how much computational resources or time is used and thus would affect the cost of the render. 
It also would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of 3S to have a selectable icon to approve the preview and cost because having buttons for users to click on before moving on to the next page is common in the art and known for being user-friendly. For example, in screenshots 1-4 of 3S, there are buttons on each page for users to click on to indicate approval, and in Delfino, there are options on screen for the user to select (Paragraph 0065 – “the pointer control device allows the user to select various commands, and input control signals”). Additionally, having a selectable icon specifically for approving the preview and cost helps ensure that the user does not waste money or time on a render that does not meet their needs.
Regarding claim 9, Delfino in view of Mitsui and 3S teaches the system of claim 8. Delfino further teaches wherein the plurality of selectable rendering options comprise an identification of a light source used to illuminate the 3D model of industrial equipment when generating the rendering of the industrial equipment (Paragraph 0056 – “The grouping of the visualization may be provided or performed between steps S20 and S20 by the user with any standard user interface such as a multiple selection listbox. Then the user may perform a selection of the category of visualization modes to render…A visualization mode is defined by a technique of rendering and a set of parameters. As an example, a plurality of lightning visualization modes could comprise a parameter for setting the number of lights, the orientation and the position of each light, the type of each light (punctual, infinite or directional), the intensity and the color of each light”; Note: the selection of a visualization mode is equivalent to a selectable rendering option for setting a light. Additionally, the rendering of the industrial equipment was previously taught in the claim 8 rejection above).
Regarding claim 10, Delfino in view of Mitsui and 3S teaches the system of claim 8. Delfino further teaches wherein the plurality of selectable rendering options comprise an identification of a placement of a light source used to illuminate the 3D model of industrial equipment when generating the rendering of the industrial equipment (Paragraph 0056 – “The grouping of the visualization may be provided or performed between steps S20 and S20 by the user with any standard user interface such as a multiple selection listbox. Then the user may perform a selection of the category of visualization modes to render…A visualization mode is defined by a technique of rendering and a set of parameters. As an example, a plurality of lightning visualization modes could comprise a parameter for setting the number of lights, the orientation and the position of each light, the type of each light (punctual, infinite or directional), the intensity and the color of each light”; Note: the selection of a visualization mode is equivalent to a selectable option for setting the position of a light. Additionally, the rendering of the industrial equipment was previously taught in the claim 8 rejection above).
Regarding claim 15, Delfino teaches a non-transitory computer readable medium comprising instructions stored thereon that, when executed by one or more processors (Paragraph 0066-0068 – “The system may comprise a processor coupled to a memory; the memory having recorded thereon a computer program comprising instructions for performing the method…A mass storage device controller 1020 manages accesses to a mass memory device, such as hard drive 1030. Mass memory devices suitable for tangibly embodying computer program instructions and data include all forms of nonvolatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM disks 1040…Method steps may be performed by a programmable processor executing a program of instructions to perform functions of the method”; Note: the memory device can be a hard drive, which is non-transitory), cause the one or more processors to:
generate an interface comprising (i) a graphical representation of the rendering of the industrial equipment based on a 3D model of the industrial equipment (Fig. 21, Paragraph 0056, 0060 – “the user may perform a selection of the category of visualization modes to render…The method renders S40 the 3D model in each split view 100 according to its corresponding visualization mode and determined rendering area”; Note: the rendering of the industrial equipment is shown on an interface in Fig. 21, where the headphone is industrial equipment for noise protection);
generate based on a user selection of a selectable icon, a graphical representation of a plurality of selectable rendering options (Paragraph 0055-0056, 0069, 0072 – “the main view 200 is pre-existent to our method. As an example, it is the view wherein the design of the 3D model is performed. The main view 200 may also render the 3D model based on a preferred visualization mode. The preferred visualization mode may be chosen by the user or automatically selected by the application…The number of visualization modes to render may be comprised between 2 to more than 100, and more preferably between 2 to 8. When the number of visualization modes is too large, for example more than 10, it is preferable to group the visualization modes by categories and to render only the visualization modes of a selected category. The grouping of the visualization may be provided or performed between steps S20 and S20 by the user with any standard user interface such as a multiple selection listbox. Then the user may perform a selection of the category of visualization modes to render… a default visualization mode may be preselected in order to ensure a real-time rendering. Then the user can just confirm or not this preselection… the receiving S50 may comprise any user interaction. For example, a mouse click on the split view 100 may be received”; Note: the multiple selection listbox is equivalent to the graphical representation of a plurality of selectable rendering options. It is implied to be generated because it could not be displayed to the user interface otherwise. While it is not explicitly stated that there is a selection of an icon, it would have been obvious to one of ordinary skill in the art to generate the rendering options based on a selection of an icon because the user must somehow indicate that they want to select their preferred visualization mode in order for the options of visualization modes to appear. 
Especially in the case where a default mode is preselected, the user must confirm it or not, which would require user selection);
transmit the graphical representation of the plurality of selectable rendering options from the RaaS platform to the user device for presentation via the graphical user interface of the user device (Fig. 3, Paragraph 0055-0056, 0065, 0068 – “the user may perform a selection of the category of visualization modes to render…The grouping of the visualization may be provided or performed between steps S20 and S20 by the user with any standard user interface such as a multiple selection listbox. Then the user may perform a selection of the category of visualization modes to render…The display 1080 may be a monitor or the like as known in the art. The display 1080 may be touch-sensitive display 1080…The processor may thus be programmable and coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device”; Note: The display is equivalent to the user device. It is implied that the processor of the system must transmit the visualization modes to the display in order for the user to select them. An example interface showing selectable rendering options is shown in Fig. 3; see screenshot of Fig. 3 above);
and receive a user selection of rendering options via a second interaction with the graphical user interface of the user device (Paragraph 0056 – “The grouping of the visualization may be provided or performed between steps S20 and S20 by the user with any standard user interface such as a multiple selection listbox. Then the user may perform a selection of the category of visualization modes to render. For example, the user can launch a command to choose a category of visualization modes grouped under ‘lightning’ for the current 3D model or the 3D scene and will get an immersive selector with viewers showing various lightning visualization modes. Next, the user can launch a rendering command and the viewers will be updated with rendering visualization options”; Note: it is implied that the user selection for the visualization mode is received by the system because the viewers would not be updated otherwise).
Delfino does not teach “a preview of the rendering” in the limitation: “generate an interface comprising (i) a graphical representation of a preview of the rendering of the industrial equipment based on a 3D model of the industrial equipment”. However, Mitsui teaches generating a graphical representation of a preview of the rendering of the industrial equipment based on a 3D model of the industrial equipment (Paragraph 0057, 0072 – “The section 20 displays a two dimensional image that is generated by performing a projective transformation to a partial region of a full-view spherical image, which corresponds to a three dimensional model…when the user selects the “preview” button displayed on the section 23 on the application screen illustrated in FIG. 7(b), an animation illustrated in FIG. 8 is played and displayed on the section 20. The user can check or confirm whether the animation of the full-view spherical image is generated in line with the user intention by using the preview playing function”; Note: the animation is a graphical representation of a preview of a rendering. The RaaS platform and industrial equipment were previously taught by Delfino above). A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the preview of the rendering of Mitsui could have been substituted for the rendering of Delfino because both the rendering and the preview of the rendering serve the purpose of showing a visualization of a subject. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of displaying a visualization of a subject with certain viewing settings.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the preview of the rendering of Mitsui for the rendering of Delfino according to known methods to yield the predictable result of displaying a visualization of a subject with certain viewing settings.
Moreover, Delfino does not teach the plurality of selectable rendering options comprising (i) a plurality of views of the 3D model from a plurality of different viewpoints of a virtual camera, (ii) a plurality of different center points of the 3D model, and (iii) a plurality of different movement paths of the virtual camera in a 3D space relative to the 3D model; nor the user selection of the rendering options comprising (i) a viewpoint of the virtual camera from the plurality of different viewpoints, (ii) an identification of a center point of the 3D model from the plurality of different center points, and (iii) a movement path of the virtual camera from the plurality of different movement paths. However, Mitsui teaches the plurality of selectable rendering options comprising (i) a plurality of views of the 3D model from a plurality of different viewpoints of a virtual camera, (ii) a plurality of different center points of the 3D model (Fig. 27, Paragraph 0150-0152 – “the parameters include, for example, a viewpoint specified as a desired position in a partial image, and an angle of view “a” indicating a range of angle capturable by the camera 202. As illustrated in FIG. 23, the parameters further include camera coordinates indicating a position of the camera 202, camera UP indicating the upward direction of the camera 202 as a vector, camera gazing point coordinates indicating a point that the camera 202 is gazing (i.e., gazing point), and a radius of the sphere 200 that is a virtual three dimensional object used for mapping a full-view spherical image. Hereinafter, the camera gazing point may be also referred to as the gazing point… It should be noted that these parameters are parameters necessary for changing the projection type or projection mode”; Note: the parameters are rendering options, and the gazing point is equivalent to a center point. Fig.
27 shows graphical representations for choosing different types that correspond to different parameters; see screenshot of Fig. 27 above), and (iii) a plurality of different movement paths of the virtual camera in a 3D space relative to the 3D model (Paragraph 0062, 0119-0120 – “When a first viewpoint and a second viewpoint next to the first viewpoint are specified or designated in the three dimensional model of the full-view spherical image, the viewpoint control unit 106 performs a transition of viewpoints from the first viewpoint to the second viewpoint along a transition path interpolating between the first viewpoint and the second viewpoint…when a user designates a viewpoint in a partial image displayed on the section 20, an application screen transits to a state illustrated in FIG. 16A(b), and an icon 29 indicating the registration completion of the viewpoint is displayed in the partial image… The icon 29 includes a direction instruction button having four arrows for specifying the vertical direction (i.e., tilt direction) and the lateral direction (i.e., pan direction) of the partial image. As illustrated in FIG. 16B(c), in response to a selection of any one of the four arrows of the direction instruction button by the user, a transition direction to the next to-be-registered viewpoint is set”; Note: the icon is a graphical representation of a rendering option corresponding to the transition between viewpoints, which is equivalent to the camera movement path. The spherical image represents a 3D space); and the user selection of the rendering options comprising (i) a viewpoint of the virtual camera from the plurality of different viewpoints, (ii) an identification of a center point of the 3D model from the plurality of different center points (Fig.
27, Paragraph 0150-0152 – “the parameters include, for example, a viewpoint specified as a desired position in a partial image, and an angle of view “a” indicating a range of angle capturable by the camera 202. As illustrated in FIG. 23, the parameters further include camera coordinates indicating a position of the camera 202, camera UP indicating the upward direction of the camera 202 as a vector, camera gazing point coordinates indicating a point that the camera 202 is gazing (i.e., gazing point), and a radius of the sphere 200 that is a virtual three dimensional object used for mapping a full-view spherical image. Hereinafter, the camera gazing point may be also referred to as the gazing point… It should be noted that these parameters are parameters necessary for changing the projection type or projection mode”; Note: the parameters are rendering options, and the gazing point is equivalent to a center point. Fig. 27 shows graphical representations for choosing different types that correspond to different parameters; see screenshot of Fig. 27 above), and (iii) a movement path of the virtual camera from the plurality of different movement paths (Paragraph 0062, 0119-120 – “When a first viewpoint and a second viewpoint next to the first viewpoint are specified or designated in the three dimensional model of the full-view spherical image, the viewpoint control unit 106 performs a transition of viewpoints from the first viewpoint to the second viewpoint along a transition path interpolating between the first viewpoint and the second viewpoint…when a user designates a viewpoint in a partial image displayed on the section 20, an application screen transits to a state illustrated in FIG. 
16A(b), and an icon 29 indicating the registration completion of the viewpoint is displayed in the partial image… The icon 29 includes a direction instruction button having four arrows for specifying the vertical direction (i.e., tilt direction) and the lateral direction (i.e., pan direction) of the partial image. As illustrated in FIG. 16B(c), in response to a selection of any one of the four arrows of the direction instruction button by the user, a transition direction to the next to-be-registered viewpoint is set”; Note: the icon is a graphical representation of a rendering option corresponding to the transition between viewpoints, which is equivalent to the camera movement path. The spherical image represents a 3D space). Since Delfino already allows “a modification of the point of view and/or a modification of the position and/or orientation of the 3D model” (Delfino: Paragraph 0050), it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of Mitsui to have selectable rendering options include viewpoint, center point, and camera movement path because having selectable options for the point of view would enhance the user experience by making it easier for users to view the model as desired. Specifically, choosing the viewpoint and center point allows the user to easily see the desired part of the model, and choosing the camera movement path allows the user to dictate the order and direction of the viewing.
Lastly, Delfino does not teach generating and transmitting, based on the 3D model and the user selection of the rendering options, to generate the preview of the rendering of the industrial equipment to the user device, wherein the preview of the rendering of the industrial equipment comprises an animation of the industrial equipment taken from the selected viewpoint and with movement of the virtual camera along the selected movement path relative to the selected center point. However, Mitsui teaches generating and transmitting, based on the 3D model and the user selection of the rendering options, to generate the preview of the rendering of the industrial equipment to the user device (Paragraph 0057, 0072, 0135 – “The section 20 displays a two dimensional image that is generated by performing a projective transformation to a partial region of a full-view spherical image, which corresponds to a three dimensional model…when the user selects the “preview” button displayed on the section 23 on the application screen illustrated in FIG. 7(b), an animation illustrated in FIG. 8 is played and displayed on the section 20. The user can check or confirm whether the animation of the full-view spherical image is generated in line with the user intention by using the preview playing function… the user can variously change and set the above described each value (e.g., preset data, background audio, transition speed, crop region), and the preview can be played each time the value is changed and set”; Note: a preview is generated and displayed onto the user device. 
The preview is based on a 3D model and values set by the user), wherein the preview of the rendering of the industrial equipment comprises an animation of the industrial equipment taken from the selected viewpoint and with movement of the virtual camera along the selected movement path relative to the selected center point (Paragraph 0143, 0156 – “When the user selects the “preview” button at this timing, the animation is generated based on the updated preset data 600. In this case, the viewpoint control unit 106 transits the viewpoints along a modified transition path interpolated based on the changed end viewpoint, the calculation unit 104 generates a plurality of partial images based on the viewpoints transiting along the modified transition path and the changed angle of view of the end viewpoint, and the control unit 103 connects the plurality of partial images generated in this way as an animation, and display the animation on the section 40… When the first projection type is applied, as illustrated in FIG. 24A, the gazing point is set at the origin 201 that is the center of the sphere 200, and the camera 202 is disposed at a position outside the radius of the sphere 200”; Note: the gazing point is equivalent to the center point, and the transition path is equivalent to movement path). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of Mitsui to generate and transmit the preview to the user device because it would allow the user to see the preview and determine their preferences based on the preview. Logically, there would be no purpose in having a preview if it could not be seen by the user.
Delfino modified by Mitsui still does not teach generating an interface comprising (ii) a preview of a cost of generating the rendering of the industrial equipment based on the 3D model and a number of views of the rendering to be generated, and (iii) a selectable icon indicating user approval of the preview of the rendering and the preview of the cost; transmitting the interface to a user device for presentation on a graphical user interface of the user device; receiving, at the RaaS platform, a user selection of the selectable icon via a first interaction with the graphical user interface on the user device. However, 3S teaches generating an interface comprising (ii) a preview of a cost of generating the rendering of the industrial equipment based on the 3D model and a number of views of the rendering to be generated (Screenshots 1-3 – There is a render preview option on the RaaS platform that shows a preview of the render and a cost estimation, which is equivalent to a preview of a cost of generating the rendering. The cost estimation is shown on screenshot 3. The cost estimation is based on the uploaded scene (screenshot 1) and the input parameters (screenshot 2). The input parameters include the camera, which is a view. Only one camera can be input, which means the cost estimation is based on one view. 
The 3D model was previously taught by Delfino, and is represented by the uploaded scene in the context of 3S), and (iii) a selectable icon indicating user approval of the preview of the rendering and the preview of the cost (Screenshot 3 – There is a “Review & Launch Render” button, which is a selectable icon indicating user approval of the preview and cost); transmitting the interface to a user device for presentation on a graphical user interface of the user device (Screenshot 3 – There is an interface shown to the user on a user device, which implies that it was transmitted to the user device); receiving, at the RaaS platform, a user selection of the selectable icon via a first interaction with the graphical user interface on the user device (Screenshot 4 – This screenshot shows the loading screen after the selectable icon, “Review & Launch Render”, was clicked on by the user). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of 3S to generate a preview of a cost to render based on the 3D model and number of views for the benefit of allowing the user to make a more informed decision on the rendering they want to generate. For instance, if it costs a lot to generate, the user is able to know ahead of time so that they can perfect their 3D model and edit the parameters to achieve the desired rendering at an appropriate cost. Furthermore, the 3D model file and number of views would affect how much computational resources or time is used and thus would affect the cost of the render. 
It also would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of 3S to have a selectable icon to approve the preview and cost because having buttons for users to click on before moving onto a next page is common in the art and known for being user-friendly. For example, in screenshots 1-4 of 3S, there are buttons in each page for users to click on to indicate approval, and in Delfino, there are options on screen for the user to select (Paragraph 0065 – “the pointer control device allows the user to select various commands, and input control signals”). Additionally, having a selectable icon specifically for approving the preview and cost helps ensure that the user does not waste money or time on a render that does not meet their needs.
Regarding claim 16, Delfino in view of Mitsui and 3S teaches the non-transitory computer readable medium of claim 15. Delfino further teaches wherein the plurality of selectable rendering options comprise an identification of a light source used to illuminate the 3D model of industrial equipment when generating the rendering of the industrial equipment (Paragraph 0056 – “The grouping of the visualization may be provided or performed between steps S20 and S20 by the user with any standard user interface such as a multiple selection listbox. Then the user may perform a selection of the category of visualization modes to render…A visualization mode is defined by a technique of rendering and a set of parameters. As an example, a plurality of lightning visualization modes could comprise a parameter for setting the number of lights, the orientation and the position of each light, the type of each light (punctual, infinite or directional), the intensity and the color of each light”; Note: the selection of a visualization mode is equivalent to a selectable rendering option for setting a light. Additionally, the rendering of the industrial equipment was previously taught in the claim 15 rejection above).
Regarding claim 17, Delfino in view of Mitsui and 3S teaches the non-transitory computer readable medium of claim 15. Delfino further teaches wherein the plurality of selectable rendering options comprise an identification of a placement of a light source used to illuminate the 3D model of industrial equipment when generating the rendering of the industrial equipment (Paragraph 0056 – “The grouping of the visualization may be provided or performed between steps S20 and S20 by the user with any standard user interface such as a multiple selection listbox. Then the user may perform a selection of the category of visualization modes to render…A visualization mode is defined by a technique of rendering and a set of parameters. As an example, a plurality of lightning visualization modes could comprise a parameter for setting the number of lights, the orientation and the position of each light, the type of each light (punctual, infinite or directional), the intensity and the color of each light”; Note: the selection of a visualization mode is equivalent to a selectable option for setting the position of a light. Additionally, the rendering of the industrial equipment was previously taught in the claim 15 rejection above).
Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Delfino in view of Mitsui, 3S, Gay et al. (US 10582182 B2), and Adobe (Interacting with 3D models), hereinafter Gay and Adobe respectively.
Regarding claim 6, Delfino in view of Mitsui and 3S teaches the RaaS platform of claim 1. Delfino does not teach wherein the plurality of selectable rendering options comprise an identification of a distance of the virtual camera from the center point of the 3D model, wherein the rendering of the industrial equipment is from a perspective of the virtual camera. However, Gay teaches wherein the rendering is from a perspective of the virtual camera (Col. 3 lines 40-60 – “Master controller 150 may direct virtual rendering system 110 to control virtual cameras 121a-121b according to particular parameters…The particular parameters of camera behavior might be dictated by manual control, by tracking the motion of a particular object…Once the virtual and robotic cameras are properly configured by appropriately programming the motion paths of data 122a-122b and 142a-142b, master controller 150 may then query virtual rendering system 110 for virtually rendered feeds and video capture system 130 for video capture feeds. Master controller 150 may then act as a rendering controller by combining the feeds smoothly using standard broadcast key technology such as chroma key or key/fill to generate composite render”; Note: rendering occurs based on the video feed, and thus the perspective, of a virtual camera). Since Delfino already allows “a modification of the point of view and/or a modification of the position and/or orientation of the 3D model” (Delfino: Paragraph 0050), it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of Gay to render the 3D model based on a virtual camera perspective for the benefit of “high viewer impact and engagement” (Gay: Col. 1 lines 39-41). Rendering based on the perspective of the virtual camera makes it easier to view the 3D model within the virtual environment. 
Furthermore, Delfino modified by Gay still does not teach wherein the plurality of selectable rendering options comprise an identification of a distance of the virtual camera from the center point of the 3D model. However, Adobe teaches wherein the plurality of selectable rendering options comprise an identification of a distance of the virtual camera from the center point of the 3D model (Page 13 – “Target Refers to the point in the 3D model that the camera is aimed at. By setting a camera target, you can focus the camera on a specific area or element in the 3D model…Angle units Changes the Camera X, Camera Y, and Camera Z values to Azimuth, Altitude, and Distance. These values enable you to manipulate the camera by azimuth (distance) and altitude (X axis), and to zoom using the distance value”; Note: zooming using the distance value is a selectable option that identifies a distance from the camera to the target of the 3D model, which is the center point). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of Adobe to have a selectable rendering option be an identification of a distance of the virtual camera from the center point because it would give the user the ability to zoom in and out, which is useful for when the user wants to have a closer look at a specific part of the 3D model.
Regarding claim 13, Delfino in view of Mitsui and 3S teaches the system of claim 8. Delfino does not teach wherein the plurality of selectable rendering options comprise an identification of a distance of the virtual camera from the center point of the 3D model, wherein the rendering of the industrial equipment is from a perspective of the virtual camera. However, Gay teaches wherein the rendering is from a perspective of the virtual camera (Col. 3 lines 40-60 – “Master controller 150 may direct virtual rendering system 110 to control virtual cameras 121a-121b according to particular parameters…The particular parameters of camera behavior might be dictated by manual control, by tracking the motion of a particular object…Once the virtual and robotic cameras are properly configured by appropriately programming the motion paths of data 122a-122b and 142a-142b, master controller 150 may then query virtual rendering system 110 for virtually rendered feeds and video capture system 130 for video capture feeds. Master controller 150 may then act as a rendering controller by combining the feeds smoothly using standard broadcast key technology such as chroma key or key/fill to generate composite render”; Note: rendering occurs based on the video feed, and thus the perspective, of a virtual camera). Since Delfino already allows “a modification of the point of view and/or a modification of the position and/or orientation of the 3D model” (Delfino: Paragraph 0050), it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of Gay to render the 3D model based on a virtual camera perspective for the benefit of “high viewer impact and engagement” (Gay: Col. 1 lines 39-41). Rendering based on the perspective of the virtual camera makes it easier to view the 3D model within the virtual environment. 
Furthermore, Delfino modified by Gay still does not teach wherein the plurality of selectable rendering options comprise an identification of a distance of the virtual camera from the center point of the 3D model. However, Adobe teaches wherein the plurality of selectable rendering options comprise an identification of a distance of the virtual camera from the center point of the 3D model (Page 13 – “Target Refers to the point in the 3D model that the camera is aimed at. By setting a camera target, you can focus the camera on a specific area or element in the 3D model…Angle units Changes the Camera X, Camera Y, and Camera Z values to Azimuth, Altitude, and Distance. These values enable you to manipulate the camera by azimuth (distance) and altitude (X axis), and to zoom using the distance value”; Note: zooming using the distance value is a selectable option that identifies a distance from the camera to the target of the 3D model, which is the center point). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of Adobe to have a selectable rendering option be an identification of a distance of the virtual camera from the center point because it would give the user the ability to zoom in and out, which is useful for when the user wants to have a closer look at a specific part of the 3D model.
Regarding claim 20, Delfino in view of Mitsui and 3S teaches the non-transitory computer readable medium of claim 15. Delfino does not teach wherein the plurality of selectable rendering options comprise an identification of a distance of the virtual camera from the center point of the 3D model, wherein the rendering of the industrial equipment is from a perspective of the virtual camera. However, Gay teaches wherein the rendering is from a perspective of the virtual camera (Col. 3 lines 40-60 – “Master controller 150 may direct virtual rendering system 110 to control virtual cameras 121a-121b according to particular parameters…The particular parameters of camera behavior might be dictated by manual control, by tracking the motion of a particular object…Once the virtual and robotic cameras are properly configured by appropriately programming the motion paths of data 122a-122b and 142a-142b, master controller 150 may then query virtual rendering system 110 for virtually rendered feeds and video capture system 130 for video capture feeds. Master controller 150 may then act as a rendering controller by combining the feeds smoothly using standard broadcast key technology such as chroma key or key/fill to generate composite render”; Note: rendering occurs based on the video feed, and thus the perspective, of a virtual camera). Since Delfino already allows “a modification of the point of view and/or a modification of the position and/or orientation of the 3D model” (Delfino: Paragraph 0050), it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of Gay to render the 3D model based on a virtual camera perspective for the benefit of “high viewer impact and engagement” (Gay: Col. 1 lines 39-41). Rendering based on the perspective of the virtual camera makes it easier to view the 3D model within the virtual environment. 
Furthermore, Delfino modified by Gay still does not teach wherein the plurality of selectable rendering options comprise an identification of a distance of the virtual camera from the center point of the 3D model. However, Adobe teaches wherein the plurality of selectable rendering options comprise an identification of a distance of the virtual camera from the center point of the 3D model (Page 13 – “Target Refers to the point in the 3D model that the camera is aimed at. By setting a camera target, you can focus the camera on a specific area or element in the 3D model…Angle units Changes the Camera X, Camera Y, and Camera Z values to Azimuth, Altitude, and Distance. These values enable you to manipulate the camera by azimuth (distance) and altitude (X axis), and to zoom using the distance value”; Note: zooming using the distance value is a selectable option that identifies a distance from the camera to the target of the 3D model, which is the center point). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of Adobe to have a selectable rendering option be an identification of a distance of the virtual camera from the center point because it would give the user the ability to zoom in and out, which is useful for when the user wants to have a closer look at a specific part of the 3D model.
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Delfino in view of Mitsui, 3S, and TriMech (Video Tech Tip: Modify Camera Paths in SOLIDWORKS Composer), hereinafter TriMech.
Regarding claim 7, Delfino in view of Mitsui and 3S teaches the RaaS platform of claim 1. Delfino does not teach wherein the movement path of the virtual camera is through components of the 3D model when generating the preview of the rendering of the industrial equipment. However, TriMech teaches wherein the movement path of the virtual camera is through components of the 3D model when generating the preview of the rendering of the industrial equipment (All Images – The first screenshot shows the line of the movement path. Part of the movement path goes through components of the 3D model; see the first screenshot below. The rest of the screenshots show a preview of the perspective of the virtual camera going through the path). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of TriMech to have the movement path of the virtual camera go through components of the 3D model for the benefit of having an animated viewing of the 3D model, which allows the user to view the 3D model from different viewpoints without having to manually navigate the virtual environment or render each individual viewpoint. Additionally, being able to go through components of the 3D model provides the user an inside view of the model, which the user would not be able to see without the movement path of the virtual camera.
[Image: media_image8.png, 478 × 957, greyscale]
1st Screenshot (taken from TriMech)
Regarding claim 14, Delfino in view of Mitsui and 3S teaches the system of claim 8. Delfino does not teach wherein the movement path of the virtual camera is through components of the 3D model when generating the preview of the rendering of the industrial equipment. However, TriMech teaches wherein the movement path of the virtual camera is through components of the 3D model when generating the preview of the rendering of the industrial equipment (All Images – The first screenshot shows the line of the movement path. Part of the movement path goes through components of the 3D model; see the first screenshot above. The rest of the screenshots show a preview of the perspective of the virtual camera going through the path). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Delfino to incorporate the teachings of TriMech to have the movement path of the virtual camera go through components of the 3D model for the benefit of having an animated viewing of the 3D model, which allows the user to view the 3D model from different viewpoints without having to manually navigate the virtual environment or render each individual viewpoint. Additionally, being able to go through components of the 3D model provides the user an inside view of the model, which the user would not be able to see without the movement path of the virtual camera.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Li (CN 115567652 A) teaches a method of generating a preview of a 3D model.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE HAU MA whose telephone number is (571)272-2187. The examiner can normally be reached M-Th 7-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon, can be reached at (571) 270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHELLE HAU MA/Examiner, Art Unit 2617
/KING Y POON/Supervisory Patent Examiner, Art Unit 2617