DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-18 are rejected under 35 U.S.C. 102(a)(1) and/or 102(a)(2) as being anticipated by Yen et al.1 (“Yen”).
Regarding claim 1, Yen teaches a method for generating a three-dimensional (3D) content from a video game (note that “generating a three-dimensional (3D) content” is awkwardly phrased, but such 3D content is reasonably interpreted as any content that depicts 3D objects or that contains 3D information, whether by explicitly containing data in three dimensions or by otherwise representing 3D information, and the 3D video asset generated below is such content; see Yen, paragraphs 0135-0146 teaching “creating a non-curated viewing perspective in a video game platform based on a curated viewing perspective” and “control circuitry 504 (FIG. 5) receives, at user equipment (e.g., via I/O Path 502 (FIG. 5)), a first stream of video contents that is depicted using a curated viewing perspective of a video game environment that is simultaneously transmitted to a plurality of user equipment” where the “first stream of video contents may be any video that is depicting video game gameplay,” such that gameplay video from a video game is the basis for generating a 3D content in the form of the 3D video asset addressed below, the 3D video asset being 3D content generated from a video game), comprising:
capturing two-dimensional (2D) gameplay video generated from a session of a video game and with a first point of view (see Yen, paragraphs 0135-0146 teaching “creating a non-curated viewing perspective in a video game platform based on a curated viewing perspective” and “control circuitry 504 (FIG. 5) receives, at user equipment (e.g., via I/O Path 502 (FIG. 5)), a first stream of video contents that is depicted using a curated viewing perspective of a video game environment that is simultaneously transmitted to a plurality of user equipment” where the “first stream of video contents may be any video that is depicting video game gameplay,” such that there is capturing of 2D gameplay video generated from a session of a video game; for example, “the user is viewing a live stream video of gameplay of a tennis simulation game such as “Tennis World Tour 2018” on a streaming website such as YouTube. The first stream may depict the video game environment as rendered by a video game console such as the PlayStation 4. The rendered content that makes up the video game environment may include game objects such as player models, the tennis stadium, the court, the net, the ball, etc. The video game environment may include visible game mechanics (e.g., the movement of the characters and the ball). The curated viewing perspective may comprise a camera angle and an in-game viewing position,” where the “curated viewing perspective” presents the gameplay video with a first point of view);
analyzing the 2D gameplay video to determine 3D geometry of a scene depicted in the 2D gameplay video (see Yen, paragraphs 0135-0146 teaching “control circuitry 504 may screen capture a frame of the first stream depicting the video game environment and may apply imaging processes (e.g., coefficient of correlation calculation, and keypoint comparisons) to compare with gameplay images on the Internet or in a video game image database stored in storage 508 (FIG. 8). In response to determining a match (e.g., using correlation techniques) to an image on the Internet or in the video game image database, control circuitry 504 may determine the name of the video game the matched image is associated with” and “Control circuitry 504 may use the video game engine data to identify the game objects rendered in the video game environment, as depicted in the first stream. For example, the video game engine data may provide a predetermined list of game objects, graphics, positions, mechanics and audio associated with each frame of the first stream. If the video game engine data does not provide this information for each frame of the first stream, control circuitry 504 may utilize objection recognition to identify objects in the first stream and compare them to the video game engine data. For example, control circuitry 504 may identify a tennis ball in a frame of the first stream. In response, control circuitry 504 may search for the term “tennis ball” in the video game engine data to determine the code, graphics and mechanics associated with a game object “tennis ball.” Control circuitry 504 may then utilize this information to render the game object “tennis ball” in the video game platform” and “control circuitry 504 (FIG. 5) generates for display (e.g., on display 512 (FIG. 5) the video contents within the video game platform using the curated viewing perspective. The video contents of the second stream, unlike the first stream, are based in the video game platform,” such that the 2D gameplay video is analyzed to determine the 3D geometry of the depicted objects and their 3D relationship to the viewing camera; because the game engine has recreated the environment, which is made up of 3D objects, this determines the 3D geometry of a scene depicted in the 2D video; see further paragraphs 0180-0183 further explaining the determination of the 3D geometry, where “control circuitry 504 (FIG. 5) searches the video game engine data for a virtual game space that matches the dimensional space. The dimensional space is associated with the first stream and the virtual game space is associated with the second stream. For example, in response to determining that the first stream is associated with a three-dimensional space, control circuitry 504 may search the video game engine data for code that describes three-dimensional objects and environments. In response to determining that the video game engine data is associated with three-dimensional objects and rendering, control circuitry 504 may determine that the virtual game space of the video game is expressed in three dimensions,” and note, for example, that figures 1 and 2 show the 3D tennis game where 3D geometry has been determined of a scene depicted in the 2D video);
using the 3D geometry of the scene to generate a 3D video asset with a second point of view that occurred in the gameplay video (note that a “3D video asset” is any asset that can be considered related to video and three-dimensional information in any way; for example, this could comprise a video depicting 3D objects, a video depicting 2D objects with depth information represented, a frame of video with depth information, a pair of images that can be viewed together to see a 3D effect, or a 3D asset that may be used to create video, since if a video can be created of such a 3D asset then the asset would be a 3D video asset; see Yen, paragraphs 0135-0146 teaching, as explained above, that the 2D gameplay video is analyzed to determine the 3D geometry of the objects and their 3D relationship to the viewing camera, and because the 3D geometry of the scene can then be rendered from any viewpoint and/or position, it is used to generate a 3D video asset with a second point of view, where “the video game engine data may provide a predetermined list of game objects, graphics, positions, mechanics and audio associated with each frame of the first stream. If the video game engine data does not provide this information for each frame of the first stream, control circuitry 504 may utilize objection recognition to identify objects in the first stream and compare them to the video game engine data” and “Control circuitry 504 may repeat this process until for all identified objects from the first stream” and “control circuitry 504 (FIG. 5) generates for display (e.g., on display 512 (FIG. 5) the video contents within the video game platform using the curated viewing perspective. The video contents of the second stream, unlike the first stream, are based in the video game platform,” such that the 3D geometry of the scene is used to generate a 3D video asset such as the “second stream,” where the “video contents of the second stream, unlike the first stream, are based in the video game platform. Therefore, the second stream is not simply a video and the user can interact with the second stream via I/O Path 502 (FIG. 5) through user commands,” and then “suppose that the curated viewing perspective is associated with an in-game position at an elevated spot behind a first tennis player. The in-game position may be high enough to allow the camera angle to capture the entire court and the second tennis player on the opposite side. Control circuitry 504 may receive a user selection of an arbitrary position, highlighted by the cursor, in the virtual environment (e.g., on the virtual tennis court). Suppose that the selected position is in the audience (e.g., as depicted in second stream 210 (FIG. 2)),” where then further “control circuitry 504 (FIG. 5) generates for display, using the video game engine data, the viewing contents from the non-curated viewing perspective. For example, control circuitry 504 may change the virtual position of the user and the camera angle based on the requested non-curated viewing perspective. Using the video game engine data, control circuitry 504 may re-render all game objects, audio, graphics and virtual environments associated with the first stream at a new set of positions with respect to the user's new virtual position. This allows the user to view the gameplay from a different perspective (e.g., the non-curated perspective),” such that a 3D video asset with a second point of view that occurred in the gameplay video is generated as a re-rendering of the scene from the second point of view); and
storing the 3D video asset to a user account (note that “storing…to a user account” is extremely broad: the type of storage is not defined, nor is the type of account limited, such that an account is reasonably any system or device to which a user may deposit or store information, thereby making an account of the information; see Yen, paragraph 0081 teaching “Content may be recorded, played, displayed or accessed by user equipment devices, but can also be part of a live performance,” such that recording of the content by user equipment devices corresponds to storing the 3D video asset, which is content of the system, to a user account corresponding to where the content was recorded; further note that such user equipment devices are explained in relation to storage of relevant content to a user account, teaching “user equipment devices may operate in a cloud computing environment to access cloud services. In a cloud computing environment, various types of computing services for content sharing, storage or distribution (e.g., video sharing sites or social networking sites) are provided by a collection of network-accessible computing and storage resources, referred to as “the cloud.”” and “cloud provides access to services, such as content storage, content sharing, or social networking services, among other examples, as well as access to any content described above, for user equipment devices. Services can be provided in the cloud through cloud computing service providers, or through other providers of online services. For example, the cloud-based services can include a content storage service, a content sharing site, a social networking site, or other services via which user-sourced content is distributed for viewing by others on connected devices. These cloud-based services may allow a user equipment device to store content to the cloud and to receive content from the cloud rather than storing content locally and accessing locally-stored content,” such that again the 3D video asset created as the non-curated viewing perspective of the scene is content of the system which can be recorded and stored to a user account).
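For illustration only, the following is a minimal, hypothetical sketch (in Python) of the claim 1 mapping above as a capture-analyze-rerender-store pipeline. All names, values, and structures are invented for illustration and are not drawn from Yen's disclosure or the instant application; the object-recognition step Yen describes is stubbed out.

```python
# Hypothetical sketch of the mapped claim 1 steps: capture a 2D frame, recover
# 3D geometry via game-engine data, build a view for a second point of view,
# and record the result against a user account. Illustrative only.
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a 4x4 view matrix for a virtual camera at `eye` looking at `target`."""
    f = target - eye
    f = f / np.linalg.norm(f)            # forward axis
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)            # right axis
    u = np.cross(s, f)                   # true up axis
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = view[:3, :3] @ -eye    # translate world into camera space
    return view

def analyze_frame(frame, engine_data):
    """Hypothetical stand-in for Yen's object recognition plus game-engine-data
    lookup: returns 3D positions for objects recognized in the 2D frame."""
    return {name: np.asarray(pos, dtype=float) for name, pos in engine_data.items()}

# Invented "engine data": 3D positions for recognized game objects.
engine_data = {"tennis_ball": [0.0, 1.2, 5.0], "player_1": [-1.0, 0.0, 2.0]}
frame = np.zeros((720, 1280, 3), dtype=np.uint8)     # stand-in for a captured 2D frame

scene = analyze_frame(frame, engine_data)            # 3D geometry of the scene
second_pov = look_at(eye=np.array([8.0, 4.0, 0.0]),  # e.g., a seat in the audience
                     target=scene["tennis_ball"])    # re-render toward the action

user_accounts = {}                                   # stand-in for account storage
user_accounts.setdefault("user_123", []).append({"view": second_pov, "scene": scene})
```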
Regarding claim 2, Yen teaches all that is required as applied to claim 1 above and further teaches wherein analyzing the 2D gameplay video includes identifying and tracking objects depicted in the 2D gameplay video (see Yen, paragraphs 0141-0143 teaching “the video game engine data may provide a predetermined list of game objects, graphics, positions, mechanics and audio associated with each frame of the first stream. If the video game engine data does not provide this information for each frame of the first stream, control circuitry 504 may utilize objection recognition to identify objects in the first stream and compare them to the video game engine data. For example, control circuitry 504 may identify a tennis ball in a frame of the first stream. In response, control circuitry 504 may search for the term “tennis ball” in the video game engine data to determine the code, graphics and mechanics associated with a game object “tennis ball.” Control circuitry 504 may then utilize this information to render the game object “tennis ball” in the video game platform. This process is discussed in-depth in FIGS. 13-14. Control circuitry 504 may repeat this process until for all identified objects from the first stream,” such that objects are detected in each frame for all identified objects from the first stream, and this detection of the objects over time constitutes tracking of the objects in the 2D gameplay video).
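For illustration only, one simple way per-frame detections could be linked over time into tracks is nearest-neighbor matching of object centroids; the sketch below (Python, invented names and thresholds) is illustrative of the identify-and-track reading above and is not Yen's actual implementation.

```python
# Hypothetical sketch: link per-frame detections into tracks by matching each
# detected centroid to the nearest surviving track from the previous frame.
import numpy as np

def track(prev_tracks, detections, max_dist=50.0):
    """Assign each detection to the nearest existing track, else start a new one."""
    remaining = dict(prev_tracks)              # tracks not yet matched this frame
    tracks = {}
    next_id = max(prev_tracks, default=-1) + 1
    for det in detections:
        det = np.asarray(det, dtype=float)
        best_id, best_d = None, max_dist
        for tid, pos in remaining.items():
            d = float(np.linalg.norm(det - pos))
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:                    # no track close enough: new object
            best_id, next_id = next_id, next_id + 1
        else:
            remaining.pop(best_id)             # a track matches at most one detection
        tracks[best_id] = det
    return tracks

tracks = {}
for dets in [[(100, 200)], [(104, 205)], [(111, 212)]]:   # e.g., a moving tennis ball
    tracks = track(tracks, dets)
```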
Regarding claim 3, Yen teaches all that is required as applied to claim 1 above and further teaches providing an interface that renders a view of the 3D video asset for presentation on a display (see Yen, paragraph 0100 teaching a physical user interface including a display where the “user may send instructions to control circuitry 504 using user input interface 510. User input interface 510 may be any suitable user interface, such as a remote control, mouse, trackball, keypad, keyboard, touch screen, touchpad, stylus input, joystick, voice recognition interface, or other user input interfaces. Display 512 may be provided as a stand-alone device or integrated with other elements of user equipment device 500. For example, display 512 may be a touchscreen or touch-sensitive display. In such circumstances, user input interface 510 may be integrated with or combined with display 512,” and, as in the example sections relied on above, paragraphs 0144-0145 and figure 2 teaching “control circuitry 504 (FIG. 5) receives input at the user equipment (e.g., via I/O Path 502) requesting to view the video contents from a non-curated viewing perspective. Control circuitry 504 may generate for display a cursor that allows the user to interact with the virtual environment in the video game platform. For example, the cursor may be movable within the virtual boundaries as dictated by the video game engine data. As discussed previously, suppose that the curated viewing perspective is associated with an in-game position at an elevated spot behind a first tennis player. The in-game position may be high enough to allow the camera angle to capture the entire court and the second tennis player on the opposite side. Control circuitry 504 may receive a user selection of an arbitrary position, highlighted by the cursor, in the virtual environment (e.g., on the virtual tennis court). Suppose that the selected position is in the audience (e.g., as depicted in second stream 210 (FIG. 2))” and “control circuitry 504 (FIG. 5) generates for display, using the video game engine data, the viewing contents from the non-curated viewing perspective,” such that a view of the 3D video asset (the “viewing contents”) is rendered for presentation on a display in connection with an interface).
Regarding claim 4, Yen teaches all that is required as applied to claim 3 above and further teaches wherein the interface enables adjustment of a perspective of the view of the 3D video asset (see Yen, paragraphs 0144-0145 and figure 2, as explained above, teaching that such an interface allows adjustment of a perspective of the view of the 3D video asset: “control circuitry 504 (FIG. 5) receives input at the user equipment (e.g., via I/O Path 502) requesting to view the video contents from a non-curated viewing perspective. Control circuitry 504 may generate for display a cursor that allows the user to interact with the virtual environment in the video game platform. For example, the cursor may be movable within the virtual boundaries as dictated by the video game engine data. As discussed previously, suppose that the curated viewing perspective is associated with an in-game position at an elevated spot behind a first tennis player. The in-game position may be high enough to allow the camera angle to capture the entire court and the second tennis player on the opposite side. Control circuitry 504 may receive a user selection of an arbitrary position, highlighted by the cursor, in the virtual environment (e.g., on the virtual tennis court). Suppose that the selected position is in the audience (e.g., as depicted in second stream 210 (FIG. 2))” and “control circuitry 504 (FIG. 5) generates for display, using the video game engine data, the viewing contents from the non-curated viewing perspective”).
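For illustration only, the following hypothetical sketch (Python, invented names and values, not Yen's code) shows one conventional way an interface could map a user input, such as a cursor drag, to an adjusted viewing perspective by orbiting the virtual camera about the scene.

```python
# Hypothetical sketch: orbit a virtual camera around a target point in response
# to user input, yielding an adjusted perspective of the 3D content.
import numpy as np

def orbit(eye, target, yaw_deg, pitch_deg):
    """Rotate the camera position about the target by yaw (about +Y) and pitch."""
    offset = eye - target
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    cy, sy = np.cos(yaw), np.sin(yaw)          # yaw about the world Y axis
    offset = np.array([cy * offset[0] + sy * offset[2],
                       offset[1],
                       -sy * offset[0] + cy * offset[2]])
    cp, sp = np.cos(pitch), np.sin(pitch)      # pitch (approximated about world X)
    offset = np.array([offset[0],
                       cp * offset[1] - sp * offset[2],
                       sp * offset[1] + cp * offset[2]])
    return target + offset

eye, target = np.array([0.0, 3.0, 10.0]), np.zeros(3)
eye = orbit(eye, target, yaw_deg=15.0, pitch_deg=-5.0)   # e.g., from a cursor drag
```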
Regarding claim 5, Yen teaches all that is required as applied to claim 1 above and further teaches wherein the 3D video asset defines a 3D content model of the scene depicted in the 2D gameplay video (see Yen, paragraphs 0141-0144 teaching “to generate a second stream comprising a replication of the video contents that is viewable within a video game platform” and the “video contents of the second stream, unlike the first stream, are based in the video game platform. Therefore, the second stream is not simply a video and the user can interact with the second stream via I/O Path 502 (FIG. 5) through user commands,” such that the 3D video asset is “based in the video game platform” and thus defines a 3D content model of the scene depicted in the 2D gameplay video, which can be viewed by rendering a view of the 3D content model as a 3D video asset).
Regarding claim 6, Yen teaches all that is required as applied to claim 1 above and further teaches wherein analyzing the 2D gameplay video is further configured to determine a texture, shading or lighting of the scene, and wherein said determined texture, shading, or lighting is incorporated in the 3D video asset (see Yen, paragraphs 0142-0145 teaching that from the gameplay video the system determines all of the information about the 3D video asset, where “Control circuitry 504 may use the video game engine data to identify the game objects rendered in the video game environment, as depicted in the first stream. For example, the video game engine data may provide a predetermined list of game objects, graphics, positions, mechanics and audio associated with each frame of the first stream” and “Using the video game engine data, control circuitry 504 may re-render all game objects, audio, graphics and virtual environments associated with the first stream at a new set of positions with respect to the user's new virtual position,” such that the rendering of the curated and non-curated viewpoints of the 3D video asset also determines a texture, shading or lighting of the scene when the system “may re-render all game objects” to match the appearance of the objects in the first video stream, where, for example, as in paragraph 0172, the texture of a tennis player’s clothing items is determined to be a “blue shirt with black shorts”; further note paragraph 0128 teaching the “video game environment is created by fully rendering a video game on a video game console (e.g., PlayStation 4, Xbox One, PC, etc.). In particular, the video game environment refers to the rendered content of the video game (e.g., game objects, scenes, character models, game mechanics, etc),” such that, because the rendering is done by a modern graphics rendering engine and console running a modern 3D game, such rendering also teaches determining a texture, shading or lighting of the scene, as these are crucial stages of the rendering pipeline for such game engines and consoles).
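For illustration only, the simplest form of the lighting/shading computation reasoned above to be inherent in such a rendering pipeline is the Lambertian diffuse term; the sketch below (Python, invented values) is illustrative and not drawn from Yen.

```python
# Hypothetical sketch: Lambertian diffuse shading, scaling a sampled texture
# color by the cosine of the angle between surface normal and light direction.
import numpy as np

def lambert(normal, light_dir, base_color, light_color=np.ones(3)):
    """Diffuse shading: surface color scaled by the cosine of the incidence angle."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return base_color * light_color * max(float(np.dot(n, l)), 0.0)

# e.g., shading a "blue shirt" surface lit from above-front
shaded = lambert(np.array([0.0, 0.0, 1.0]),              # surface normal
                 np.array([0.3, 0.8, 0.5]),              # light direction
                 base_color=np.array([0.1, 0.2, 0.8]))   # sampled texture color
```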
Regarding claims 7-12, the instant claims are recited as an apparatus in the form of a “non-transitory computer readable medium having program instructions embodied thereon that, when executed by at least one computing device, cause said at least one computing device to perform a method” where the method corresponds to the method performed in claims 1-6, respectively. Yen teaches such an apparatus (see Yen, paragraphs 0100-0101 teaching “the above methods may be carried out on conventional hardware suitably adapted as applicable by software instruction or by the inclusion or substitution of dedicated hardware” and “the required adaptation to existing parts of a conventional equivalent device may be implemented in the form of a computer program product comprising processor implementable instructions stored on a non-transitory machine-readable medium such as a floppy disk, optical disk, hard disk, PROM, RAM, flash memory or any combination of these or other storage media, or realised in hardware as an ASIC (application specific integrated circuit) or an FPGA (field programmable gate array) or other configurable circuit suitable to use in adapting the conventional equivalent device”) and teaches the method of the apparatus as addressed in the rejections of claims 1-6 above. In light of this, the limitations of claims 7-12 correspond to the limitations of claims 1-6, respectively; thus they are rejected on the same grounds as claims 1-6, respectively.
Regarding claims 13-18, the instant claims are recited as an apparatus in the form of a system comprising a computing device that performs a method as in claims 1-6, respectively. Yen teaches such an apparatus (see Yen, paragraphs 0100-0101 teaching “the above methods may be carried out on conventional hardware suitably adapted as applicable by software instruction or by the inclusion or substitution of dedicated hardware” and “the required adaptation to existing parts of a conventional equivalent device may be implemented in the form of a computer program product comprising processor implementable instructions stored on a non-transitory machine-readable medium such as a floppy disk, optical disk, hard disk, PROM, RAM, flash memory or any combination of these or other storage media, or realised in hardware as an ASIC (application specific integrated circuit) or an FPGA (field programmable gate array) or other configurable circuit suitable to use in adapting the conventional equivalent device”) and teaches the method of the apparatus as addressed in the rejections of claims 1-6 above. In light of this, the limitations of claims 13-18 correspond to the limitations of claims 1-6, respectively; thus they are rejected on the same grounds as claims 1-6, respectively.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yen in view of Cappello et al.2 (“Cappello”).
Regarding claim 19, Yen teaches all that is required as applied to claim 1 above and further teaches wherein the second point of view changes over time in the 3D video asset, and
wherein generating the 3D video asset with the second point of view comprises, using a virtual camera, moving along a spline having the second point of view (see Yen, paragraphs 0130-0131 teaching “control circuitry 504 (FIG. 5) receives input from the user (e.g., via I/O Path 502) requesting to view the video contents from a non-curated viewing perspective. For example, the user may move a virtual cursor in the second stream to an arbitrary spot of the virtual environment depicted in the second stream (e.g., on the virtual tennis court) and select a viewing position. The viewing position alongside the camera angle associated with the viewing position represent the non-curated viewing perspective,” such that a virtual camera perspective having the second point of view is controlled, with the viewing position and viewing angle settable by the user). Yen teaches all of the above but fails to specifically teach that the second point of view changes over time in the 3D video asset and, using a virtual camera, moving along a spline having the second point of view. Thus Yen stands as a base device upon which the claimed invention can be seen as an improvement, comprising the ability to define the second point of view to change over time in the 3D video asset and to move a virtual camera along a spline having the second point of view when re-rendering content.
In the same field of endeavor relating to capturing 2D gameplay video and converting it into a 3D video asset as a 3D model that can be reconstructed and viewed from different views of a virtual camera (see Cappello, paragraphs 0082-0088 and figure 10, teaching “receiving a video stream, the video stream comprising a two-dimensional video of a three-dimensional scene captured by a video camera” and “determining a mapping between locations in the two-dimensional video of the scene and locations in a three-dimensional representation of the scene” and “generating a three-dimensional graphical representation of the scene based on the determined mapping” and “determining a virtual camera angle from which the three-dimensional graphical representation of the scene is to be viewed” and “rendering an image corresponding to the graphical representation of the scene viewed from the determined virtual camera angle” and “outputting the rendered image for display”), Cappello teaches that when a user is setting the virtual camera angle this second point of view can change over time in the 3D video asset, and that generating the 3D video asset with the second point of view comprises, using a virtual camera, moving along a spline having the second point of view (note that functionally “moving along a spline” is interpreted as the virtual camera moving along any curve or line that passes through designated points, as something moving in such a way would be moving along a spline; see Cappello, paragraphs 0071-0081 teaching “a received 2D stream of a live event can be used to drive an augmented or virtual representation of the event in which one or more of the live event participants are replaced by virtual avatars, and alternatively or in addition, optionally the viewpoint of the event can also be modified by the viewer” and “input includes data indicating the graphical representation of the scene generated by the image generator 508. From this, the view processor 506 is configured to determine a virtual camera angle from which the graphical representation of the scene is to be displayed” and “this camera angle is different from the camera angle that was used to capture the original video footage” and specifically the “virtual camera angle may be variable. For example, the view processor 506 may be configured to determine an initial position of the virtual camera, and how the virtual camera is to be moved from that position,” such that a virtual camera “moved from that position” provides a view of the 3D video asset in which the virtual camera position is variable over time, and the path along which “the virtual camera is to be moved from that position” is the virtual camera moving along a spline having the second point of view; furthermore, Cappello discloses other examples of the camera position changing over time in the 3D video asset, teaching the “user may then be presented with an initial view of the graphical representation from that viewpoint, and may further adjust the position and/or orientation of the virtual camera by providing a further input. The further input may include e.g. moving the user device, thereby causing the virtual camera to be moved in a corresponding manner (thereby allowing the user to act as a virtual camera man),” such that moving the user device defines a spline along which the virtual camera viewpoint is moved; additionally it is taught that “the view processor 506 may determine a corresponding virtual camera angle from which that event is to be viewed in the graphical representation of the scene. In some examples, this may involve selecting a predetermined position and/or orientation of the virtual camera that has been determined (e.g. by a developer) as being appropriate for that event. Moreover, this may also involve selecting a pre-determined motion of the virtual camera that has been identified as being appropriate for capturing that event,” where such “motion” of the virtual camera is movement along a spline defining the motion of the camera, such that the 3D video asset can be viewed using such camera motion). Thus Cappello teaches known techniques applicable to the base device/system of Yen.
Therefore it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify Yen to apply the known teachings of Cappello, as doing so would be no more than the application of a known technique to a base device/system that is ready for improvement, which would yield predictable results and result in an improved system. Here Yen already teaches creating 3D content by recreating objects in a 2D video stream as a 3D model that can be viewed as a 3D video asset, and teaches changing a viewing position from an initial viewing position, in the context of game rendering engines and 3D modeling which already utilize virtual cameras that are, of course, programmable; thus the predictable result of applying Cappello’s teachings to Yen would be that the user would be allowed to set a variable camera position and angle over time to move a camera providing the perspective for the 3D video content, such that the 3D video asset would reflect the perspective of a camera moving along a spline in some camera motion. This would result in an improved system, as the user would be afforded more capabilities to view and experience the content and would be allowed to see even more appropriate views as suggested by Cappello (see Cappello, paragraph 0079 teaching “selecting a predetermined position and/or orientation of the virtual camera that has been determined (e.g. by a developer) as being appropriate for that event” and “selecting a pre-determined motion of the virtual camera that has been identified as being appropriate for capturing that event. In some examples, the position and/or orientation of the virtual camera may be determined based on historic data, indicating where other users have commonly positioned and oriented the virtual camera for events of a similar nature”).
Regarding claim 20, Yen as modified teaches all that is required as applied to claim 19 above and further teaches wherein generating the 3D video asset comprises determining at least one of a path of the spline, an adjustment to lighting in the 3D geometry of the scene, or a setting of the virtual camera to use while generating the 3D video asset (see Yen as modified, where Yen already teaches in the combination that generating the 3D video asset comprises determining a setting of the virtual camera to use while generating the 3D video asset, as the user is able to set the non-curated position and angle of the virtual camera to use while generating the 3D video asset, as in paragraphs 0144-0145 teaching “Control circuitry 504 may receive a user selection of an arbitrary position, highlighted by the cursor, in the virtual environment (e.g., on the virtual tennis court). Suppose that the selected position is in the audience (e.g., as depicted in second stream 210 (FIG. 2))” and “control circuitry 504 (FIG. 5) generates for display, using the video game engine data, the viewing contents from the non-curated viewing perspective. For example, control circuitry 504 may change the virtual position of the user and the camera angle based on the requested non-curated viewing perspective. Using the video game engine data, control circuitry 504 may re-render all game objects, audio, graphics and virtual environments associated with the first stream at a new set of positions with respect to the user's new virtual position. This allows the user to view the gameplay from a different perspective (e.g., the non-curated perspective)”; furthermore, in combination, Cappello also teaches this aspect, as the camera settings of angle, position and motion are set, and Cappello further teaches determining the path of the spline as this is the camera motion of the virtual camera explained above in Cappello, paragraphs 0071-0081).
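For illustration only, a Catmull-Rom spline through a few key positions is one common way a virtual camera could be moved “along a spline having the second point of view” as mapped to Cappello's camera motion above; the sketch below (Python, invented key positions) is illustrative and not drawn from either reference.

```python
# Hypothetical sketch: sample camera positions along a Catmull-Rom spline
# segment so the second point of view changes smoothly over time.
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Point at parameter t in [0, 1] on the Catmull-Rom segment from p1 to p2."""
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1) + (-p0 + p2) * t +
                  (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2 +
                  (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

# Invented key camera positions, e.g., sweeping from behind a player into the audience.
keys = [np.array(p, dtype=float) for p in [(0, 2, -8), (4, 3, -4), (7, 4, 2), (8, 5, 8)]]
path = [catmull_rom(*keys, t) for t in np.linspace(0.0, 1.0, 30)]   # 30 camera poses
```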
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-18 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-18 of copending Application No. 18361632 (reference application) in view of Yen. Although the claims at issue are not identical, they are not patentably distinct from each other as explained below and with reference to the following table.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Conflicting US App No. 18361632
Pending Application 18361608
Claim 1.
A method for generating a view of an event in a video game, comprising:
capturing two-dimensional (2D) gameplay video generated from a session of a video game;
analyzing the 2D gameplay video to identify an event occurring in a scene depicted in the 2D gameplay video and identifying one or more elements involved in said event;
further analyzing the 2D gameplay video to determine 3D geometry of the scene;
using the 3D geometry of the scene to generate a 3D video asset of the event that occurred in the gameplay video;
generating a 2D view of the 3D video asset for presentation on a display, wherein generating said 2D view includes determining a field of view (FOV) to apply for the 2D view, the FOV being configured to include the elements involved in the event.
Claim 1.
A method for generating a three-dimensional (3D) content from a video game, comprising:
capturing two-dimensional (2D) gameplay video generated from a session of a video game and with a first point of view;
analyzing the 2D gameplay video to determine 3D geometry of a scene depicted in the 2D gameplay video;
using the 3D geometry of the scene to generate a 3D video asset with a second point of view that occurred in the gameplay video;
storing the 3D video asset to a user account.
Thus, it can be seen that instant claim 1 differs from conflicting claim 1 in that conflicting claim 1 does not recite a “storing” step. However, this feature and the remaining features of the claims and dependent claims of the pending application are at least taught by Yen, as Yen teaches such storing in the context of the same field of endeavor. Thus, modifying conflicting claim 1 to arrive at the claimed invention for each dependent claim, using the applicable techniques taught above by Yen, would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, as adding such features is known, as explained above, and doing so would yield predictable results and result in an improved system. Note that the abbreviated rationale above is provided in the interest of brevity and given that the claims are likely subject to further amendment. Note that the dependent claims also recite similar subject matter and thus are rejected on similar nonstatutory double patenting grounds as the parent claims.
Claims 1-18 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-18 of copending Application No. 18361624 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other as explained below and with reference to the following table.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Conflicting US App No. 18361624
Pending Application 18361608
Claim 1.
A method for generating a three-dimensional (3D) content moment from a video game, comprising:
capturing two-dimensional (2D) gameplay video generated from a session of a video game;
analyzing the 2D gameplay video to determine 3D geometry of a scene depicted in the 2D gameplay video;
using the 3D geometry of the scene to generate a 3D video asset of a moment that occurred in the gameplay video;
storing the 3D video asset to a user account.
Claim 1.
A method for generating a three-dimensional (3D) content from a video game, comprising:
capturing two-dimensional (2D) gameplay video generated from a session of a video game and with a first point of view;
analyzing the 2D gameplay video to determine 3D geometry of a scene depicted in the 2D gameplay video;
using the 3D geometry of the scene to generate a 3D video asset with a second point of view that occurred in the gameplay video;
storing the 3D video asset to a user account.
Thus it can be seen that instant claim 1, and each independent claim, is a slightly differently worded version of the conflicting claim. However, given that a 3D video asset can be broadly interpreted as any 3D asset derived from video, and the conflicting claim's asset is of a moment, such a 3D video asset may include within its scope a 3D still asset; thus the limitations anticipate one another. Furthermore, the remaining independent claims are rejected for the same reasons as claim 1. Note that the dependent claims also recite similar subject matter and thus are rejected on similar nonstatutory double patenting grounds as the parent claims.
Claims 1, 5, 7, 11, 13 and 17 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 7, 8, 13 and 14 of copending Application No. 18361641 (reference application) in view of Forster et al.3 (“Forster”). Although the claims at issue are not identical, they are not patentably distinct from each other as explained below and with reference to the following table.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Conflicting US App No. 18361641
Pending Application 18361608
Claim 1.
A method for generating a physical object, comprising:
capturing two-dimensional (2D) gameplay video generated from a session of a video game;
analyzing the 2D gameplay video to identify a virtual object depicted in the 2D gameplay video;
further analyzing the 2D gameplay video to determine 3D geometry of the virtual object;
using the 3D geometry of the object to generate a 3D model of the virtual object;
storing the 3D model to a user account;
using the 3D model to generate a physical object resembling the virtual object.
Claim 1.
A method for generating a three-dimensional (3D) content from a video game, comprising:
capturing two-dimensional (2D) gameplay video generated from a session of a video game and with a first point of view;
analyzing the 2D gameplay video to determine 3D geometry of a scene depicted in the 2D gameplay video;
using the 3D geometry of the scene to generate a 3D video asset with a second point of view that occurred in the gameplay video;
storing the 3D video asset to a user account.
Thus it can be seen that instant claim 1 is a slightly different version of the conflicting claim, differing in that the conflicting claim generates a physical object from the 3D geometry.
However, this feature and the remaining features of the claim and dependent claims of the pending application are at least taught by Forster as Forster teaches such storing in the context of the same field of endeavor and teaches generation of a 3D model which could be the basis for generating a physical object (see Forster, paragraphs 0099-0132 teaching a physical object can be printed from a 3D modeling of the object which can be an object recognized from a video game session).
Claims 3-6, 9-12, and 15-18 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 7, and 13 of copending Application No. 18361641 (reference application) in view of Forster. Although the claims at issue are not identical, they are not patentably distinct from each other. The remaining features of the dependent claims of the pending application are at least taught by Forster as explained above. Thus modifying the conflicting claim 1 to arrive at the claimed invention for each dependent claim using the applicable techniques taught above by Forster would have been obvious for one of ordinary skill in the art before the effective filing date of the invention as adding such features is known as explained above and doing so would yield predictable results and result in an improved system. Note that the abbreviated rationale above is provided in the interest of brevity and given that the claims are likely subject to further amendment.
Response to Arguments
Applicant’s arguments filed 7/11/2025 with respect to claim(s) 1-18 as being anticipated by Forster have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Note that the double patenting rejections cannot be held in abeyance and are applied as above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SCOTT E SONNERS whose telephone number is (571) 270-7504. The examiner can normally be reached Monday-Friday, 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SCOTT E SONNERS/Examiner, Art Unit 2613
/XIAO M WU/Supervisory Patent Examiner, Art Unit 2613