Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Election/Restrictions
Claims 5-9 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to nonelected Groups I and II, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 12/19/25.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4 and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Rady (U.S. Patent 9583140).
Regarding claim 1 (independent):
An information processing apparatus comprising: one or more processors and one or more memories storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions to: (Rady C47 L50-60 Computer system 1100 also includes a main memory 1106, such as a random access memory (RAM) or other dynamic or volatile storage device, coupled to bus 1102 for storing information and instructions to be executed by processor 1104. Main memory 1106 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1104. Such instructions, when stored in non-transitory storage media accessible to processor 1104, render computer system 1100 into a special-purpose machine that is customized to perform the operations specified in the instructions.).
obtain user information indicating a user (Rady C16 L20-25 In other embodiments, clients 210 may only see contents that have been uploaded in association with a user that is currently logged in to the client 210.).
obtain a plurality of pieces of contents data whose current owner is the user based on the user information (Rady C14 L55-65 In some embodiments, there may furthermore be separate asset repositories 264 for different users, groups of users, categories of assets, and so forth.) (Rady C16 L15-25 In some embodiments, at least some of the assets in asset repository 264 may be shared between different clients 210. For instance, a client 210 may upload a video clip and mark the clip as shareable with all other clients 210 or with certain users or groups of users. Any user with whom the content has been shared may see the content in a library, directory, or search interface when the user is logged into a client 210. In other embodiments, clients 210 may only see contents that have been uploaded in association with a user that is currently logged in to the client 210.)
and generate output control data indicating contents in which rendering results of at least two pieces of contents data among the plurality of pieces of contents data are output (Rady C18 L40-65 In an embodiment, the editor component 222, interface components 223, and video rendering components 224 are coupled together in such a manner as to allow for editing of the project data while the project is being rendered and played in real-time, thus allowing users to see the effects of their edits immediately after making them. For instance, the project may be rendered to a stage area of the video editor GUI. A user may drag and drop representations of assets into the stage area or a timeline area, or move such representations around, as the project is being played, thereby editing various aspects of the project. Or the user may update options in a menu that affect attributes of the assets as the project is being played. The editor component 222 immediately updates the project data to reflect the change. Any newly added asset is requested from the server system 250, if necessary, and the renderer 224 begins rendering the asset as part of the project as soon as enough asset data has been received. Any changed asset attributes or scene layout changes are rendered immediately with those changes.)
and controlled as a time series signal (Rady C18 L40-65 In an embodiment, using interface components 223, a looping timespan within the project timeline may be defined. Rendering of the project may be looped through this timespan repeatedly as edits are received, to allow the user to more quickly see the effect of changes on a specific timespan in the project.).
Rady discloses the above elements in several embodiments. Because these embodiments are disclosed in a single reference, one of ordinary skill in the art at the time of the filing of the invention who was aware of one embodiment would also have been aware of the others. It therefore would have been obvious to one of ordinary skill in the art at the time of the filing of the invention to combine these elements from two or more embodiments into a single arrangement, for the benefit of enjoying the advantages of all the disclosed embodiments in one arrangement.
Regarding claim 2:
The information processing apparatus according to claim 1, has all of its limitations taught by Rady. Rady further teaches wherein the output control data includes contents identification information for identifying each of the at least two pieces of contents data (Rady C8 L20-35 Generally, the project data specifies a number of assets that are in the project, typically by reference to location(s) where the asset data are stored (though in some embodiments asset data may in fact be stored within the project data). The project data may also store various metadata describing an asset, such as an asset type, asset length, asset size, asset bitrate, edit(s) specified for the asset, temporal location of the asset in the project, spatial location in a scene, project-specified animation data, and so forth.)(Rady C14 L65-C15 L10 An asset repository 264 may further store metadata describing each asset stored therein. For each asset, the metadata may include, without limitation, a title, description, preview image(s), ownership data, licensing information, and one or more categories of genres. Moreover, in some embodiments, a number of versions of each asset may be stored in asset repository 264 for different contexts, such as different target platforms, different streaming bitrates, and so forth. For instance, there may be both a high-definition version and a low-definition version of each item of media content. The metadata may describe which versions of the asset exist, and where the version may be found.).
Regarding claim 3:
The information processing apparatus according to claim 1, has all of its limitations taught by Rady. Rady further teaches wherein the output control data is described in markup language (Rady C22 L20-35 Client 300 executes one or more processes referred to herein as a browser engine 310. A browser engine 310 is typically the primary component of a web browser application, such as Microsoft Internet Explorer or Google Chrome. A browser engine 310 processes data and instructions to generate and manipulate presentation(s) of information referred to herein as a document(s). As used herein, the term document does not necessarily refer to a single data structure, and may in fact include elements from a number of files or other data structures. A document also need not be a static presentation of information, but may include interactive elements by which a user may manipulate the presentation and/or issue commands to the browser engine 310.)(Rady C23 L35-55 The data and instructions processed by a browser engine 310 may take a variety of forms. One common form is that of data elements described in the Hyper-Text Markup Language (HTML). A browser engine 310 typically comprises rendering logic that parses and interprets the HTML elements, along with accompanying formatting instructions such as Cascading Style Sheets (CSS), as instructions for arranging and presenting information and interactive elements within a document. A common form of instructions executed by a browser engine 310 is that of dynamic client-side scripting instructions, such as JavaScript instructions. A browser engine 310 typically comprises script processing component(s), such as a JavaScript engine or virtual machine, that interpret and/or compile such instructions, and then execute the interpreted or compiled instructions in such a manner as to affect the presentation of information within a document.
Scripting instructions may refer to, rely upon, and/or manipulate various other types of data, such as data structures formatted in an eXtensible Markup Language (XML) format, JSON format, or other suitable format.)(Rady C26 L60-67 Project data 362 may be stored in one or more local data stores 360 at client 300, such as data stores provided by the browser engine 310, including but not limited to HTML5 (or W3C) web storage, memory, etc. Project data 362 may also be downloaded from and synchronized with project data from a server system, such as may be found in asset repository 264 of FIG. 2.).
Regarding claim 4:
The information processing apparatus according to claim 1, has all of its limitations taught by Rady. Rady further teaches wherein the output control data does not include part or the whole of each piece of contents data in the at least two pieces of contents data (Rady C3 L40-50 In an embodiment, the video editing application also or instead includes graphical user interfaces for specifying edits to the video clip, such as trimming operations or image effects, without changing the video clip at the streaming video source or having to download and modify a working copy of the video clip the video rendering application in advance. This is done in a non-destructive manner, allowing the edits to be changed or undone without affecting the original video/media.).
Regarding claim 14:
The information processing apparatus according to claim 1, has all of its limitations taught by Rady. Rady further teaches wherein the one or more programs further include instructions to:
output and control the rendering results of the at least two pieces of contents data as the time series signal based on the output control data in a case where the user is a current owner of the output control data (Rady C16 L20-25 In other embodiments, clients 210 may only see contents that have been uploaded in association with a user that is currently logged in to the client 210.) and the at least two pieces of contents data in which the rendering results are output and controlled as the time series signal by the output control data (Rady C43 L45-60 In at least some of the above embodiments, the media project associates a plurality of assets with timespans in the project timeline, the plurality of assets including both graphic assets and media assets; wherein rendering the video data for the media project comprises rendering the graphic assets during their associated timespans in the project timeline. In at least some of the above embodiments, at least one of the graphic assets comprises an animated three-dimensional geometry. In at least some of the above embodiments, rendering the video data comprises rendering at least one of the graphic assets over the first media of the first streaming media asset. In at least some of the above embodiments, rendering the graphic assets comprises drawing the graphic assets using WebGL instructions.).
Regarding claim 15 (independent):
The claim is a parallel version of claim 1. As such, it is rejected under the same teachings.
Regarding claim 16 (independent):
The claim is a parallel version of claim 1. As such, it is rejected under the same teachings.
Claims 10-13 are rejected under 35 U.S.C. 103 as being unpatentable over Rady (U.S. Patent 9583140) in view of Yoshikawa (U.S. PG Publication 20190174109).
Regarding claim 10:
The information processing apparatus according to claim 1, has all of its limitations taught by Rady. Rady further teaches wherein at least one of the at least two pieces of contents data is data for virtual viewpoint image capable of generating a virtual viewpoint image (Rady C5 L45-60 Techniques described herein may involve the rendering of graphic assets to generate video data. As used herein, a graphic asset is data structure describing a two or three dimensional object or collection of objects. Any physical object or collection of objects may be modeled—from cars, trees, and people, to cities and complex landscapes. The data structure includes a model, which is a mathematical representation of the appearance of described object(s) within a two-dimensional coordinate space (for two-dimensional objects) or three-dimensional coordinate space (for three-dimensional objects), depending on the dimensionality of the described object(s). The coordinate space relative to which the object(s) of an asset are described may also be referred to herein as the “object space.”)(Rady C7 L5-15 Graphic assets may be laid out in a scene, which describes spatial relationships between the graphic assets, including location and size. For scenes involving three-dimensional objects, the scene is a three-dimensional space, often referred to as a “world space” or “scene space,” and each asset is said to belong to a particular position in this space. Two-dimensional graphic assets may also be placed within a three-dimensional world space by mapping the graphic assets to planes or other geometries.)
and the output control data includes virtual camera (Rady C7 L20-30 Other animations may occur, for example, on account of changes to the lighting model or field-of-view of the camera from which the scene is being rendered.)
which is used in a case where the virtual viewpoint image is generated, as contents output (Rady C7 L25-45 Once laid out, the entire scene may be rendered. Each asset within the scene is rendered as it would appear in two-dimensions from a viewpoint defined for the scene, taking into account the spatial layouts of all of the other graphic assets in the scene as well as the properties of the virtual camera. For three-dimensional scenes, this process may be viewed as involving four steps: (1) rendering each texture of each graphical asset in a two-dimensional texture space, including applying any filters (or image effects) to those textures animated at the current time; (2) rendering those textures within their respective three-dimensional object spaces through mapping and/or parameterization; (3) rendering the graphic assets within the three-dimensional world space using translation, rotations, and other transformations; and (4) projecting the scene on to a two-dimensional screen space from a particular viewpoint within the world space (ie, the virtual camera).)
and controlled as the time series signal (Rady C18 L40-65 In an embodiment, using interface components 223, a looping timespan within the project timeline may be defined. Rendering of the project may be looped through this timespan repeatedly as edits are received, to allow the user to more quickly see the effect of changes on a specific timespan in the project.).
Rady does not teach a camera path, although it teaches a virtual camera having a viewpoint and properties, as well as changing the camera's field of view. In a related field of endeavor, Yoshikawa teaches:
wherein at least one of the at least two pieces of contents data is data for virtual viewpoint image capable of generating a virtual viewpoint image (Yoshikawa [0093] Next, distributor 126 distributes one or more free-viewpoint videos 156 generated by rendering unit 124 to viewing devices 103 (S119). At this point, distributor 126 may distribute, in addition to free-viewpoint videos 156, at least one of three-dimensional model 152, information indicating a viewpoint (the position of the virtual camera), and camerawork 154 to viewing devices 103.)
and the output control data includes virtual camera path information indicating a transition in position and orientation of a virtual viewpoint, which is used in a case where the virtual viewpoint image is generated, as contents output (Yoshikawa [0073] In camerawork display field 201, three-dimensional model 152 and a camera path that is a path of camerawork 154 are displayed. There may be one camera path displayed or a plurality of candidates for a camera path displayed. In addition, colors and line types may be applied to the plurality of camera paths to represent information such as recommendation levels of the respective camera paths. Here, the recommendation levels each indicate a degree of match between the recommendation level and a preference of a user or a degree of viewing frequency. Displaying the plurality of camera paths can provide choices to a user (viewer) or an editor.)(Yoshikawa [0030] For example, in the generating of the camerawork, the position and the orientation of the virtual camera may be determined such that an object of a predetermined type associated with the target scene is included in the free-viewpoint video.)
Therefore, it would have been obvious before the effective filing date of the claimed invention to use a virtual camera with a path as taught by Yoshikawa. The rationale for doing so is that the modification amounts to a simple substitution of one known element for another to obtain predictable results: Rady provides assets that include a 3D environment with a virtual camera and can import numerous types of assets, while Yoshikawa creates an asset that includes a 3D environment with a virtual camera and camera paths. The combination merely substitutes one type of asset for another, and the results are predictable because the end result generation remains the same. Therefore, it would have been obvious to combine Yoshikawa with Rady to obtain the claimed invention.
Regarding claim 11:
The information processing apparatus according to claim 10, has all of its limitations taught by Rady in view of Yoshikawa. Rady further teaches wherein the data for virtual viewpoint image is three-dimensional shape data indicating a three-dimensional shape of an object (Rady C5 L45-60 Techniques described herein may involve the rendering of graphic assets to generate video data. As used herein, a graphic asset is data structure describing a two or three dimensional object or collection of objects. Any physical object or collection of objects may be modeled—from cars, trees, and people, to cities and complex landscapes. The data structure includes a model, which is a mathematical representation of the appearance of described object(s) within a two-dimensional coordinate space (for two-dimensional objects) or three-dimensional coordinate space (for three-dimensional objects), depending on the dimensionality of the described object(s). The coordinate space relative to which the object(s) of an asset are described may also be referred to herein as the “object space.”)(Rady C7 L5-15 Graphic assets may be laid out in a scene, which describes spatial relationships between the graphic assets, including location and size. For scenes involving three-dimensional objects, the scene is a three-dimensional space, often referred to as a “world space” or “scene space,” and each asset is said to belong to a particular position in this space. Two-dimensional graphic assets may also be placed within a three-dimensional world space by mapping the graphic assets to planes or other geometries.).
Regarding claim 12:
The information processing apparatus according to claim 11, has all of its limitations taught by Rady in view of Yoshikawa. Rady further teaches wherein the three-dimensional shape data is data obtained by coloring the three-dimensional shape (Rady C6 L1-15 A three-dimensional model typically describes the object as a set of interconnected points known as vertices, having specified coordinates within the three-dimensional object space. The vertices are connected by geometrical entities referred to as surfaces, including, without limitation, triangles, polygons, lines, curved surfaces, and so forth. The surfaces may be shaded different colors and/or decorated with using processes such as texture mapping, height mapping, bump mapping, reflection mapping, and so forth. For instance, texture mapping refers to a process of mapping pixels described by a two-dimensional graphic or model to a surface of a three-dimensional object. Each surface may have its own texture. Alternative representations of three-dimensional objects may also be used, such as voxels.).
Regarding claim 13:
The information processing apparatus according to claim 10, has all of its limitations taught by Rady in view of Yoshikawa. Rady further teaches wherein the at least two pieces of contents data include three-dimensional shape data indicating a three-dimensional shape of an object and texture data for coloring the three-dimensional shape as the data for virtual viewpoint image (Rady C6 L1-15 A three-dimensional model typically describes the object as a set of interconnected points known as vertices, having specified coordinates within the three-dimensional object space. The vertices are connected by geometrical entities referred to as surfaces, including, without limitation, triangles, polygons, lines, curved surfaces, and so forth. The surfaces may be shaded different colors and/or decorated with using processes such as texture mapping, height mapping, bump mapping, reflection mapping, and so forth. For instance, texture mapping refers to a process of mapping pixels described by a two-dimensional graphic or model to a surface of a three-dimensional object. Each surface may have its own texture. Alternative representations of three-dimensional objects may also be used, such as voxels.).
Conclusion
For the prior art referenced herein, and for prior art considered pertinent to Applicant’s disclosure but not relied upon, see the attached PTO-892, “Notice of References Cited”.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON PRINGLE-PARKER, whose telephone number is (571) 272-5690 and whose e-mail is jason.pringle-parker@uspto.gov. The examiner can normally be reached 8:30am-5:00pm EST, Monday-Friday. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Poon, can be reached at (571) 270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JASON A PRINGLE-PARKER/
Primary Examiner, Art Unit 2617