DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation "the popped event types" in lines 14-15. There is insufficient antecedent basis for this limitation in the claim.
Claims that are noted above as being rejected and not specifically cited are rejected based on their dependency on a rejected independent claim.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over French et al., U.S. Patent Number 6,266,053 B1, in view of Chen et al., U.S. Patent Publication Number 2024/0355082 A1, and further in view of Heinen et al., U.S. Patent Publication Number 2020/0312029 A1.
Regarding claim 1, French discloses performing an event editing operation and generating an event stream corresponding to the target object (col. 7, lines 54-56, the graph editor enables the user to create and manipulate a data structure referred to as the dependency or scene graph), the event stream including: event information respectively corresponding to a plurality of nodes (col. 7, lines 56-57, the scene graph consists of a set of nodes; col. 8, lines 44-45, events are passed across the connections to notify operators of data changes; col. 11, lines 25-28, changes to the traversal context are propagated through the graph, with further changes made within each node), wherein the event information is determined based on the event editing operation and is used for describing an event action to be executed (col. 11, line 22, event-driven execution model; col. 11, line 67, node actions are dispatched); controlling the target object to execute an event action corresponding to the event information in the virtual three-dimensional scene based on the event information respectively corresponding to the plurality of nodes in the event stream; and acquiring a first target video of the target object when controlling the target object to execute the event action (col. 7, lines 33-37, provides object-oriented representations for the scene; objects are defined with reference to a virtual stage that represents the three-dimensional spatial characteristics of the scene; col. 8, lines 65-66, demonstration scene of a computer animated dinosaur walking into a live scene such as an office environment).
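For illustration only, the following minimal Python sketch shows the general shape of the mechanism characterized above: an ordered event stream of nodes, each carrying event information, driving a target object in a scene. It is not code from French, Chen, Heinen, or the application; all names (EventNode, EventStream, Scene) are hypothetical.

```python
# Illustrative sketch only -- not code from the cited references or the claims.
from dataclasses import dataclass, field


@dataclass
class EventNode:
    """One node of the event stream; event_info describes the action to execute."""
    event_info: dict  # e.g. {"action": "walk"}


@dataclass
class EventStream:
    """Ordered collection of event nodes associated with one target object."""
    target_object_id: str
    nodes: list[EventNode] = field(default_factory=list)

    def play(self, scene: "Scene") -> list[str]:
        """Drive the target object through each node's action, recording frames."""
        frames = []
        for node in self.nodes:
            frames.append(scene.execute(self.target_object_id, node.event_info))
        return frames  # stand-in for the "first target video"


class Scene:
    """Minimal stand-in for a virtual three-dimensional scene."""
    def execute(self, object_id: str, event_info: dict) -> str:
        return f"{object_id} performs {event_info['action']}"


if __name__ == "__main__":
    stream = EventStream("dinosaur", [EventNode({"action": "walk"}),
                                      EventNode({"action": "turn"})])
    print(stream.play(Scene()))
```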
However, it is noted that French fails to specifically disclose generating a virtual three-dimensional scene, the virtual three-dimensional scene including at least one target object, and performing the event editing operation on the target object.
Chen discloses a video generation method, comprising: generating a virtual three-dimensional scene, the virtual three-dimensional scene including at least one target object (paragraph 0053, render an image of the virtual scene based on the obtained scene data, and display the image of the virtual scene of a target (virtual) object; figure 1, virtual scene); in response to performing an event editing operation on the target object (paragraph 0074, in response to an editing instruction for a three-dimensional model and triggered by a target object, an editing interface configured for editing; paragraph 0155, to achieve decoupling, an event system may also be used; each click event is sent in an event manner, and is registered on demand); and acquiring a first target video of the target object when controlling the target object to execute the event action (paragraph 0057, receives the scene data of the virtual scene, renders an image of the virtual scene based on the scene data, to display the virtual object in an interface of the virtual scene, and when a presentation condition of the three-dimensional model of the virtual object is satisfied, presents the target three-dimensional model (the edited three-dimensional model of the virtual object based on the editing interface) of the virtual object at the presentation position in the virtual scene).
However, it is noted that both French and Chen fail to disclose wherein the generating the event stream corresponding to the target object comprises triggering an event adding button in the virtual three-dimensional scene to select, from the popped event types, events with target types to add, and generating the event stream with respect to the target object according to an adding order.
Heinen discloses, in paragraph 0153, adding an animation to a 2D or 3D object or editing an existing animation associated with a 2D or 3D object; this may start with the selection of the target object from the scene currently displayed in the editor; the user may decide whether they want to add a new animation or edit an existing one, and can select from a list of given animations. Heinen discloses wherein the generating the event stream corresponding to the target object comprises triggering an event adding button in the virtual three-dimensional scene (paragraph 0177, event add button) to select, from the popped event types, events with target types to add (NP1-NP4, creating a virtual reality scene; inserting the object from a pre-defined list; selecting the object from a newly created list; appointing a trigger with the object to cause an event), and generating the event stream with respect to the target object according to an adding order (NP8, indicating the relationship of the image data and second image data as linked scenes; paragraph 0177, information may be used to generate the correct order for time-based scenes; figures 2k and 2l; paragraph 0179, if a series of image or video data is added, the first image may be displayed to the user in order).
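For illustration only, the following sketch shows one possible reading of this mechanism: pressing an event adding button pops up a list of event types, the selected target types are validated, and events are appended to the stream in adding order. It is not code from Heinen; the function and variable names are hypothetical.

```python
# Illustrative sketch only; not code from Heinen. All names are hypothetical.
AVAILABLE_EVENT_TYPES = ["move", "rotate", "play_animation", "change_material"]


def on_add_event_button(event_stream, selected_types):
    """Simulate pressing an event adding button: a list of event types 'pops up'
    (AVAILABLE_EVENT_TYPES), the chosen target types are validated, and events
    are appended to the stream in the order they are added."""
    for event_type in selected_types:
        if event_type not in AVAILABLE_EVENT_TYPES:
            raise ValueError(f"unknown event type: {event_type}")
        event_stream.append({"type": event_type, "order": len(event_stream)})
    return event_stream


stream = []
on_add_event_button(stream, ["move", "play_animation"])
print(stream)  # events retain the order in which they were added
```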
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to include the editing operations for a target object as disclosed by Chen in the event nodes as disclosed by French, in order to create discrete events as applied to a particular object without the need to “program” the underlying mechanisms for interpreting the scene model and its dynamics as disclosed by French. It further would have been obvious to one of ordinary skill in the art before the effective filing date to include, in the virtual scenes created as disclosed by French and Chen, the adding button and event stream with a correct order as disclosed by Heinen, to create a virtual reality scene with added scenes in a correctly generated order for time accuracy.
Regarding claim 2, French discloses wherein the method further comprises: acquiring a second target video of a real scene; performing a fusion process for the first target video and the second target video to obtain a target video including the target object and the real scene (col. 8, line 66, live scene such as an office environment; col. 9, lines 6-8, live scene and the dinosaur model loader are pushed down; col. 8, lines 65-66, demonstration scene of a computer animated dinosaur walking into a live scene such as an office environment).
Regarding claim 3, French discloses wherein the generating a virtual three-dimensional scene comprises: generating a virtual three-dimensional space (col. 5, lines 15-20, spatial context can either be a 2-D or 3-D context); adding a three-dimensional model corresponding to the target object into the virtual three-dimensional space to form the virtual three-dimensional scene (col. 5, lines 15-20, nodes in the graph may also represent rendering processes for the spatial transforms that, for example, transform a 3-D spatial context into a 2-D spatial context, to generate visual image frames from a 3-D scene model).
Chen discloses a virtual three-dimensional space corresponding to a three-dimensional scene and determining a coordinate value of at least one target object in the virtual three-dimensional space; based on the coordinate value of the target object in the virtual three-dimensional space, adding a three-dimensional model corresponding to the target object into the virtual three-dimensional space to form the virtual three-dimensional scene (paragraph 0057, presents the target three-dimensional model (the edited three-dimensional model of the virtual object based on the editing interface) of the virtual object at the presentation position in the virtual scene).
Regarding claim 4, French discloses wherein the method further comprises: in response to an adding operation of adding the target object into the scene, determining an initial feature of the target object in the virtual three-dimensional scene, the initial feature including at least one of: an initial pose, an initial animation, an initial light and shadow type, and an initial lens view angle; based on the initial feature, adding a three-dimensional model corresponding to the target object into the scene (col. 8, lines 55-57, parameters can, for example, take animated input from function curves, or (explicitly or implicitly) from user interface elements; col. 16, line 38 - col. 17, line 14, actual parameter structures will also include bounded ranges and default values; fundamental aggregate types include: temporal color; point, vector, control vertex; depth map channel (coordinates); material parameters; camera parameters; light parameters (color, intensity, shadow flag, shader, etc.)).
Chen discloses a virtual three-dimensional space corresponding to a three-dimensional scene and determining a coordinate value of at least one target object in the virtual three-dimensional space; based on the coordinate value of the target object in the virtual three-dimensional space, adding a three-dimensional model corresponding to the target object into the virtual three-dimensional space to form the virtual three-dimensional scene (paragraph 0057, presents the target three-dimensional model (the edited three-dimensional model of the virtual object based on the editing interface) of the virtual object at the presentation position in the virtual scene).
Regarding claim 5, French discloses wherein the performing an event editing operation on the target object comprises: generating the node, and receiving a basic event material corresponding to the generated node; based on the basic event material, generating event information corresponding to the generated node (col. 3, lines 58-64, elements of a scene are processed within the nodes of the graph; the nodes may process media data, such as images, video sequences, 3-D geometry, audio, or other data; a node may also specify or modify control values or parameters for media elements; col. 17, lines 13-21, material parameters and light parameters).
Regarding claim 6, French discloses wherein the node includes a time node, and the generating the node comprises: determining an event execution time on a time axis, and based on the event execution time, generating a time node corresponding to the event execution time (col. 3, lines 40-44, time-based or event-based behaviors are therefore either assumed to be part of the traversal engine, or are encoded within nodes that interact; col. 18, lines 12-16, scene graph can be presented and/or manipulated in a user interface; time transforms and time extents can also be presented and/or manipulated in a user interface as tracks and time intervals in a time; col. 21, lines 12-13, world time coordinates are defined by the complete shot, and a global time extent is part of the Scene node).
Regarding claim 7, French discloses wherein the controlling the target object to execute an event action corresponding to the event information in the scene based on the event information respectively corresponding to the plurality of nodes in the event stream comprises: with respect to each time node in the event stream, controlling the target object to execute an event action corresponding to the time node in the virtual three-dimensional scene; in response to not reaching a next time node corresponding to the time node after completion of executing the event action corresponding to the time node, repeatedly executing the event action corresponding to the time node (col. 4, lines 25-31, to evaluate the appearance or behavior of the scene, and in particular the time-based values of particular elements at a given time instant, the graph is traversed in a direction from a root node down toward the leaf nodes; the root node specifies an initial temporal context with a time scale and time interval associated with the overall choreographed media production).
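For illustration only, the following sketch assumes one simple interpretation of the claim 7 behavior as characterized above: each time node's action is executed and, if the next time node on the time axis has not yet been reached when the action completes, the action is repeated. It is not code from French or the application; the names and the simplified clock are hypothetical.

```python
# Illustrative sketch only; hypothetical names, not from the cited references.
def run_time_nodes(time_nodes, clock):
    """For each time node, execute its action; if the next time node has not yet
    been reached when the action finishes, repeat the action until it is."""
    for i, node in enumerate(time_nodes):
        next_time = time_nodes[i + 1]["time"] if i + 1 < len(time_nodes) else None
        while True:
            clock["now"] += node["duration"]   # executing the action advances time
            print(f"t={clock['now']:.1f}: {node['action']}")
            if next_time is None or clock["now"] >= next_time:
                break                          # next time node reached (or last node)


run_time_nodes(
    [{"time": 0.0, "action": "wave", "duration": 0.4},
     {"time": 1.0, "action": "jump", "duration": 0.5}],
    clock={"now": 0.0},
)
```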
Chen discloses a virtual three-dimensional space corresponding to a three-dimensional scene and determining a coordinate value of at least one target object in the virtual three-dimensional space; based on the coordinate value of the target object in the virtual three-dimensional space, adding a three-dimensional model corresponding to the target object into the virtual three-dimensional space to form the virtual three-dimensional scene (paragraph 0057, presents the target three-dimensional model (the edited three-dimensional model of the virtual object based on the editing interface) of the virtual object at the presentation position in the virtual scene).
Regarding claim 8, French discloses wherein the node includes an event node, and the controlling the target object to execute an event action corresponding to the event information in the virtual three-dimensional scene based on the event information respectively corresponding to the plurality of nodes in the event stream comprises: with respect to each event node in the event stream, controlling the target object to execute an event action corresponding to the event node in the scene, and after completion of executing the event action corresponding to the event node, executing an event action corresponding to a next event node (col. 10, lines 39-50, the scene graph 40 can be traversed by visiting each node 42 in a particular order, via connections in the graph. Traversals are used for inquiries and generating output. The scene graph uses a depth-first traversal: for each node visited, a pre-order action is invoked, the downstream connections are recursively traversed, then the node visit is completed with a post-order action. Traversal state is maintained in a traversal context. The context can be inquired and modified by traversal actions within the nodes. The traversal context contains information such as the current time for the scene, and the renderer being used to display output).
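For illustration only, the following sketch shows a generic depth-first traversal with pre-order and post-order actions and a shared traversal context, the general pattern quoted from French at col. 10, lines 39-50. It is not French's code; all names are hypothetical.

```python
# Illustrative sketch only; a generic depth-first scene-graph traversal.
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def pre_action(self, ctx):
        ctx["visited"].append(f"pre:{self.name}")

    def post_action(self, ctx):
        ctx["visited"].append(f"post:{self.name}")


def traverse(node, ctx):
    """Invoke the pre-order action, recursively traverse downstream connections,
    then complete the visit with a post-order action."""
    node.pre_action(ctx)
    for child in node.children:
        traverse(child, ctx)
    node.post_action(ctx)


root = Node("scene", [Node("camera"), Node("dinosaur", [Node("walk_event")])])
context = {"current_time": 0.0, "renderer": "preview", "visited": []}
traverse(root, context)
print(context["visited"])
```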
Chen discloses a virtual three-dimensional space corresponding to a three-dimensional scene and determining a coordinate value of at least one target object in the virtual three-dimensional space; based on the coordinate value of the target object in the virtual three-dimensional space, adding a three-dimensional model corresponding to the target object into the virtual three-dimensional space to form the virtual three-dimensional scene (paragraph 0057, presents the target three-dimensional model (the edited three-dimensional model of the virtual object based on the editing interface) of the virtual object at the presentation position in the virtual scene).
Regarding claim 9, French discloses wherein the target object includes a plurality of target objects, and the controlling the target object to execute an event action corresponding to the event information in the scene based on the event information respectively corresponding to the plurality of nodes in the event stream comprises: based on an event execution time of a plurality of nodes of the event streams respectively associated with a plurality of target objects, performing a merging operation on event information in the event streams respectively associated with the plurality of target objects to obtain an event execution script (col. 10, lines 15-19, a traversal context changed event must be added to the graph; to include a complete hierarchy of data changed events, which specialize the type of data being changed); based on the event execution script, controlling the plurality of target objects to execute event actions respectively corresponding to the plurality of target objects in the scene (col. 20, lines 35-45, script nodes are lightweight control operators, which can coordinate parameters; can have the system extract the function body and link it into the runtime system).
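For illustration only, the following sketch shows one way the described merging operation could look: per-object event streams are merged by event execution time into a single ordered execution script. It is not code from French, Chen, or the application; the names are hypothetical.

```python
# Illustrative sketch only; hypothetical names. Merges per-object event streams
# by execution time into one ordered execution script.
import heapq


def merge_event_streams(streams):
    """streams maps an object id to a time-ordered list of events
    ({"time": float, "action": str}); the merge yields one script sorted by time."""
    merged = heapq.merge(
        *[[{**e, "object": obj} for e in events] for obj, events in streams.items()],
        key=lambda e: e["time"],
    )
    return list(merged)


script = merge_event_streams({
    "dinosaur": [{"time": 0.0, "action": "walk"}, {"time": 2.0, "action": "roar"}],
    "camera":   [{"time": 1.0, "action": "pan"}],
})
for step in script:
    print(step["time"], step["object"], step["action"])
```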
Chen discloses a virtual three-dimensional space corresponding to a three-dimensional scene and determining a coordinate value of at least one target object in the virtual three-dimensional space; based on the coordinate value of the target object in the virtual three-dimensional space, adding a three-dimensional model corresponding to the target object into the virtual three-dimensional space to form the virtual three-dimensional scene (paragraph 0057, presents the target three-dimensional model (the edited three-dimensional model of the virtual object based on the editing interface) of the virtual object at the presentation position in the virtual scene).
Regarding claims 10-15, they are rejected based upon a similar rationale as claims 1-5 above, respectively. French further discloses a computer device, comprising: a processor, and a memory having machine readable instructions executable by the processor stored thereon, wherein the processor is used for executing the machine readable instructions stored in the memory; and when the machine readable instructions are executed by the processor, the processor executes the steps of the video generation method according to claim 1.
Regarding claims 16-20, they are rejected based upon a similar rationale as claims 1-5 above, respectively. French further discloses a non-transitory computer readable storage medium, wherein the computer readable storage medium has a computer program stored thereon, and when the computer program is performed by a computer device, the computer device executes the steps of the video generation method according to claim 1.
Response to Arguments
Applicant’s arguments, see page 9, filed 12/16/2025, with respect to the rejection of claims 1-20 under 35 U.S.C. 103 over French in view of Chen, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made under 35 U.S.C. 103 over French in view of Chen, and further in view of Heinen.
Applicant argues French and Chen fail to disclose wherein the generating the event stream corresponding to the target object comprises triggering an event adding button in the virtual three-dimensional scene to select, from the popped event types, events with target types to add, and generating the event stream with respect to the target object according to an adding order.
Examiner responds that Heinen discloses, in paragraph 0153, adding an animation to a 2D or 3D object or editing an existing animation associated with a 2D or 3D object; this may start with the selection of the target object from the scene currently displayed in the editor; the user may decide whether they want to add a new animation or edit an existing one, and can select from a list of given animations. Heinen discloses wherein the generating the event stream corresponding to the target object comprises triggering an event adding button in the virtual three-dimensional scene (paragraph 0177, event add button) to select, from the popped event types, events with target types to add (NP1-NP4, creating a virtual reality scene; inserting the object from a pre-defined list; selecting the object from a newly created list; appointing a trigger with the object to cause an event), and generating the event stream with respect to the target object according to an adding order (NP8, indicating the relationship of the image data and second image data as linked scenes; paragraph 0177, information may be used to generate the correct order for time-based scenes; figures 2k and 2l; paragraph 0179, if a series of image or video data is added, the first image may be displayed to the user in order).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Schnitzer et al., U.S. Patent Publication Number 2012/0327088 A1
Schnitzer discloses, in paragraph 0009, that aggregated actions may represent actions for multiple locations in the virtual scene. The defined location may be graphically represented in the virtual scene. The graphical representation in the virtual scene may represent multiple defined locations based upon a viewing scale of the virtual scene. The defined time may be graphically represented on the timeline. The graphical representation on the timeline may represent multiple defined times based upon the viewing scale of the timeline. The presented user interface may be relocatable to at least one of another location in the virtual scene and another time represented in the timeline. One or more of the actions included in the aggregated actions represented in the user interface may be relocatable to at least one of another location in the virtual scene and another time represented in the timeline.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Motilewa Good-Johnson whose telephone number is (571)272-7658. The examiner can normally be reached Monday - Friday 6am-2:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan can be reached at 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
MOTILEWA GOOD-JOHNSON
Primary Examiner
Art Unit 2616
/MOTILEWA GOOD-JOHNSON/ Primary Examiner, Art Unit 2619