DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 8-16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Riggins et al. (US 2014/0250392 A1) in view of Zhang et al. (US 2024/0412436 A1).
Regarding claim 1, Riggins teaches:
A method comprising:
in response to an effect behavior editing request, presenting an effect behavior panel for an effect in an edit mode; ([0044], “The user interface of the development tool simplifies creating interactions between a trigger object and a target object. In certain embodiments, the method is initiated by a user selecting an action tool, for example from a menu on the user interface.”)
providing at least one command edit region in the effect behavior panel, ([0044], “FIG. 3 illustrates an exemplary method of creating an action using a development application, according to certain embodiments. The user interface of the development tool simplifies creating interactions between a trigger object and a target object.”) a command edit region comprising an object select box to select at least one object in the effect, ([0045]-[0046], “As shown in FIG. 3, an exemplary method involves receiving a selection of a trigger object, as shown in block 310. For example, this may involve identifying that a user has positioned a mouse curser over a button and clicked and held the mouse button. The method shown in FIG. 3 further comprises receiving a selection of a target object, as shown in block 320. For example, this may involve identifying that the user has dragged the mouse curser from the trigger object to a target object and released the mouse button.”) an action select box to select an action to be performed by the at least one object, ([0047]-[0048], “The method shown in FIG. 3 further comprises providing a menu of available actions for the target object, as shown in block 330. The method shown in FIG. 3 further comprises receiving a selection of an action that will be performed by the target object when a trigger event occurs, as shown in block 340.”) and a trigger select box to select a trigger for triggering the action; ([0048], “The triggering event may also be specified by the user. For example, the method may involve providing a menu of available trigger events and receiving a selection of a trigger event.”) and
applying a target action command for a target object into the effect based on receiving, within a command edit region, a selection of a target object, a selection of a target action to be performed by the target object, and a selection of a target trigger for triggering the target action, the target action command defining that the target object performs the target action when the target trigger occurs. (FIG. 3, [0048], “The method shown in FIG. 3 further comprises receiving a selection of an action that will be performed by the target object when a trigger event occurs, as shown in block 340. In certain embodiments, the triggering event may be identified by default, for example, it may default to the down click of a displayed button that is selected as a trigger. The triggering event may also be specified by the user. For example, the method may involve providing a menu of available trigger events and receiving a selection of a trigger event. In one embodiment, the trigger object is a video and the trigger action is a cue point in the video.”)
However, Riggins does not explicitly teach the following, which Zhang teaches:
The editing action relates to an effect behavior, the panel is an effect behavior editing panel, and the selection regions take the form of selection boxes (FIG. 8).
Riggins teaches a user interface for applying an action to a target object. Zhang specifically teaches an effect behavior editing action that users can choose to apply to a target object, with an interface that uses selection boxes.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have applied the user interface of Riggins to the effect behavior editing action of Zhang to provide users a clear user interface for editing a target object effect.
Regarding claim 2, Riggins in view of Zhang teaches:
The method of claim 1, wherein presenting the effect behavior panel comprises: in accordance with a determination that the effect behaviour editing request is directed to at least one first object in the effect, presenting the effect behavior panel with a first command edit region provided therein, the at least one first object being selected within a first object select box of the first command edit region; and in accordance with a determination that the effect behaviour editing request is directed to the effect, presenting the effect behavior panel with no command edit region or with at least one command edit region corresponding to at least one action command that is under editing or applied for the effect. (Zhang, FIG. 8, [0110]-[0111], teaches displaying the different regions of the editing panel. The combination rationale of claim 1 is incorporated here.)
Regarding claim 3, Riggins in view of Zhang teaches:
The method of claim 1, further comprising: for a second command edit region in the effect behaviour panel, in accordance with a detection of a trigger operation on a second trigger select box of the second command edit region, presenting a list of candidate triggers; in accordance with a detection of a trigger operation on a second object select box of the second command edit region, presenting a list of candidate objects that are selectable for the effect; and in accordance with a detection of a trigger operation on a second action select box of the second command edit region, presenting a list of candidate actions that are selectable for at least one second object selected in the second object select box. (Riggins [0045]-[0048], “ As shown in FIG. 3, an exemplary method involves receiving a selection of a trigger object, as shown in block 310. For example, this may involve identifying that a user has positioned a mouse curser over a button and clicked and held the mouse button. The method shown in FIG. 3 further comprises receiving a selection of a target object, as shown in block 320. For example, this may involve identifying that the user has dragged the mouse curser from the trigger object to a target object and released the mouse button. The method shown in FIG. 3 further comprises providing a menu of available actions for the target object, as shown in block 330. The method shown in FIG. 3 further comprises receiving a selection of an action that will be performed by the target object when a trigger event occurs, as shown in block 340. In certain embodiments, the triggering event may be identified by default, for example, it may default to the down click of a displayed button that is selected as a trigger. The triggering event may also be specified by the user. For example, the method may involve providing a menu of available trigger events and receiving a selection of a trigger event. 
In one embodiment, the trigger object is a video and the trigger action is a cue point in the video.” See claim 1 for the selection box teachings and combination rationale.)
Regarding claim 4, Riggins in view of Zhang teaches:
The method of claim 3, further comprising: in accordance with a detection that a second trigger is selected in the second trigger select box, highlighting the object select box for selection of at least one second object; and in accordance with a detection that at least one second object is selected in the second object select box, highlighting the second action select box for selection of a second action. (See Zhang for visualizing the different trigger select box and object select box. It would have been a design choice to highlight the corresponding item in the interface operation to visualize the user's selection. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Riggins in view of Zhang with this design choice to give users a clear user interface.)
Regarding claim 8, Riggins in view of Zhang teaches:
The method of claim 1, further comprising: for a third command edit region in the effect behaviour panel, in accordance with a detection of a trigger operation on an action add control in the third command edit region, providing a further object select box to select at least one further object in the effect and a further action select box to select a further action to be performed by the at least one further object, wherein the actions selected in the action select box and the further action select box are triggered by a trigger selected in the trigger select box of the third command edit region. (Riggins, FIGS. 2-2f, [0042], “In certain embodiments, during the authoring process, a user may attach actions to the object and assigns the events which trigger them to construct a web of interactivity in which multiple objects interact back and forth with one another and in loops, i.e., a first object performs an action resulting in an event that triggers a second object to perform an action resulting in an event that triggers a third object to perform an action resulting in an event that triggers the first object to perform an action, etc. Such a web of interactivity can grow in complexity, for example, even with a single initial triggering event 254, such an event can result in multiple actions performed by a set of multiple objects 252a-e, as illustrated in FIG. 2f. Certain embodiments also provide for infinite event loop checking to alert users of these and other potentially problematic interactivity relationships.”)
Regarding claim 9, Riggins in view of Zhang teaches:
The method of claim 1, wherein the effect behaviour panel comprises a command add control, the method further comprising: in accordance with a detection of a trigger operation on the command add control, adding a command edit region in the effect behavior panel. (Riggins [0042], “In certain embodiments, during the authoring process, a user may attach actions to the object and assigns the events which trigger them to construct a web of interactivity in which multiple objects interact back and forth with one another and in loops, i.e., a first object performs an action resulting in an event that triggers a second object to perform an action resulting in an event that triggers a third object to perform an action resulting in an event that triggers the first object to perform an action, etc. Such a web of interactivity can grow in complexity, for example, even with a single initial triggering event 254, such an event can result in multiple actions performed by a set of multiple objects 252a-e, as illustrated in FIG. 2f. Certain embodiments also provide for infinite event loop checking to alert users of these and other potentially problematic interactivity relationships.”)
Regarding claim 10, Riggins in view of Zhang teaches:
The method of claim 9, wherein adding a command edit region in the effect behavior panel comprises: in accordance with a determination that at least one previous command edit region has been presented in the effect behaviour panel, adding the command edit region on top of the at least one previous command edit region. (Riggins, FIG. 2F and [0042], teaches a chain of objects and events. When one event is triggered, it leads to the next (child) round of object and event editing regions. However, Riggins does not explicitly teach the layout of the child editing region relative to the parent editing region. On the other hand, it would have been a design choice to add the child editing region on top of the parent editing region. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Riggins in view of Zhang with this design choice to present both editing regions on the interface, which allows users to easily visualize the editing choices and the relationship between the edited objects and events.)
Regarding claim 11, Riggins in view of Zhang teaches:
The method of claim 1, further comprising: in accordance with a detection of a trigger operation on a preview control while the effect behaviour panel is provided, presenting a preview of the effect with the target action command applied; (Riggins [0029], “To preview the interaction, the user can select the preview button 104 from the user interface as shown in FIG. 1m. Upon selection of the preview button, a preview panel 146 is presented displaying the button 150 and the rectangle 148 in its initial position”) and in accordance with a detection of an exit operation on the preview, presenting the effect behaviour panel. (Riggins [0080], “When a user elects to publish a piece of media, a publishing module, shown as an application builder 26 in FIG. 3, is called. The publishing module, while depicted as a portion of the front end 20 in FIG. 3, may be implemented as a client of the data engine 44 on the back end 40. In certain embodiments, the publishing module walks through the objects and other components defined on the editing interface 22.”)
Regarding claim 12, Riggins in view of Zhang teaches:
The method of claim 1, further comprising: in accordance with a detection of a script generation request of the effect, generating an effect file for the effect, the effect file at least comprising separate node scripts corresponding to the target object, the target action, and the target trigger applied in the effect. (Riggins [0080], “The publishing component may use a portion of a declarative file, for example an MXML file in the Adobe.RTM. Flash.RTM. environment, that describes the objects and the object attribute values. A similar task is performed for each action. The publishing component may also include declarative handler tags inside of the components. The publishing component may use object relational mapping to extract the data from the database. The result is a declarative lay out of all of the components and edits that the user has specified in their project. The implementation details of the actions may be pulled from a subclass, for example, from a class library.” [0084], “An exemplary XML fragment comprises: the ActionScript of the class; a list of all editable properties, along with their types and a default value that should be set on a new instance of this type; a list of available triggers (trigger events) that this object can respond to; a list of available actions that this object supports; for each action type, a list of parameters that the action can take and the type and default values of these action parameters; and for each component type and action type, additional information for publishing the media, for example, information for .swf generation (such as a generic identifier to be used in an MXML file). Alternative embodiments may of course not use an XML fragment or use an XML fragment with differing content.”)
Regarding claim 13, Riggins in view of Zhang teaches:
An electronic device, comprising: at least one processing unit; and at least one memory coupled to the at least one processing unit and storing instructions executable by the at least one processing unit, the instructions, upon execution by the at least one processing unit, causing the electronic device to perform actions (Riggins [0049], “FIG. 4 is a system diagram illustrating an application environment according to certain embodiments. Other embodiments may be utilized. The environment 1 shown in FIG. 1 comprises a computing device 10 that comprises a processor 12 and memory 14. As is known to one of skill in the art, an application may be resident in any suitable computer-readable medium and execute on any suitable processor. For example, the device 10 shown may comprise a computer-readable medium such as a random access memory (RAM) 14 coupled to a processor 12 that executes computer-executable program instructions stored in memory 14. Such processor(s) 12 may comprise a microprocessor, an ASIC, a state machine, or other processor, and can be any of a number of computer processors. Such processors (14) comprise, or may be in communication with a computer-readable medium which stores instructions that, when executed by the processor, cause the processor to perform the steps described herein.”) The rest of claim 13 recites limitations similar to those of claim 1 and is thus rejected accordingly.
Claims 14-16 and 19 recite limitations similar to those of claims 2-4 and 8, respectively, and are thus rejected accordingly.
Claim 20 recites limitations similar to those of claim 13 and is thus rejected accordingly.
Claims 5-6 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Riggins in view of Zhang and further in view of Jang (US 2017/0270967 A1).
Regarding claim 5, Riggins in view of Zhang teaches:
The method of claim 3, further comprising:
However, Riggins in view of Zhang does not explicitly teach, but Jang teaches:
detecting that a plurality of second objects are selected in the second object select box ([0198], “As described above, when a plurality of check boxes are selected by the user, a plurality of partial images corresponding to the selected check boxes may be edited at the same time. In some cases, when all checks boxes are selected, the same edit effect may be given to all partial images.”), wherein an order of the plurality of second objects is adjusted through a drag operation or a select-to-slide operation. ([0202], “the controller 180 may change the order of a partial image according to a drag gesture.”)
Riggins in view of Zhang teaches a user interface. Jang teaches allowing users to finely control the interface.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Riggins in view of Zhang with the specific teachings of Jang to provide a flexible and user-friendly user interface.
Regarding claim 6, Riggins in view of Zhang and Jang teaches:
The method of claim 3, wherein presenting a list of candidate actions comprises: in accordance with a detection that the plurality of second objects are selected in the second object select box and in accordance with a detection of a trigger operation on the second action select box, presenting at least one switching action that is selectable for the plurality of second objects, a switching action specifying an action switching template for the plurality of second objects. (Riggins [0029] teaches that when a preview operation is selected, the button 150 is presented: “To preview the interaction, the user can select the preview button 104 from the user interface as shown in FIG. 1m. Upon selection of the preview button, a preview panel 146 is presented displaying the button 150 and the rectangle 148 in its initial position. The user is able preview the media being developed as if the media were playing. If the user mouse clicks on the button 150, the user preview panel 146 shows the rectangle 146 moving to its ending position as shown in FIGS. 1n and 1o.” Here the switching action is defined, per the Specification at [0049], as “If such a switching action is selected for the plurality of objects, then a speed adjustment control may be provided for the user to adjust an action speed corresponding to that switching action, e.g., to control the object appearing or disappearing speed.” Riggins teaches selecting a single object, e.g., 148 in FIG. 1n. However, Jang teaches selecting a plurality of objects; see claim 5. The combination rationale of claim 5 is incorporated here.)
Claims 17-18 recite limitations similar to those of claims 5-6, respectively, and are thus rejected accordingly.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Riggins in view of Zhang and Jang and further in view of Zhao (US 2019/0155487 A1).
Regarding claim 7, Riggins in view of Zhang and Jang teaches:
The method of claim 6, further comprising: in accordance with a detection that a target switching action of the at least one switching action is selected for the plurality of second objects, (Riggins [0029] teaches that when a preview operation is selected, the button 150 is presented: “To preview the interaction, the user can select the preview button 104 from the user interface as shown in FIG. 1m. Upon selection of the preview button, a preview panel 146 is presented displaying the button 150 and the rectangle 148 in its initial position. The user is able preview the media being developed as if the media were playing. If the user mouse clicks on the button 150, the user preview panel 146 shows the rectangle 146 moving to its ending position as shown in FIGS. 1n and 1o.”)
However, Riggins in view of Zhang and Jang does not, but Zhao teaches:
providing a speed adjustment control to adjust an action speed corresponding to the target switching action. ([0108], “In some embodiments, a sliding up operation or a sliding down operation on the touch screen can be used to adjust the moving speed of moving objects.”)
Riggins in view of Zhang and Jang teaches a switching operation that moves an object. Zhao teaches a control to adjust the speed of the movement.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Riggins in view of Zhang and Jang with the specific teachings of Zhao to provide a more flexible user interface.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YANNA WU whose telephone number is (571)270-0725. The examiner can normally be reached Monday-Thursday 8:00-5:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington, can be reached at 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YANNA WU/Primary Examiner, Art Unit 2615