DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “an obtaining unit configured to obtain” and “a display unit configured to display” in claim 12.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Regarding claims 17-20, the limitation “the motion parameter comprises any one or more of an average motion velocity, an average trajectory direction, an average device orientation, a velocity cumulative variation parameter and a directional cumulative variation parameter” uses the conjunctive word “and”; the limitation is therefore interpreted as “the motion parameter comprises any one or more of an average motion velocity, any one or more of an average trajectory direction, any one or more of an average device orientation, any one or more of a velocity cumulative variation parameter and any one or more of a directional cumulative variation parameter”. See SuperGuide Corp. v. DirecTV Enters., Inc., 358 F.3d 870, 69 U.S.P.Q.2d 1865 (Fed. Cir. 2004).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 12 and 14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Regarding claim 12, the language of the claim, when taken as a whole, raises questions as to whether the claim falls within any of the statutory categories of invention. In particular, the language raises questions as to whether the claim is directed merely to an abstract idea that is not tied to a technological art, environment, or machine that would result in a practical application producing a concrete, useful, and tangible result, so as to form the basis of statutory subject matter under 35 U.S.C. 101. The word "apparatus" (found solely in the preamble) does not inherently mean that the claim is directed to a machine. Only if at least one of the claimed elements of the system is a physical part of a device can the system as claimed constitute a device, or a combination of devices, that is a machine within the meaning of 35 U.S.C. 101. Specifically, claim 12 recites an obtaining unit and a display unit, and Applicant’s disclosure states, in at least paragraph [0129], that the claimed “units” can be implemented solely in software. This permits an interpretation in which the “units” are implemented entirely in software. Under that interpretation, the claimed elements are software per se, which fails to fall within a statutory category of invention and necessitates the rejection of claim 12.
Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter because the claims recite, “A computer-readable storage medium, on which a computer program is stored …” (see lines 1-2 of claim 14, for example). The specification in paragraph [0120] describes a non-transitory computer readable medium, and further in paragraph [0122] describes a computer-readable storage medium and a computer-readable signal medium. However, the specification does not explicitly exclude the computer-readable storage medium from comprising propagating signals. Therefore, the broadest reasonable interpretation of “computer-readable storage medium” could cover transitory propagating signals, which are non-statutory, and the claim is rejected under 35 U.S.C. 101 as covering non-statutory subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-2, 4-6 and 12-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over De Ridder et al. (US 10,984,574, hereinafter De Ridder).
Regarding claim 1, De Ridder teaches an animation display method (abstract), characterized in that the method comprises:
obtaining a trajectory parameter set (first/starting position and orientation of the mobile device at the first/starting position when the animator Stephen starts moving the mobile computing device in the real-world environment) corresponding to a target trajectory segment (path corresponding to the movement of the mobile phone), wherein the target trajectory segment is a trajectory segment passed by a terminal device during a motion process (path followed by the movement of Stephen’s mobile phone after Stephen touches the screen or a button to indicate to the mobile phone that he wants to start animating the scene; col. 5 lines 46-63: An animator Stephen, is animating a virtual object in an AR scene, but is uncertain which virtual object is optimal and does not have the necessary skill to operate a desktop animation platform. Stephen uses the AR animation generation system to select a virtual object in the AR scene displayed by the mobile phone. Stephen touches the screen or a button to indicate to the mobile phone that he wants to start animating the scene. Stephen then moves around the real-world environment with the mobile phone. The AR animation generation system animates the selected virtual object in the AR scene and tracks Stephen's movement of the mobile phone. The AR animation generation system animates the virtual object along a corresponding path based on the movement speed and direction of Stephen. The AR animation generation system then saves the animation data including all parameters of both Stephen's inputs and the virtual object profile information; col. 6 line 67-col. 7 line 4: The position/location tracker 108 includes hardware and/or software configured to track and calculate a position and orientation of the mobile computing system 100 in real-world environment; col. 7 lines 49-55: the animation generator 106 receives position information from the position/location tracker 108. For instance, the animation generator 106 may receive mobile device parameters 126, examples of which are: location data such as height, geographic location (x. y, z), acceleration data, relationships of objects and virtual objects in the AR scene; col. 7 lines 60-67: the animation generator 106 may determine a first position of the mobile computing system 100. In some cases, the first position of the mobile computing system 100 may be recorded when an animation related user input 132 is received indicating the start of an animation. The animation generator 106 records the mobile device parameters from position/location tracker 108 from the start of the animation until a user input to stop the animation is received; col. 8 lines 10-18: A brief example of the animation generator is that the mobile computing system is located at a position (X1, Y1, Z1) with an orientation (O1) when the animation related user input 132 to start animation is received. The user 130 also selects a virtual object in the AR content. The position/location tracker 108 monitors movement (e.g., X2, Y2, Z2) or change in orientation (02) of the mobile computing system 100 (e.g., the user 130 moves or changes orientation of the mobile computing system 100); col. 11 lines 18-33: At 208, the method 200 involves determining initial parameters for the mobile computing system. For instance, the animation generator 106 may determine the start or initial position of the mobile computing system in the real-world environment. 
This may include determining a starting location of the mobile computing system in the real-world environment (e.g., (X.sub.2, Y.sub.2, Z.sub.2) in the coordinate system of the real-world environment) and determining an orientation (i.e., pitch, yaw, roll, and associated degrees between 0-359) for the mobile computing system in the real-world environment. The information in 208 may be determined from various sensors of the mobile computing system, such as an accelerometer, global positioning system, mobile triangulation transmitter, near field communication location, or other methods of position determination of mobile computing systems), the trajectory parameter set comprises a plurality of groups of trajectory parameters (first/starting position, end/second, and orientation of the mobile device for a first movement arc or path followed by the mobile device and first/starting position, end/second, and orientation of the mobile device for a second movement arc or path followed by the mobile device is functionally analogous to a plurality of groups of trajectory parameters), and the trajectory parameters are collected by the terminal device during motion of the target trajectory segment (while Stephen moves around the real-world environment with the mobile phone, the mobile device parameters (such as movement speed, movement direction, location, orientation, etc.) are received corresponding to the path followed by the mobile phone; col. 3 lines 34-45: The mobile device determines a starting location of the virtual object in the AR scene (e.g., (X.sub.1, Y.sub.1, Z.sub.1) and a starting location of the mobile device and computes a correspondence between the respective starting positions. When an animator selects an animation mode, the mobile device associates the movement or motions to animation parameters (i.e., positions, accelerations, interaction with surfaces) for the virtual object being animated. The mobile device captures motions and iteratively computes the corresponding new positions of the virtual object by using the correspondence between the respective starting positions; col. 4 lines 26-35: a change in the position of a physical object (e.g., a mobile computing system) in the real-world environment can include a change only in the location of the physical object within the real-world environment without a change in the orientation of the physical object, a change only in the orientation of the physical object in the real-world environment without a change in the location of the physical object, or a change both in the location and orientation of the physical object in the real-world environment; col. 5 lines 46-63: An animator Stephen, is animating a virtual object in an AR scene, but is uncertain which virtual object is optimal and does not have the necessary skill to operate a desktop animation platform. Stephen uses the AR animation generation system to select a virtual object in the AR scene displayed by the mobile phone. Stephen touches the screen or a button to indicate to the mobile phone that he wants to start animating the scene. Stephen then moves around the real-world environment with the mobile phone. The AR animation generation system animates the selected virtual object in the AR scene and tracks Stephen's movement of the mobile phone. The AR animation generation system animates the virtual object along a corresponding path based on the movement speed and direction of Stephen. 
The AR animation generation system then saves the animation data including all parameters of both Stephen's inputs and the virtual object profile information; col. 7 lines 49-55: the animation generator 106 receives position information from the position/location tracker 108. For instance, the animation generator 106 may receive mobile device parameters 126, examples of which are: location data such as height, geographic location (x. y, z), acceleration data, relationships of objects and virtual objects in the AR scene; col. 7 lines 60-67: the animation generator 106 may determine a first position of the mobile computing system 100. In some cases, the first position of the mobile computing system 100 may be recorded when an animation related user input 132 is received indicating the start of an animation. The animation generator 106 records the mobile device parameters from position/location tracker 108 from the start of the animation until a user input to stop the animation is received; col. 8 lines 10-18: A brief example of the animation generator is that the mobile computing system is located at a position (X1, Y1, Z1) with an orientation (O1) when the animation related user input 132 to start animation is received. The user 130 also selects a virtual object in the AR content. The position/location tracker 108 monitors movement (e.g., X2, Y2, Z2) or change in orientation (02) of the mobile computing system 100 (e.g., the user 130 moves or changes orientation of the mobile computing system 100); col. 10 lines 16-30: an animation profile for a particular type of virtual object stores information that specifies parameters for controlling the behavior of the object, such as how the virtual object is displayed at particular positions, how the virtual object is animated from one position to the next position, how the virtual object reacts to its surroundings (e.g., to other planes and objects in an AR scene), and the like. The animation profile is used to influence the animation of a virtual object so as to make the animation more realistic and consequently of a higher quality. For example, the animation profile for an airplane may include information such as: information identifying an arc or path (e.g., a 3-D spline) to be followed by the airplane during a takeoff; information identifying an arc or path (e.g., a 3-D spline) to be followed by the airplane during a landing; claim 1: detecting a change in position in the real-world environment of the mobile computing system from a first position to a second position; claim 5: detecting the change in the position of the mobile computing system comprises detecting a change in orientation of the mobile computing system from a first orientation to a second orientation);
displaying an animation corresponding to a target model (animating a selected virtual object) at a display position corresponding to the target trajectory segment on the terminal device (fig. 2 step 214: output animation data representing the recording of the animation; col. 5 lines 46-63: An animator Stephen, is animating a virtual object in an AR scene, but is uncertain which virtual object is optimal and does not have the necessary skill to operate a desktop animation platform. Stephen uses the AR animation generation system to select a virtual object in the AR scene displayed by the mobile phone. Stephen touches the screen or a button to indicate to the mobile phone that he wants to start animating the scene. Stephen then moves around the real-world environment with the mobile phone. The AR animation generation system animates the selected virtual object in the AR scene and tracks Stephen's movement of the mobile phone. The AR animation generation system animates the virtual object along a corresponding path based on the movement speed and direction of Stephen. The AR animation generation system then saves the animation data including all parameters of both Stephen's inputs and the virtual object profile information; col. 6 lines 7-9: the virtual object 136 is displayed with AR content 134 to a user 130 via a display 116; col. 8 lines 5-28: the animation generator 106 combines movement, location, and orientation data of the mobile computing system 100 with the virtual object profile data 124 to produce a virtual object animated by the mobile device parameters 126. A brief example of the animation generator is that the mobile computing system is located at a position (X1, Y1, Z1) with an orientation (O1) when the animation related user input 132 to start animation is received. The user 130 also selects a virtual object in the AR content. The position/location tracker 108 monitors movement (e.g., X2, Y2, Z2) or change in orientation (02) of the mobile computing system 100 (e.g., the user 130 moves or changes orientation of the mobile computing system 100). The animation generator 106 converts the change in location or orientation of the mobile computing system 100 to the virtual object 136 and applies adjustments based on the virtual object profile data 124 or animation parameters 122 as described with regard to FIGS. 2-4A-F. In some examples, the animation generator may also use machine learning/artificial intelligence (ML/AI) controller 113 to adjust the animation based on training data for a specific object (e.g., an airplane, a car, a helicopter, etc.) that have motions associated with the object in addition to the position in space; col. 11 line 43-col. 12 line 11: the animation generator 106 generates a transform equation that can subsequently be used to, given a change in position or a new position of the mobile computing system in the real-world environment, determine a new position for the virtual object in the AR scene. In a non-limiting example, a first position of a virtual object (e.g., an airplane) in an AR scene coordinate system may be determined to be location:(1, 1, 3) with an orientation: (roll,) 090°. An initial position for the mobile computing system in a real-world environment coordinate system may be determined to be location:(2, 4, 6) with an orientation: (roll, 120°). 
In certain embodiments, the animation generator 106 “ties” these two positions together such that a change in the position of the mobile computing system in real-world environment can be translated to new position for the virtual object in the AR scene. The “tying” may be represented by a transform equation. For example, if subsequently the animation generator 106 detects a change in position of the mobile computing system to a new location:(4, 5, 3) and a new orientation:(roll, 180°), the animation generator 106 uses the link or transform equation to compute a new corresponding position for the virtual object in the AR scene. For example, in this example, the change in location and orientation of the mobile computing system in the real-world environment is (2, 1, −3) with an orientation change of (roll, 60°). The animation generator 106 may compute the corresponding position of the virtual object in the AR scene is at location:(3, 2, 0) in the AR scene with a subsequent orientation of (roll, 140°). One or more of parameters may be used to constrain the motion of orientation change for the virtual object within the AR scene during the animation. For example, if the virtual object being animated is a car, the car will change orientations in a curvilinear manner (e.g., cannot turn via rolling, yawing, etc.)), the motion parameter (when the animator Stephen moves the mobile device from a first position with first orientation to a second position with a second orientation, parameters such the movement speed or acceleration of the mobile device and movement direction of the mobile device, etc. are determined, and such parameters are functionally analogous to the motion parameters) is determined according to the trajectory parameter set (col. 3 lines 32-41: the animator uses a mobile device that presents an AR scene including virtual objects for animation and real world content. The mobile device determines a starting location of the virtual object in the AR scene (e.g., (X.sub.1, Y.sub.1, Z.sub.1) and a starting location of the mobile device and computes a correspondence between the respective starting positions. When an animator selects an animation mode, the mobile device associates the movement or motions to animation parameters (i.e., positions, accelerations, interaction with surfaces) for the virtual object being animated; col. 5 lines 46-63: Stephen then moves around the real-world environment with the mobile phone. The AR animation generation system animates the selected virtual object in the AR scene and tracks Stephen's movement of the mobile phone. The AR animation generation system animates the virtual object along a corresponding path based on the movement speed and direction of Stephen), and the motion parameter reflects a motion state feature of the terminal device on the target trajectory segment (movement speed and movement direction of the mobile device during the movement of the mobile device from a first position with a first orientation to a second position with a second orientation corresponding to the movement path of the mobile device; col. 5 lines 46-63: An animator Stephen, is animating a virtual object in an AR scene, but is uncertain which virtual object is optimal and does not have the necessary skill to operate a desktop animation platform. Stephen uses the AR animation generation system to select a virtual object in the AR scene displayed by the mobile phone. 
Stephen touches the screen or a button to indicate to the mobile phone that he wants to start animating the scene. Stephen then moves around the real-world environment with the mobile phone. The AR animation generation system animates the selected virtual object in the AR scene and tracks Stephen's movement of the mobile phone. The AR animation generation system animates the virtual object along a corresponding path based on the movement speed and direction of Stephen. The AR animation generation system then saves the animation data including all parameters of both Stephen's inputs and the virtual object profile information).
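For illustration only (this sketch forms no part of the claim mapping and is not asserted to be De Ridder’s actual transform equation), the quoted passages describe “tying” the device pose to the virtual object pose so that a change in the device’s location and orientation yields a corresponding change for the virtual object. A minimal reading of that idea, assuming purely additive composition of the location delta and the roll delta, with function and variable names invented for this sketch:

```python
# Minimal sketch (assumed additive composition; De Ridder's transform may differ):
# the change in the mobile device's pose is applied to the virtual object's
# starting pose to obtain the object's new pose in the AR scene.

def apply_device_delta(obj_start, dev_start, dev_now):
    """Each pose is an (x, y, z, roll_degrees) tuple."""
    dx, dy, dz = (dev_now[i] - dev_start[i] for i in range(3))
    droll = dev_now[3] - dev_start[3]
    return (obj_start[0] + dx,
            obj_start[1] + dy,
            obj_start[2] + dz,
            (obj_start[3] + droll) % 360)

# Hypothetical usage: the device moves (+1, +2, 0) and rolls +30 degrees, so a
# virtual object starting at (5, 5, 0) with roll 0 ends at (6, 7, 0) with roll 30.
print(apply_device_delta((5.0, 5.0, 0.0, 0.0), (0.0, 0.0, 0.0, 0.0), (1.0, 2.0, 0.0, 30.0)))
```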
De Ridder does not explicitly teach that the target model is determined according to a motion parameter.
However, De Ridder teaches to select the virtual object (the target model) based on animation parameters or animation profile associated with the movement/motion of the mobile device (movement speed or acceleration and movement direction of the mobile device when the animator Stephen moves the mobile device from a first position with first orientation to a second position with a second orientation is functionally analogous to the motion parameter; a virtual object or a virtual profile related to the virtual object can be selected based on the movement/motion parameters such as orientation, acceleration, interaction with surfaces that are associated with the animation parameters; col. 3 lines 32-41: the animator uses a mobile device that presents an AR scene including virtual objects for animation and real world content. The mobile device determines a starting location of the virtual object in the AR scene (e.g., (X.sub.1, Y.sub.1, Z.sub.1) and a starting location of the mobile device and computes a correspondence between the respective starting positions. When an animator selects an animation mode, the mobile device associates the movement or motions to animation parameters (i.e., positions, accelerations, interaction with surfaces) for the virtual object being animated; col. 4 lines 44-45: The user can also select an animation profile to be used for animating the virtual object; col. 5 lines 46-63: An animator Stephen, is animating a virtual object in an AR scene, but is uncertain which virtual object is optimal and does not have the necessary skill to operate a desktop animation platform. Stephen uses the AR animation generation system to select a virtual object in the AR scene displayed by the mobile phone. Stephen touches the screen or a button to indicate to the mobile phone that he wants to start animating the scene. Stephen then moves around the real-world environment with the mobile phone. The AR animation generation system animates the selected virtual object in the AR scene and tracks Stephen's movement of the mobile phone. The AR animation generation system animates the virtual object along a corresponding path based on the movement speed and direction of Stephen. The AR animation generation system then saves the animation data including all parameters of both Stephen's inputs and the virtual object profile information; col. 
8 lines 5-9: the animation generator 106 combines movement, location, and orientation data of the mobile computing system 100 with the virtual object profile data 124 to produce a virtual object animated by the mobile device parameters 126; claim 1: animating the virtual object using the mobile computing system and based on the animation profile, wherein the animating comprises: detecting a change in position in the real-world environment of the mobile computing system from a first position to a second position; claim 2: wherein identifying the animation profile comprises: selecting the animation profile from a plurality of animation profiles based upon an object type of the virtual object; and wherein the animation profile includes a set of one or more parameters that influence animation of the virtual object, the set of one or more parameters including at least one of an orientation change parameter that constrains a motion of the virtual object during an orientation change of the virtual object, a movement parameter that constrains the motion of the virtual object, an alignment parameter that constrains an alignment measured from a first end of the virtual object to a second end of the virtual object, or a planar surface interaction parameter that constrains the motion of the virtual object as it interacts with one or more planar surfaces). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify De Ridder’s process to enable a selection of the virtual object to be animated based on motion/movement data as taught because such a process increases realism of the animation (col. 12 line 27).
Regarding claim 2, De Ridder teaches the method according to claim 1, characterized in that the method is applied to the terminal device, and further comprises:
obtaining a first instruction triggered by a user and starting collecting the trajectory parameters (start/first position data related to the movement of the mobile device is collected after the user signals to start the animation and recording; col. 4 lines 45-47: The user can then signal the start of the animation (e.g., by selecting a “record” button), which triggers recording of the animation; col. 4 lines 50-56: upon receiving a signal from the user to start the generation of the animation, the AR animation generation system determines a first position of the mobile computing system in a real-world coordinate system and also determines a first position of the virtual object in the AR scene to be animated in the virtual scene's coordinate system; col. 7 lines 62-67: the first position of the mobile computing system 100 may be recorded when an animation related user input 132 is received indicating the start of an animation. The animation generator 106 records the mobile device parameters from position/location tracker 108 from the start of the animation until a user input to stop the animation is received);
obtaining a second instruction triggered by the user and stopping collecting the trajectory parameters (end/second position data related to the movement of the mobile device is collected after the user signals to stop or end the animation and recording; col. 1 line 65-col. 2 line 1: the mobile device by detecting a change in position in the real-world environment of the mobile computing system from a first position to a second position; col. 4 lines 47-49: The animation and the recording continues until a signal is received from the user to stop or end the animation; col. 7 lines 62-67: the first position of the mobile computing system 100 may be recorded when an animation related user input 132 is received indicating the start of an animation. The animation generator 106 records the mobile device parameters from position/location tracker 108 from the start of the animation until a user input to stop the animation is received; col. 12 lines 39-41: The process of animating the virtual object may continue until a signal is received to end the animation);
wherein the first instruction is used to indicate a starting point of the target trajectory segment (col. 3 lines 34-45: The mobile device determines a starting location of the virtual object in the AR scene (e.g., (X.sub.1, Y.sub.1, Z.sub.1) and a starting location of the mobile device and computes a correspondence between the respective starting positions. When an animator selects an animation mode, the mobile device associates the movement or motions to animation parameters (i.e., positions, accelerations, interaction with surfaces) for the virtual object being animated. The mobile device captures motions and iteratively computes the corresponding new positions of the virtual object by using the correspondence between the respective starting positions; col. 5 lines 46-63: An animator Stephen, is animating a virtual object in an AR scene, but is uncertain which virtual object is optimal and does not have the necessary skill to operate a desktop animation platform. Stephen uses the AR animation generation system to select a virtual object in the AR scene displayed by the mobile phone. Stephen touches the screen or a button to indicate to the mobile phone that he wants to start animating the scene. Stephen then moves around the real-world environment with the mobile phone. The AR animation generation system animates the selected virtual object in the AR scene and tracks Stephen's movement of the mobile phone. The AR animation generation system animates the virtual object along a corresponding path based on the movement speed and direction of Stephen. The AR animation generation system then saves the animation data including all parameters of both Stephen's inputs and the virtual object profile information), and the second instruction is used to indicate an end point of the target trajectory segment (col. 1 line 65-col. 2 line 1: the mobile device by detecting a change in position in the real-world environment of the mobile computing system from a first position to a second position; col. 3 lines 34-45: The mobile device determines a starting location of the virtual object in the AR scene (e.g., (X.sub.1, Y.sub.1, Z.sub.1) and a starting location of the mobile device and computes a correspondence between the respective starting positions. When an animator selects an animation mode, the mobile device associates the movement or motions to animation parameters (i.e., positions, accelerations, interaction with surfaces) for the virtual object being animated. The mobile device captures motions and iteratively computes the corresponding new positions of the virtual object by using the correspondence between the respective starting positions; col. 4 lines 47-49: The animation and the recording continues until a signal is received from the user to stop or end the animation; col. 7 lines 62-67: the first position of the mobile computing system 100 may be recorded when an animation related user input 132 is received indicating the start of an animation. The animation generator 106 records the mobile device parameters from position/location tracker 108 from the start of the animation until a user input to stop the animation is received; col. 12 lines 39-41: The process of animating the virtual object may continue until a signal is received to end the animation).
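For illustration only (the names below are hypothetical and the sketch is not taken from De Ridder), the start/stop behavior relied on above can be pictured as follows: trajectory samples are collected only between a first instruction (start) and a second instruction (stop), so the first and last collected samples mark the starting point and end point of the target trajectory segment:

```python
import time

class TrajectoryRecorder:
    """Illustrative sketch: collects trajectory samples only between a start
    instruction and a stop instruction (cf. De Ridder's start/stop animation inputs)."""

    def __init__(self):
        self.samples = []
        self.recording = False

    def on_first_instruction(self):
        # user signals the start of collection (e.g., a "record" input)
        self.samples = []
        self.recording = True

    def on_sensor_sample(self, location, orientation):
        if self.recording:  # collect trajectory parameters only while recording
            self.samples.append((time.time(), location, orientation))

    def on_second_instruction(self):
        # user signals the stop/end of collection
        self.recording = False
        if not self.samples:
            return None, None
        # the first and last samples delimit the target trajectory segment
        return self.samples[0], self.samples[-1]
```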
Regarding claim 4, De Ridder teaches the method according to claim 1, characterized in that after obtaining the trajectory parameter set corresponding to the target trajectory segment, the method further comprises: performing data cleaning (smoothing) on each group of trajectory parameter (smoothing spline for each movement arc or path followed by the mobile device) in the trajectory parameter set (col. 10 lines 25-32: For example, the animation profile for an airplane may include information such as: information identifying an arc or path (e.g., a 3-D spline) to be followed by the airplane during a takeoff; information identifying an arc or path (e.g., a 3-D spline) to be followed by the airplane during a landing; a smoothing spline used for smoothing a movement path for the virtual object; col. 10 lines 16-32: an animation profile for a particular type of virtual object stores information that specifies parameters for controlling the behavior of the object, such as how the virtual object is displayed at particular positions, how the virtual object is animated from one position to the next position, how the virtual object reacts to its surroundings (e.g., to other planes and objects in an AR scene), and the like. The animation profile is used to influence the animation of a virtual object so as to make the animation more realistic and consequently of a higher quality. For example, the animation profile for an airplane may include information such as: information identifying an arc or path (e.g., a 3-D spline) to be followed by the airplane during a takeoff; information identifying an arc or path (e.g., a 3-D spline) to be followed by the airplane during a landing; a smoothing spline used for smoothing a movement path for the virtual object).
Regarding claim 5, De Ridder teaches the method according to claim 1, characterized in that before displaying the animation corresponding to the target model at the display position corresponding to the target trajectory segment on the terminal device (as shown in step 214 of fig. 2, displaying step of the animation is the last step in the process and therefore the other steps are inherently performed by this step), the method further comprises: determining the display position corresponding to the target trajectory segment according to the trajectory parameter set (col. 4 lines 50-63: upon receiving a signal from the user to start the generation of the animation, the AR animation generation system determines a first position of the mobile computing system in a real-world coordinate system and also determines a first position of the virtual object in the AR scene to be animated in the virtual scene's coordinate system. Based upon the first positions of the virtual object and the mobile computing system, the AR animation generation system then creates a link between the virtual object and the mobile computing system such that a change in position of the mobile computing system in the real-world environment (i.e., real-world coordinate system) causes a corresponding change in the position of the virtual object in the AR scene (i.e., within the AR scene coordinate system); col. 10 line 49-col. 11 line 11: a first position of a virtual object (e.g., an airplane) in an AR scene coordinate system may be determined to be location:(1, 1, 3) with an orientation: (roll,) 090°. An initial position for the mobile computing system in a real-world environment coordinate system may be determined to be location:(2, 4, 6) with an orientation: (roll, 120°). In certain embodiments, the animation generator 106 “ties” these two positions together such that a change in the position of the mobile computing system in real-world environment can be translated to new position for the virtual object in the AR scene. The “tying” may be represented by a transform equation. For example, if subsequently the animation generator 106 detects a change in position of the mobile computing system to a new location:(4, 5, 3) and a new orientation:(roll, 180°), the animation generator 106 uses the link or transform equation to compute a new corresponding position for the virtual object in the AR scene. For example, in this example, the change in location and orientation of the mobile computing system in the real-world environment is (2, 1, −3) with an orientation change of (roll, 60°). The animation generator 106 may compute the corresponding position of the virtual object in the AR scene is at location:(3, 2, 0) in the AR scene with a subsequent orientation of (roll, 140°). One or more of parameters may be used to constrain the motion of orientation change for the virtual object within the AR scene during the animation. For example, if the virtual object being animated is a car, the car will change orientations in a curvilinear manner (e.g., cannot turn via rolling, yawing, etc.); col. 12 lines 53-62: In certain embodiments, the animation generator 106 is configured to monitor changes to the position of the mobile computing system in the real-world environment. Given a change, the animation generator 106 uses the link generated in 210 to determine corresponding positions (locations and/or orientations) for the virtual object in the AR scene. 
The animation generator 106 then may send the updated virtual object position information to the AR framework 104, which causes the AR scene to be updated to display the virtual object in the new position); determine an animation effect (controlling behavior of the animated object) corresponding to the target model (behavior of the animated object is based on the selected animation profile; col. 10 lines 16-44: an animation profile for a particular type of virtual object stores information that specifies parameters for controlling the behavior of the object, such as how the virtual object is displayed at particular positions, how the virtual object is animated from one position to the next position, how the virtual object reacts to its surroundings (e.g., to other planes and objects in an AR scene), and the like. The animation profile is used to influence the animation of a virtual object so as to make the animation more realistic and consequently of a higher quality. For example, the animation profile for an airplane may include information such as: information identifying an arc or path (e.g., a 3-D spline) to be followed by the airplane during a takeoff; information identifying an arc or path (e.g., a 3-D spline) to be followed by the airplane during a landing; a smoothing spline used for smoothing a movement path for the virtual object; information relating to the airplane's interaction with a particular planar surface (e.g., the planar surface such as the ground on which the airplane sits prior to the start of the animation); information regarding the plane's orientation when making a sharp right or left turn; information regarding the plane's orientation when making a sharp right or left turn, etc. An animation profile for a particular virtual object thus contains information and parameters related to the motion and orientation of the particular virtual object so as to make the movement of the virtual object during the animation more realistic. Further details of the virtual object profiles are described with regard to FIG. 4A-F).
Regarding claim 6, De Ridder teaches the method according to claim 1, characterized in that before displaying the animation corresponding to the target model at the display position corresponding to the target trajectory segment on the terminal device (as shown in step 214 of fig. 2, displaying step of the animation is the last step in the process and therefore the other steps are inherently performed by this step), the method further comprises: determining the motion parameter (when the animator Stephen moves the mobile device from a first position with first orientation to a second position with a second orientation, parameters such the movement speed or acceleration of the mobile device and movement direction of the mobile device, etc. are determined, and such parameters are functionally analogous to the motion parameters) according to the trajectory parameter set (col. 3 lines 32-41: the animator uses a mobile device that presents an AR scene including virtual objects for animation and real world content. The mobile device determines a starting location of the virtual object in the AR scene (e.g., (X.sub.1, Y.sub.1, Z.sub.1) and a starting location of the mobile device and computes a correspondence between the respective starting positions. When an animator selects an animation mode, the mobile device associates the movement or motions to animation parameters (i.e., positions, accelerations, interaction with surfaces) for the virtual object being animated; col. 5 lines 46-63: Stephen then moves around the real-world environment with the mobile phone. The AR animation generation system animates the selected virtual object in the AR scene and tracks Stephen's movement of the mobile phone. The AR animation generation system animates the virtual object along a corresponding path based on the movement speed and direction of Stephen); selecting a target model corresponding to the motion parameter from a model base (selecting the virtual object based on animation parameters or animation profile associated with the movement/motion of the mobile device; movement speed or acceleration and movement direction of the mobile device when the animator Stephen moves the mobile device from a first position with first orientation to a second position with a second orientation is functionally analogous to the motion parameter; a virtual object or a virtual profile related to the virtual object can be selected based on the movement/motion parameters such as orientation, acceleration, interaction with surfaces that are associated with the animation parameters; col. 3 lines 32-41: the animator uses a mobile device that presents an AR scene including virtual objects for animation and real world content. The mobile device determines a starting location of the virtual object in the AR scene (e.g., (X.sub.1, Y.sub.1, Z.sub.1) and a starting location of the mobile device and computes a correspondence between the respective starting positions. When an animator selects an animation mode, the mobile device associates the movement or motions to animation parameters (i.e., positions, accelerations, interaction with surfaces) for the virtual object being animated; col. 4 lines 44-45: The user can also select an animation profile to be used for animating the virtual object; col. 5 lines 46-63: An animator Stephen, is animating a virtual object in an AR scene, but is uncertain which virtual object is optimal and does not have the necessary skill to operate a desktop animation platform. 
Stephen uses the AR animation generation system to select a virtual object in the AR scene displayed by the mobile phone. Stephen touches the screen or a button to indicate to the mobile phone that he wants to start animating the scene. Stephen then moves around the real-world environment with the mobile phone. The AR animation generation system animates the selected virtual object in the AR scene and tracks Stephen's movement of the mobile phone. The AR animation generation system animates the virtual object along a corresponding path based on the movement speed and direction of Stephen. The AR animation generation system then saves the animation data including all parameters of both Stephen's inputs and the virtual object profile information; col. 8 lines 5-9: the animation generator 106 combines movement, location, and orientation data of the mobile computing system 100 with the virtual object profile data 124 to produce a virtual object animated by the mobile device parameters 126; claim 1: animating the virtual object using the mobile computing system and based on the animation profile, wherein the animating comprises: detecting a change in position in the real-world environment of the mobile computing system from a first position to a second position; claim 2: wherein identifying the animation profile comprises: selecting the animation profile from a plurality of animation profiles based upon an object type of the virtual object; and wherein the animation profile includes a set of one or more parameters that influence animation of the virtual object, the set of one or more parameters including at least one of an orientation change parameter that constrains a motion of the virtual object during an orientation change of the virtual object, a movement parameter that constrains the motion of the virtual object, an alignment parameter that constrains an alignment measured from a first end of the virtual object to a second end of the virtual object, or a planar surface interaction parameter that constrains the motion of the virtual object as it interacts with one or more planar surfaces), wherein the model base comprises a plurality of types of models (col. 10 lines 3-15: multiple animation profiles may be provided for different types of virtual objects. A user may select a particular animation profile that is configured for the virtual object to be animated. For example, a first animation profile may be selected for animating an automobile, a second animation profile may be selected for animating a helicopter, a third animation profile may be selected for animating an airplane, a fourth animation profile may be selected for animating a human, a fifth animation profile may be selected for animating a rocket, and the like. Accordingly, if the virtual object selected in 203 is a helicopter, then the user may select the helicopter animation profile for animating the virtual object; col. 10 line 58- col. 11 line 1: the animation generator 106 may determine which animation profile to be used for a selected virtual object based upon the characteristics of the virtual object. For example, each virtual object may be represented by a model that stores information identifying characteristics of the virtual object. This model information may identify the type of the virtual object (e.g., whether it is a car, an airplane, etc.). 
The animation generator 106 may use this information to automatically identify an animation profile to be used for the virtual object from among the multiple available animation profiles; claim 2: selecting the animation profile from a plurality of animation profiles based upon an object type of the virtual object).
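For illustration only, a sketch of the “model base” reading applied in the mapping above: a target model is selected from a small base of model types according to motion parameters. The thresholds, model names, and parameter choices are hypothetical and are not taken from De Ridder or from the claims:

```python
# Illustrative sketch: choose a target model from a model base according to
# motion parameters (cf. De Ridder's selection of an animation profile by object
# type and motion characteristics). Thresholds and model names are hypothetical.

MODEL_BASE = {
    "horizontal_movement": "car",
    "vertical_movement": "helicopter",
    "accelerated_movement": "airplane",
}

def select_target_model(avg_speed, vertical_component,
                        speed_threshold=2.0, rising_threshold=0.5):
    if avg_speed > speed_threshold:
        return MODEL_BASE["accelerated_movement"]
    if vertical_component > rising_threshold:
        return MODEL_BASE["vertical_movement"]
    return MODEL_BASE["horizontal_movement"]

print(select_target_model(avg_speed=1.2, vertical_component=0.8))  # -> "helicopter"
```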
Claims 12-14 are similar in scope to claim 1, and therefore the examiner provides similar rationale to reject these claims. Moreover, De Ridder teaches an animation display apparatus (mobile computing system 100, fig. 1) comprising an obtaining unit (this element is interpreted under 35 USC 112(f) as hardware; animation generator 106 stored in memory 504 and executed by processor 502 as shown in fig. 1 and fig. 5) and a display unit (this element is interpreted under 35 USC 112(f) as hardware; display unit 116/display device 512, fig. 1 and fig. 5). De Ridder further teaches an electronic device (mobile computing system 500, fig. 5) comprising one or more processors (processor 502, fig. 5) and a memory (memory 504, fig. 5). De Ridder also teaches a computer-readable medium (col. 19 lines 57-62).
Regarding claim 15, De Ridder teaches the method according to claim 4, wherein a five-point smoothing method is used to clean the trajectory parameters (smoothing spline for each movement arc or path followed by the mobile device; col. 10 lines 25-32: For example, the animation profile for an airplane may include information such as: information identifying an arc or path (e.g., a 3-D spline) to be followed by the airplane during a takeoff; information identifying an arc or path (e.g., a 3-D spline) to be followed by the airplane during a landing; a smoothing spline used for smoothing a movement path for the virtual object; col. 10 lines 16-32: an animation profile for a particular type of virtual object stores information that specifies parameters for controlling the behavior of the object, such as how the virtual object is displayed at particular positions, how the virtual object is animated from one position to the next position, how the virtual object reacts to its surroundings (e.g., to other planes and objects in an AR scene), and the like. The animation profile is used to influence the animation of a virtual object so as to make the animation more realistic and consequently of a higher quality. For example, the animation profile for an airplane may include information such as: information identifying an arc or path (e.g., a 3-D spline) to be followed by the airplane during a takeoff; information identifying an arc or path (e.g., a 3-D spline) to be followed by the airplane during a landing; a smoothing spline used for smoothing a movement path for the virtual object). However, it would have been prima facie obvious to use a five-point smoothing method rather than a spline smoothing method to clean the trajectory parameters. Whether the method used to clean the trajectory parameters is five-point smoothing or spline smoothing is solely a matter of aesthetic design choice, and would not be sufficient to distinguish over the prior art. See MPEP 2144.04.
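For context, a “five-point smoothing method” is conventionally understood as a centered five-sample moving average; the sketch below illustrates that conventional reading only and is not a reproduction of the claimed method or of De Ridder’s smoothing spline:

```python
def five_point_smooth(values):
    """Centered five-point moving average; endpoints are left unsmoothed.
    Illustrative only -- one conventional reading of 'five-point smoothing'."""
    smoothed = list(values)
    for i in range(2, len(values) - 2):
        smoothed[i] = sum(values[i - 2:i + 3]) / 5.0
    return smoothed

# Hypothetical usage on one coordinate of a collected trajectory:
xs = [0.0, 1.0, 2.2, 2.8, 4.1, 5.0, 5.9, 7.2]
print(five_point_smooth(xs))
```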
Allowable Subject Matter
Claims 3, 7-11, and 16-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 3, De Ridder teaches the method according to claim 2, characterized in that the terminal device comprises a gyroscope and an accelerometer (col. 6 lines 30-37), the trajectory parameter set comprises a first trajectory parameter (first/starting position, end/second, and orientation of the mobile device for a first movement arc or path followed by the mobile device; col. 5 lines 46-63; col. 6 line 67-col. 7 line 4; col. 7 lines 60-67; col. 8 lines 10-18; col. 11 lines 18-33), the first trajectory parameter comprises a first location (end/second position for a first movement arc or path followed by the mobile device), and the starting collecting the trajectory parameters comprises: obtaining a second location (start/first position for a first movement arc or path followed by the mobile device), wherein the second location is a location of the terminal device when the user triggers the first instruction (start/first position data related to the movement of the mobile device is collected after the user signals to start the animation and recording; col. 4 lines 45-47: The user can then signal the start of the animation (e.g., by selecting a “record” button), which triggers recording of the animation; col. 4 lines 50-56: upon receiving a signal from the user to start the generation of the animation, the AR animation generation system determines a first position of the mobile computing system in a real-world coordinate system and also determines a first position of the virtual object in the AR scene to be animated in the virtual scene's coordinate system; col. 7 lines 62-67: the first position of the mobile computing system 100 may be recorded when an animation related user input 132 is received indicating the start of an animation. The animation generator 106 records the mobile device parameters from position/location tracker 108 from the start of the animation until a user input to stop the animation is received); determining motion information of the terminal device according to the gyroscope and the accelerometer (col. 6 lines 30-37 and col. 11 lines 18-33).
However, none of the cited prior art references of record teach, either individually or in combination, the limitation “determining the first location according to the second location and the motion information”.
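By way of a hypothetical sketch only (the names, the fixed sampling interval, and the assumption that acceleration has already been rotated into the world frame using a gyroscope-derived orientation are illustrative assumptions, not the applicant's disclosed implementation), the allowable limitation can be read on a dead-reckoning style computation of the first location from the second location and the sensed motion:

```python
import numpy as np

def estimate_first_location(second_location, world_accels, dt):
    """Integrate world-frame accelerations (derived from the accelerometer and
    the gyroscope-based orientation) twice, starting from the second location,
    to estimate the first location reached at the end of the motion."""
    position = np.asarray(second_location, dtype=float)
    velocity = np.zeros(3)
    for accel in world_accels:  # one sample per time step of length dt
        velocity += np.asarray(accel, dtype=float) * dt
        position += velocity * dt
    return position
```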
Regarding claim 7, none of the cited prior art references of record teach, either individually or in combination, the limitation “the trajectory parameters comprise a collection time and a collection location for the terminal device to collect the trajectory parameters, the motion parameter comprises an average motion velocity, and the average motion velocity reflects an average velocity of the terminal device on the target trajectory segment; the determining the motion parameter according to the trajectory parameter set comprises: dividing the target trajectory segment into a plurality of sub-trajectory segments according to the collection time of the trajectory parameters; calculating a moving velocity of the terminal device on each sub-trajectory segment according to a collection location and a collection time of a starting point and a collection location and a collection time of an end point of the each sub-trajectory segment; determining the average motion velocity according to the moving velocity of each sub-trajectory segment among the plurality of sub-trajectory segments; the selecting the target model corresponding to the motion parameter from the model base comprises: selecting, in response that the average motion velocity is greater than an average velocity threshold, an accelerated moving model from the model base as the target model corresponding to the target trajectory segment”.
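For illustration only, the quoted limitation maps onto a computation along the following lines (the sample layout, the threshold value, the treatment of each consecutive sample pair as one sub-trajectory segment, and the model-base keys are assumptions, not taken from the claims or the specification):

```python
def average_motion_velocity(samples):
    """samples: time-ordered (collection_time, (x, y, z)) pairs on the target
    trajectory segment; each consecutive pair is one sub-trajectory segment."""
    speeds = []
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        distance = sum((a - b) ** 2 for a, b in zip(p1, p0)) ** 0.5
        speeds.append(distance / (t1 - t0))
    return sum(speeds) / len(speeds)

def select_velocity_model(samples, model_base, average_velocity_threshold):
    """Pick the accelerated moving model when the average motion velocity
    exceeds the threshold; otherwise keep a default model."""
    if average_motion_velocity(samples) > average_velocity_threshold:
        return model_base["accelerated_moving_model"]
    return model_base["default_model"]
```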
Regarding claims 8-9, none of the cited prior art references of record teach, either individually or in combination, the limitation “the trajectory parameters comprise a collection location for the terminal device to collect the trajectory parameters, the motion parameter comprises an average trajectory direction, and the average trajectory direction is a direction from a starting point of the target trajectory segment to an end point of the target trajectory segment; the determining the motion parameter according to the trajectory parameter set comprises: calculating the average trajectory direction according to a collection location of the starting point of the target trajectory segment and a collection location of the end point of the target trajectory segment; the selecting the target model corresponding to the motion parameter from the model base comprises: selecting, in response that a component of the average trajectory direction in a vertical direction is less than or equal to a rising threshold, a horizontal movement model from the model base as the target model corresponding to the target trajectory segment; selecting, in response that the component of the average trajectory direction in the vertical direction is greater than the rising threshold, a vertical movement model from the model base as the target model corresponding to the target trajectory segment”.
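Again for illustration only (the coordinate convention, the rising threshold, and the model-base keys are assumptions), the quoted direction-based selection could be sketched as:

```python
import numpy as np

def select_direction_model(start_location, end_location, model_base, rising_threshold):
    """The average trajectory direction is the unit vector from the segment's
    starting point to its end point; its vertical (z) component selects
    between the horizontal and vertical movement models."""
    direction = np.asarray(end_location, dtype=float) - np.asarray(start_location, dtype=float)
    direction = direction / np.linalg.norm(direction)
    if direction[2] <= rising_threshold:
        return model_base["horizontal_movement_model"]
    return model_base["vertical_movement_model"]
```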
Regarding claim 10, none of the cited prior art references of record teach, either individually or in combination, the limitation “the trajectory parameters comprise a collection time and a collection location for the terminal device to collect the trajectory parameters, the motion parameter comprises a velocity cumulative variation parameter, and the velocity cumulative variation parameter reflects a velocity fluctuation of the terminal device on the target trajectory segment; the determining the motion parameter according to the trajectory parameter set comprises: calculating a corresponding velocity of the terminal device at each collection location according to the trajectory parameter set; calculating the velocity cumulative variation parameter according to the corresponding velocity of the terminal device at each collection location; the selecting the target model corresponding to the motion parameter from the model base comprises: selecting, in response that the velocity cumulative variation parameter is greater than a velocity fluctuation threshold, a velocity fluctuation model from the model base as the target model corresponding to the target trajectory segment”.
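As a purely illustrative sketch (defining the cumulative variation as a sum of absolute speed changes is an assumption; the claims do not fix the formula):

```python
def velocity_cumulative_variation(samples):
    """samples: time-ordered (collection_time, (x, y, z)) pairs. Compute a
    speed at each collection location, then accumulate the absolute change
    in speed as a measure of velocity fluctuation on the segment."""
    speeds = []
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        distance = sum((a - b) ** 2 for a, b in zip(p1, p0)) ** 0.5
        speeds.append(distance / (t1 - t0))
    return sum(abs(v1 - v0) for v0, v1 in zip(speeds, speeds[1:]))
```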
Regarding claim 11, none of the cited prior art references of record teach, either individually or in combination, the limitation “determining, according to the motion parameter, that the target trajectory segment meets a trajectory generation condition, wherein the trajectory generation condition comprises that the motion state feature of the terminal device remains unchanged”.
Regarding claim 16, none of the cited prior art references of record teach, either individually or in combination, the limitation “the trajectory parameters further comprise a device orientation of the terminal device when the terminal device collects the trajectory parameters, the motion parameter further comprises a directional cumulative variation parameter, and the directional cumulative variation parameter reflects a variation of an orientation of the terminal device on the target trajectory segment; the determining the motion parameter according to the trajectory parameter set comprises: dividing the target trajectory segment into a plurality of sub-trajectory segments according to the trajectory parameter set; calculating a rotation angle and a rotation speed of the terminal device on each sub-trajectory segment according to the device orientation; averaging rotation speeds of the plurality of sub-trajectory segments, so as to obtain the directional cumulative variation parameter; selecting the target model corresponding to the motion parameter from the model base comprises: selecting, in response that the directional cumulative variation parameter is greater than a directional fluctuation threshold, a preset model from the model base as the target model corresponding to the target trajectory segment”.
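For illustration only, assuming device orientations are available as unit quaternions (a representation the claims do not specify) and that each consecutive sample pair is one sub-trajectory segment, the quoted rotation-speed averaging could look like:

```python
import numpy as np

def directional_cumulative_variation(timestamps, orientations):
    """orientations: unit quaternions (w, x, y, z) collected along the target
    trajectory segment. The rotation angle between consecutive samples,
    divided by the elapsed time, gives a rotation speed; the parameter is
    the mean of those rotation speeds."""
    rotation_speeds = []
    for i in range(1, len(orientations)):
        q_prev = np.asarray(orientations[i - 1], dtype=float)
        q_curr = np.asarray(orientations[i], dtype=float)
        # relative rotation angle recovered from the quaternion dot product
        angle = 2.0 * np.arccos(np.clip(abs(np.dot(q_prev, q_curr)), 0.0, 1.0))
        rotation_speeds.append(angle / (timestamps[i] - timestamps[i - 1]))
    return float(np.mean(rotation_speeds))
```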
Regarding claims 17-20, none of the cited prior art references of record teach, either individually or in combination, the limitation “determining the motion parameter according to the trajectory parameter set, wherein the motion parameter comprises any one or more of an average motion velocity, an average trajectory direction, an average device orientation, a velocity cumulative variation parameter and a directional cumulative variation parameter; selecting a target model corresponding to the motion parameter from a model base, wherein the model base comprises a plurality of types of models”.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Gelinske et al. (US 2011/0171612) describes a processor adapted to synchronize a 3-D travel path for said object with said time-referenced object locations and to output a graphical representation of said travel path to said display device, and a display device that renders a 3-D simulation model of said object's movement along said travel path.
Tsai et al. (US 2018/0182162) describes systems and methods for navigating a three-dimensional model using deterministic movement of an electronic device. An electronic device can load and provide an initial display of a three-dimensional model (e.g., of an environment or of an object). As the user moves the electronic device, motion sensing components can detect the device movement and adjust the displayed portion of the three-dimensional model to reflect the movement of the device. By walking with the device in the user's real environment, a user can virtually navigate a representation of a three-dimensional environment. In some embodiments, a user can record an object or environment using an electronic device, and tag the recorded images or video with movement information describing the movement of the device during the recording. The recorded information can then be processed with the movement information to generate a three-dimensional model of the recorded environment or object.
Li (US 2021/0304480) describes determining a target element based on a received first instruction, setting a target state of the target element and/or detecting a target operation on the target element, and obtaining a current state animation parameter of the target element; obtaining a target animation model among pre-stored animation models; and generating animation of the target element based on the target animation model and an initial state animation parameter of the target animation model, where the current state animation parameter is taken as the initial state animation parameter.
Tian et al. (US 2021/0042980) describes a method and an electronic device for displaying animation. The method includes: a display instruction is received, wherein the display instruction is configured to trigger the electronic device to display an animation corresponding to an animation model; a spatial parameter of an image device is obtained, wherein the spatial parameter indicates coordinates in a spatial model; an initial position of the animation model in the spatial model is determined based on the spatial parameter; and the animation at the initial position is displayed based on a skeleton animation, wherein the skeleton animation is generated based on the animation model.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JWALANT B AMIN whose telephone number is (571)272-2455. The examiner can normally be reached Monday-Friday 10am - 6:30pm CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome, can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JWALANT AMIN/Primary Examiner, Art Unit 2612