Prosecution Insights
Last updated: April 19, 2026
Application No. 18/702,866

VIRTUAL AVATAR ANIMATION

Status: Non-Final OA (§103)
Filed: Apr 19, 2024
Examiner: LIU, ZHENGXI
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Soul Machines Limited
OA Round: 1 (Non-Final)
Grant Probability: 64% (Moderate)
OA Rounds: 1-2
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 64% (225 granted / 354 resolved; +1.6% vs TC avg)
Interview Lift: +40.1% allowance rate in resolved cases with an interview versus without
Avg Prosecution: 3y 4m typical timeline; 31 applications currently pending
Career History: 385 total applications across all art units

Statute-Specific Performance

§101: 13.2% (-26.8% vs TC avg)
§103: 61.3% (+21.3% vs TC avg)
§102: 5.1% (-34.9% vs TC avg)
§112: 15.7% (-24.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 354 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Duty to Disclose

MPEP 2001.04 states:

37 C.F.R. 1.56 Duty to disclose information material to patentability. (a) A patent by its very nature is affected with a public interest. The public interest is best served, and the most effective patent examination occurs when, at the time an application is being examined, the Office is aware of and evaluates the teachings of all information material to patentability. Each individual associated with the filing and prosecution of a patent application has a duty of candor and good faith in dealing with the Office, which includes a duty to disclose to the Office all information known to that individual to be material to patentability as defined in this section. The duty to disclose information exists with respect to each pending claim until the claim is cancelled or withdrawn from consideration, or the application becomes abandoned. Information material to the patentability of a claim that is cancelled or withdrawn from consideration need not be submitted if the information is not material to the patentability of any claim remaining under consideration in the application. There is no duty to submit information which is not material to the patentability of any existing claim. The duty to disclose all information known to be material to patentability is deemed to be satisfied if all information known to be material to patentability of any claim issued in a patent was cited by the Office or submitted to the Office in the manner prescribed by §§ 1.97(b)-(d) and 1.98. However, no patent will be granted on an application in connection with which fraud on the Office was practiced or attempted or the duty of disclosure was violated through bad faith or intentional misconduct.

The Office encourages applicants to carefully examine: (1) prior art cited in search reports of a foreign patent office in a counterpart application, and (2) the closest information over which individuals associated with the filing or prosecution of a patent application believe any pending claim patentably defines, to make sure that any material information contained therein is disclosed to the Office.

Claim Objections

Claim 7 is objected to due to minor informalities: the claim recites “the plurality a plurality of motion sources.” For the purposes of compact prosecution, the Examiner is treating the limitation as “the plurality of motion sources.” The claim also recites “motion sources are order by their priority.” For the purposes of compact prosecution, the Examiner is treating the limitation as “motion sources are ordered by their priority.” Corrections are required.

Claim 13 is objected to due to minor informalities: the claim recites “a plurality of blend nodes adjacent . . . to the plurality of blend nodes.” Are the plurality of blend nodes always adjacent to themselves? Clarification from Applicant is required.

Claims 4-15 are objected to under 37 CFR 1.75(c) as being in improper form because a multiple dependent claim should not depend on another multiple dependent claim. See MPEP § 608.01(n). Accordingly, Claims 4-15 do not have to be treated on the merits. The MPEP states, “Furthermore, a multiple dependent claim may not serve as a basis for any other multiple dependent claim, either directly or indirectly. These limitations help to avoid undue confusion in determining how many prior claims are actually referred to in a multiple dependent claim.” MPEP 608.01(n). The MPEP further states, “A multiple dependent claim which depends from another multiple dependent claim should be objected to by using form paragraph 7.45.” MPEP 608.01(n).

Compact Prosecution

Although the Examiner does not have to treat Claims 4-15 on the merits according to MPEP 608.01(n), for the purposes of compact prosecution, the Examiner is treating Claims 4-15 as follows to provide an art rejection and to avoid “undue confusion”: Claims 4-10 and 13-15 are treated as depending only on Claim 1, so that no multiple dependent claim depends on another multiple dependent claim.

With respect to claim interpretation, the Examiner has provided notes regarding “[BRI on the record]” throughout the Office Action, so that the record is clear about the scope of the claimed invention and about the basis for the Examiner’s analyses. A clear record of the claim interpretation can expedite examination by allowing it to focus on Applicant’s inventive concept and its comparison with related prior art. If there are disagreements, Applicant may present an alternative interpretation based on MPEP 2111. The Examiner will adopt Applicant’s interpretation on the record if Applicant’s interpretation is reasonable and/or the arguments are persuasive. Applicant may amend claims relying on the Examiner’s claim interpretation provided on the record.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) use(s) a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a behaviour command system” and “a presentation system” in claim 17.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103

Although the Examiner does not have to treat Claims 4-15 on the merits according to MPEP 608.01(n), for the purposes of compact prosecution, the Examiner is treating Claims 4-15 as follows to provide an art rejection and to avoid “undue confusion”: Claims 4-10 and 13-15 are treated as depending only on Claim 1, so that no multiple dependent claim depends on another multiple dependent claim.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-10, 13, 16-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lowe et al. (US 20100134501 A1).

Regarding Claim 1, Lowe teaches A method for controlling motion of a digital character (“DEFINING AN ANIMATION OF A VIRTUAL OBJECT WITHIN A VIRTUAL WORLD” Lowe Title.), comprising:

i. receiving one or more behaviour commands ([BRI on the record] With respect to “behaviour commands,” the Examiner is reading the limitation to mean orders that direct behavior. The specification, as shown below, does not provide clear guidance. According to Webster’s dictionary, a command means “an order given.”

[0063] A Behaviour Command 102 describes and specifies behaviour actions along with supplementary motion and composition configurations. For example, a Behaviour Command 102 may be provided from a system such as that described in NZ Provisional Patent 770193 Autonomous Animation In Embodied Agents, which discloses a system for automatically injecting Behaviour Commands into avatar conversation based on semantic information. Spec. ¶ 63.

[Mapping Analysis] Lowe teaches inputting/receiving behaviour commands through a graphical user interface to design an animation sequence, stating “As discussed below, in preferred embodiments of the invention the user interface 600 uses certain graphical representations, and implements certain tools, to make it easier for the user to construct complex, but flexible graphs that are easy for the user to comprehend (i.e. understand the intended purpose and nature of the animation being represented by the graph displayed in the graph definition area 602). This is described in more detail below. However, it will be appreciated that these features of the user interface are not essential, but merely serve to make the user interface more intuitive and more flexible for the user to construct or design a graph to represent an animation.” Lowe ¶ 158.

Lowe provides examples of receiving the behaviour commands from a user, stating “In order to add a node 606 of a particular node type to the graph being displayed in the graph definition area 602, the user may select the corresponding node type indicator 614. For example, the user may use the mouse 126 and double-click on the node type indicator 614. The user interface 600 is then arranged to add a node 606 of that node type to the graph and display that node 606 within the graph definition area 602.” Lowe ¶ 115. These commands are behaviour commands, because they direct animation behavior. Lowe provides some examples of commanded behaviors, stating “Examples of animation sources include, for example, data representing a person running, data representing a person walking, data representing a person jumping, etc., where these various sets of data may be used to set the attributes of the object 200 so that the object 200 (when displayed on the monitor 120) appears to be performing the respective animation/movement (e.g. running, walking, jumping, etc.).” Lowe ¶ 71.);

ii. translating the one or more behaviour commands into a time sequence of channel parameters for one or more animation channels ([BRI on the record] With respect to “animation channels,” the Examiner is reading the limitation to mean an animation sequence of a target object.

[0040] In an example, further comprising a conductor system configured for determining a target state based on behaviour commands and for adjusting an animation channel of the multi-channel state machines based on the target state. [0072] . . . Each channel of the Conductor 133 may correspond to a Motion Target 115. The internal state of the Animation Planning 111 may be driven by an internal Clock 132 without an external trigger. The trigger of execution of the Animation Planning 111 system by means of an internal Clock 132 is referred to as a Time-Step. Spec. ¶¶ 40, 72.

[Mapping Analysis] [Image: media_image1.png]

Lowe teaches an animation channel, mapped to a channel of the animation sequence associated with an “animation source,” translated/constructed based on user commands, stating “Consequently, the animation source represents how the object 200 moves from time-point to time-point in the sequence of time-points. For ease of explanation, the rest of this description shall refer to frames (and a sequence of frames) as an example of time-points (and the sequence of time-points for the animation).” Lowe ¶ 70; see fig. 21. As illustrated in Lowe fig. 21, for example, there are animation channels based on “AnimSource1” and “AnimSource2.”

Lowe teaches channel parameters, mapped to parameters that include “weight control parameters,” stating “The ‘Blend’ node represents an operation that interpolates between the two sets of transform data output by the animation nodes depending on the value of a weight control parameter (the control parameter sub-graph is not shown in FIG. 19), and outputs the results to an output node, and, when event data is being used, also represents an operation that interpolates between the respective sets of event data output by the two animation nodes and outputs that result to the output node.” Lowe ¶ 194. There is a time sequence of the channel parameters: as illustrated in figs. 21 and 23, there could be a sequence of “weight control parameter” values for the sequence of blend nodes along the blend tree.

Here, the animation graph components, e.g., channel parameters and animation channels, are translated from the commands received from the user through user interfaces. Lowe ¶¶ 71, 115, 158.);

iii. receiving one or more motion sources (Lowe teaches motion sources, mapped to the disclosed “animation sources,” stating “Consequently, the animation source represents how the object 200 moves from time-point to time-point in the sequence of time-points. For ease of explanation, the rest of this description shall refer to frames (and a sequence of frames) as an example of time-points (and the sequence of time-points for the animation).” Lowe ¶ 70. Lowe teaches storing motion sources, stating “An animation source may be stored, for example, on the storage medium 104. The storage medium 104 may store a database comprising one or more animation sources for one or more types of object 200.” Lowe ¶ 73. In order to store the “animation sources,” it is implicit that the “animation sources” have been received. For the purposes of compact prosecution, at the end of the rejection analysis of Claim 1, the Examiner conducts an obviousness analysis to strengthen the implicit teaching.);

iv. determining one or more motion parameters by applying the time sequence of channel parameters to corresponding blend nodes in a blend tree based on the motion sources ([BRI on the record] With respect to “blend node,” it is a term of art, and the Examiner is reading the limitation to mean a node where animations are blended.

[Mapping Analysis] [Image: media_image2.png] and fig. 21 as shown.

Lowe teaches blend nodes in a blend tree, stating “The benefit of functional connections 608 is illustrated with reference to FIG. 19, which illustrates the operation of a blend node (used in forming a "blend tree").” Lowe ¶ 194. The blend nodes are based on, as shown in Lowe fig. 19, animation sources, mapped to motion sources. Lowe provides examples based on fig. 19, stating “The function of these nodes shown in FIG. 19 will be explained later, but in summary, the ‘AnimSource1’ and ‘AnimSource2’ nodes are animation source nodes that represent respective operations that output respective data that define respective poses for the object 200 at a given point time. This data includes transform data (to represent the position and orientation of the object 200 and its parts) as well as event data (to represent events, such as footfalls, that may need to be synchronised when blending transform data from the two animation source nodes together). The ‘Blend’ node represents an operation that interpolates between the two sets of transform data output by the animation nodes depending on the value of a weight control parameter (the control parameter sub-graph is not shown in FIG. 19), and outputs the results to an output node, and, when event data is being used, also represents an operation that interpolates between the respective sets of event data output by the two animation nodes and outputs that result to the output node. For example, the AnimSource1 node may output data representing a person walking whilst the AnimSource2 node may output data representing that person running. The transform data output by these two nodes represents the respective walking and running poses for the person, whilst the event data output by these two nodes represents the respective timing of when footfalls occur during the walking or running. The Blend node may then interpolate, or combine, the transform data from the two animation sources and may interpolate, or combine, the event data from the two animation sources, with the interpolated transform data representing the pose of a person jogging and the interpolated event data representing when footfalls occur during the jogging animation. The weight control parameter sets how the interpolation occurs (e.g. how much the walking data and how much running data contribute to the final output). FIG. 20 shows how this graph might look after the functional inputs and outputs of FIG. 19 are expanded to expose their underlying components. As can be seen, functional connections have been used in order to represent passing time data in the right-to-left direction, with other data being passed in the left-to-right direction.” Lowe ¶ 194.

The motion parameters are mapped to the “interpolated transform data representing the pose of a person jogging” based on the “output data representing a person walking” by the “AnimSource1 node” and the “output data representing that person running” by the “AnimSource2 node.” Channel parameters are mapped to parameters that include the “weight control parameter” for the respective animation source. There is a time sequence of the channel parameters: as illustrated in figs. 21 and 23, there could be a sequence of “weight control parameter” values for the sequence of blend nodes along the blend tree.); and

v. controlling motion of the digital character based on the one or more motion parameters (In the example above, the motion of the digital character could be mapped to the disclosed “jogging.” Lowe ¶ 194.).

All limitations of Claim 1 are taught by Lowe. However, the teaching relies on an implicit teaching. The Examiner has explained that Lowe implicitly teaches receiving one or more motion sources (Lowe teaches motion sources, mapped to the disclosed “animation sources,” stating “Consequently, the animation source represents how the object 200 moves from time-point to time-point in the sequence of time-points. For ease of explanation, the rest of this description shall refer to frames (and a sequence of frames) as an example of time-points (and the sequence of time-points for the animation).” Lowe ¶ 70. Lowe teaches storing motion sources, stating “An animation source may be stored, for example, on the storage medium 104. The storage medium 104 may store a database comprising one or more animation sources for one or more types of object 200.” Lowe ¶ 73. In order to store the “animation sources,” it is implicit that the “animation sources” have been received.).

For the purposes of compact prosecution, the Examiner conducts an obviousness analysis. Lowe teaches the use of a graphical user interface, stating “In order to add a node 606 of a particular node type to the graph being displayed in the graph definition area 602, the user may select the corresponding node type indicator 614. For example, the user may use the mouse 126 and double-click on the node type indicator 614. The user interface 600 is then arranged to add a node 606 of that node type to the graph and display that node 606 within the graph definition area 602” (Lowe ¶ 115), and Lowe teaches inputting behaviour commands through a graphical user interface to design an animation sequence, stating “However, it will be appreciated that these features of the user interface are not essential, but merely serve to make the user interface more intuitive and more flexible for the user to construct or design a graph to represent an animation” (Lowe ¶ 158).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lowe’s teaching of receiving user input through user interfaces with Lowe’s teaching of “animation sources.” One of ordinary skill in the art would be motivated to easily enter needed data with the guidance from user interfaces. Lowe states, “The method may comprise providing a warning to the user when the user attempts to perform an action via the user interface that is contrary to the one or more rules. The method may comprise disallowing the action that is contrary to the one or more rules.” Lowe ¶ 22.

Regarding Claim 2, Lowe further teaches The method of claim 1, wherein the one or more behaviour commands describe and specify behaviour actions along with supplementary motion and composition configurations (Lowe teaches behaviour actions, stating “Examples of animation sources include, for example, data representing a person running, data representing a person walking, data representing a person jumping, etc., where these various sets of data may be used to set the attributes of the object 200 so that the object 200 (when displayed on the monitor 120) appears to be performing the respective animation/movement (e.g. running, walking, jumping, etc.).” Lowe ¶ 71. Lowe teaches supplementary motion, stating “The use of footstep markup event data ensures this happens even when the two animations to be blended have significantly different timing, such as when blending a ‘walk’ animation with a ‘limp’ animation.” Lowe ¶ 211. Here, the “limp” animation could be viewed as “supplementary motion” with respect to the “walk” animation. Lowe teaches composition configurations to define animation objects and their virtual world, stating “Embodiments of the invention are concerned with animations and, in particular, how to define an animation of a virtual object (or a character) that is located (or resides) within a virtual world (or environment). FIG. 2 schematically illustrates three example virtual objects 200 within a virtual world 202. The virtual objects 200 shown in FIG. 2 (and the rest of this application) represent human beings, . . .. Thus, the virtual world 202 may represent a real-world location, a fictitious location, a building, the outdoors, underwater, in the sky, a scenario/location in a game or in a movie, etc.” Lowe ¶ 68.).

Regarding Claim 4, Lowe further teaches The method of claim 1 wherein receiving motion sources comprises receiving a pose of the digital character that is interpolated or extrapolated in the time-domain (Lowe provides examples based on fig. 19, stating “The Blend node may then interpolate, or combine, the transform data from the two animation sources and may interpolate, or combine, the event data from the two animation sources, with the interpolated transform data representing the pose of a person jogging and the interpolated event data representing when footfalls occur during the jogging animation. The weight control parameter sets how the interpolation occurs (e.g. how much the walking data and how much running data contribute to the final output).” Lowe ¶ 194.).

Regarding Claim 5, Lowe teaches The method of claim 1 wherein receiving motion sources comprises receiving a pose of the digital character that is neither interpolated or extrapolated (Lowe provides examples based on fig. 19, stating “For example, the AnimSource1 node may output data representing a person walking whilst the AnimSource2 node may output data representing that person running. The transform data output by these two nodes represents the respective walking and running poses for the person, whilst the event data output by these two nodes represents the respective timing of when footfalls occur during the walking or running.” Lowe ¶ 194. Here, the walking or running pose is not interpolated or extrapolated.).

Regarding Claim 6, Lowe teaches The method of claim 1 wherein the motion sources comprise a high priority motion source and a low priority motion source (Lowe provides examples based on fig. 19, stating “The Blend node may then interpolate, or combine, the transform data from the two animation sources and may interpolate, or combine, the event data from the two animation sources, with the interpolated transform data representing the pose of a person jogging and the interpolated event data representing when footfalls occur during the jogging animation. The weight control parameter sets how the interpolation occurs (e.g. how much the walking data and how much running data contribute to the final output).” Lowe ¶ 194. The animation source given more weight is considered to be a high priority motion source; and the animation source given less weight is considered to be a low priority motion source.), wherein the high priority motion source superimposes over a low priority motion source in determining one or more motion parameters (Lowe ¶ 194. The animation source given more weight is considered to be a high priority motion source and will superimpose over the animation source given less weight, because it will have more influence than the other motion source. The motion parameters are mapped to the “interpolated transform data representing the pose of a person jogging” based on the “output data representing a person walking” by the “AnimSource1 node” and the “output data representing that person running” by the “AnimSource2 node.”).

Regarding Claim 7, Lowe teaches The method of claim 1 wherein the motion sources comprise a plurality of motion sources, each motion source having a priority (Lowe provides examples based on fig. 19, stating “The Blend node may then interpolate, or combine, the transform data from the two animation sources and may interpolate, or combine, the event data from the two animation sources, with the interpolated transform data representing the pose of a person jogging and the interpolated event data representing when footfalls occur during the jogging animation. The weight control parameter sets how the interpolation occurs (e.g. how much the walking data and how much running data contribute to the final output).” Lowe ¶ 194. The animation source given more weight is considered to be a high priority motion source; and the animation source given less weight is considered to be a low priority motion source.), wherein the plurality a plurality of motion sources are order by their priority and a high priority motion source superimposes over a low priority motion source in determining one or more motion parameters (Lowe ¶ 194. The animation source given more weight is considered to be a high priority motion source and will superimpose over the animation source given less weight, because it will have more influence than the other motion source. The animation sources are ordered by their weight/priority/influence. The motion parameters are mapped to the “interpolated transform data representing the pose of a person jogging” based on the “output data representing a person walking” by the “AnimSource1 node” and the “output data representing that person running” by the “AnimSource2 node.”).
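[Editor's illustration] To make the Claims 6-7 mapping concrete: the Examiner reads a motion source's blend weight as its priority, so a higher-weight source contributes more to (superimposes over) the blended motion parameters. Below is a minimal sketch of that reading, assuming simplified per-joint pose vectors; the names, data structures, and weights are hypothetical and are not taken from Lowe or the application.

```python
# Hypothetical sketch: weight-based blending where the higher-priority
# motion source receives the larger weight, per the Examiner's reading
# of Lowe's Blend node (Lowe para. 194).

from dataclasses import dataclass

@dataclass
class MotionSource:
    name: str
    priority: int        # higher value = higher priority (assumption)
    pose: list[float]    # simplified per-joint transform data

def blend_by_priority(sources: list[MotionSource],
                      weights: dict[str, float]) -> list[float]:
    """Blend poses so higher-priority (higher-weight) sources dominate."""
    # Sources are processed in priority order to mirror the claim's
    # "ordered by their priority"; the weighted sum itself is
    # order-independent, so priority takes effect via the larger weight.
    ordered = sorted(sources, key=lambda s: s.priority, reverse=True)
    total = sum(weights[s.name] for s in ordered)
    out = [0.0] * len(ordered[0].pose)
    for s in ordered:
        w = weights[s.name] / total  # normalized contribution
        for j, value in enumerate(s.pose):
            out[j] += w * value
    return out

walk = MotionSource("walk", priority=1, pose=[0.0, 10.0])
run = MotionSource("run", priority=2, pose=[5.0, 30.0])
# The higher-priority "run" source gets the larger weight, so the blended
# motion parameters land closer to the running pose.
print(blend_by_priority([walk, run], {"walk": 0.25, "run": 0.75}))
```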
Regarding Claim 8, Lowe teaches The method of any one of claims 1 wherein the time sequence of channel parameters is based on incremental time steps (Lowe teaches incremental time steps, each of which is mapped to a step/duration between the disclosed “time-points,” stating “Consequently, the animation source represents how the object 200 moves from time-point to time-point in the sequence of time-points. For ease of explanation, the rest of this description shall refer to frames (and a sequence of frames) as an example of time-points (and the sequence of time-points for the animation).” Lowe ¶ 70; see fig. 21.).

Regarding Claim 9, Lowe teaches The method of claim 1 wherein the time sequence of channel parameters is configured to cause changes to channel states in an animation planner (Lowe teaches channel parameters, mapped to parameters that include “weight control parameters,” stating “The ‘Blend’ node represents an operation that interpolates between the two sets of transform data output by the animation nodes depending on the value of a weight control parameter (the control parameter sub-graph is not shown in FIG. 19), and outputs the results to an output node, and, when event data is being used, also represents an operation that interpolates between the respective sets of event data output by the two animation nodes and outputs that result to the output node.” Lowe ¶ 194. There is a time sequence of the channel parameters: as illustrated in figs. 21 and 23, there could be a sequence of “weight control parameter” values for the sequence of blend nodes along the blend tree. The output results after the blending are mapped to channel states, and Lowe provides an example, stating “For example, the AnimSource1 node may output data representing a person walking whilst the AnimSource2 node may output data representing that person running. The transform data output by these two nodes represents the respective walking and running poses for the person, whilst the event data output by these two nodes represents the respective timing of when footfalls occur during the walking or running. The Blend node may then interpolate, or combine, the transform data from the two animation sources and may interpolate, or combine, the event data from the two animation sources, with the interpolated transform data representing the pose of a person jogging and the interpolated event data representing when footfalls occur during the jogging animation.” Lowe ¶ 194.).

Regarding Claim 10, Lowe teaches The method of claim 1 (“FIG. 22 illustrates an example two-level blend tree.” Lowe ¶ 47. “FIG. 23 illustrates the directed acyclic graph corresponding to the graph of FIG. 22.” Lowe ¶ 48. [Image: media_image3.png] Note, at the end of the directed graph, the output is computed.).

Regarding Claim 13, Lowe teaches The method of claim 1 wherein the blend tree comprises a single root blend node, a plurality of blend nodes adjacent to the root blend node or to the plurality of blend nodes, and a plurality of source nodes adjacent to the plurality of blend nodes ([Image: media_image4.png] The single root blend node, in the example presented in fig. 22, could be mapped to the root Blend3. The blend nodes, in the example presented in fig. 22, could be mapped to Blend2 and the non-root Blend3. The source nodes, in the example presented in fig. 22, could be mapped to AnimSource1-4.).
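[Editor's illustration] The Claim 10/13 mapping treats Lowe fig. 22 as a two-level blend tree: a root blend node over intermediate blend nodes over animation-source leaves, with each blend node's weight acting as the channel parameter applied to it. Here is a minimal sketch of evaluating such a tree; the node names echo the labels the Office Action reads onto fig. 22, but the structure, weights, and pose values are hypothetical.

```python
# Hypothetical sketch: recursive post-order evaluation of a two-level
# blend tree in the shape the Office Action reads onto Lowe fig. 22.

from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    pose: list[float] | None = None          # set only on source (leaf) nodes
    children: list["Node"] = field(default_factory=list)

def evaluate(node: Node, weights: dict[str, float]) -> list[float]:
    """Evaluate children first, then blend at this node (post-order)."""
    if node.pose is not None:                 # source node: return its pose
        return node.pose
    a, b = (evaluate(c, weights) for c in node.children)
    w = weights[node.name]                    # this blend node's channel parameter
    # Two-input linear interpolation, as in Lowe's Blend node (para. 194).
    return [(1.0 - w) * x + w * y for x, y in zip(a, b)]

tree = Node("Blend3_root", children=[
    Node("Blend2", children=[Node("AnimSource1", pose=[0.0, 1.0]),
                             Node("AnimSource2", pose=[1.0, 0.0])]),
    Node("Blend3", children=[Node("AnimSource3", pose=[2.0, 2.0]),
                             Node("AnimSource4", pose=[0.0, 0.0])]),
])
# A time sequence of channel parameters would supply a new weights dict
# at each incremental time step (cf. Claims 8-9).
print(evaluate(tree, {"Blend2": 0.5, "Blend3": 0.25, "Blend3_root": 0.5}))
```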
Regarding Claim 16, Lowe teaches A non-transitory computer-readable medium comprising instructions which, when executed by a computer (Lowe ¶ 24), cause the computer to perform the method of claim 1 (see Claim 1 rejection analysis).

Regarding Claim 17, Lowe teaches A system for controlling motion of a digital character according to the method of claim 1 (see Claim 1 rejection for detailed analysis; also Lowe ¶ 24), comprising:

i. a behaviour command system for selecting and providing behaviour commands (Lowe teaches inputting behaviour commands through a graphical user interface to design an animation sequence, stating “As discussed below, in preferred embodiments of the invention the user interface 600 uses certain graphical representations, and implements certain tools, to make it easier for the user to construct complex, but flexible graphs that are easy for the user to comprehend (i.e. understand the intended purpose and nature of the animation being represented by the graph displayed in the graph definition area 602). This is described in more detail below. However, it will be appreciated that these features of the user interface are not essential, but merely serve to make the user interface more intuitive and more flexible for the user to construct or design a graph to represent an animation.” Lowe ¶ 158. Lowe provides examples for selecting behaviour commands, stating “In order to add a node 606 of a particular node type to the graph being displayed in the graph definition area 602, the user may select the corresponding node type indicator 614. For example, the user may use the mouse 126 and double-click on the node type indicator 614. The user interface 600 is then arranged to add a node 606 of that node type to the graph and display that node 606 within the graph definition area 602.” Lowe ¶ 115. These commands are behaviour commands, because they direct animation behavior. Lowe provides some examples of commanded behaviors, stating “Examples of animation sources include, for example, data representing a person running, data representing a person walking, data representing a person jumping, etc., where these various sets of data may be used to set the attributes of the object 200 so that the object 200 (when displayed on the monitor 120) appears to be performing the respective animation/movement (e.g. running, walking, jumping, etc.).” Lowe ¶ 71.);

ii. an animation framework for receiving and processing behaviour commands and motion sources (“A method of defining an animation of a virtual object within a virtual world, . . ..” Lowe Abstract. “However, it will be appreciated that these features of the user interface are not essential, but merely serve to make the user interface more intuitive and more flexible for the user to construct or design a graph to represent an animation.” Lowe ¶ 158. Lowe fig. 21 shows multiple motion sources that include “AnimSource1” and “AnimSource2.”);

iii. animation channels configurable based on channel parameters ([Image: media_image1.png] Lowe teaches an animation channel, mapped to a channel of the animation sequence associated with an “animation source,” translated/constructed based on user commands, stating “Consequently, the animation source represents how the object 200 moves from time-point to time-point in the sequence of time-points. For ease of explanation, the rest of this description shall refer to frames (and a sequence of frames) as an example of time-points (and the sequence of time-points for the animation).” Lowe ¶ 70; see fig. 21. As illustrated in Lowe fig. 21, for example, there are animation channels based on “AnimSource1” and “AnimSource2.” Lowe teaches channel parameters, mapped to parameters that include “weight control parameters,” stating “The ‘Blend’ node represents an operation that interpolates between the two sets of transform data output by the animation nodes depending on the value of a weight control parameter (the control parameter sub-graph is not shown in FIG. 19), and outputs the results to an output node, and, when event data is being used, also represents an operation that interpolates between the respective sets of event data output by the two animation nodes and outputs that result to the output node.” Lowe ¶ 194.);

iv. a controller for controlling motion of the digital character based on motion parameters (Lowe provides examples based on fig. 19, stating “The function of these nodes shown in FIG. 19 will be explained later, but in summary, the ‘AnimSource1’ and ‘AnimSource2’ nodes are animation source nodes that represent respective operations that output respective data that define respective poses for the object 200 at a given point time. This data includes transform data (to represent the position and orientation of the object 200 and its parts) as well as event data (to represent events, such as footfalls, that may need to be synchronised when blending transform data from the two animation source nodes together). The ‘Blend’ node represents an operation that interpolates between the two sets of transform data output by the animation nodes depending on the value of a weight control parameter (the control parameter sub-graph is not shown in FIG. 19), and outputs the results to an output node, and, when event data is being used, also represents an operation that interpolates between the respective sets of event data output by the two animation nodes and outputs that result to the output node. For example, the AnimSource1 node may output data representing a person walking whilst the AnimSource2 node may output data representing that person running. The transform data output by these two nodes represents the respective walking and running poses for the person, whilst the event data output by these two nodes represents the respective timing of when footfalls occur during the walking or running. The Blend node may then interpolate, or combine, the transform data from the two animation sources and may interpolate, or combine, the event data from the two animation sources, with the interpolated transform data representing the pose of a person jogging and the interpolated event data representing when footfalls occur during the jogging animation. The weight control parameter sets how the interpolation occurs (e.g. how much the walking data and how much running data contribute to the final output). FIG. 20 shows how this graph might look after the functional inputs and outputs of FIG. 19 are expanded to expose their underlying components. As can be seen, functional connections have been used in order to represent passing time data in the right-to-left direction, with other data being passed in the left-to-right direction.” Lowe ¶ 194.

The motion parameters are mapped to the “interpolated transform data representing the pose of a person jogging” based on the “output data representing a person walking” by the “AnimSource1 node” and the “output data representing that person running” by the “AnimSource2 node.” Lowe teaches such animation through controlling operations queues, stating “The processor 108 may generate a sequence (or ordered list) of nodes (or at least identifiers of nodes or the operations represented thereby), which shall be referred to below as an "operations queue". The operations queue represents an order in which the various operations represented by the nodes of the DAG may be executed in order to correctly perform the update process as intended by the designer of the DAG. This may be referred to as "compiling" the DAG into an operations queue. A particular DAG may have many different valid operations queues, and there are many different ways of compiling a DAG into a corresponding operations queue, such as the so-called "width-first compilation" and the so-called "depth-first compilation", as will be described in more detail below.” Lowe ¶ 135.); and

v. a presentation system for presenting the controlled motion of the digital character (In the example above, the controlled motion of the digital character includes the disclosed “jogging” of a virtual person. Lowe ¶ 194.).

Regarding Claim 20, Lowe further teaches The system of any one of claims 17 to 19, wherein the animation framework comprises an animation mixing system, the animation mixing system being configured to apply channel parameters to corresponding blend nodes of a blend tree ([Image: media_image2.png] Lowe teaches blend nodes in a blend tree, stating “The benefit of functional connections 608 is illustrated with reference to FIG. 19, which illustrates the operation of a blend node (used in forming a "blend tree").” Lowe ¶ 194. The blend nodes are based on, as shown in Lowe fig. 19, animation mixing of animation sources. Lowe teaches channel parameters, mapped to parameters that include “weight control parameters,” stating “The ‘Blend’ node represents an operation that interpolates between the two sets of transform data output by the animation nodes depending on the value of a weight control parameter (the control parameter sub-graph is not shown in FIG. 19), and outputs the results to an output node, and, when event data is being used, also represents an operation that interpolates between the respective sets of event data output by the two animation nodes and outputs that result to the output node.” Lowe ¶ 194.).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Lowe et al. (US 20100134501 A1) as applied to Claim 1 or 2, and further in view of Gupta (US 20090037551 A1).

Regarding Claim 3, Lowe further teaches The method of claim 1 or claim 2, wherein the one or more behaviour commands comprise behaviour metadata (Lowe teaches behaviour metadata, which could be mapped to various kinds of parameters, properties, and attributes, stating “In such embodiments, one or more of the data values (such as the above "on/off" input data value for the switch node) may be identified as being control parameters that affect the internal dependencies between the output data values of a node 606 and the input data values for that node 606. The processor 108, when compiling the DAG into an operations queue, is then arranged to derive all of the control parameters from an independent sub-graph which has no external dependencies. As previously mentioned, the processor 108 may then evaluate this independent sub-graph (to determine the values of the control parameters) before it compiles the rest of the graph into an operations queue for the update process. With all of the control parameters being up-to-date, the processor 108 can use these control parameters to control the path of recursion during compilation of the rest of the graph.” Lowe ¶ 156. “Operator nodes are used for processing control parameters. Examples include adding, subtracting, scaling etc. of various data values. Other mathematical operations can be performed by operator nodes, such as calculating and outputting the sine of an input parameter. This could be used in conjunction with other operators, for example, as a simple way of varying blend weights smoothly back and forth to give a sinusoidally varying output to control, for example, a bobbing ‘head’ or bouncing ‘tail’ of an object 200.” Lowe ¶ 214.).

Lowe does not explicitly disclose a command descriptor. Gupta teaches a command descriptor (“a graphical user interface enabled to accept parameters for the command line tasks using text box entry controls and a displayed parameter name and description.” Gupta Claim 17. “The database of command line tasks can be generated by a software developer or user who enters the command tasks, parameter prompts, help descriptions, and sets up other definitions for the graphical interface.” Gupta ¶ 43.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Gupta’s command descriptor with Lowe. One of ordinary skill in the art would be motivated to make code/scripts/designs easier to maintain, develop, and use with helpful descriptions and useful parameters. “The database of command line tasks can be generated by a software developer or user who enters the command tasks, parameter prompts, help descriptions, and sets up other definitions for the graphical interface.” Gupta ¶ 43.

Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Lowe et al. (US 20100134501 A1) as applied to Claim 10, and further in view of Teng et al. (WO 2012088629 A1).

Regarding Claim 11, Lowe teaches The method of claim 10. Lowe does not explicitly teach wherein computing the output of the blend tree follows an algorithm for traversing tree or graph data structures. Teng teaches wherein computing the output of the blend tree follows an algorithm for traversing tree or graph data structures (“Third, the motion graphs generated by known methods are ‘static’; that is, the traversal of motion graphs always generates the same motion path with respect to Euclidean transformation. Thus, a traversal of motion graphs looks unnatural with known methods.” Teng p. 4 lines 9-13. “However the two-level blend tree graph of FIG. 22 which corresponds to the DAG of FIG. 23, by compounding the important attributes and focussing on the left-to-right flow of transform data and event data from animation sources to the output, is a much easier representation to grasp.” Lowe ¶ 207. Lowe’s blend tree is one type of motion graph.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Teng’s motion graph traversal with Lowe. One of ordinary skill in the art would be motivated to generate a motion path to animate an object and/or to produce predictable animation output. “Third, the motion graphs generated by known methods are ‘static’; that is, the traversal of motion graphs always generates the same motion path with respect to Euclidean transformation. Thus, a traversal of motion graphs looks unnatural with known methods.” Teng p. 4 lines 9-13.

Regarding Claim 12, Lowe teaches The method of claim 10. Lowe does not explicitly teach wherein computing the output of the blend tree follows a method that produces a similar result as a depth-first search. Teng teaches wherein computing the output of the blend tree follows a method that produces a similar result as a depth-first search (“Novel motions are synthesized from motion graphs using the ‘depth first search’ algorithm according to optimization rules.” Teng p. 2 lines 22-24. “However the two-level blend tree graph of FIG. 22 which corresponds to the DAG of FIG. 23, by compounding the important attributes and focussing on the left-to-right flow of transform data and event data from animation sources to the output, is a much easier representation to grasp.” Lowe ¶ 207. Lowe’s blend tree is one type of motion graph.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Teng’s depth-first search with Lowe. One of ordinary skill in the art would be motivated to generate novel motions for animation. “Novel motions are synthesized from motion graphs using the ‘depth first search’ algorithm according to optimization rules.” Teng p. 2 lines 22-24.
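[Editor's illustration] The Claims 11-12 rejection and Lowe ¶ 135 both turn on depth-first handling of the blend DAG: inputs are visited before the node that consumes them, yielding an "operations queue" whose in-order execution computes the tree output. The sketch below shows one way depth-first compilation can work under those assumptions; the graph shape mirrors the fig. 22 example discussed above, but the function and node names are hypothetical, not Lowe's or Teng's implementation.

```python
# Hypothetical sketch: compiling a blend-tree DAG into an operations
# queue via depth-first (post-order) traversal, in the spirit of the
# "depth-first compilation" quoted from Lowe para. 135.

def depth_first_compile(graph: dict[str, list[str]], root: str) -> list[str]:
    """Return a post-order operations queue for the DAG rooted at `root`."""
    queue: list[str] = []
    visited: set[str] = set()

    def visit(node: str) -> None:
        if node in visited:          # a shared node is queued only once
            return
        visited.add(node)
        for child in graph.get(node, []):
            visit(child)             # inputs must be computed first
        queue.append(node)

    visit(root)
    return queue

# Two-level blend tree in the shape of Lowe fig. 22 (labels hypothetical).
graph = {
    "Output": ["Blend3_root"],
    "Blend3_root": ["Blend2", "Blend3"],
    "Blend2": ["AnimSource1", "AnimSource2"],
    "Blend3": ["AnimSource3", "AnimSource4"],
}
print(depth_first_compile(graph, "Output"))
# Sources come before their blend nodes, and the root comes last, so
# executing the queue front to back computes the blend-tree output.
```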

Prosecution Timeline

Apr 19, 2024: Application Filed
Nov 19, 2025: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602865: METHODS FOR DEPTH CONFLICT MITIGATION IN A THREE-DIMENSIONAL ENVIRONMENT
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12599463: COLOR MANAGEMENT PROCESS FOR CUSTOMIZED DENTAL RESTORATIONS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12597402: INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM FOR APPLICATION WINDOW HAVING FIRST DISPLAY MODE AND SECOND DISPLAY MODE
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12567193: PARTICLE RENDERING METHOD AND APPARATUS
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12561929: METHOD AND ELECTRONIC DEVICE FOR PROVIDING INFORMATION RELATED TO PLACING OBJECT IN SPACE
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 64%
With Interview: 99% (+40.1%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 354 resolved cases by this examiner. Grant probability derived from career allow rate.
