Prosecution Insights
Last updated: April 19, 2026
Application No. 18/766,440

FACIAL EXPRESSION PROCESSING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM

Status: Non-Final OA (§102, §103)
Filed: Jul 08, 2024
Examiner: RICKS, DONNA J
Art Unit: 2618
Tech Center: 2600 — Communications
Assignee: Tencent Technology (Shenzhen) Company Limited
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 77% (above average; 387 granted / 502 resolved; +15.1% vs TC avg)
Interview Lift: +8.8% (moderate), measured over resolved cases with interview
Typical Timeline: 2y 9m avg prosecution; 30 currently pending
Career History: 532 total applications across all art units
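
The "interview lift" figure is simply the gap in allowance rate between this examiner's resolved cases that had an interview and those that did not. A minimal Python sketch of that computation, assuming a hypothetical per-case record format (a real tool would pull resolved-case data from USPTO records such as PAIR/PEDS):

```python
# Minimal sketch of computing a career allow rate and an interview lift.
# The record format below is an assumption for illustration only.
cases = [
    {"allowed": True, "interview": True},
    {"allowed": True, "interview": False},
    {"allowed": False, "interview": False},
    # ... one record per resolved case (502 for this examiner)
]

def allow_rate(records):
    return sum(r["allowed"] for r in records) / len(records) if records else 0.0

with_iv = [c for c in cases if c["interview"]]
without_iv = [c for c in cases if not c["interview"]]

career = allow_rate(cases)                            # shown as "Career Allow Rate"
lift = allow_rate(with_iv) - allow_rate(without_iv)   # shown as "Interview Lift"
print(f"Allow rate {career:.1%}, interview lift {lift:+.1%}")
```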

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 58.3% (+18.3% vs TC avg)
§102: 13.7% (-26.3% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 502 resolved cases
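
Assuming each delta is simply the examiner's rate minus the Tech Center average, the implied baseline can be recovered directly from the displayed numbers; it works out to 40% for every statute listed above. A quick check:

```python
# Recover the implied Tech Center baseline from each displayed rate and delta,
# assuming delta = examiner rate - TC average (a simplifying assumption).
statute_stats = {"§101": (11.1, -28.9), "§103": (58.3, 18.3),
                 "§102": (13.7, -26.3), "§112": (8.5, -31.5)}
for statute, (rate, delta) in statute_stats.items():
    print(f"{statute}: implied TC average = {rate - delta:.1f}%")  # 40.0% for every row
```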

Office Action

Grounds: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1, 8, 15, 2, 3, 4, 9, 10, 11, 16, 17 and 18 is/are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Miller, IV et al. U.S. Pub. No. 2019/0325633.

Re: claim 1, Miller teaches 1. A facial expression processing method performed by a computer device, the method comprising: determining a skeletal structure of a three-dimensional facial model of a target style; (“The 3D model processing system 680 can be configured to animate and cause the display 220 to render a virtual avatar 670... The 3D model processing system 680 can include a virtual character processing system 682 and a movement processing system 684... The movement processing system 684 can be configured to animate the avatar such as... by animating the avatar’s facial expressions...”; Miller, [0124], Fig. 6B) Fig. 6B illustrates a system that generates a 3D model of an avatar (three-dimensional facial model of a target style) and animates the avatar’s facial expressions. (“To animate a virtual character, its mesh can be deformed by moving some or all of its vertices to new positions in space at various instants in time. The deformations can represent both large-scale movements (e.g., movement of limbs) and fine movements (e.g., facial movements)... The control systems, processes, and techniques for producing these deformations are referred to as rigging, or simply “the rig.” The example avatar processing and rendering system 690 of Fig. 6B includes a 3D model processing system 680 which can implement rigging and which can be programmed to perform the techniques for avatar facial expression representation in multidimensional space...”; Miller, [0148]) To animate a character, its mesh can be deformed such that the deformations represent fine movements, such as facial movements. The control system uses rigging to perform the avatar facial representation. (“The rigging for a virtual character can use skeletal systems to assist with mesh deformations.”; Miller, [0149]) The rigging uses the skeletal system (skeletal structure) for mesh deformations (determining a skeletal structure of a three-dimensional facial model of a target style). (“... 
the facial expressions of a virtual character can be controlled by a rigging system.”; Miller, [0174]) The facial expressions are controlled by the rigging system. skinning the skeletal structure to generate a virtual object face; (“A skeletal system for a virtual character can be defined with joints at appropriate positions, and with appropriate local axes of rotation, degrees of freedom, etc., to allow for a desired set of mesh deformations to be carried out. Once a skeletal system has been defined for a virtual character, each joint can be assigned, in a processed called “skinning,” an amount of influence over the various vertices in the mesh.”; Miller, [0150]) Once a skeletal system has been defined for a virtual character, each joint is assigned in a process called skinning (skinning the skeletal structure to generate a virtual object face). (“skin motions due to an avatar’s facial contortions (which may represent expressions such as smiling, frowning, laughing, speaking, blinking, etc.) can be represented by a series of facial joints controlled by a facial rig;”; Miller, [0153]) Skin motions due to an avatar’s facial contortions are represented by facial joints controlled by a facial rig. binding an action unit in a style expression template matching the target style to a one or more corresponding bones in the skeletal structure; (“The rigging for a virtual character can use skeletal systems to assist with mesh deformations.”; Miller, [0149]) The rigging for a virtual character uses skeletal systems (bones) to assist with mesh deformations. (“... the facial expressions of a virtual character can be controlled by a rigging system... A combination of constituent values of the face vector can correspond to a facial expression or an emotion. The face vector can comprise a number of dimensions... Each dimension can correspond to a blendshape or other facial parameter in a rigging model (e.g., an AU of a model)”; Miller, [0174]) Each dimension of a face vector corresponds to a facial parameter in a rigging model such as an AU (action unit). (“Fig. 11 illustrates an example of sliders associated with an example of a facial rig. In this example, the locations of the sliders are assigned based on action units (AUs) of a facial action coding system (FACS).”; Miller, [0177]) Fig. 11 illustrates sliders associated (binding) with the facial rig (corresponds to skeleton or bones), where the locations of sliders are based on action units of the facial action coding system (an action unit in a style expression template matching the target style to one or more bones in the skeletal structure) (“The sliders 1104, 1108 can be directly mapped to AUs of the FACS characterization of the human face. For example, arrows 1104 correspond to an AU parameter and lines 1108 indicate the directionality of the AUs. ”; Miller, [0178], Fig. 11) Fig. 11 illustrates sliders directly mapped to AUs of the FACS of the face, where the arrows 1104 correspond to an AU parameter (such as the jaw, for instance). associating at least one expression control with at least one action unit in the style expression template; (“Fig. 11 illustrates an example of sliders associated with an example of a facial rig. In this example, the locations of the sliders are assigned based on action units (AUs) of a facial action coding system (FACS). ”; Miller, [0177], Fig. 11) Fig. 
11 illustrates sliders (associating at least one expression control) directly mapped to AUs (with at least one action unit) of the FACS (in the style expression template) of the face, where the arrows 1104 correspond to an AU parameter (such as the jaw, for instance). (“The facial rig 110 in Fig. 11 can be used to control facial expressions of an avatar... The sliders 1104, 1108 can be directly mapped to AUs of the FACS characterization of the human face... The sliders 1104, 1108 can be adjusted electronically to control the deformation of the avatar’s mesh.”; Miller, [0178]) The sliders of the facial rig are used to control the deformation of the avatars mesh. in response to an adjustment operation on the at least one expression control, driving bones bound to the at least one action unit to move based on information of the at least one action unit associated with the expression control; (“The facial rig 1100 in Fig. 11 can be used to control facial expressions of an avatar... The sliders 1104, 1108 can be directly mapped to AUs of the FACS characterization of the human face... The sliders 1104, 1108 can be adjusted electronically to control the deformation of the avatar’s mesh. The controls (e.g., facial rig parameters 1104, 1108) can be driven in real time by the facial rig 1100 and can be parametrized to a normalized value, such as between -1 and 1 or between 0 and 1.”; Miller, [0178]) The sliders (expression control) of the facial rig are used to adjust the deformations of the avatar’s mesh. The sliders are mapped (bound) to AUs of the FACS of the face, such as the jaw. The slider controls are driven or adjusted in real time (in response to an adjustment operation on the at least one expression control, driving bones bound to the at least on action unit to move based on information of the at least one action associated with the expression control). (“Various other expressions can also be represented, including, but not limited to, sad, fearful, angry, surprised, or disgusted... The expression “Fearful” can correspond to AUs: 1 (Inner Brow Raiser) (100%), 4 (Brow Lowerer) (100%), 20 (Lip stretcher) (100%), 25 (Lips part) (100%), 2 (Outer Brow Raiser) (57%), 5 (Upper Lid Raiser) (63%), 26 (Jaw Drop) (33%). Thus, “Fearful” can be represented as AU variant [1, 4, 20, 25, 2 (57%), 5 (63%), 26 (33%)].”; Miller, [0180]) Fig. 12A illustrates, for example, an adjustment operation from a neutral facial expression to a surprised facial expression. The sliders are used to adjust the AUs of the parameters of the face, such as the jaw. For example, the JawDrop parameter changes from 0.0 in the neutral face to 0.35 in the surprised face. and controlling the virtual object face to generate an expression conforming to the target style in accordance with the movement of the bones triggered by the adjustment operation. (“Fig. 12A illustrates examples of face vectors representative of facial expressions. This figure shows an example table 1200 which includes various values of a face vector for the expressions: neutral, surprise, shock, displeased, and disgust... Each facial expression can be represented by a face vector. The face vector can include a plurality of parameters for controlling a virtual character (e.g., by deforming a mesh of the virtual character).”; Miller, [0182], Fig. 12A) Fig. 
12A illustrates, for example, sliders for the parameters are used to change the neutral facial expression to a surprised facial expression (controlling the virtual object face to generate an expression conforming to the target style). For example, the JawDrop parameter changes from 0.0 in the neutral face to 0.35 in the surprised face (in accordance with the movement of bones triggered by the adjustment operation). (“In the example shown in Fig. 12A, the plurality of parameters in the face vector includes two parameters for left and right eye brow movements (L.BrowUp, R.BrowUp), one parameter for Jaw (JawDrop), one parameter for eyes (EyesOpen), and one parameter for lip corner (LipCorner), etc... each parameter can also correspond to a face slider (e.g., an adjustment of the face slider will adjust the control value of the corresponding parameter in a face vector).”; Miller, [0183]) Fig. 12A illustrates plural parameters, where each parameter corresponds to a face slider. Claim 8 is a device analogous to the method of claim 1, is similar in scope and is rejected under the same rationale. Re: claim 8, Miller teaches 8. A computer device, comprising a memory and one or more processors, the memory having computer-readable instructions stored therein, and the computer-readable instructions, when executed by the one or more processors, causing the computer device to perform a facial expression processing method including: (“As shown, the computing device 10 includes a processing unit 20 that interacts with other components of the computing device 10... In some cases, the graphics processor 24 may share Random Access Memory (RAM) with the processing unit 20.”; Miller, [0282], [0283], Fig. 19) Fig. 19 illustrates a computing device (computer device) that includes a processing unit and a graphics processor that share a RAM (comprising a memory and one or more processors). (“Each of the processes, method, and algorithms described herein and/or depicted in the attached figures may be embodied in, and fully or partially automated by, code modules executed by one or more physical computing systems, hardware computer processors, application-specific circuitry, and/or electronic hardware configured to execute specific and particular computer instructions. For example, computing systems can include generate purpose computers (e.g., servers) programmed with specific computer instructions or special purpose computers, special purpose circuitry, and so forth.”; Miller, [0357]) The computing device includes code modules of instructions executed by, for example, hardware computer processors, causing the computing device to perform the method. Claim 15 is media analogous to the method of claim 1, is similar in scope and is rejected under the same rationale. Claim 15 has an additional limitation. Re: claim 15, Miller teaches 15. One or more non-transitory computer-readable storage media, having computer-readable instructions stored thereon, the computer-readable instructions, when executed by one or more processors of a computer device, causing the computer device to perform a facial expression processing method including: (“Each of the processes, method, and algorithms described herein and/or depicted in the attached figures may be embodied in, and fully or partially automated by, code modules executed by one or more physical computing systems, hardware computer processors, application-specific circuitry, and/or electronic hardware configured to execute specific and particular computer instructions. 
For example, computing systems can include generate purpose computers (e.g., servers) programmed with specific computer instructions or special purpose computers, special purpose circuitry, and so forth.”; Miller, [0357]) The computing device includes code modules of instructions executed by, for example, hardware computer processors, causing the computing device to perform the method. (“Further, certain implementations of the functionality of the present disclosure are sufficiently mathematically, computationally, or technically complex that application-specific hardware or one or more physical computing devices (utilizing appropriate specialized executable instructions) may be necessary to perform the functionality, for example, due to the volume or complexity of the calculations involved or to provide results substantially in real-time.... As described above, the volume of the multidimensional facial expression space may be so enormously large that a rule-based approach must be computationally implemented on computer hardware, particularly to render virtual avatar facial expressions in real-time in an augmented, virtual, or mixed reality environment.”; Miller, [0358]) The computing device utilizes executable instructions to perform the functionality (“The controller 460 includes programming (e.g., instructions in a non-transitory computer-readable medium) that regulates the timing and provision of image information to the waveguides... ”; Miller, [0085], Fig. 4) Fig. 4 illustrates that the controller 460 includes instructions in a non-transitory computer-readable medium that are executed. Re: claims 2, 9 and 16 (which are rejected under the same rationale), Miller teaches 2. The method according to claim 1, wherein the binding an action unit in a style expression template matching the target style to a one or more corresponding bones in the skeletal structure comprises: displaying object types corresponding to a plurality of styles; when that a target object type in the target style is selected, determining facial skeleton information corresponding to the target object type and the style expression template matching the target style; (“Fig. 12A illustrates examples of face vectors representative of facial expressions. This figure shows an example table 1200 which includes various values of a face vector for the expressions: neutral, surprise, shock, displeased, and disgust. The top row of the table 1200 shows illustrations of the expressions visually... Each facial expression can be represented by a face vector. The face vector can include a plurality of parameters for controlling a virtual character (by deforming the mesh of the virtual character).”; Miller, [0182]) Fig. 12 illustrates plural facial expressions, each represented by a face vector (displaying object types corresponding to a plurality of styles). (“With continued reference to Fig. 12A, to animate the virtual character to show Surprise, the control values can be changed to 0.45 or L.BrowUp, 0.5 for R.BrowUp, 0.35 for JawDrop, 0.75 for EyesOpen, while the control value for LipCorner remains at 0.0.”; Miller, [0184]) Fig. 12 illustrates that when, for example, the surprise facial expression (target object type) is selected, the neutral parameters of the neutral expression are adjusted to match the parameters of the surprise expression (determining facial skeleton information corresponding to the target object type and the style expression template matching target style). (“The facial rig 1100 in Fig. 
11 can be used to control facial expressions of an avatar... The sliders 1104, 1108 can be directly mapped to AUs of the characterization of the human face. For example, arrows 1104 correspond to an AU parameter and lines 1108 indicate the directionality of the AUs. The sliders 1104, 1108 can be adjusted electronically to control the deformation of the AUs.”; Miller, [0178], Fig. 11) Fig. 11 illustrates the facial rig used to control facial expressions of an avatar. The facial rig includes sliders that are mapped to AU parameters of the face (skeleton information corresponding to the target object type). and binding the action unit in the style expression template to the one or more corresponding bones in the skeletal structure based on the facial skeleton information. (“The facial rig 1100 in Fig. 11 can be used to control facial expressions of an avatar... The sliders 1104, 1108 can be directly mapped to AUs of the characterization of the human face. For example, arrows 1104 correspond to an AU parameter and lines 1108 indicate the directionality of the AUs. The sliders 1104, 1108 can be adjusted electronically to control the deformation of the AUs.”; Miller, [0178], Fig. 11) Fig. 11 illustrates the facial rig used to control facial expressions of an avatar. The facial rig includes sliders that are mapped to AU parameters of the face. (“In the example shown in Fig. 12A, the plurality of parameters in the face vector includes two parameters for left and right eye brow movements (L.BrowUp, R.BrowUp), one parameter for Jaw (JawDrop), one parameter for eyes (EyesOpen), and one parameter for lip corner (LipCorner), etc.”; Miller, [0183]) Fig. 12A illustrates plural parameters (AU parameters) for the face including JawDrop for the jaw. (“A control value can be associated with each parameter in the face vector... With reference to the example shown in FIG. 12A, the neutral expression is associated with a face vector in which the control values for L.BrowUp, R.BrowUp, JawDrop, and LipCorner are all 0.0, while the control value for EyesOpen is 0.5. By adjusting the control values for the parameters, the virtual character can be animated to show different facial expressions. ”; Miller, [0184]) The parameters are adjusted by a control value. Fig 12A illustrates a style expression template, where for example, the parameter JawDrop is corresponds to the jaw (bone), which is associated with a control value that adjusts the facial expression (binding the action unit I the style expression template to the one or more corresponding bones in the skeletal structure based on the facial skeletal information). Re: claims 3, 10 and 17 (which are rejected under the same rationale), Miller teaches 3. The method according to claim 1, further comprising: displaying an information editing area of the at least one action unit in the style expression template when the virtual object face is displayed; updating information of a target action unit in response to an information editing operation triggered in an information editing area of the target action unit; and based on updated information of the target action unit, driving a bone bound to the target action unit to move to update an expression on the virtual object face. (“In the example shown in Fig. 
12A, the plurality of parameters in the face vector includes two parameters for left and right eye brow movements (L.BrowUp, R.BrowUp), one parameter for Jaw (JawDrop), one parameter for eyes (EyesOpen), and one parameter for lip corner (LipCorner), etc.”; Miller, [0183]), Fig. 12A) Fig. 12A illustrates a table that includes a display of the avatar’s face, where below the avatar’s face, adjustable parameters, such as JawDrop are shown (displaying an information editing area of the at least one action unit in the style expression template wen the virtual face is displayed). (“A control value can be associated with each parameter in the face vector... With reference to the example shown in FIG. 12A, the neutral expression is associated with a face vector in which the control values for L.BrowUp, R.BrowUp, JawDrop, and LipCorner are all 0.0, while the control value for EyesOpen is 0.5. By adjusting the control values for the parameters, the virtual character can be animated to show different facial expressions. With continued reference to FIG.12A to animate the virtual character to show Surprise, the control values can be changed to 0.45 for L.BrowUp, 0.5 for R.BrowUp, 0.35 for JawDrop, 0.75 for EyesOpen, while the control value for LipCorner remains at 0.0.”; Miller, [0184]) The parameters are adjusted by a control value. Fig. 12A illustrates that the parameters of the neutral facial expression are changed such that the neutral facial expression becomes the surprise facial expression. For example the parameters for the neutral expression are L.BrowUp, R.BrowUp, JawDrop, and LipCorner are all 0.0, and EyesOpen is 0.5. These parameter (AU) values are changed (updated) to values for the surprise expression, which include 0.45 for L.BrowUp, 0.5 for R.BrowUp, 0.35 for JawDrop, 0.75 for EyesOpen and 0.0 for LipCorner (updating information of a target action unit in response to an information editing operation triggered in an information editing area of the target action unit). Based on these updated parameter values, for example, the jaw moves to update the expression from neutral to surprise (based on updated information of the target action unit, driving a bone bound to the target action unit to move to update an expression on the virtual object face). Re: claims 4, 11 and 18 (which are rejected under the same rationale), Miller teaches 4. The method according to claim 3, further comprising: displaying a bone update area of the action unit in the style expression template; and in response to a bone update operation triggered in a bone update area of the target action unit, updating the bone bound to the target action unit. (“A control value can be associated with each parameter in the face vector... With reference to the example shown in FIG. 12A, the neutral expression is associated with a face vector in which the control values for L.BrowUp, R.BrowUp, JawDrop, and LipCorner are all 0.0, while the control value for EyesOpen is 0.5. By adjusting the control values for the parameters, the virtual character can be animated to show different facial expressions. With continued reference to FIG.12A to animate the virtual character to show Surprise, the control values can be changed to 0.45 for L.BrowUp, 0.5 for R.BrowUp, 0.35 for JawDrop, 0.75 for EyesOpen, while the control value for LipCorner remains at 0.0.”; Miller, [0184], Fig. 12) The parameters are adjusted by a control value. Fig. 
12A illustrates that the parameters of the neutral facial expression are changed such that the neutral facial expression becomes the surprise facial expression. For example a parameter for the neutral expression, such as JawDrop, has the value of 0.0 and is updated to 0.35 (displaying a bone update area of the action unit in the style expression template; and in response to a bone update operation triggered in a bone update area of the target action unit). Based on these updated parameter values, for example, the jaw moves to update the expression from neutral to surprise (updating the bone bound to the target action unit). Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 5, 12 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Miller as applied to claims 3, 10 and 15 above, and further in view of An WO 2021/242005 A1. Re: claims 5, 12 and 19 (which are rejected under the same rationale), Miller is silent regarding the information editing area has two states: a frozen state and an activated state; and the method further comprises: stopping responding to an operation on the information editing area when that the information editing area is in the frozen state; or responding to an operation on the information editing area when that the information editing area is in the activated state, however, An teaches 5. The method according to claim 3, wherein the information editing area has two states: a frozen state and an activated state; and the method further comprises: stopping responding to an operation on the information editing area when that the information editing area is in the frozen state; or responding to an operation on the information editing area when that the information editing area is in the activated state. (“For example, the processors 120 and 210 may enter the facial expression editing mode based on a user input for selecting a facial expression generation item (or a facial editing mode item).”; An, p. 12, 10th para) The processors may enter the facial expression editing mode (activated state). 
(“In operation 413, the processors 120 and 210 may convert a basic model of a face shape (a face open target model) into a target weight value representing a face expression.”; An, p. 13, 2nd para) In the facial expression editing mode, the basic model of a face shape is converted into a target weight value representing a face expression (responding to an operation on the information editing area when the information editing area is in the activated state). (“In operation 414, the process may store a facial motion file or a facial animation file... representing a change in movement of a facial expression... The processors 120 and 210 may apply the facial motion file stored in the three-dimensional form of the user avatar to update the recognized avatar to the user’s facial expression movement.”; An, p. 13, 3rd-4th paras) Once the face expression has been generated, the editing state is exited and the face expression is stored (frozen state) in a facial animation file that represents the change in movement of a facial expression (stopping responding to an operation on the information editing area when the information editing area is in the frozen state). Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date, to modify the method of Miller by adding the feature of the information editing area has two states: a frozen state and an activated state; and the method further comprises: stopping responding to an operation on the information editing area when that the information editing area is in the frozen state; or responding to an operation on the information editing area when that the information editing area is in the activated state, in order to produce an emoji sticker that resembles a user’s desired facial expression to increase satisfaction with the user’s avatar-based emoji usage and improve usability, as taught by An (p. 3, last para). Claim(s) 6, 13 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Miller as applied to claims 1, 8 and 15 above, and further in view of Kyung et al. U.S. Pub. No. 2007/0037573. Re: claims 6, 13 and 20 (which are rejected under the same rationale), Miller is silent regarding obtaining an adjusted attribute value of the shortcut expression control in response to an adjustment operation on the shortcut expression control; and driving, based on the adjusted attribute value and information of an action unit associated with the shortcut expression control, a bone bound to the associated action unit to move; and controlling the virtual object face to generate the specified expression matching the shortcut expression control, however, Kyung teaches 6. The method according to claim 1, wherein the at least one expression control comprises a shortcut expression control that matches a specified expression; and the method further comprises: obtaining an adjusted attribute value of the shortcut expression control in response to an adjustment operation on the shortcut expression control; and driving, based on the adjusted attribute value and information of an action unit associated with the shortcut expression control, a bone bound to the associated action unit to move; and controlling the virtual object face to generate the specified expression matching the shortcut expression control. 
(“For example, if the still images, animation, and avatar corresponding to MS#1 are pre-stored in MS#2, the user of MS#1 can use a short-cut key or a jog dial, among other input means to transmit information on the movement or expression of the avatar.”; Kyung, [0038]) If the animation and avatar of the MS#1 (which includes attribute values) is prestored in MS#2, then the user of MS#1 can use a shortcut key to transmit information (a shortcut expression control that matches a specified expression) on the facial expression for another avatar. Kyung is combined with Miller such that the avatar and animation (which includes attribute values) that is pre-stored includes the AUs or control values of Miller that are used to change the expression on the face of the virtual avatar, by for example, changing the parameter of the JawDrop of Miller (obtaining an adjusted attribute value of the shortcut expression control in response to an adjustment operation on the shortcut expression control; and driving, based on the adjusted attribute value and information of an action unit associated with the shortcut expression control, a bone bound to the associated action unit to move; and controlling the virtual object face to generate the specified expression matching the shortcut expression control). Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date, to modify the method of Miller by adding the feature of obtaining an adjusted attribute value of the shortcut expression control in response to an adjustment operation on the shortcut expression control; and driving, based on the adjusted attribute value and information of an action unit associated with the shortcut expression control, a bone bound to the associated action unit to move; and controlling the virtual object face to generate the specified expression matching the shortcut expression control, in order to achieve more efficient communication by not transmitting information on the entire avatar, as taught by Kyung ([0038]). Claim(s) 7 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Miller as applied to claims 1 and 8 above, and further in view of Hwang et al. U.S. Patent No. 9,298,257. Re: claims 7 and 14 (which are rejected under the same rationale), Miller teaches 7. The method according to claim 1, wherein the three-dimensional facial model of the target style is a target three-dimensional facial model; (“The 3D model processing system 680 can be configured to animate and cause the display 220 to render a virtual avatar 670... The 3D model processing system 680 can include a virtual character processing system 682 and a movement processing system 684... The movement processing system 684 can be configured to animate the avatar such as... by animating the avatar’s facial expressions...”; Miller, [0124], Fig. 6B) Fig. 6B illustrates a system that generates a 3D model of an avatar (three-dimensional facial model of a target style is a target three-dimensional facial model) and animates the avatar’s facial expressions. 
Miller is silent regarding, determining a skeletal structure of a three-dimensional facial model of a target style comprises: performing affine transformation on a standard three-dimensional facial model based on the target three-dimensional facial model, to obtain a three-dimensional facial model undergone affine transformation, however, Hwang teaches and the determining a skeletal structure of a three-dimensional facial model of a target style comprises: performing affine transformation on a standard three-dimensional facial model based on the target three-dimensional facial model, to obtain a three-dimensional facial model undergone affine transformation; (“The feature points denote specific spots on a face, learned or semantically defined for recognition of a face and facial expressions.”; Hwang, col. 3, lines 29-31) Feature points correspond to points on a face defined for facial expressions. (“The correspondence relations setting unit 701 may set correspondence relations between control coordinates corresponding to the expression control points and a face mesh of the avatar. The face mesh refers to a mesh adapted to control the avatar corresponding to the face of the user. The correspondence relations setting unit 701 models vertexes selected for control of an animation out of all vertexes of the face mesh...”; Hwang, col. 8, lines 25-35) A correspondence is set between the control coordinates of the expression control points and a face mesh that controls the avatar’s face. Vertices are selected for control of an animation of the face mesh (expression control). (“The correspondence relations setting unit 701 may perform global affine transformation with respect to the feature points using the transformation matrix, thereby moving the feature points to the coordinates on the face mesh”; Hwang, col. 8, lines 43-47) An affine transformation is performed on the avatar’s face with respect to the feature points, such that the feature points are moved to the coordinates of the face mesh (performing affine transformation on a standard three-dimensional facial model based on the target three-dimensional facial model, to obtain a three-dimensional facial model undergone affine transformation). and determining, based on bones bound to vertices in the three-dimensional facial model undergone affine transformation, bones bound to vertices in the target three-dimensional facial model, to obtain a skeletal structure of the target three-dimensional facial model. (“The movement processing system 684 can be configured to animate the avatar, such as, e.g., by changing the avatar's pose, by moving the avatar around in a user's environment, or by animating the avatar's facial expressions, etc. As will further be described herein, the virtual avatar can be animated using rigging techniques. In some embodiments, an avatar is represented in two parts: a surface representation (e.g., a deformable mesh) that is used to render the outward appearance of the virtual avatar and a hierarchical set of interconnected joints (e.g., a core skeleton) for animating the mesh.”; Miller, [0124], Fig. 6) The animation the avatar’s facial expressions is performed using rigging techniques, where the avatar includes a deformable mesh (which include vertices) and a core skeleton for animating the mesh. Thus, the rigging, the mesh (vertices) and the core skeleton are bound together (bones bound to vertices in the target three-dimensional facial model). 
Miller is silent regarding determining bones bound to vertices in the target three-dimensional facial model, to obtain a skeletal structure of the target three-dimensional facial model, however, Hwang teaches this limitation. (“The correspondence relations setting unit 701 may perform global affine transformation with respect to the feature points using the transformation matrix, thereby moving the feature points to the coordinates of the face mesh.”; Hwang, col. 8, lines 43-47) The affine transformation is applied to the face to move the feature points to the coordinates of the face in order to modify the avatar’s facial expression (determining... bones bound to the vertices in the target three-dimensional facial model, to obtain a skeletal structure of the target three-dimensional facial model). In order to change the facial expression, when the affine transformation is applied, the positions of the facial bones and the facial mesh are updated to represent the target facial expression. Hwang is combined with Miller such that the face to which the affine transformation of Hwang is applied includes the face of Miller, which includes the rigging, the mesh (vertices) and the core skeleton (bones) being bound together. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date, to modify the method of Miller by adding the feature of determining a skeletal structure of a three-dimensional facial model of a target style comprises: performing affine transformation on a standard three-dimensional facial model based on the target three-dimensional facial model, to obtain a three-dimensional facial model undergone affine transformation; and determining, based on bones bound to vertices in the three-dimensional facial model undergone affine transformation, bones bound to vertices in the target three-dimensional facial model, to obtain a skeletal structure of the target three-dimensional facial model, in order to more accurately control the avatar’s facial expressions thereby making them appear more natural, as taught by Hwang (col. 1, lines 29-32). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONNA J RICKS whose telephone number is (571)270-7532. The examiner can normally be reached on M-F 7:30am-5pm EST (alternate Fridays off). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk can be reached on 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Donna J. Ricks/Examiner, Art Unit 2618 /DEVONA E FAULK/Supervisory Patent Examiner, Art Unit 2618
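
For orientation on the mechanism the rejection relies on, below is a minimal Python sketch of the slider/action-unit model the examiner cites from Miller ([0178]-[0184]). The parameter names and control values are the ones quoted above from Miller's Fig. 12A (neutral vs. Surprise); the joint bindings, function, and overall structure are hypothetical illustrations, not Miller's implementation and not the applicant's claimed method.

```python
# Illustrative sketch only: a FACS-style face vector whose slider values drive
# the joints bound to each parameter. Control values are those quoted from
# Miller Fig. 12A; the joint names and bindings below are hypothetical.
NEUTRAL  = {"L.BrowUp": 0.0,  "R.BrowUp": 0.0, "JawDrop": 0.0,  "EyesOpen": 0.5,  "LipCorner": 0.0}
SURPRISE = {"L.BrowUp": 0.45, "R.BrowUp": 0.5, "JawDrop": 0.35, "EyesOpen": 0.75, "LipCorner": 0.0}

# Hypothetical binding of each face-vector parameter to the facial joints it drives,
# analogous to "binding an action unit to corresponding bones" in the claims.
AU_TO_JOINTS = {
    "JawDrop":   ["jaw"],
    "L.BrowUp":  ["brow_L"],
    "R.BrowUp":  ["brow_R"],
    "EyesOpen":  ["eyelid_L", "eyelid_R"],
    "LipCorner": ["lip_corner_L", "lip_corner_R"],
}

def drive_rig(face_vector):
    """Map each slider value onto the joints bound to it (stand-in for mesh deformation)."""
    joint_values = {}
    for param, value in face_vector.items():
        for joint in AU_TO_JOINTS.get(param, []):
            joint_values[joint] = value
    return joint_values

# Adjusting the sliders from neutral to Surprise moves the bound joints
# (e.g., JawDrop 0.0 -> 0.35), which is how the rejection reads the claimed
# "driving bones bound to the action unit" onto Miller's slider-driven rig.
print(drive_rig(NEUTRAL))
print(drive_rig(SURPRISE))
```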

Prosecution Timeline

Jul 08, 2024
Application Filed
Feb 10, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602751: SAMPLE DISTRIBUTION-INFORMED DENOISING & RENDERING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12592021: GRAPHICS PROCESSING (granted Mar 31, 2026; 2y 5m to grant)
Patent 12579726: HIERARCHICAL TILING MECHANISM (granted Mar 17, 2026; 2y 5m to grant)
Patent 12573133: Reprojection method of generating reprojected image data, XR projection system, and machine-learning circuit (granted Mar 10, 2026; 2y 5m to grant)
Patent 12555281: MANAGING MULTIPLE DATASETS FOR DATA BOUND OBJECTS (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77%
With Interview: 86% (+8.8%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 502 resolved cases by this examiner. Grant probability derived from career allow rate.
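
The headline projections follow directly from the career numbers above: 387 grants out of 502 resolved cases gives roughly 77%, and adding the +8.8 point interview lift gives roughly 86%. A minimal sketch of that arithmetic, where treating the lift as a simple additive adjustment is an assumption:

```python
# Derivation of the headline projections from the career numbers above.
# Treating the +8.8 point interview lift as additive is an assumption.
granted, resolved = 387, 502
base = granted / resolved            # 0.771 -> shown as 77% grant probability
with_interview = base + 0.088        # 0.859 -> shown as 86% with interview
print(f"Grant probability {base:.0%}; with interview {with_interview:.0%}")
```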
