Prosecution Insights
Last updated: April 19, 2026
Application No. 18/314,604

Method for Facial Animation

Final Rejection §103
Filed
May 09, 2023
Examiner
LI, GRACE Q
Art Unit
2618
Tech Center
2600 — Communications
Assignee
Apple Inc.
OA Round
4 (Final)
77%
Grant Probability
Favorable
5-6
OA Rounds
2y 5m
To Grant
90%
With Interview

Examiner Intelligence

77%
Career Allow Rate (above average)
270 granted / 351 resolved (+14.9% vs TC avg)
+12.8%
Interview Lift (moderate)
Based on resolved cases with interview
Typical timeline
2y 5m
Avg Prosecution
35 currently pending
Career history
386
Total Applications
across all art units
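The headline figures above fit together with simple arithmetic. The snippet below is a minimal sketch, assuming the counts shown here (270 granted of 351 resolved, 35 pending, a +12.8% interview lift, +14.9% vs the Tech Center average) and treating the interview lift as a straight additive bump to the career allow rate; the variable names are illustrative and are not part of the tool.

```python
granted = 270            # career grants among resolved cases
resolved = 351           # career resolved cases
pending = 35             # currently pending
interview_lift = 0.128   # +12.8% lift in resolved cases with interview (assumed additive)
delta_vs_tc = 0.149      # +14.9% vs Tech Center average (assumed additive)

allow_rate = granted / resolved                 # 0.769 -> shown as 77%
total_applications = resolved + pending         # 386 total applications
with_interview = allow_rate + interview_lift    # 0.897 -> shown as 90%
implied_tc_average = allow_rate - delta_vs_tc   # ~0.62 implied TC average

print(f"career allow rate:        {allow_rate:.1%}")
print(f"total applications:       {total_applications}")
print(f"grant prob. w/ interview: {with_interview:.1%}")
print(f"implied TC average:       {implied_tc_average:.1%}")
```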

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 63.9% (+23.9% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§112: 11.8% (-28.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 351 resolved cases
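For readers who want the baseline implied by the deltas, the short sketch below assumes each "vs TC avg" figure is simply the examiner's value minus the Tech Center average; under that assumption, all four statutes back out to roughly the same 40% baseline. This is a worked example from the numbers shown, not output from the tool.

```python
# Examiner figures and deltas as reported above (percent).
examiner = {"§101": 5.3, "§103": 63.9, "§102": 9.8, "§112": 11.8}
delta_vs_tc = {"§101": -34.7, "§103": 23.9, "§102": -30.2, "§112": -28.2}

for statute, value in examiner.items():
    implied_tc_avg = value - delta_vs_tc[statute]
    print(f"{statute}: examiner {value:.1f}% vs implied TC average {implied_tc_avg:.1f}%")
# Each statute backs out to an implied baseline of 40.0%.
```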

Office Action

§103
DETAILED ACTION

Applicant's submission filed on 11/07/2025 has been entered. Claims 1-2, 4-9, 11-20 are pending in the application.

Claim Rejections - 35 USC § 103

The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 7, 8, 9, 14, 15, 16, and 20 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Smolyanskiy et al. (US 20130121526) in view of Du et al. (US 20140035934).
Regarding claim 1, Smolyanskiy discloses: A method comprising: generating, by an electronic device in a first stage, a user-specific expression model, comprising: capturing, by one or more sensors of the electronic device, first sensor data of a plurality of predefined expressions of a face of a user, determining a representation of the face of the user based on the sensor data, and generating the user-specific expression model based on the representation of the face of the user (Smolyanskiy, claim 16: "A method for generating a face model of a user's face, comprising: capturing a series of frames containing the user's face in a variety of head poses and facial expressions; fitting two-dimensional feature points to the user's face in each frame in a batch of frames using a two-dimensional face alignment technique; determining a desired batch size that dictates how many images from the series of images to include in the batch of frames; selecting frames to include in the batch of frames using the desired batch size; constructing an energy function over the batch of frames; solving the energy function for three-dimensional head shape parameter updates that are valid for the batch of frames and each frame contained therein; generating a face model of the user's face from the three-dimensional head shape parameter update").

On the other hand, Smolyanskiy fails to explicitly disclose, but Du discloses: driving, by the electronic device in a second stage, an avatar, comprising: capturing, by the electronic device, second sensor data of the face of the user, and determine, by the electronic device, expression parameters from the second sensor data, wherein the expression parameters and the user-specific expression model are used to animate the avatar (Du, Fig. 4: "[0052] these gestures and expressions may be expressed as animation parameters. Such animation parameters are transferred to a graphics rendering engine. In this way, the avatar system will be able to reproduce the original user's facial expression on a virtual 3D model. [0055] As shown in FIG. 4, a video frame is read at a block 402. In embodiments, this video frame may be read from a camera placed in front of a user. From this, the face tracking module analyzes the face area, and calculates the animation parameters according to the facial image. As shown in FIG. 4, this may involve the performance of blocks 404-412. [0057] FIG. 4 shows that, at a block 414, the animation parameters are sent to a rendering engine. In turn, the rendering engine drives an avatar 3D model based on the animation parameters at a block 416. [0060] In these blocks, a head model is projected onto a face area detected within the video frame that was read at block 402. More particularly, embodiments may employ a parameterized 3D head model to help the facial action tracking. The shape (e.g., the wireframe) of the 3D model is fully controlled by a set of parameters. In projecting the 3D model onto the face area of the input image, its parameters are adjusted so that the wireframe changes its shape and matches the user head position and facial expression. [0063] Thus, the control parameters of the 3D head model may be repeatedly updated until a satisfactory convergence with the current face occurs"). Therefore, the 3D head model with a satisfactory convergence corresponds to the user-specific expression model.
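For orientation, the claim-1 mapping above describes a two-stage structure: a first stage that builds a user-specific expression model from captures of predefined expressions, and a second stage that solves expression parameters from new sensor data and uses them, together with that model, to animate an avatar. The sketch below illustrates only that general structure; it is not the applicant's method, Smolyanskiy's batch energy-function solver, or Du's tracking loop, and every function name and the blendshape-style model are assumptions made for illustration. A least-squares fit stands in for whatever per-frame parameter estimation the second stage actually uses.

```python
import numpy as np

def build_expression_model(expression_frames: dict[str, np.ndarray]) -> dict:
    """Stage 1 (hypothetical): turn per-expression vertex captures into a
    user-specific model stored as a neutral mesh plus per-expression offsets."""
    neutral = expression_frames["neutral"]
    deltas = {name: verts - neutral
              for name, verts in expression_frames.items() if name != "neutral"}
    return {"neutral": neutral, "deltas": deltas}

def solve_expression_parameters(observed: np.ndarray, model: dict) -> dict[str, float]:
    """Stage 2 (hypothetical): least-squares weights that best explain the
    currently observed face in terms of the user's expression offsets."""
    names = list(model["deltas"])
    basis = np.stack([model["deltas"][n].ravel() for n in names], axis=1)
    residual = (observed - model["neutral"]).ravel()
    weights, *_ = np.linalg.lstsq(basis, residual, rcond=None)
    return dict(zip(names, np.clip(weights, 0.0, 1.0)))

def animate_avatar(neutral_avatar: np.ndarray,
                   avatar_deltas: dict[str, np.ndarray],
                   params: dict[str, float]) -> np.ndarray:
    """Retarget the solved parameters onto the avatar's own expression offsets."""
    out = neutral_avatar.copy()
    for name, weight in params.items():
        if name in avatar_deltas:
            out = out + weight * avatar_deltas[name]
    return out
```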
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Smolyanskiy and Du, to include all limitations of claim 1. That is, applying the avatar-driving steps of Du after the method of Smolyanskiy is performed. The motivation/suggestion would have been that a practical, real-time (or super real-time), online and low communication bandwidth avatar system may be implemented (Du, [0014]).

Regarding claim 2, Smolyanskiy in view of Du discloses the method of claim 1. Smolyanskiy further discloses wherein the first sensor data is captured from a plurality of angles with respect to the user (Smolyanskiy, claim 2: "capturing a plurality of frames of the user's face over time in different head poses and facial expressions such that the batch of frames contains head pose diversity and facial expression diversity"). Therefore, different head poses indicate a plurality of angles with respect to the user.

Regarding claim 7, Smolyanskiy in view of Du discloses the method of claim 1. On the other hand, Smolyanskiy fails to explicitly disclose, but Du discloses, wherein the expression parameters and the user-specific expression model are used to animate the avatar by causing the avatar to mimic an expression in the second sensor data (Du, "[0052] Such animation parameters are transferred to a graphics rendering engine. In this way, the avatar system will be able to reproduce the original user's facial expression on a virtual 3D model. [0061] For instance, FIG. 4 shows that, at block 404, the head model is projected onto the detected face (also referred to as the current face). As indicated by a block 412, blocks 404-410 may be repeated if the 3D head model and the current face have not converged within a predetermined amount. Otherwise, operation may proceed to a block 414. [0063] Thus, the control parameters of the 3D head model may be repeatedly updated until a satisfactory convergence with the current face occurs"). The same motivation as for claim 1 applies here.

Regarding claims 8, 9, and 14, Du further discloses "[0090] Some embodiments may be implemented, for example, using a storage medium or article which is machine readable. The storage medium may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method and/or operations in accordance with the embodiments. Such a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented using any suitable combination of hardware and/or software". Therefore, they are interpreted and rejected for the same reasons set forth in claims 1, 2, and 7, respectively.

Regarding claims 15, 16, and 20, Du further discloses the same machine-readable storage medium embodiments quoted above at [0090].
Therefore, they are interpreted and rejected for the same reasons set forth in claims 1, 2, and 7, respectively.

Claims 4, 11, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Smolyanskiy et al. (US 20130121526) in view of Du et al. (US 20140035934), and further in view of Deering (US 6525725).

Regarding claim 4, Smolyanskiy in view of Du discloses the method of claim 1. On the other hand, Smolyanskiy in view of Du fails to explicitly disclose, but Deering discloses, wherein the representation of the face of the user comprises a polygonal mesh for each expression of the plurality of predefined expressions (Deering, "(58) For example, in one embodiment graphics system 112 may be configured to store a plurality of polygons into memory 162, wherein the polygons correspond to an object (e.g., a human face) in a different state (e.g., with a happy expression, with a sad expression, with a confused expression, etc.). Graphics processor 160 may then be configured to morph different combinations of the stored object states in response to receiving particular instructions. Such a configuration would allow the graphics system to render a face with a wide variety of expressions based on a few predefined expressions stored in memory 162").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Deering into the combination of Smolyanskiy and Du, to include all limitations of claim 4. That is, applying the predefined expressions of Deering to the facial expressions of Smolyanskiy and Du. The motivation/suggestion would have been that the weighting functions used may use all or merely a subset of the stored object states to generate the desired intermediate expression (Deering, (58)). Regarding claims 11 and 17, they are interpreted and rejected for the same reasons set forth in claim 4.

Claims 5, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Smolyanskiy et al. (US 20130121526) in view of Du et al. (US 20140035934), and further in view of Farrer et al. (US 20120176379).

Regarding claim 5, Smolyanskiy in view of Du discloses the method of claim 1. On the other hand, Smolyanskiy in view of Du fails to explicitly disclose, but Farrer discloses, wherein the first sensor data comprises image data and depth data (Farrer, "[0043] As is described above, the output of the 3-D camera 104 comprises a series of frames (indexed by n=1, 2, . . . , N), for example, at 24 or 30 frames per second. The resulting frames can include a sequence of 2-D intensity images q^n(x, y) 108 (e.g., a gray scale intensity image) and a sequence of depth maps z^n(x, y) 106 that provide 3-D information"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Farrer into the combination of Du and Smolyanskiy. That is, applying the 2D image and depth image of Farrer to generate the image data of Smolyanskiy in view of Du. The motivation/suggestion would have been that a two-dimensional mesh animation is determined based on motion tracking in the acquired images, and the two-dimensional mesh animation is then combined with the depth maps to form a three-dimensional mesh animation suitable for rendering (Farrer, abstract). Regarding claims 12 and 18, they are interpreted and rejected for the same reasons set forth in claim 5.
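The claim-4 mapping leans on Deering's stored per-expression polygon meshes blended by weighting functions, and the claim-5 mapping on Farrer's paired intensity images and depth maps. The following is a hedged sketch of the Deering-style morph only, under the assumption that the stored expression meshes share the same vertex topology; it is illustrative and is not Deering's implementation.

```python
import numpy as np

def morph_expressions(stored_meshes: dict[str, np.ndarray],
                      weights: dict[str, float]) -> np.ndarray:
    """Blend a subset of the stored expression meshes (same vertex topology)
    into a single intermediate-expression mesh."""
    total = sum(weights.values())
    if total <= 0:
        raise ValueError("at least one positive weight is required")
    blended = np.zeros_like(next(iter(stored_meshes.values())), dtype=float)
    for name, weight in weights.items():
        blended += (weight / total) * stored_meshes[name]
    return blended

# Hypothetical usage: mostly happy with a hint of confusion, ignoring other stored states.
# meshes = {"happy": happy_verts, "sad": sad_verts, "confused": confused_verts}
# face = morph_expressions(meshes, {"happy": 0.8, "confused": 0.2})
```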
Allowable Subject Matter

Claims 6, 13, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: Regarding claim 6, it recites wherein the expression parameters and the user-specific expression model are used to animate the avatar by performing an optimization of the expression parameters and animation priors for the avatar. None of the prior art of record, or any of the prior art searched, alone or in combination, renders obvious the combination of elements recited in the claim as a whole. Regarding claims 13 and 19, they are interpreted and allowed for the same reasons set forth in claim 6.

Response to Arguments

Applicant's arguments filed on 11/07/2025 have been fully considered but they are not persuasive.

The applicant submitted: Du is silent regarding any such user-specific expression model because, at best, Du describes the parameters, and the avatar, and is silent regarding the avatar being related to a specific user at all. As such, nothing in Du describes using a user-specific expression model and the expression parameters to animate the avatar (remarks, page 7).

The examiner respectfully disagrees. Du discloses "[0060] The shape (e.g., the wireframe) of the 3D model is fully controlled by a set of parameters. In projecting the 3D model onto the face area of the input image, its parameters are adjusted so that the wireframe changes its shape and matches the user head position and facial expression. [0063] Thus, the control parameters of the 3D head model may be repeatedly updated until a satisfactory convergence with the current face occurs". Therefore, the face of the input image corresponds to a specific user's face, and the 3D head model with a satisfactory convergence corresponds to the user-specific expression model. Du further discloses "[0052] these gestures and expressions may be expressed as animation parameters. Such animation parameters are transferred to a graphics rendering engine. In this way, the avatar system will be able to reproduce the original user's facial expression on a virtual 3D model. [0057] FIG. 4 shows that, at a block 414, the animation parameters are sent to a rendering engine. In turn, the rendering engine drives an avatar 3D model based on the animation parameters at a block 416". Thus, the 3D head model with a satisfactory convergence, and the animation parameters with regard to expressions, are used to drive an avatar.

The applicant submitted: Moreover, there is no indication that it would be obvious to combine the 3D head shape of Smolyanskiy with the avatar of Du to obtain "determine, by the electronic device, expression parameters from the second sensor data, wherein the expression parameters and the user-specific expression model are used to animate the avatar." It would not be obvious to combine Du and Smolyanskiy because Smolyanskiy is not concerned with identifying new expressions but rather is concerned with reducing complexity of feature tracking (remarks, page 7).

The examiner respectfully disagrees. It would be obvious to combine Smolyanskiy and Du in that they are analogous prior art, both of which are related to animating a user's face using models, facial expressions, and animation parameters. The combination is applying the avatar-driving steps of Du after the method of Smolyanskiy is performed.
The motivation/suggestion would have been that a practical, real-time (or super real-time), online and low communication bandwidth avatar system may be implemented (Du, [0014]). As to the embodiments and purposes of Smolyanskiy and Du, they are irrelevant to the combination if they were not cited in the claim mapping.

The applicant submitted: Moreover, neither Du nor Smolyanskiy are directed to animating a digital character to mimic the face of a user, as described at paragraph [0005] of the Specification. To that end, Applicants respectfully submit that the combination of Smolyanskiy and Du are not obvious to combine, nor does the combination lead to the subject matter of the claims (remarks, page 8).

The examiner respectfully disagrees. Du teaches "[0052] Such animation parameters are transferred to a graphics rendering engine. In this way, the avatar system will be able to reproduce the original user's facial expression on a virtual 3D model. [0061] For instance, FIG. 4 shows that, at block 404, the head model is projected onto the detected face (also referred to as the current face). As indicated by a block 412, blocks 404-410 may be repeated if the 3D head model and the current face have not converged within a predetermined amount. Otherwise, operation may proceed to a block 414. [0063] Thus, the control parameters of the 3D head model may be repeatedly updated until a satisfactory convergence with the current face occurs". The parameters applied to the 3D head model are used to mimic the face of a user.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRACE Q LI, whose telephone number is (571) 270-0497. The examiner can normally be reached Monday - Friday, 8:00 am - 5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, DEVONA FAULK, can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GRACE Q LI/
Primary Examiner, Art Unit 2618
1/20/2026
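The indicated allowable subject matter (claim 6) recites animating the avatar by performing an optimization of the expression parameters and animation priors for the avatar. As a rough illustration of what a prior-regularized parameter solve can look like, the sketch below minimizes a data term plus a quadratic prior term; the specific prior (pulling the solution toward a prior mean such as the previous frame's parameters) and the function name are assumptions for illustration, not the application's disclosed optimization.

```python
import numpy as np

def solve_with_prior(basis: np.ndarray,       # (n_obs, n_params) expression basis
                     residual: np.ndarray,    # (n_obs,) observed minus neutral
                     prior_mean: np.ndarray,  # (n_params,) e.g. previous frame's params
                     prior_weight: float = 0.1) -> np.ndarray:
    """Minimize ||basis @ w - residual||^2 + prior_weight * ||w - prior_mean||^2,
    using the closed-form normal equations of the regularized least-squares problem."""
    n_params = basis.shape[1]
    lhs = basis.T @ basis + prior_weight * np.eye(n_params)
    rhs = basis.T @ residual + prior_weight * prior_mean
    return np.linalg.solve(lhs, rhs)
```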

Prosecution Timeline

May 09, 2023
Application Filed
Sep 22, 2024
Non-Final Rejection — §103
Dec 23, 2024
Response Filed
Mar 13, 2025
Final Rejection — §103
Jul 18, 2025
Request for Continued Examination
Jul 22, 2025
Response after Non-Final Action
Aug 05, 2025
Non-Final Rejection — §103
Nov 07, 2025
Response Filed
Jan 21, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602880
Controlling Augmented Reality Content Via Selection of Real-World Locations or Objects
2y 5m to grant Granted Apr 14, 2026
Patent 12602942
MODEL FINE-TUNING FOR AUTOMATED AUGMENTED REALITY DESCRIPTIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12597217
METHODS AND SYSTEMS FOR AUGMENTED REALITY IN AUTOMOTIVE APPLICATIONS
2y 5m to grant Granted Apr 07, 2026
Patent 12579762
OVERLAY ADAPTATION FOR VISUAL DISCRIMINATION
2y 5m to grant Granted Mar 17, 2026
Patent 12561922
CAPTURE AND DISPLAY OF POINT CLOUDS USING AUGMENTED REALITY DEVICE
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
77%
Grant Probability
90%
With Interview (+12.8%)
2y 5m
Median Time to Grant
High
PTA Risk
Based on 351 resolved cases by this examiner. Grant probability derived from career allow rate.
