Prosecution Insights
Last updated: April 19, 2026
Application No. 18/447,952

SYSTEMS AND METHODS FOR COMMENTING ON DIGITAL MEDIA CONTENT WITH LIVE AVATARS

Status: Non-Final OA (§103)
Filed: Aug 10, 2023
Examiner: COCHRAN, BRIANNA RENAE
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: Meta Platforms Technologies, LLC
OA Round: 3 (Non-Final)
Grant Probability: 40% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 2y 3m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 40% (2 granted / 5 resolved; -22.0% vs TC avg)
Interview Lift: -40.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 2y 3m typical timeline; 29 applications currently pending
Total Applications: 34 across all art units
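
The tiles above are simple ratios over the examiner's five resolved cases. A minimal sketch of the presumed arithmetic (illustrative Python; the variable names, and the assumption that the lift is measured against the career allow rate, are ours, not the tool's):

    # Presumed derivation of the examiner tiles above (assumed, not confirmed).
    granted, resolved = 2, 5
    career_allow_rate = granted / resolved             # 0.40 -> "40% Career Allow Rate"

    # "-22.0% vs TC avg" implies a Tech Center average allow rate of 62%.
    tc_avg_allow_rate = 0.62
    vs_tc_avg = career_allow_rate - tc_avg_allow_rate  # -0.22

    # "0% With Interview" against the 40% baseline gives the "-40.0% Interview Lift".
    rate_with_interview = 0.0
    interview_lift = rate_with_interview - career_allow_rate  # -0.40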

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 62.7% (+22.7% vs TC avg)
§102: 13.3% (-26.7% vs TC avg)
§112: 20.9% (-19.1% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 5 resolved cases
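
Each "vs TC avg" delta above is evidently the examiner's statute-level rate minus the Tech Center average marked by the black line. A minimal sketch of that subtraction (illustrative Python, not the tool's own code); note that every delta backs out to the same 40% Tech Center average estimate:

    # Statute-level rates and deltas as displayed above.
    # Assumption: delta = examiner rate - Tech Center average.
    rates  = {"101": 3.2, "103": 62.7, "102": 13.3, "112": 20.9}   # percent
    deltas = {"101": -36.8, "103": 22.7, "102": -26.7, "112": -19.1}
    for statute, rate in rates.items():
        implied_tc_avg = rate - deltas[statute]  # 40.0 for every statute
        print(f"§{statute}: {rate}% ({deltas[statute]:+.1f}% vs TC avg; implied TC avg {implied_tc_avg:.1f}%)")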

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Arguments

This is in response to applicant's amendment/response filed on 11/26/2025, which has been entered and made of record. Applicant's arguments regarding claim rejections under 35 U.S.C. 102 have been fully considered and are persuasive; the rejections under 35 U.S.C. 102 have been withdrawn. Applicant's arguments regarding claim rejections under 35 U.S.C. 103 have been fully considered, but they are not persuasive.

Applicant argues that the Office Action fails to explain or provide any rationale as to why one of ordinary skill in the art would have been motivated to modify Li's facial motion tracking system (which operates on facial features using floating point data and timestamps) with Assouline B2's full-body skeletal joint tracking system (which operates on body poses from monocular RGB images); that the references address different problems in entirely different contexts; and that the combination would degrade Li's facial gesture tracking capabilities. Applicant points out that Assouline B2 states "the body pose estimation system 124 may only need the wrist positions, elbow positions, shoulder positions and nose position to be visible in an image, but not the leg positions," and "if only the first user's arms are visible in the image, then only the avatar's arms are adjusted to mimic the first user's arm position," and contends that this coarse-grained body tracking is fundamentally different from Li's fine-grained facial gesture tracking.

The examiner respectfully disagrees. In response to applicant's argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988); In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992); and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, both Li and Assouline B2 are inventions focused on animating virtual avatars. Li animates avatars based on motion data from the top half of the human body, focusing predominantly on facial data; utilizing facial data to animate an avatar requires more precision. Assouline B2 animates avatars based on motion data of the entire human body at key skeletal joints. Thus, one of ordinary skill in the art would combine Li and Assouline B2 to obtain overall high-quality avatar animations by tracking the entire body using Assouline B2 together with Li's precise facial motion tracking, which would result in full-body animations with precise facial tracking.
Applicant's remaining arguments concern the amended claim language, which is fully addressed in the prior art rejections set forth below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6-7, 9-11, 14-15, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al., U.S. Patent Application Publication 2014/0361974 A1 (hereinafter Li).

Regarding claim 1, Li teaches a computer-implemented method comprising: identifying one or more expressive gestures (Facial Gestures, Para. 0014) of a user based on a video frame feed (Media Content) of a client computing device (Para. 0010 and 0017); mapping (Motion Module) the one or more expressive gestures (Facial Gestures, Para. 0014) of the user onto one or more features of an avatar associated with the user within a social networking system (Para. 0018-0020 and 0035); applying, after mapping (Motion Module, Para. 0018) the one or more expressive gestures of the user onto the one or more features (Facial or Body) of the avatar, one or more pre-generated animations (Group of Avatar/Animations based on Predetermined Value, Provided by Compilation Model, or Retrieved from a Server) to a body (Motion Data, Para. 0017-0020) of the avatar (Figs. 2 and 3: steps 342/344 or 354/356 align to mapping expressive gestures onto one or more features of the avatar, then steps 346 and 358 provide an avatar animation based on source facial motion data; Para. 0037 recites "previously extracted and stored singer facial motion data ... to animate the singer avatar"; Para. 0044 also teaches retrieving data from an "expression database"; Para. 0053 states that avatar animations can be based on facial motion data, and Para. 0018-0020 detail that facial motion data is mapped to the avatar); positioning the avatar over an area of a digital media (Media Content) feed (Para. 0035); and broadcasting the digital media (Media Content) feed with the positioned avatar to one or more co-users within the social networking system (Para. 0035-0038).

Li fails to explicitly teach wherein the one or more pre-generated animations are selected from a set of pre-generated animations. However, Li teaches that facial motion data can be stored for later use (Para. 0061) and that avatar data, which includes avatar animations, can be stored (Para. 0034 and 0068). Li details receiving avatar data from a server to render an avatar, which can be on a delayed basis such as receiving data from storage (Para. 0036). This means the data received from the server can be from storage and can be avatar data; thus, Li teaches pre-generated avatar animations. Li also teaches that avatars or avatar animations can be automatically selected or provided (Para. 0058).
If the avatar or avatar animations are not automatically selected, the user or users are given a choice to choose instead (Para. 0049, 0058, and 0066). Li clearly teaches recording multiple animations from users of the system, and the multiple animations may be used on a single avatar; thus, multiple animations can be selected (i.e., a set). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Li's pre-generated animations with Li's own teaching to incorporate selecting the animations from a set of pre-generated animations, since doing so would provide the benefit of allowing more than one animation to be used on an avatar to show multiple forms of expression, which increases the customization of the avatars and the flexibility of the system. Storing generated animations for later use also increases the efficiency of the system, since animations can be reused.

Regarding claim 2, Li teaches the computer-implemented method of claim 1, wherein identifying the one or more expressive gestures (Facial Gestures, Para. 0014) of the user comprises, for representative frames from the video frame feed: determining one or more gesture landmarks (Facial Motion Data, Para. 0017) in the representative frame; and identifying relative coordinates (Floating Point Data) for each of the one or more gesture landmarks (Facial Motion Data, Para. 0010 and 0017).

Regarding claim 3, Li teaches the computer-implemented method of claim 2, wherein determining the one or more gesture landmarks in the representative frame (Para. 0017) comprises identifying one or more of facial landmarks (Facial Motion Data, Para. 0017), hair landmarks (Scalp Hair or Facial Hair, Para. 0013), or body-shape landmarks (Body Motion Data, Para. 0036).

Regarding claim 6, Li teaches the computer-implemented method of claim 1, wherein positioning the avatar over an area of the digital media feed (Media Content) comprises at least one of (Para. 0035): positioning a face of the avatar over an area of the digital media feed (Figs. 1A and 1B), positioning an upper torso and face of the avatar over an area of the digital media feed, or positioning a body of the avatar over an area of the digital media feed.

Regarding claim 7, Li teaches the computer-implemented method of claim 1, wherein a digital media feed (Media Content, Para. 0017) comprises at least one of a front-facing camera video frame feed of the client computing device, a rear-facing camera video frame feed of the client computing device (Para. 0011), a digital image (Still Image or Picture, Para. 0017) from a camera roll associated with the user, or a short-form video available to the user via the social networking system (Para. 0035).

Regarding system claim 9, Li discloses the method of claim 1 as discussed above, as well as at least one physical processor and physical memory (Hard Drive) comprising computer-executable instructions that, when executed by the at least one physical processor (Para. 0059), cause the at least one physical processor to perform the method of claim 1. Therefore, claim 9 is rejected under the same rationale as claim 1. Claim 10 has similar limitations as claim 2 and is therefore rejected under the same rationale as claim 2. Claim 11 has similar limitations as claim 3 and is therefore rejected under the same rationale as claim 3.
Claim 14 has similar limitations as claim 6 and is therefore rejected under the same rationale as claim 6. Claim 15 has similar limitations as claim 7 and is therefore rejected under the same rationale as claim 7.

Regarding claim 17, Li discloses the method of claim 1 as discussed above, as well as a non-transitory computer-readable medium (Memory or Hard Drive) comprising one or more computer-executable instructions that, when executed by at least one processor of a computing device (Para. 0059), cause the computing device to perform the method of claim 1. Therefore, claim 17 is rejected under the same rationale as claim 1. Claim 18 has similar limitations as claim 2 and is therefore rejected under the same rationale as claim 2. Claim 19 has similar limitations as claim 3 and is therefore rejected under the same rationale as claim 3.

Claims 4-5, 12-13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al., U.S. Patent Application Publication 2014/0361974 A1 (hereinafter Li), in view of Assouline et al., U.S. Patent 11,557,075 B2 (hereinafter Assouline B2).

Regarding claim 4, Li teaches the computer-implemented method of claim 2, wherein mapping the one or more expressive gestures (Facial Gestures, Para. 0014) of the user to the one or more features of the avatar associated with the user (Motion Module, Para. 0018-0020) comprises, in connection with each frame in the video frame feed: determining one or more avatar landmarks (Avatar Content) that correspond to the one or more gesture landmarks (Facial Motion Data, Para. 0017-0019). However, Li fails to teach adjusting positions of the one or more avatar landmarks based on the relative coordinates of the one or more gesture landmarks. Li and Assouline B2 are analogous to the claimed invention because both are in the same field of creating avatars based on a user's features or movement. Assouline B2 teaches wherein mapping the one or more expressive gestures (Poses) of the user to the one or more features of the avatar (Skeletal Rig) associated with the user comprises, in connection with each frame in the video frame feed (Virtual Object Modification Module, Col. 11, Lines 36-52), adjusting positions of the one or more avatar landmarks (Virtual Object Modification Module) based on the relative coordinates (X, Y Coordinates) of the one or more gesture landmarks (Skeletal Joint Features, Col. 10, Lines 36-65). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Li's avatar and gesture landmarks, which are mapped by relative positions, to incorporate Assouline B2's exact coordinate system for mapping, since doing so would provide the benefit of a more accurate mapping between the avatar and the user's gestures.

Regarding claim 5, Li teaches the computer-implemented method of claim 4, further comprising mapping (Motion Module, Para. 0018) one or more facial landmarks of the user to one or more facial landmarks (Facial Motion Data, Para. 0019) of the avatar while mapping the pre-generated animations (Predetermined Parameter) to body landmarks of the avatar (Para. 0032).
Claim 12 has similar limitations as claim 4 and is therefore rejected under the same rationale as claim 4. Claim 13 has similar limitations as claim 5 and is therefore rejected under the same rationale as claim 5. Claim 20 has similar limitations as claim 4 and is therefore rejected under the same rationale as claim 4.

Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al., U.S. Patent Application Publication 2014/0361974 A1 (hereinafter Li), in view of Assouline et al., U.S. Patent Application Publication 2022/0156999 A1 (hereinafter Assouline A1).

Regarding claim 8, Li teaches the computer-implemented method of claim 1, wherein broadcasting the digital media feed (Media Content) and the positioned avatar to the one or more co-users (Multiple Users, Para. 0038) within the social networking system comprises (Para. 0035 and 0036): determining a social networking system platform (Online Social Community) associated with the digital media feed (Media Content, Para. 0035); and making the digital media feed (Media Content) with the positioned avatar available (Rendered) via the determined social networking system platform (Online Social Community, Para. 0035). However, Li fails to teach generating a notification for the digital media feed in association with the user, and providing the notification to the one or more co-users in connection with the social networking system platform. Li and Assouline A1 are analogous to the claimed invention because both are in the same field of creating virtual avatars from a user's body motion and being connected to a social network. Assouline A1 teaches generating a notification for the digital media feed (Messaging Client 104) in association with the user (Para. 0029), and providing the notification to the one or more co-users (Other Users, Friends, or Group Chat Users, Para. 0029) in connection with the social networking system platform (Para. 0025 and 0033). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Li's media content of an avatar to incorporate Assouline A1's notifications, since Li's media content can be shared with multiple users (Li, Para. 0038) or online social communities (Li, Para. 0035). Alerting users with notifications has become a standard in social media; hence it would be obvious to use notifications to alert users when someone has posted media content with an avatar. Claim 16 has similar limitations as claim 8 and is therefore rejected under the same rationale as claim 8.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIANNA R COCHRAN, whose telephone number is (571) 272-4671. The examiner can normally be reached Mon-Fri, 7:30am - 5:00pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/BRIANNA RENAE COCHRAN/
Examiner, Art Unit 2615

/ALICIA M HARRINGTON/
Supervisory Patent Examiner, Art Unit 2615

Prosecution Timeline

Aug 10, 2023: Application Filed
May 21, 2025: Non-Final Rejection — §103
Aug 15, 2025: Response Filed
Sep 29, 2025: Final Rejection — §103
Nov 26, 2025: Response after Non-Final Action
Dec 29, 2025: Request for Continued Examination
Jan 17, 2026: Response after Non-Final Action
Feb 13, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner involving similar technology

Patent 12541922: METHOD FOR GENERATING A MODEL FOR REPRESENTING RELIEF BY PHOTOGRAMMETRY
Granted Feb 03, 2026 (2y 5m to grant)

Patent 12482144: METHOD AND APPARATUS OF ENCODING/DECODING POINT CLOUD GEOMETRY DATA USING AZIMUTHAL CODING MODE
Granted Nov 25, 2025 (2y 5m to grant)

Patent 12417567: METHOD FOR GENERATING SIGNED DISTANCE FIELD IMAGE, METHOD FOR GENERATING TEXT EFFECT IMAGE, DEVICE AND MEDIUM
Granted Sep 16, 2025 (2y 5m to grant)
Study what changed to get these applications past this examiner, based on the 3 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 40%
With Interview: 0% (-40.0%)
Median Time to Grant: 2y 3m
PTA Risk: High

Based on 5 resolved cases by this examiner. Grant probability is derived from the career allow rate.
