Prosecution Insights
Last updated: April 19, 2026
Application No. 18/433,618

3D DIGITAL VIRTUAL CHARACTER GENERATION FOR VIDEO CONFERENCING

Non-Final OA (§103)

Filed: Feb 06, 2024
Examiner: ZALALEE, SULTANA MARCIA
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Zoom Video Communications, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 71% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 7m
Grant Probability With Interview: 86%
Examiner Intelligence

Career Allow Rate: 71%, above average (346 granted / 488 resolved; +8.9% vs TC avg)
Interview Lift: strong, +15.1% among resolved cases with interview
Typical Timeline: 2y 7m average prosecution; 30 applications currently pending
Career History: 518 total applications across all art units
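The headline numbers in this block are simple ratios over the examiner's career counts. A minimal sketch of the arithmetic, assuming the displayed percentages are rounded and that the interview lift is applied additively to the career rate (both are inferences from the numbers shown, not stated by the source):

```python
# Career allow rate: granted cases divided by resolved cases, as a percentage.
granted, resolved = 346, 488
allow_rate = round(100 * granted / resolved, 1)
print(allow_rate)  # 70.9, displayed as 71%

# "With interview" probability: career allow rate plus the +15.1% interview lift.
# Treating the lift as a simple additive adjustment is an assumption.
with_interview = round(allow_rate + 15.1)
print(with_interview)  # 86, matching the displayed 86%
```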

Statute-Specific Performance

Statute   Rate     vs TC avg
§101      7.8%     -32.2%
§103      56.3%    +16.3%
§102      11.4%    -28.6%
§112      13.8%    -26.2%

Tech Center averages are estimates. Based on career data from 488 resolved cases.
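Each "vs TC avg" delta is the examiner's rate minus a Tech Center baseline, so the baseline can be recovered by subtraction. A quick check (an illustration, not data from the source) shows all four statutes imply the same 40% baseline, consistent with a single TC-wide estimate:

```python
# (rate, delta) pairs taken from the table above, in percent.
rows = {"§101": (7.8, -32.2), "§103": (56.3, +16.3),
        "§102": (11.4, -28.6), "§112": (13.8, -26.2)}

# Implied baseline for each statute: examiner rate minus the delta vs that baseline.
implied = {s: round(rate - delta, 1) for s, (rate, delta) in rows.items()}
print(implied)  # each statute implies the same 40.0% Tech Center baseline
```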

Office Action

§103
DETAILED ACTION

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 8 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Rabinovich et al. (US 20210392296 A1) in view of Wang et al. (Wang, Jingying, et al., "Fully automatic blendshape generation for stylized characters," 2023 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), IEEE, March 2023).
RE claim 1, Rabinovich teaches a computer implemented method (abstract, Fig 1), comprising: joining a video conference involving a plurality of participants (abstract, Figs 1, 5, [0060]); determining an expression parameter vector from a video of a participant of the plurality of participants during the video conference (Fig 7, [0088], [0183], [0158]); generating a virtual character customized for the participant from a virtual character face model by at least applying the expression parameter vector and by incorporating a virtual character neutral face model customized for the participant, the virtual character neutral face model describing a neutral face of the virtual character customized for the participant (abstract, Figs 1, 7, 15, [0074], [0088], [0102]-[0111], [0375]-[0377], [0141]-[0142], [0154], [0158], [0183]-[0188], [0264], [0417]-[0420], etc., wherein the customized 3D avatar model is created using the neutral 3DMM template model and mapping the determined shape, pose and expression parameters); and rendering the virtual character customized for the participant in a video stream of the participant (abstract, Fig 1, [0060], [0074]-[0076]).

Rabinovich is silent regarding a set of virtual character expressions customized for the participant, each of the set of virtual character expressions describing a facial expression of the virtual character customized for the participant. However, Wang teaches expression transfer for stylized characters with different topologies in real time (abstract, Figs 1-2, 7, page 349 col 1).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Rabinovich a set of virtual character expressions customized for the participant and an initial 3D model, as suggested by Wang, in order to transfer expressions for stylized characters with different topologies in real time, thereby increasing system effectiveness and user experience.

Claim 8 recites limitations similar in scope to limitations of claim 1 and is therefore rejected under the same rationale. In addition, Rabinovich teaches a system comprising: a non-transitory computer-readable medium; and a processor communicatively coupled to the non-transitory computer-readable medium ([0009]).

Claim 15 recites limitations similar in scope to limitations of claim 1 and is therefore rejected under the same rationale. In addition, Rabinovich teaches a non-transitory computer-readable medium comprising processor-executable instructions ([0008]).

Claims 2-7, 9-14 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Rabinovich as modified by Wang, and further in view of Bouaziz et al. (US 20140362091 A1).

RE claim 2, Rabinovich as modified by Wang teaches wherein the virtual character neutral face model is generated before a start of the video conference, and wherein generating the virtual character neutral face model comprises: extracting a facial feature vector from an image of the participant (Rabinovich [0178], [0375]-[0378], Wang Fig 1, page 348 col 1-2). Rabinovich as modified by Wang is silent regarding combining a set of virtual character face bases according to the facial feature vector to generate the virtual character neutral face model. However, Bouaziz teaches generating the neutral character face model (Figs 1-3, [0022], [0060]).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Rabinovich as modified by Wang a system and method of combining a set of virtual character face bases according to the facial feature vector to generate the virtual character neutral face model, as suggested by Bouaziz, in order to effectively generate the neutral character face model, thereby increasing system effectiveness and user experience.

RE claim 3, Rabinovich as modified by Wang and Bouaziz teaches wherein the virtual character neutral face model customized for the participant comprises a base neutral model and one or more accessory models (Rabinovich [0258], [0818]).

RE claim 4, Rabinovich as modified by Wang and Bouaziz teaches wherein the set of virtual character expressions customized for the participant are generated by combining a set of virtual character expression bases based on the facial feature vector for the participant (Rabinovich [0178], [0375]-[0378], Wang abstract, Figs 1-2, 7, page 349 col 1).

RE claim 5, Rabinovich as modified by Wang and Bouaziz teaches wherein the set of virtual character face bases are generated by applying a deformation transfer to the virtual character face model, a human base face, and a set of human face bases (Rabinovich [0178], [0375]-[0378], [0418], Bouaziz Figs 1-4, [0023], Wang abstract, Figs 1-2, 7, page 349 col 1).

RE claim 6, Rabinovich as modified by Wang and Bouaziz teaches wherein the set of virtual character expression bases comprises a subset of expression bases for each virtual character face base in the set of virtual character face bases (Wang abstract, Figs 1-2, 7, page 349 col 1, Bouaziz Figs 1-4, [0023]).
RE claim 7, Rabinovich as modified by Wang and Bouaziz teaches wherein a k-th virtual character expression base in a subset of expression bases for a j-th virtual character face base is generated by applying the deformation transfer to the virtual character face model incorporated with the j-th virtual character face base, the human base face incorporated with a j-th human face base, and the human base face incorporated with the j-th human face base and a k-th human expression base of a set of human expression bases (Rabinovich [0178], [0375]-[0378], [0418], Bouaziz Figs 1-4, [0023], [0061], Wang abstract, Figs 1-2, 7, page 349 col 1).

Claims 9-14 recite limitations similar in scope to limitations of claims 2-7 and are therefore rejected under the same rationale. Claims 16-20 recite limitations similar in scope to limitations of claims 2-6 and are therefore rejected under the same rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (see attached 892).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SULTANA MARCIA ZALALEE, whose telephone number is (571) 270-1411. The examiner can normally be reached Monday-Friday, 8:00am-4:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Sultana M Zalalee/
Primary Examiner, Art Unit 2614

Prosecution Timeline

Feb 06, 2024: Application Filed
Nov 18, 2025: Non-Final Rejection (§103)
Apr 06, 2026: Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602876: ANNOTATION TOOLS FOR RECONSTRUCTING THREE-DIMENSIONAL ROOF GEOMETRY (granted Apr 14, 2026; 2y 5m to grant)
Patent 12592035: Fused Bounding Volume Hierarchy for Multiple Levels of Detail (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586146: PROGRESSIVE MATERIAL CACHING (granted Mar 24, 2026; 2y 5m to grant)
Patent 12573150: POLYGON CORRECTION METHOD AND APPARATUS, POLYGON GENERATION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561908: TOPOLOGICALLY CONSISTENT MULTI-VIEW FACE INFERENCE USING VOLUMETRIC SAMPLING (granted Feb 24, 2026; 2y 5m to grant)
Based on the 5 most recent grants. Study what changed in each case to get past this examiner.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 71%
Grant Probability With Interview: 86% (+15.1%)
Median Time to Grant: 2y 7m
PTA Risk: Low

Based on 488 resolved cases by this examiner. Grant probability derived from career allow rate.
