Prosecution Insights
Last updated: April 19, 2026
Application No. 18/755,922

USER APPEARANCE MODIFICATION FOR VIDEO COMMUNICATION

Status: Non-Final OA (§102)
Filed: Jun 27, 2024
Examiner: TRAN, QUOC DUC
Art Unit: 2691
Tech Center: 2600 — Communications
Assignee: Motorola Mobility LLC
OA Round: 1 (Non-Final)

Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 86% (720 granted / 841 resolved; +23.6% vs TC avg; above average)
Interview Lift: +4.8% (a minimal lift; based on resolved cases with interview)
Typical Timeline: 2y 7m average prosecution
Career History: 858 total applications across all art units; 17 currently pending

Statute-Specific Performance

§101: 5.0% (-35.0% vs TC avg)
§103: 43.3% (+3.3% vs TC avg)
§102: 30.5% (-9.5% vs TC avg)
§112: 5.3% (-34.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 841 resolved cases

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 5-9 and 12-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Balaji et al. (2023/0030170).

Consider claims 1, 12 and 14: Balaji et al. teach a client device, method and system comprising: at least one memory; and at least one processor coupled with the at least one memory and configured to cause the client device (par. 0022) to: [receive, from a client device, input user appearance data associated with a video communication (claim 14)] (par. 0045; “the system receives video content within a video communication session of a video communication platform”); detect that input user appearance data for a video communication exceeds a threshold variation from defined user appearance data associated with a user profile (par. 0054-0063; “the appearance adjustment request may be related to, e.g., one or more of: making adjustments to the user's facial shape, applying virtual makeup or other beautification or aesthetic elements to the user's face, teeth whitening, teeth shape alteration, hairstyle modification, hair texture modification, addition of an accessory such as a hat or glasses, changes to the user's clothing, or any other suitable adjustment which may be contemplated”; “rather than receiving the appearance adjustment request from a client device, the system detects that an appearance adjustment should be requested based on one or more adjustment detection factors, then automatically generates an appearance adjustment request including an adjustment depth. …. The system then detects when an appearance adjustment may be needed based on one or more factors. In some embodiments, such adjustment detection factors may include, e.g., detected facial features visible in the video content such as wrinkles, spots, blemishes, or skin non-uniformities. In some embodiments, a user may specify parameters for when the system should detect that an appearance adjustment is needed. For example, a user may specify in a video setting that the system should automatically adjust appearance when skin blemishes show up on the screen”; “The adjustment depth determines the threshold for whether a given skin area is to be classified as a smooth texture region as compared to a rough texture region. For example, if the adjustment depth received is 20%—i.e., the appearance adjustment should only be applied at 20% intensity to the user's image—then the system set a threshold for a skin area to be rough to be relatively high”); modify the input user appearance data based at least in part on the defined user appearance data to generate modified user appearance data (par. 0054; 0081-0083); and output the modified user appearance data as part of the video communication (par. 0083; “The preview window 412 now shows a modified image of a user”).

Consider claim 2: Balaji et al. teach wherein the input user appearance data is based at least in part on image data of a user captured in real time (par. 0052; “The videos displayed for at least a subset of the participants appear within each participant's corresponding participant window. Video may be, e.g., a live feed which is streamed from the participant's client device to the video communication session. In some embodiments, the system receives video content depicting imagery of the participant, with the video content having multiple video frames. The system provides functionality for a participant to capture and display video imagery to other participants. For example, the system may receive a video stream from a built-in camera of a laptop computer, with the video stream depicting imagery of the participant”).

Consider claim 3: Balaji et al. teach wherein the at least one processor is configured to cause the client device to detect, prior to initiation of the video communication, that the input user appearance data for the video communication exceeds the threshold variation from the defined user appearance data (par. 0065; “the modification of the imagery is performed such that as soon as a user selects the UI element for touching up the user's appearance, a preview video is displayed in real time or substantially real time showing the user's video if the appearance adjustment is applied”).

Consider claims 5 and 15: Balaji et al. teach wherein the at least one processor is configured to cause the client device to generate the defined user appearance data based at least in part on one or more of user appearance data captured during one or more previous video communications, user appearance data from one or more stored user images, or user input specifying a preferred visual appearance (par. 0053; “the user may have navigated within a user interface on their client device to the video settings UI window, and then checked a “touch up my appearance” checkbox or manipulated another such UI element”; par. 0055; quoted above with respect to claims 1, 12 and 14).

Consider claims 6 and 16: Balaji et al. teach wherein the at least one processor is configured to cause the client device to detect, based at least in part on user camera preference data associated with a video application, that the input user appearance data for the video communication exceeds the threshold variation from the defined user appearance data (par. 0026; 0046; “The user can select one or more video settings options to touch up the user's appearance and/or adjust the video for low light conditions. The settings include a granular control element, such as a slider, which allows the user to select a precise amount of appearance adjustment depth and/or lighting adjustment depth”; par. 0055; quoted above).

Consider claims 7, 13 and 17: Balaji et al. teach wherein to modify the input user appearance data, the at least one processor is configured to cause the client device to perform visual modification of one or more visual features of the input user appearance data based at least in part on one or more corresponding visual features of the defined user appearance data (par. 0054 and par. 0055; both quoted above).

Consider claims 8 and 18: Balaji et al. teach wherein to modify the input user appearance data, the at least one processor is configured to cause the client device to perform visual replacement of one or more visual features of the input user appearance data with one or more corresponding visual features of the defined user appearance data (par. 0054; quoted above).

Consider claims 9 and 19: Balaji et al. teach wherein to detect that the input user appearance data for the video communication exceeds the threshold variation from the defined user appearance data associated with the user profile, the at least one processor is configured to cause the client device to one or more of: compare hair state data associated with the input user appearance data to hair state data associated with the defined user appearance data; compare facial feature data associated with the input user appearance data to facial feature data associated with the defined user appearance data; or compare clothing appearance data associated with the input user appearance data to clothing appearance data associated with the defined user appearance data (par. 0054 and par. 0055; both quoted above).

Allowable Subject Matter

Claims 4, 10-11 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Any response to this action should be mailed to:
Mail Stop ____ (explanation, e.g., Amendment or After-final, etc.)
Commissioner for Patents
P.O. Box 1450
Alexandria, VA 22313-1450

Facsimile responses should be faxed to: (571) 273-8300.

Hand-delivered responses should be brought to:
Customer Service Window
Randolph Building
401 Dulany Street
Alexandria, VA 22314

Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUOC DUC TRAN, whose telephone number is (571) 272-7511. The examiner can normally be reached Monday-Friday, 8:30am - 5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Duc Nguyen, can be reached on (571) 272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Quoc D Tran/
Primary Examiner, Art Unit 2691
March 17, 2026

Prosecution Timeline

Jun 27, 2024
Application Filed
Mar 17, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598268: STAGE USER REPLACEMENT TECHNIQUES FOR ONLINE VIDEO CONFERENCES (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598251: PREVENTING DEEP FAKE VOICEMAIL SCAMS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592989: DETECTING A SPOOFED CALL (granted Mar 31, 2026; 2y 5m to grant)
Patent 12593011: APPARATUS AND METHODS FOR VISUAL SUMMARIZATION OF VIDEOS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12581033: ENFORCING A LIVENESS REQUIREMENT ON AN ENCRYPTED VIDEOCONFERENCE (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86% (90% with interview, a +4.8% lift)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 841 resolved cases by this examiner. Grant probability derived from career allow rate.
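The dashboard does not publish its exact model, but the headline numbers are reproducible from the examiner's career statistics quoted above (720 granted of 841 resolved, +4.8% interview lift). The sketch below assumes the simplest reading: grant probability equals the career allow rate, and the interview-adjusted figure is the base rate plus the lift; the function names are illustrative, not part of any published API.

```python
# Minimal sketch: derive the dashboard's headline projections from the
# examiner's career statistics. Formulas are assumptions for illustration.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases that were granted."""
    return granted / resolved

def with_interview(base_rate: float, interview_lift: float) -> float:
    """Interview-adjusted grant probability, capped at 100%."""
    return min(base_rate + interview_lift, 1.0)

base = allow_rate(granted=720, resolved=841)           # 0.8561... -> "86%"
adjusted = with_interview(base, interview_lift=0.048)  # 0.9041... -> "90%"

print(f"Career allow rate: {base:.0%}")   # prints "Career allow rate: 86%"
print(f"With interview:    {adjusted:.0%}")  # prints "With interview:    90%"
```

Rounding to whole percentages is what makes 720/841 (85.6%) display as 86% and 85.6% + 4.8% (90.4%) display as 90%.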
