Prosecution Insights
Last updated: April 19, 2026
Application No. 18/914,913

USER INTERFACES FOR MULTI-PARTICIPANT LIVE COMMUNICATION

Status: Non-Final OA (§103)
Filed: Oct 14, 2024
Examiner: DAILEY, THOMAS J
Art Unit: 2458
Tech Center: 2400 — Computer Networks
Assignee: Apple Inc.
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 81% — above average (694 granted / 859 resolved; +22.8% vs TC avg)
Interview Lift: +14.6% across resolved cases with interview (moderate, ~+15%)
Avg Prosecution: 3y 4m (typical timeline)
Currently Pending: 27
Total Applications: 886 (across all art units)
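The headline projections combine simple career statistics. A minimal sketch of how the displayed figures relate to one another (the additive-lift and rounding conventions here are assumptions inferred from the numbers shown, not a documented methodology):

```python
# Reproducing the dashboard's headline figures from its raw counts.
# Assumption: the grant probability is the simple career ratio, and the
# interview lift and TC-average delta are additive percentage points.
granted, resolved = 694, 859

allow_rate = granted / resolved * 100          # career allow rate, in percent
interview_lift = 14.6                          # percentage-point lift with interview
tc_delta = 22.8                                # points above the Tech Center average

print(round(allow_rate))                   # 81  (matches "Grant Probability")
print(round(allow_rate + interview_lift))  # 95  (matches "With Interview")
print(round(allow_rate - tc_delta, 1))     # 58.0 (implied TC-average allow rate)
```

Under these assumptions the three displayed numbers are mutually consistent: 694/859 ≈ 80.8%, which rounds to 81%, and 80.8 + 14.6 ≈ 95.4, which rounds to the displayed 95%.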

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 18.8% (-21.2% vs TC avg)
§112: 11.5% (-28.5% vs TC avg)
Comparisons are against a Tech Center average estimate. Based on career data from 859 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1-11 are pending. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Information Disclosure Statement

The various information disclosure statements (IDS) submitted in this application prior to this Office Action are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-11 are rejected under 35 U.S.C. 103 as being unpatentable over Toyama (US Pat. 6,806,898), hereafter “Toyama,” in view of Faulkner (US Pub. No. 2020/0186375).

As to claim 1, Toyama discloses a computer system configured to communicate with one or more display generation components, the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors (Abstract), the one or more programs including instructions for: during a video conference that includes at least a first participant that is a user of the computer system and a second participant that is a user of an external computer system, displaying, via the one or more display generation components, a video conference user interface for the first participant (Abstract and Fig. 2), wherein displaying the video conference user interface includes: in accordance with a determination that the video conference is in a particular mode and that gaze information for the second participant meets a set of one or more criteria, displaying a modified representation of the second participant (column 4, lines 44-56; particularly, “The viewpoint of each camera 222, 224, 226 is not in line with the spatial representations 240, 242, 244 of the participants 210, 212, 214. As such, the participants will be looking at their respective display devices instead of the particular participant that they are communicating with during the videoconference. Videoconferencing software module 260 is included to solve this problem by automatically adjusting gaze and head pose in the videoconferencing environment 200.” And further, column 6, lines 51-67; particularly, “The vision component 412 is activated when the video is captured, and analyzes vision data by detecting the head pose relative to the display (the orientation of the head), the eye gaze relative to the display (the direction of gaze), the outlines of the eyes and the position/outline of the face. The vision component 412 can use any suitable computer vision, pattern recognition, motion analysis, etc. system to track, detect and analyze the video sequences.”), wherein: as detected by one or more camera sensors of the external computer system, the second participant is not looking straight ahead with an even head orientation (column 4, lines 44-56 and column 6, lines 51-67, quoted above); and in the modified representation of the second participant, the second participant appears to be looking straight ahead with an even head orientation (column 4, lines 44-56, quoted above).

However, Toyama does not explicitly disclose, in accordance with a determination that the video conference is not in the particular mode or that the gaze information for the second participant does not meet the set of one or more criteria, displaying an unmodified representation of the second participant. But Faulkner discloses this limitation (Fig. 8A-8B and [0087]-[0089]; particularly, “In the example shown in FIG. 8A, the user interface 800 comprises several display areas showing a raw feed 801, a preview of suggested modifications 802, and a live communication feed 803. The user interface 800 also comprises an input control element 810 for displaying the operating mode and for allowing user input to control the operating mode.” i.e., the raw feed is not in “the particular mode,” and the user need not go along with the “suggested modifications,” so the raw feed is “an unmodified representation of the second participant”).

Therefore it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the application to combine the teachings of Toyama and Faulkner so as to provide a means for the system to selectively decide whether or not to modify existing streams, thereby providing more control to end users.

As to claims 10 and 11, they are rejected by a rationale similar to that set forth in claim 1’s rejection.

As to claim 2, the teachings of Toyama and Faulkner, combined for the same reasons set forth in claim 1’s rejection, further disclose: as detected by the one or more camera sensors of the external computer system, the second participant is not looking straight ahead with eyes facing forward; and in the modified representation of the second participant, the second participant appears to be looking straight ahead with eyes facing forward (Toyama, column 4, lines 44-56, quoted above. See also column 7, lines 1-40, particularly, “Once the analyzed data is placed in 3D space, the head-pose can be swiveled, and the eye gaze can be set in any direction in the virtual 3D space. In addition, the eye gaze can be set to look directly at the video capture device's 410 viewpoint of the 3D space, creating an impression of eye contact with other videoconferencing participants or anyone viewing the video transmission.”).

As to claim 3, the combined teachings further disclose that the set of one or more criteria includes a criterion that is met when, as detected by the one or more camera sensors of the external computer system, the second participant is looking at a display generation component of the external computer system (Toyama, column 4, lines 44-56 and column 7, lines 1-40, quoted above).

As to claim 4, the combined teachings further disclose that the set of one or more criteria includes a criterion that is met when the second participant, as detected by the one or more camera sensors of the external computer system, is looking at a particular type of application on a display generation component of the external computer system (Toyama, column 4, lines 44-56 and column 7, lines 1-40).

As to claim 5, the combined teachings further disclose that the particular mode is a screen sharing mode (Faulkner, [0140]).

As to claim 6, the combined teachings further disclose that the screen sharing mode is based on whether the second participant is sharing an application in the video conference (Faulkner, [0140]).

As to claim 7, the combined teachings further disclose that the particular mode is based on whether the second participant is determined to be a presenter (Faulkner, Fig. 8A, 8B, and [0059]-[0060]).

As to claim 8, the combined teachings further disclose that the second participant is determined to be the presenter based on a determination that the second participant is actively communicating (Faulkner, Fig. 8A, 8B, and [0059]-[0060]).

As to claim 9, the combined teachings further disclose that the second participant is determined to be the presenter based on a determination that the second participant is communicating above a threshold amount (Faulkner, Fig. 8A, 8B, and [0059]-[0060]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS J DAILEY, whose telephone number is (571) 270-1246. The examiner can normally be reached 9:30am-6:00pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Umar Cheema, can be reached at 571-270-3037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THOMAS J DAILEY/
Primary Examiner, Art Unit 2458

Prosecution Timeline

Oct 14, 2024: Application Filed
Mar 05, 2025: Response after Non-Final Action
Feb 04, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597054
METHOD AND SYSTEM OF FORWARDING CONTACT DATA
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12580953
METHOD AND SYSTEM FOR DETECTING ENCRYPTED FLOOD ATTACKS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12556589
MEDIA RESOURCE OPTIMIZATION
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12556605
Live Migration Of Clusters In Containerized Environments
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12549399
PROGRESS STATUS AFTER INTERRUPTION OF ONLINE SERVICE
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 81%
With Interview: 95% (+14.6%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 859 resolved cases by this examiner. Grant probability derived from career allow rate.
