Prosecution Insights
Last updated: April 19, 2026
Application No. 18/197,709

SIGNALING DEVIATIONS IN USER POSITION DURING A VIDEO CONFERENCE

Non-Final OA: §102, §103
Filed: May 15, 2023
Examiner: NGUYEN, PHUNG HOANG JOSEPH
Art Unit: 2691
Tech Center: 2600 (Communications)
Assignee: Google LLC
OA Round: 3 (Non-Final)
Grant Probability: 79% (Favorable)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (above average; 694 granted / 877 resolved; +17.1% vs TC avg)
Interview Lift: +32.1% among resolved cases with an interview
Typical Timeline: 2y 9m average prosecution; 32 applications currently pending
Career History: 909 total applications across all art units
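The headline figures above are simple ratios over the examiner's career data. Below is a quick Python sketch reproducing them from the raw counts shown, under the assumption (not stated by the dashboard) that the "vs TC avg" delta is an absolute percentage-point difference.

granted, resolved = 694, 877  # from "694 granted / 877 resolved"
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 79.1%, displayed as 79%

# Assumption: "+17.1% vs TC avg" is a percentage-point difference.
tc_avg = allow_rate * 100 - 17.1
print(f"Implied Tech Center average: {tc_avg:.1f}%")  # about 62.0%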

Statute-Specific Performance

§101: 5.6% (-34.4% vs TC avg)
§103: 56.8% (+16.8% vs TC avg)
§102: 15.2% (-24.8% vs TC avg)
§112: 8.2% (-31.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 877 resolved cases.

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-5, 7-10, 12-13, 15-18, and 20 are rejected under 35 U.S.C. 102(a)(1) as anticipated by, or, in the alternative, under 35 U.S.C. 103 as obvious over, Gardenfors in view of Cutler (US 2019/0320135).

Claims 1, 9, and 17: Gardenfors teaches a system, a medium, and a method comprising:

determining a reference position of a user within a field of view (FOV) of a camera associated with a client device (Gardenfors: "The process of providing a non-distracting indication of face positioning to the near-end user is shown in the flow diagram of FIG. 9. The face tracking program determines an indicium of the near-end user's face, at 902. This indicium is compared to the predetermined frame boundaries of the display window, at 904," [0041]; see also Figs. 4 and 5), wherein the user is one of a plurality of participants of a video conference (Gardenfors: video conference or chat with near-end and far-end users, [0029], [0031]-[0033]).

The examiner notes that Gardenfors does not use the term "field of view" (FOV), but, as shown above, Gardenfors' description fits the description of an FOV in the current specification, [0048]. Gardenfors therefore teaches the FOV by implication or, alternatively, renders it obvious. The examiner supports this obviousness position using several documents submitted by the applicants. In citing these additional documents, the examiner by no means suggests that the primary reference, Gardenfors, fails to teach the anticipation; they are cited to add clarity and to avoid any future dispute over the use of language.

receiving a video stream generated by the camera associated with the client device, the video stream comprising an image of the user of the client device (Gardenfors: [0031] and Fig. 4: the near-end user's image is captured by the camera, image capture subsystem 144);

determining, from the image, a current position of the user within the FOV (Gardenfors: [0033]: "An embodiment of the present disclosure utilizes face tracking technology to track the near-end user's face position."; see also Fig. 9, steps 904-908);

responsive to determining that a value of a metric reflecting a deviation of the current position from the reference position satisfies a threshold criterion, generating an alert to the user (Gardenfors: [0034]: "As long as the near-end user's face indicium is determined to be within the predetermined frame 503, the color dot 601 is displayed as a distinguishing color, for example green. When the near-end user's face indicium breaches one of the borders of the frame, or if no face is detected, the color dot 601 is displayed as a different color, for example red."; see also Figs. 6 and 9); and

responsive to detecting, while the threshold criterion remains satisfied, an expiration of a reframing time period indicative of generating a new reference position, generating the new reference position of the user within the FOV. (Gardenfors discloses this limitation, except the italicized portion, via at least Fig. 9 and [0034]-[0041]. Cutler, via at least [0047]-[0048] and Figs. 7A and 7B, discloses the italicized portion: "In FIG. 7A, at a time t1, the POV tracker 420 may determine a location of a first POV point C.sub.t1 of the local subject 2 relative to the local telepresence device 100A for the time t1. In the illustrated example, the first POV point C.sub.t1 is described in terms of a three-dimensional Cartesian coordinate (X.sub.t1, Y.sub.t1, Z.sub.t1) relative to a lower right position of the local telepresence device 100A from the view of the local subject 2. The X-Y plane 72 in this example is parallel to the front main surface of the display 200. It is noted that various other coordinate systems may be employed to similar effect. A POV point, such as the first POV point C.sub.t1, may be a point between the eyes of the local subject 2 (as illustrated in FIG. 7A), a left eye of the local subject 2, a right eye of the local subject 2, or other location corresponding to a viewpoint of the local subject 2. [0048] In FIG. 7B, at a time t2, the local subject 2 has moved from the position at time t1 in FIG. 7A; for example, laterally to the left as indicated by arrow X. For the time t2, the POV tracker 420 may determine a location of a second POV point C.sub.t2 of the local subject 2 relative to the local telepresence device 100A, with a corresponding three-dimensional coordinate (X.sub.t2, Y.sub.t2, Z.sub.t2), much as described in FIG. 7A." It would have been obvious to the ordinary artisan before the effective filing date to include the teaching of Cutler in the teaching of Gardenfors for the purpose of providing a time indication (i.e., t1 to t2) corresponding to the new reference position, which improves the immersive experience in a video conference.)

Claims 2, 10, and 18: The method of claim 1, wherein generating the alert further comprises incorporating a visual cue to a view of the user rendered on the client device, wherein the visual cue suggests a corrective body movement to be performed by the user. (Gardenfors: [0035]-[0037]: visual indication for the user to move to the left, etc., and cues for head positioning.)

Claims 4, 12, and 20: The method of claim 1, wherein the metric reflecting the deviation of the current position from the reference position is based on a distance of one or more facial features detected within the image of the user from a vertical axis of the FOV of the camera. (See the independent claims, or at least Gardenfors: [0033]-[0034].)

Claims 5 and 13: The method of claim 2, wherein the visual cue comprises a visual overlay, over a user interface associated with the video conference, on a side towards which the user is deviating from the reference position. (See the independent claims, or at least Gardenfors: [0036]-[0037].)

Claims 6 and 14: (Cancelled)

Claims 7 and 15: The method of claim 1, further comprising: responsive to determining that the threshold criterion is no longer satisfied, disabling the alert. (Gardenfors: [0041] and Fig. 9, step 908.)

Claims 8 and 16: The method of claim 1, further comprising: determining, from a new image, an updated position of the user within the FOV; and responsive to determining that a new value of the metric reflecting a deviation of the updated position from the reference position satisfies another threshold criterion, increasing an intensity of the alert to the user. (Gardenfors: [0037]: "Alternatively, the image within the window 401 remains in a fixed location while an area of the image, area 803, becomes unfocused (FIG. 8B) or an area of the image, area 805, becomes blacked-out (FIG. 8C). As the near-end user moves his/her head, the displayed far-end useful image area increases or decreases in size; an increase in the useable far-end image indicating a proper head positioning for the near-end user." Cutler: "In the example illustrated in FIG. 7E, the volume 75 is a portion of a cone extending from the second POV point C.sub.t2 corresponding to times t7 through t11, which widens over time to reflect increasing uncertainty over time in the actual POV position of the local subject 2 and provides a volume more certain to encompass POV positions within a degree of uncertainty," [0053].)

Claims 3, 11, and 19 are rejected under 35 U.S.C. 102(a)(1) as anticipated by, or, in the alternative, under 35 U.S.C. 103 as obvious over, Gardenfors in view of Cutler, and further in view of Baker or Soni.

Claims 3, 11, and 19: wherein generating the alert further comprises incorporating an audible signal to an audio stream reproduced by the client device, wherein the audible signal suggests a corrective body movement to be performed by the user. (Baker: [0025]: "Feedback to the local user may be audio or visual or a combination of the two, but typically the feedback acts as a visual or audio cue as to how the local user should change his current behavior in order to meet the current presentation requirements of the video conference"; Soni: [0060]: "move right", "move left", "Perfect".) It would have been obvious to the ordinary artisan before the effective filing date to include the teaching of Baker or Soni in the teaching of Gardenfors for the purpose of providing a verbal cue to adjust to the new position, enhancing communication during a video conference.

Inquiry

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHUNG-HOANG J. NGUYEN, whose telephone number is (571) 270-1949. The examiner can normally be reached on a regular schedule, 6:00-3:00.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Duc Nguyen, can be reached at 571-272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.

For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHUNG-HOANG J NGUYEN/
Primary Examiner, Art Unit 2691
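For orientation, the independent claims recite a simple control loop: establish a reference position, measure each frame's deviation against a threshold, alert while the threshold criterion is satisfied, and adopt a new reference position once a reframing period expires. Below is a minimal Python sketch of that claimed logic under stated assumptions; every name and value (deviation_metric, DEVIATION_THRESHOLD, REFRAME_PERIOD) is hypothetical, and the code comes from neither the application nor the cited references.

import time

# Illustrative sketch of the claimed control loop (hypothetical names and
# thresholds; not code from the application or the cited references).

DEVIATION_THRESHOLD = 0.15  # normalized offset that satisfies the threshold criterion
REFRAME_PERIOD = 5.0        # seconds of sustained deviation before re-baselining

def deviation_metric(face_x: float, reference_x: float, frame_width: float) -> float:
    # Per claims 4/12/20 the metric is based on the horizontal distance of a
    # detected facial feature from a vertical axis of the FOV; here we use the
    # normalized offset from the reference position as a stand-in.
    return abs(face_x - reference_x) / frame_width

def monitor(face_positions, frame_width=1920.0):
    reference_x = None   # reference position of the user within the FOV
    breach_start = None  # when the threshold criterion first became satisfied
    for face_x in face_positions:  # one detected face x-coordinate per frame
        if reference_x is None:
            reference_x = face_x   # establish the initial reference position
            continue
        value = deviation_metric(face_x, reference_x, frame_width)
        if value >= DEVIATION_THRESHOLD:
            now = time.monotonic()
            breach_start = breach_start or now
            side = "left" if face_x < reference_x else "right"
            # Claims 8/16: a larger deviation escalates the alert intensity.
            intensity = "strong" if value >= 2 * DEVIATION_THRESHOLD else "mild"
            print(f"{intensity} alert: user drifted {side}; suggest corrective movement")
            if now - breach_start >= REFRAME_PERIOD:
                reference_x = face_x   # reframing period expired: new reference position
                breach_start = None
        else:
            breach_start = None  # criterion no longer satisfied: disable alert (claims 7/15)

In a real system, face_positions would come from a per-frame face tracker; the sketch only illustrates the structure the rejection maps onto the art: Gardenfors for the threshold-and-alert behavior, Cutler for the time-indexed adoption of a new reference position.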

Prosecution Timeline

May 15, 2023: Application Filed
May 08, 2025: Non-Final Rejection (§102, §103)
Aug 04, 2025: Applicant Interview (Telephonic)
Aug 04, 2025: Examiner Interview Summary
Aug 13, 2025: Response Filed
Sep 29, 2025: Final Rejection (§102, §103)
Dec 18, 2025: Examiner Interview Summary
Dec 18, 2025: Applicant Interview (Telephonic)
Dec 29, 2025: Request for Continued Examination
Dec 30, 2025: Response after Non-Final Action
Jan 29, 2026: Non-Final Rejection (§102, §103) (current)

Precedent Cases

Applications involving similar technology granted by this examiner

Patent 12598256: DISRUPTED-SPEECH MANAGEMENT ENGINE FOR A MEETING MANAGEMENT SYSTEM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591408: DISPLAY APPARATUS AND METHOD INCORPORATING INTEGRATED SPEAKERS WITH ADJUSTMENTS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12587612: Method and Device for Invoking Public or Private Interactions during a Multiuser Communication Session (granted Mar 24, 2026; 2y 5m to grant)
Patent 12587705: LIVESTREAMING AUDIO PROCESSING METHOD AND DEVICE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12587700: GROUPING IN A SYSTEM WITH MULTIPLE MEDIA PLAYBACK PROTOCOLS (granted Mar 24, 2026; 2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview: 99% (+32.1%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 877 resolved cases by this examiner; grant probability is derived from the career allow rate.
