Prosecution Insights
Last updated: April 19, 2026
Application No. 18/651,205

Calibrating A Physical Space For Multi-Camera Video Stream Selection For In-Person Conference Participants

Status: Non-Final Office Action (§103)
Filed: Apr 30, 2024
Examiner: WOO, STELLA L
Art Unit: 2693
Tech Center: 2600 — Communications
Assignee: Zoom Video Communications, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 9m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 80% (801 granted / 1007 resolved; +17.5% vs the Tech Center average)
Interview Lift: +13.2% (a moderate lift, measured across resolved cases with an interview)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 1028 across all art units (21 currently pending)

Statute-Specific Performance

§101: 3.3% (-36.7% vs TC avg)
§103: 42.4% (+2.4% vs TC avg)
§102: 27.9% (-12.1% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 1007 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Allowable Subject Matter

Claims 5-6, 8-11, 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-4, 12, 15-16, 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Hafstad et al. (US 2024/0257553 A1, "Hafstad") in view of Brigden et al. (US 2022/0284600 A1, "Brigden").
As to claim 1, Hafstad discloses a method, comprising: identifying matched points using a first camera and a second camera as a person moves within a physical space (feature vectors for an individual are compared for multiple frames from multiple different camera outputs, as a basis for identification; para. 0054); and using, during a video conference, a physical space calibration matrix solved based on the matched points to determine whether a first bounding box of a first person captured by the first camera and a second bounding box of a second person captured by the second camera identify a single conference participant within the physical space (a plurality of video streams are analyzed to determine whether a first representation of a meeting participant and the second representation of a meeting participant correspond to a common meeting participant, para. 0051; the analysis may be based on at least one identity indicator, para. 0052).

Hafstad differs from claims 1, 15, 18 in that it does not disclose the above underlined limitation. Brigden teaches the well-known use of a physical space calibration matrix based on matched sets of key points of the same user in synchronized video streams from a given pair of cameras (para. 0063-0066, 0088). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hafstad with the above teaching of Brigden in order to more accurately identify and track users, as taught by Brigden (para. 0007).

As to claims 3, 16, 19, Hafstad in view of Brigden teaches: wherein each of the matched points are key points from the first camera and the second camera for the person that have a same semantic meaning and confidence scores above a threshold (Hafstad: same feature vectors are considered matching when they are within a predetermined distance threshold, para. 0053-0054).
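For readers outside computer vision: the "physical space calibration matrix" relied on from Brigden is, in standard terminology, a fundamental matrix relating a pair of camera views. A minimal sketch of how such a matrix can be solved from matched keypoints, using the classic eight-point algorithm (this is illustrative only; it is not the applicant's or Brigden's actual implementation, and the function name is invented here):

```python
import numpy as np

def solve_calibration_matrix(pts1, pts2):
    """Eight-point estimate of the fundamental matrix F relating two camera
    views, from N >= 8 matched keypoints. F satisfies x2^T F x1 ~ 0 for
    corresponding image points x1 (camera 1) and x2 (camera 2)."""
    pts1 = np.asarray(pts1, float)
    pts2 = np.asarray(pts2, float)
    x1, y1 = pts1[:, 0], pts1[:, 1]
    x2, y2 = pts2[:, 0], pts2[:, 1]
    # Each correspondence gives one row of the linear system A f = 0,
    # where f holds the nine entries of F in row-major order.
    A = np.column_stack([x2 * x1, x2 * y1, x2,
                         y2 * x1, y2 * y1, y2,
                         x1, y1, np.ones(len(x1))])
    # f is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2, a defining property of fundamental matrices.
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt
```

Once solved for a camera pair, the matrix lets the system test whether a point seen by one camera lies on the epipolar line predicted from the other camera's view, which is what makes the cross-camera "same participant" check in claim 1 cheap at conference time.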
As to claims 4, 19, Hafstad in view of Brigden teaches: wherein the matched points are also identified using a third camera (Hafstad: a common meeting participant is tracked and correlated across three or more camera outputs, para. 0071), wherein each of the matched points are key points between two cameras for the person that have a same semantic meaning and confidence scores above a threshold (Hafstad: same feature vectors are considered matching when they are within a predetermined distance threshold, para. 0053-0054), wherein a first set of matched points are key points between the first camera and the second camera, wherein a second set of matched points are key points between the first camera and the third camera, wherein a third set of matched points are key points between the second camera and the third camera, and wherein the first set of matched points are used to solve a first physical space calibration matrix, the second set of matched points are used to solve a second physical space calibration matrix, and the third set of matched points are used to solve a third physical space calibration matrix (Brigden: keypoint matches are used to identify a calibrating user in a plurality of video streams captured by a plurality of cameras, and used to calculate fundamental matrixes for corresponding pairs of cameras, para. 0055-0065).

As to claim 12, Hafstad in view of Brigden teaches: wherein the first bounding box and the second bounding box are obtained using person detection, wherein the first bounding box and the second bounding box are matched if an appearance similarity between the first person and the second person is greater than a similarity threshold, and wherein the physical space calibration matrix is only used for matched bounding boxes (Brigden: matrixes are used with bounding boxes around matched keypoints, para. 0076-0078, 0088, 0091).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Hafstad in view of Brigden, as applied to claim 1 above, and further in view of Harpavat et al. (US 2025/0307986 A1, "Harpavat").

Hafstad in view of Brigden differs from claim 2 in that it does not teach: filtering the matched points using non-maximum suppression (NMS) to remove redundant matched points. Harpavat teaches the well-known use of non-maximum suppression (NMS) to filter out redundant detections (para. 0047). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hafstad in view of Brigden with the above teaching of Harpavat in order to maintain only the most relevant detections, as taught by Harpavat (para. 0047).

Claims 7, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hafstad in view of Brigden, as applied to claim 1 above, and further in view of Ma et al. (US 9,779,296 B1, "Ma").

Hafstad in view of Brigden teaches the use of a calibration matrix (Brigden: well-known use of a physical space calibration matrix based on matched sets of key points of the same user in synchronized video streams from a given pair of cameras, para. 0063-0066, 0088), but differs from claims 7, 20 in that it does not teach: determining that a number of matched points are below a high threshold; generating a set of low-threshold matched points using key points of the person that are below the high threshold and above a low threshold; generating a set of high-threshold matched points using key points of the person that are above the high threshold; calculating a first physical space calibration matrix using the high-threshold matched points; using the first physical space calibration matrix to obtain inliers of the low-threshold matched points; and solving a second physical space calibration matrix using the inliers, wherein the second physical space calibration matrix is set as the physical space calibration matrix.
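The two-stage procedure recited in claims 7 and 20 is, in effect, a confidence-stratified inlier refinement: fit on high-confidence matches, then admit only those low-confidence matches that agree with the first fit. A minimal sketch of that control flow (the `solve` and `residual` callables and the threshold values are hypothetical stand-ins, not anything disclosed in the application or the cited art):

```python
import numpy as np

def two_stage_calibration(points, confidences, solve, residual,
                          high=0.8, low=0.4, inlier_tol=1e-2):
    """Fit a model on high-confidence matched points, use it to screen
    low-confidence points for inliers, then re-solve on the combined set."""
    pts = np.asarray(points, float)
    conf = np.asarray(confidences, float)
    hi = pts[conf >= high]                       # high-threshold matched points
    lo = pts[(conf >= low) & (conf < high)]      # low-threshold matched points
    first = solve(hi)                            # first calibration matrix
    inliers = lo[residual(first, lo) < inlier_tol]
    return solve(np.concatenate([hi, inliers]))  # second (final) matrix
```

The scheme recovers data that a single strict threshold would discard while still rejecting low-confidence matches that contradict the geometry established by the reliable points.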
Ma teaches accomplishing object detection by identifying key points which match with varying degrees and retaining inlier key points (col. 18, lines 3-67). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hafstad in view of Brigden with the above teaching of Harpavat in order to more effectively identify a common person in multiple video streams.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Hafstad in view of Brigden, as applied to claim 1 above, and further in view of Steffanson et al. (US 2018/0341818 A1, "Steffanson").

Hafstad in view of Brigden differs from claim 13 in that it does not teach: displaying a physical space calibration request in the physical space that directs a person assisting with calibration to move around the physical space. Steffanson teaches a user interface instructing a user to walk around in a monitored area in order to calibrate a thermal camera (para. 0082-0085). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hafstad in view of Brigden with the above teaching of Steffanson in order to learn the configuration of the monitored environment, identify humans, etc., as taught by Steffanson (para. 0023, 0025-0026).

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Hafstad in view of Brigden, as applied to claim 1 above, and further in view of Wang et al. (US 2016/0188109 A1, "Wang").

Hafstad in view of Brigden differs from claim 14 in that it does not teach: determining that the physical space calibration matrix needs recalibration; and sending a message to a host of an upcoming video conference in the physical space indicating that the physical space calibration matrix needs the recalibration. Wang teaches determining recalibration is needed, such as when a sensor device position has changed, and communicating to a user a notification message recommending recalibration and guidance instructions (Fig. 11; para. 0149-0150, 0223). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Hafstad in view of Brigden with the above teaching of Wang in order to prompt recalibration when needed.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Bhatt et al. (US 2025/0139968 A1) teach manual calibration in which a human walks around a perimeter of the room (para. 0042). Nguyen et al. (US 2023/0401891 A1) teach head framing in a video conference system (Fig. 19, para. 0016-0018).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Stella L Woo, whose telephone number is (571) 272-7512. The examiner can normally be reached Monday - Friday, 8 a.m. to 5 p.m. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ahmad Matar, can be reached at 571-272-7488. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Stella L. Woo/
Primary Examiner, Art Unit 2693

Prosecution Timeline

Apr 30, 2024
Application Filed
Jan 13, 2026
Non-Final Rejection — §103
Mar 31, 2026
Applicant Interview (Telephonic)
Mar 31, 2026
Examiner Interview Summary
Apr 02, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602416: HYBRID ARTIFICIAL INTELLIGENCE SYSTEM FOR SEMI-AUTOMATIC PATENT CLAIMS ANALYSIS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12587613: System and method for documenting and controlling meetings with labels and automated operations (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585681: Methods for Converting Electronic Presentations Into Autonomous Information Collection and Feedback Systems (granted Mar 24, 2026; 2y 5m to grant)
Patent 12581038: AUDIO PROCESSING IN VIDEO CONFERENCING SYSTEM USING MULTIMODAL FEATURES (granted Mar 17, 2026; 2y 5m to grant)
Patent 12568170: PRIORITIZING EMERGENCY CALLS BASED ON CALLER RESPONSE TO AUTOMATED QUERY (granted Mar 03, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants; study what changed in these applications to get past this examiner.


Prosecution Projections

1-2
Expected OA Rounds
80%
Grant Probability
93%
With Interview (+13.2%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 1007 resolved cases by this examiner. Grant probability derived from career allow rate.
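The headline figures follow directly from the career counts shown above. A quick check of the arithmetic (the additive treatment of the interview lift is an assumption on our part, though it matches the displayed numbers):

```python
granted, resolved = 801, 1007        # examiner's career counts
interview_lift = 13.2                # percentage points, from interview data

base = 100 * granted / resolved      # career allow rate
print(round(base, 1))                # 79.5, displayed as 80%
print(round(base + interview_lift))  # 93, the with-interview probability
```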
