Prosecution Insights
Last updated: April 19, 2026
Application No. 18/128,045

PRIVACY PRESERVING ONLINE VIDEO RECORDING USING META DATA

Non-Final OA: §103, §112
Filed: Mar 29, 2023
Examiner: PATEL, HEMANT SHANTILAL
Art Unit: 2694
Tech Center: 2600 — Communications
Assignee: Adeia Guides Inc.
OA Round: 3 (Non-Final)

Grant Probability: 81% (Favorable)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 81% — above average (761 granted / 939 resolved; +19.0% vs TC avg)
Interview Lift: +13.6% (moderate +14% lift), comparing resolved cases with vs. without an interview
Typical timeline: 2y 10m average prosecution; 25 applications currently pending
Career history: 964 total applications across all art units
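
For readers checking the arithmetic, the headline figures above follow directly from the raw counts shown on this page. The short sketch below recomputes them; variable names and rounding rules are illustrative assumptions, not the dashboard's actual implementation.

```python
# Hedged sketch: recompute the examiner headline stats from the raw counts shown above.
granted, resolved, total_apps = 761, 939, 964

career_allow_rate = granted / resolved        # 761 / 939 ≈ 0.810 -> "81% Career Allow Rate"
currently_pending = total_apps - resolved     # 964 - 939 = 25    -> "25 currently pending"
implied_tc_avg    = career_allow_rate - 0.190 # from "+19.0% vs TC avg" -> ≈ 62% TC average

print(f"Career allow rate:            {career_allow_rate:.1%}")  # 81.0%
print(f"Currently pending:            {currently_pending}")       # 25
print(f"Implied TC average allow rate: {implied_tc_avg:.0%}")     # 62%
```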

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§102: 15.4% (-24.6% vs TC avg)
§112: 22.9% (-17.1% vs TC avg)
Compared against a Tech Center average estimate • Based on career data from 939 resolved cases
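
A quick consistency check: subtracting each displayed delta from its rate implies the same Tech Center baseline (about 40%) for every statute, which suggests the deltas are drawn against a single TC average estimate. The sketch below backs that baseline out; it is purely illustrative and assumes the deltas are simple percentage-point differences.

```python
# Hedged check: back out the implied Tech Center baseline from each (rate, delta) pair above.
stats = {
    "§101": (4.5, -35.5),
    "§103": (44.9, +4.9),
    "§102": (15.4, -24.6),
    "§112": (22.9, -17.1),
}
for statute, (rate, delta) in stats.items():
    implied_tc_avg = rate - delta   # e.g. 4.5 - (-35.5) = 40.0
    print(f"{statute}: implied TC average ≈ {implied_tc_avg:.1f}%")
# Every row implies ~40.0%, consistent with a single TC average estimate.
```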

Office Action

§103, §112
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 15, 2025 has been entered.

Response to Amendment

Applicant's arguments with respect to claim(s) 1-10, 12-13, 15-16, 18-20, 124-126 have been considered but are moot in view of new ground of rejection necessitated due to claim amendments and addition of new claims.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 15-16, 18-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Independent claim 15 recites “the first device” in line 21 and line 23. It is not clear if these refer to “a first device” recited in line 6 or “a first device” recited in line 8. Further, claim 15 recites “wherein;” (emphasis added). It is not clear what is included in “wherein”.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-10, 12-13, 15-16, 18-20, 124-126 are rejected under 35 U.S.C. 103 as being unpatentable over Alexander (US Patent Application Publication No. 2022/0350925), and further in view of Agrawal (US Patent Application Publication No. 2024/0096374).
Regarding claim 1, Alexander teaches a method comprising: receiving a first video stream from a first device (Paragraphs 0017, 0031-0032, 0041 live video feed of image frames); identifying a plurality of objects of a first scene of the first video stream, wherein the plurality of objects comprise a first object of the first scene (Paragraphs 0029, 0032-0033, 0042-0047 identifying participants and various objects); receiving a first policy, wherein the first policy identifies: a first action to be performed on the first object (Paragraphs 0025 selective blurring, 0026 selective filtering, 0032-0033, 0075); generating a first filtered video stream by modifying the first object of the first scene according to the first action identified by the first policy (Paragraphs 0018, 0025-0026, 0029, 0032-0033, 0042-0044, 0048, 0052, 0055-0061 selectively filtering and generating filtered video stream); receiving a second video stream from a second device; generating a merged video stream by combining the first filtered video stream and the second video stream (Paragraphs 0048, 0050-0051 second video stream, and each device performing similar filtering and generating of filtered video); transmitting the merged video stream to the first device and the second device (Paragraphs 0019, 0021, 0035, 0054, 0073) (Paragraphs 0013-0087 for complete details). Alexander teaches “The participant needs to control what images/video/audio is to be shared” (Paragraph 0024), using UPEs individually “configured to selectively apply privacy filtering for various collaboration sessions using the UPEs” with whitelisted and blacklisted persons (Paragraph 0025), obviously teaching “the first policy is generated by the first device in response to receiving an input corresponding to the first action”, but Alexander does not teach it explicitly, and Alexander does not teach storing, by a third device, a recording of the merged video stream; receiving an updated policy from the first device, wherein the updated policy identifies an updated action to be performed on the first object and the updated action is different than the first action; and storing, by the third device, an updated recording of the merged video stream, wherein the updated recording of the merged video stream displays the first object of the first scene according to the updated action identified by the updated policy. However, in the similar field, Agrawal teaches the first policy is generated by the first device in response to receiving an input corresponding to the first action (Paragraphs 0037-0038, 0040-0042, 0052); and storing, by a third device (Paragraph 0052 remote server or private cloud), a recording of the merged video stream; receiving an updated policy from the first device, wherein the updated policy identifies an updated action to be performed on the first object and the updated action is different than the first action; and storing, by the third device, an updated recording of the merged video stream, wherein the updated recording of the merged video stream displays the first object of the first scene according to the updated action identified by the updated policy (Paragraphs 0031-0053 participant updating rules and filtering recording after the call is finished). 
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to modify Alexander to include the first policy is generated by the first device in response to receiving an input corresponding to the first action; and storing, by a third device, a recording of the merged video stream; receiving an updated policy from the first device, wherein the updated policy identifies an updated action to be performed on the first object and the updated action is different than the first action; and storing, by the third device, an updated recording of the merged video stream, wherein the updated recording of the merged video stream displays the first object of the first scene according to the updated action identified by the updated policy as taught by Agrawal so that “additional rules may be set afterward for modifying the recording made selectively” (Agrawal, Paragraph 0041), wherein “rules may be determined based on manual input from one or more users” and “that recordings saved for Charlie should be filtered for obscenity” (Agrawal, Paragraph 0042).

Regarding claim 2, Agrawal teaches generating, by the third device, the updated recording of the merged video stream, wherein the updated recording of the merged video stream displays the first object of the first scene according to the updated action identified by the updated policy (Paragraphs 0040-0042, 0045, 0051-0053).

Regarding claim 3, Alexander teaches wherein the third device is a server (Paragraphs 0019-0020), and Agrawal teaches wherein the third device is a server (Paragraph 0052).

Regarding claim 4, Alexander teaches the first policy further identifies a second object of the first scene of the first video stream (second participant as desired and/or undesired according to meeting information), and a second action to be performed on the second object of the first scene of the first video stream (either action to keep second participant and/or action to remove/blur/mask second participant) (Paragraphs 0029, 0032-0033, 0042-0044).

Regarding claim 5, Alexander teaches generating the first filtered video stream further comprises: removing portions of the first scene not identified by the first policy (removing participant not in meeting information); displaying the first object according to the first action (displaying desired participants of meeting); and displaying the second object according to the second action (removing or blurring undesired participant not on meeting list) (Paragraphs 0029, 0032-0033, 0042-0044, 0046).

Regarding claim 6, Alexander teaches displaying the second object according to the second action comprises blurring the second object in the first video stream (Paragraphs 0029, 0032-0034, 0042-0044).

Regarding claim 7, Alexander teaches displaying the second object according to the second action comprises replacing the second object with blurred object (Paragraphs 0029, 0044) and/or replaced with known static object in a boundary box (Paragraphs 0034, 0044), and Agrawal teaches filtering out video with text-based descriptions of relevant portions of video (Paragraphs 0044-0045). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to modify Alexander to include text descriptions in boundary box in filtered video as taught by Agrawal in a text box in the video as an implementation choice.
Regarding claim 8, Alexander teaches displaying the second object according to the second action comprises replacing the second object with a virtual object in the first video stream (Paragraphs 0033-0034).

Regarding claim 9, Beck teaches receiving a selection of the virtual object from the first device (col. 2 ll. 64-67, col. 6 ll. 19-23).

Regarding claim 10, Beck teaches generating the virtual object based on one or more characteristics of the second object (col. 2 ll. 64-67 replacing face with face, clothing with clothing etc., col. 7 ll. 21-38 using feature of objects).

Regarding claim 12, Agrawal teaches updating, by the third device, the recording of the merged video stream to generate the updated recording of the merged video stream, wherein the updating comprises modifying the first object according to the updated action identified by the updated policy (Paragraphs 0040-0042, 0045, 0051-0053).

Regarding claim 13, it recites receiving a second policy from the first device, wherein the second policy identifies: a second set of features corresponding to a second object of a second scene of the first video stream; and a second action to be performed on the second object; receiving subsequent segments of the first video stream from the first device, wherein the subsequent segments of the first video stream comprise the second scene; generating filtered subsequent segments by modifying the second scene according to the second action identified by the second policy; receiving subsequent segments of the second video stream from the second device; generating a second merged video stream by combining the filtered subsequent segments and the subsequent segments of the second video stream; and transmitting the second merged video stream to the first device and the second device. These limitations are functionally similar to claim 1 with the difference of receiving second policy and filtering the first video scene based on second policy and then using the new filtered video stream. Agrawal teaches receiving updated policy and filtering the first video scene based on second policy and then using the new filtered video stream (Paragraphs 0039-0042, 0044-0045, 0051-0053 selectively filtered video recorded according to rules provided/modified by the user during the call). Also, Alexander obviously teaches receiving updated policy and filtering the first video scene based on updated policy and then using the new filtered video stream (Paragraphs 0029-0035 change in meeting information can cause change in filtering of video stream). Refer to rejection for claim 1.

Regarding claim 15, Alexander teaches an apparatus (Figs.
1, 6) comprising: control circuitry; and at least one memory including computer program code for one or more programs, the at least one memory and the computer program code configured to, with the control circuitry, cause the apparatus to perform (Paragraphs 0013-0022, 0065-0087) at least the following: receive a first video stream from a first device (Paragraphs 0017, 0031-0032, 0041 live video feed of image frames), wherein the first video stream comprises a first object of a first scene (Paragraphs 0029, 0032-0033, 0042-0047 identifying participants and various objects); receive a first policy from a first device, wherein the first policy indicates a first action to be performed on the first object (Paragraphs 0025 selective blurring, 0026 selective filtering, 0032-0033, 0075); generate a first filtered video stream by modifying the first object of the first scene according to the first action identified by the first policy (Paragraphs 0018, 0025-0026, 0029, 0032-0033, 0042-0044, 0048, 0052, 0055-0061 selectively filtering and generating filtered video stream); receive a second video stream from a second device; generate a merged video stream by combining the first filtered video stream and the second video stream (Paragraphs 0048, 0050-0051 second video stream, and each device performing similar filtering and generating of filtered video); transmit the merged video stream to the first device and the second device (Paragraphs 0019, 0021, 0035, 0054, 0073) (Paragraphs 0013-0087 for complete details). Alexander teaches “The participant needs to control what images/video/audio is to be shared” (Paragraph 0024), using UPEs individually “configured to selectively apply privacy filtering for various collaboration sessions using the UPEs” with whitelisted and blacklisted persons (Paragraph 0025), obviously teaching “the first policy is generated in response to receiving an input corresponding to the first action to be performed on the first object of the first scene”, but Alexander does not teach it explicitly, and Alexander does not teach to store a recording of the merged video stream in memory; receive an updated policy from the first device, wherein the updated policy identifies an updated action to be performed on the first object and the updated action is different than the first action; generate an updated recording of the merged video stream, wherein the updated recording of the merged video stream displays the first object of the first scene according to the updated action identified by the updated policy; and store the updated recording of the merged video stream in memory. However, in the similar field, Agrawal teaches the first policy is generated in response to receiving an input corresponding to the first action to be performed on the first object of the first scene (Paragraphs 0037-0038, 0040-0042, 0052); and receive an updated policy from the first device, wherein the updated policy identifies an updated action to be performed on the first object and the updated action is different than the first action; generate an updated recording of the merged video stream, wherein the updated recording of the merged video stream displays the first object of the first scene according to the updated action identified by the updated policy; and store the updated recording of the merged video stream in memory (Paragraphs 0031-0053 participant updating rules and filtering recording after the call is finished). 
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to modify Alexander to include the first policy is generated in response to receiving an input corresponding to the first action to be performed on the first object of the first scene; and to receive an updated policy from the first device, wherein the updated policy identifies an updated action to be performed on the first object and the updated action is different than the first action; generate an updated recording of the merged video stream, wherein the updated recording of the merged video stream displays the first object of the first scene according to the updated action identified by the updated policy; and store the updated recording of the merged video stream in memory as taught by Agrawal so that “additional rules may be set afterward for modifying the recording made selectively” (Agrawal, Paragraph 0041), wherein “rules may be determined based on manual input from one or more users” and “that recordings saved for Charlie should be filtered for obscenity” (Agrawal, Paragraph 0042).

Regarding claim 16, Alexander teaches wherein the apparatus is a server (Paragraphs 0019-0020, 0026); and Agrawal teaches wherein the apparatus is a server (Paragraphs 0026-0027); and when generating the updated recording of the merged video stream, to modify the recording of the merged video stream to generate the updated recording of the merged video stream, wherein the modifying comprises updating the first object of the recording of the merged video stream according to the updated action identified by the updated policy (Paragraphs 0040-0042, 0045, 0051-0053).

Regarding claim 18, Alexander teaches the first policy further identifies a second object of the first scene of the first video stream (second participant as desired and/or undesired according to meeting information), and a second action to be performed on the second object of the first scene of the first video stream (either action to keep second participant and/or action to remove/blur/mask second participant) (Paragraphs 0029, 0032-0033, 0042-0044, 0046).

Regarding claim 19, Alexander teaches generating the first filtered video stream to: remove portions of the first scene not identified by the first policy (removing participant not in meeting information); display the first object according to the first action (displaying desired participants of meeting); and display the second object according to the second action (removing or blurring undesired participant not on meeting list) (Paragraphs 0029, 0032-0033, 0042-0044, 0046).

Regarding claim 20, Alexander teaches displaying the second object according to the second action to blur the second object in the first video stream (Paragraphs 0029, 0032-0034, 0042-0044).
Regarding claim 124, Alexander teaches a method comprising: receiving a first video stream from a first device (Paragraphs 0017, 0031-0032, 0041 live video feed of image frames); identifying a plurality of objects of a first scene of the first video stream, wherein the plurality of objects comprise a first object of the first scene (Paragraphs 0029, 0032-0033, 0042-0047 identifying participants and various objects); receiving a first policy, wherein the first policy identifies a first action to be performed on the first object (Paragraphs 0025 selective blurring, 0026 selective filtering, 0032-0033, 0075); generating a first filtered video stream by modifying the first object of the first scene according to the first action identified by the first policy (Paragraphs 0018, 0025-0026, 0029, 0032-0033, 0042-0044, 0048, 0052, 0055-0061 selectively filtering and generating filtered video stream); receiving a second video stream from a second device; generating a merged video stream by combining the first filtered video stream and the second video stream (Paragraphs 0048, 0050-0051 second video stream, and each device performing similar filtering and generating of filtered video); transmitting the merged video stream to the first device and the second device (Paragraphs 0019, 0021, 0035, 0054, 0073) (Paragraphs 0013-0087 for complete details). Alexander teaches “The participant needs to control what images/video/audio is to be shared” (Paragraph 0024), using UPEs individually “configured to selectively apply privacy filtering for various collaboration sessions using the UPEs” with whitelisted and blacklisted persons (Paragraph 0025), obviously teaching “receiving a first policy from the first device, wherein the first policy identifies a first action to be performed on the first object”, but Alexander does not teach it explicitly, and Alexander does not teach generating a recording of the merged video stream; receiving an updated policy from the first device, wherein the updated policy identifies an updated action to be performed on the first object and the updated action is different than the first action; modifying the recording of the merged video stream to generate an updated recording of the merged video stream, wherein the modifying comprises modifying the first object in the recording of the merged video stream according to the updated action identified by the updated policy; and storing the updated recording of the merged video stream. However, in the similar field, Agrawal teaches receiving a first policy from the first device, wherein the first policy identifies a first action to be performed on the first object (Paragraphs 0037-0038, 0040-0042, 0052); and generating a recording of the merged video stream; receiving an updated policy from the first device, wherein the updated policy identifies an updated action to be performed on the first object and the updated action is different than the first action; modifying the recording of the merged video stream to generate an updated recording of the merged video stream, wherein the modifying comprises modifying the first object in the recording of the merged video stream according to the updated action identified by the updated policy; and storing the updated recording of the merged video stream (Paragraphs 0031-0053 participant updating rules and filtering recording after the call is finished). 
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present invention to modify Alexander to include receiving a first policy from the first device, wherein the first policy identifies a first action to be performed on the first object; and generating a recording of the merged video stream; receiving an updated policy from the first device, wherein the updated policy identifies an updated action to be performed on the first object and the updated action is different than the first action; modifying the recording of the merged video stream to generate an updated recording of the merged video stream, wherein the modifying comprises modifying the first object in the recording of the merged video stream according to the updated action identified by the updated policy; and storing the updated recording of the merged video stream as taught by Agrawal so that “additional rules may be set afterward for modifying the recording made selectively” (Agrawal, Paragraph 0041), wherein “rules may be determined based on manual input from one or more users” and “that recordings saved for Charlie should be filtered for obscenity” (Agrawal, Paragraph 0042).

Regarding claim 125, Alexander teaches wherein generating the first filtered video stream further comprises: removing portions of the first scene not identified by the first policy (removing participant not in meeting information); and displaying the first object according to the first action (displaying desired participants of meeting) (Paragraphs 0029, 0032-0033, 0042-0044, 0046). Agrawal teaches wherein generating the first filtered video stream further comprises: removing portions of the first scene not identified by the first policy (Paragraphs 0039, 0048-0049 removing portions of participants not identified); and displaying the first object according to the first action (Paragraphs 0041-0042 displaying to allow removing obscenity).

Regarding claim 126, Alexander teaches wherein: displaying the first object according to the first action comprises replacing the first object with a virtual object in the first video stream (Paragraphs 0029, 0032-0034, 0042-0044, 0046 displaying identified object replaced by a box with static object). Agrawal teaches wherein: displaying the first object according to the first action comprises replacing the first object with a virtual object in the first video stream (Paragraphs 0044-0045, 0051 displaying identified object replaced by a text description object); and modifying the first object in the recording of the merged video stream according to the updated action identified by the updated policy comprises removing the virtual object from the recording of the merged video stream (Paragraphs 0041-0042, 0052 removing obscenity gestures or body language).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEMANT PATEL whose telephone number is (571)272-8620. The examiner can normally be reached M-F 8:00 AM - 4:30 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fan Tsang, can be reached at 571-272-7547.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

HEMANT PATEL
Primary Examiner
Art Unit 2694

/HEMANT S PATEL/
Primary Examiner, Art Unit 2694

Prosecution Timeline

Mar 29, 2023
Application Filed
Feb 27, 2025
Non-Final Rejection — §103, §112
Jul 30, 2025
Response Filed
Aug 26, 2025
Final Rejection — §103, §112
Dec 12, 2025
Applicant Interview (Telephonic)
Dec 12, 2025
Examiner Interview Summary
Dec 15, 2025
Request for Continued Examination
Jan 14, 2026
Response after Non-Final Action
Jan 28, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598254
SYSTEMS AND METHODS RELATING TO GENERATING SIMULATED INTERACTIONS FOR TRAINING CONTACT CENTER AGENTS
2y 5m to grant • Granted Apr 07, 2026
Patent 12592843
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
2y 5m to grant • Granted Mar 31, 2026
Patent 12578920
AUDIO SYSTEM CONTROL DEVICE
2y 5m to grant • Granted Mar 17, 2026
Patent 12573409
AUDIO ENCODER, METHOD FOR PROVIDING AN ENCODED REPRESENTATION OF AN AUDIO INFORMATION, COMPUTER PROGRAM AND ENCODED AUDIO REPRESENTATION USING IMMEDIATE PLAYOUT FRAMES
2y 5m to grant • Granted Mar 10, 2026
Patent 12563160
MULTIUSER TELECONFERENCING WITH SPOTLIGHT FEATURE
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 81%
With Interview: 95% (+13.6%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 939 resolved cases by this examiner. Grant probability derived from career allow rate.
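
How the with-interview figure appears to be derived: adding the +13.6% interview lift to the 81% career allow rate reproduces the displayed 95% after rounding. The sketch below assumes that additive combination; the tool's actual model is not documented on this page.

```python
# Hedged sketch: reproduce "95% With Interview (+13.6%)" from the stats above,
# assuming a simple additive lift on the career allow rate.
grant_probability = 0.81    # "Grant probability derived from career allow rate"
interview_lift    = 0.136   # "+13.6% Interview Lift"

with_interview = grant_probability + interview_lift   # 0.946
print(f"With interview: {with_interview:.0%}")          # rounds to 95%
```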
