Prosecution Insights
Last updated: April 19, 2026
Application No. 18/420,236

PERFORMING INTEGRITY VERIFICATION OF CONTENT IN A VIDEO CONFERENCE USING LIGHTING ADJUSTMENT

Non-Final OA (§102, §103)
Filed
Jan 23, 2024
Examiner
KIM, EUI H
Art Unit
2453
Tech Center
2400 — Computer Networks
Assignee
Google LLC
OA Round
1 (Non-Final)
Grant Probability: 49% (Moderate)
OA Rounds: 1-2
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 49% (76 granted / 156 resolved; -9.3% vs TC avg)
Interview Lift: +52.9% (strong), comparing resolved cases with vs. without an examiner interview
Typical Timeline: 3y 4m average prosecution; 28 applications currently pending
Career History: 184 total applications across all art units

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 65.9% (+25.9% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 156 resolved cases
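The headline figures above are simple ratios and differences of the examiner's case counts. A minimal sketch of that arithmetic follows; the helper names and the exact methodology are assumptions, since the analytics vendor's formulas are not disclosed here:

```python
# Hypothetical reconstruction of the dashboard arithmetic above.
# Function names and methodology are assumptions, not the vendor's definitions.

def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with_interview: float, rate_without: float) -> float:
    """Percentage-point lift in allowance rate for interviewed cases."""
    return rate_with_interview - rate_without

def delta_vs_tc_avg(examiner_rate: float, tc_average: float) -> float:
    """Signed gap between the examiner's rate and the Tech Center average."""
    return examiner_rate - tc_average

career = allow_rate(granted=76, resolved=156)
print(f"Career allow rate: {career:.1f}%")  # 48.7%, displayed rounded to 49%

# The statute table implies a §103 Tech Center average of 65.9 - 25.9 = 40.0%.
implied_tc_avg_103 = 65.9 - delta_vs_tc_avg(65.9, 65.9 - 25.9)
```

Under this reading, the 49% career rate is 76/156 rounded, and each "vs TC avg" figure is the examiner's statute-level rate minus the Tech Center's.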

Office Action

§102 §103
DETAILED ACTION

This Office Action is in response to the application filed on 01/23/2024. Claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1, 8-9, 12-13, and 17 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Thubert et al. (hereinafter Thubert, US 2022/0368547 A1).

Regarding Claim 1, Thubert discloses A method comprising: determining that an integrity verification of video content generated by a first client device of a plurality of client devices of a plurality of participants of a video conference is to be performed (Thubert: Fig. 5, 510, para.0020 “This replacement deceives other participants of the video conference session from recognizing that the individual accessing the video via the legitimate user account is not the individual rightfully/legitimately associated with the user account.” para.0035 “In operation 510, where a first session is established with a participant user device. The participant user device is associated with a video conferencing participant. For example, as discussed above with respect to FIG. 1, in at least some embodiments, a video conferencing system (e.g., video conferencing system 102) establishes one or more sessions with one or more participant user devices (e.g., one or more of participant user devices 104A-D).” A first session of a video conference is initiated, which then causes the video integrity checks in the following steps of Fig. 5. This is a determination that integrity verification should be performed, as claim 12 of the instant application recites it as an option: “wherein determining that the integrity verification of the video content generated by the first client device is to be performed comprises at least one of…receiving an indication of a start of the video conference”);

causing a modified user interface (UI) comprising one or more visual items, each corresponding to a video stream generated by one of the plurality of client devices, to be presented on the first client device (Thubert: Fig. 5, 520, para.0037 “In operation 520, modulated data is transmitted, via the first session, to the participant user device.” para.0015 “In light of these observations, the disclosed embodiments ensure validity of a video conference participant by generating a session unique and modulating pattern of encoded data and transmitting the encoded data to a participant user device. The data pattern is modulated such that validation can be continuous throughout the video conference or as needed. In some embodiments, the pattern is a series of dots or dashes projected onto a face of the video participant. In some cases, the pattern is encoded as luminosity or color changes to an image displayed on a video screen that is proximate to the participant.” para.0023 “A video of the participant's face as it reflects the modulated data is captured by an imaging sensor that is proximate to the participant's user device. The captured video is provided back to the video conferencing system 102” Visual patterns for participants of the video conference are generated for display to a participant, corresponding to a video stream that would show the modulations reflected on the user's face.);

wherein the UI was modified using a color pattern encoding (Thubert: para.0015 “In some cases, the pattern is encoded as luminosity or color changes to an image displayed on a video screen that is proximate to the participant.” A color pattern may be encoded into the image displayed on the video screen);

receive, from the first client device, a video stream generated by the first client device subsequent to a presentation of the modified UI on the first client device (Thubert: Fig. 5, 525, para.0041 “In operation 525, a video stream associated with the participant is received via the first session. As discussed above with respect to FIG. 4, in some embodiments, the continuously modulated data is embedded into a video stream as luminosity or color changes to a displayed image/video that is provided to the participant user device.” para.0022 “For example, the validation signal is embedded in a display signal sent to the participant user device in some embodiments. The changes in display luminosity or color are reflected by the face of the participant, and captured by an imaging sensor.” The display signal from the participant that shows the reflected light from the video screen is captured and received.);

and verify the integrity of the video content generated by the first client device based on the video stream generated by the first client device and the color pattern encoding (Thubert: para.0043 “In operation 530, data derived from the received video stream is analyzed to conditionally detect the continuously modulated data. In some embodiments, operation 530 decodes the continuously modulated data to detect one or more timing indications (such as encoded counters as discussed above), which are then used to synchronize comparisons between the transmitted continuously modulated data and the continuously modulated data derived from the received video stream.” para.0023 “A video of the participant's face as it reflects the modulated data is captured by an imaging sensor that is proximate to the participant's user device. The captured video is provided back to the video conferencing system 102, which detects the modulated data in the video and compares it to the modulated data it provided to the participant user device. If the two signals are sufficiently similar, the video of the participant is considered validated, at least in some embodiments.” Based on the modulated video sent to the user and the video of the user that reflects the light changes from the video, the video content may be verified, i.e., the user may be validated.).

Regarding Claim 8, Thubert discloses claim 1 as set forth above. Thubert further discloses wherein the modified UI is displayed during a live phase of the video conference or during a preparation phase of the video conference (Thubert: para.0015 “The data pattern is modulated such that validation can be continuous throughout the video conference or as needed.” para.0043 “In operation 530, data derived from the received video stream is analyzed to conditionally detect the continuously modulated data. In some embodiments, operation 530 decodes the continuously modulated data to detect one or more timing indications (such as encoded counters as discussed above), which are then used to synchronize comparisons between the transmitted continuously modulated data and the continuously modulated data derived from the received video stream.” para.0045 “Operation 535 maintains the participant user device in the video conference based on the conditional detecting of operation 530.” Throughout the video conference, as well as immediately after establishing the conference, i.e., a preparation phase of the video conference, the modulated data is sent and verified. Therefore the modified UI is displayed during both a live phase and a preparation phase of the video conference.).

Regarding Claim 9, Thubert discloses claim 1 as set forth above. Thubert further discloses wherein the video stream generated by the first client device subsequent to the presentation of the modified UI reflects color changes to one or more objects in an image captured by a camera of the first client device, wherein the color changes are caused by the color pattern encoding modifying illumination of the one or more objects by a display of the first client device (Thubert: para.0015 “In some embodiments, the pattern is a series of dots or dashes projected onto a face of the video participant. In some cases, the pattern is encoded as luminosity or color changes to an image displayed on a video screen that is proximate to the participant. In some embodiments, the pattern is projected using infrared wavelengths, while other embodiments use visible light wavelengths. By both projecting a pattern onto a face of the participant, and varying the pattern over time, the disclosed embodiments are able to verify both identity and a time currency of a video image of the participant. Thus, these embodiments verify both that participant that is being imaged and that the participant is being imaged during the video conference.” para.0017 “The modulated pattern is projected by the personal computer or laptop device, and a video of the participant is captured by a camera on the participant's mobile device, and provided to the video conferencing system.” The UI of the video screen is modified with color changes, which are then reflected on the user's face, i.e., the object, and captured by the camera that the user is using to attend the video conference.).

Regarding Claim 12, Thubert discloses claim 1 as set forth above. Thubert further discloses wherein determining that the integrity verification of the video content generated by the first client device is to be performed comprises at least one of: receiving a request from a first participant of the plurality of participants of the video conference; receiving an indication of a start of the video conference (Thubert: Fig. 5, 510, para.0035 “In operation 510, where a first session is established with a participant user device. The participant user device is associated with a video conferencing participant. For example, as discussed above with respect to FIG. 1, in at least some embodiments, a video conferencing system (e.g., video conferencing system 102) establishes one or more sessions with one or more participant user devices (e.g., one or more of participant user devices 104A-D).” A first session of a video conference is initiated, which then causes the video integrity checks in the following steps of Fig. 5.); or detecting, using a plurality of rules, one or more candidate integrity verification threats, wherein the one or more candidate integrity verification threats pertains to one or more of: a connection pattern, a network condition, an internet protocol (IP) geolocation, a virtual private network (VPN) use, or a number of connection attempts.
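The §102 mapping above reduces to a simple loop: encode a session-unique sequence of color offsets into the UI, capture the stream that reflects them, and check that the returned stream echoes the transmitted sequence. A minimal sketch follows; the mean-color extraction, offset magnitudes, and 0.8 match threshold are illustrative assumptions, not taken from Thubert or the claims:

```python
import random

# Sketch of the integrity-verification loop: a per-interval color offset is
# encoded into the UI, and the returned stream must echo the same sequence.
# All constants and helper names here are illustrative assumptions.

INTERVALS = 8  # number of modulation periods (Thubert's "every x seconds")

def generate_pattern(seed: int) -> list[tuple[int, int, int]]:
    """Session-unique sequence of small RGB offsets to encode into the UI."""
    rng = random.Random(seed)
    return [(rng.randint(-8, 8), rng.randint(-8, 8), rng.randint(-8, 8))
            for _ in range(INTERVALS)]

def recovered_offsets(frames: list[tuple[int, int, int]],
                      baseline: tuple[int, int, int]) -> list[tuple[int, int, int]]:
    """Offsets implied by each captured frame's mean color vs. a baseline."""
    return [tuple(c - b for c, b in zip(frame, baseline)) for frame in frames]

def verify(sent: list, received: list, threshold: float = 0.8) -> bool:
    """Declare the stream authentic if enough intervals echo the sent offsets."""
    matches = sum(1 for s, r in zip(sent, received)
                  if all(abs(a - b) <= 2 for a, b in zip(s, r)))
    return matches / len(sent) >= threshold

pattern = generate_pattern(seed=42)
baseline = (120, 110, 100)
# A cooperating client's camera reflects the offsets (here, perfectly).
frames = [tuple(b + o for b, o in zip(baseline, off)) for off in pattern]
assert verify(pattern, recovered_offsets(frames, baseline))
# An injected or replayed stream does not track the session pattern.
assert not verify(pattern, [(0, 0, 0)] * INTERVALS)
```

In Thubert's scheme the comparison runs continuously, with encoded timing counters to synchronize the transmitted and recovered sequences; this sketch compresses that to a one-shot comparison over a fixed number of intervals.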
Regarding Claim 13, it recites all of the same elements as claim 1, but in A system comprising: a memory device; and a processing device coupled to the memory device, the processing device to perform operations comprising: (Thubert: para.0049-0050). Therefore the supporting rationale for the rejection of claim 1 applies equally well to claim 13.

Regarding Claim 17, it recites all of the same elements as claim 1, but in A non-transitory computer readable storage medium comprising instructions for a server that, when executed by a processing device, cause the processing device to perform operations comprising: (Thubert: para.0048-0050 “video conferencing system 402 include one or more servers analogous to the computing device 600, each configured to support the video conferences”, para.0059 “a non-transitory computer useable medium”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 2-3, 5-6, 14-15, and 18-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Thubert et al. (hereinafter Thubert, US 2022/0368547 A1) in view of LeCun et al. (hereinafter LeCun, US 2019/0110198 A1).

Regarding Claim 2, Thubert discloses claim 1 as set forth above.
Thubert further discloses wherein the color pattern encoding modifies a plurality of frames presented in the modified UI (Thubert: para.0037 “In operation 520, modulated data is transmitted, via the first session, to the participant user device. The modulated data may be continuously modulated in that operation 520 encodes data that is changing at a frequency that does not fall below a predetermined frequency or frequency threshold, such as encoding different data no less than once per 0.1 seconds, 0.2 seconds, 0.3, seconds, 0.4 seconds, 0.5 seconds, 1 second, 2 seconds, 3, seconds, 4 seconds, or 5 seconds.” The stream is continuously modulated, and the modulation is changed every x seconds; therefore each frame of the stream is modulated.), each of the plurality of frames comprises a first subset of pixels with a modified first color pattern (Thubert: para.0015 “In light of these observations, the disclosed embodiments ensure validity of a video conference participant by generating a session unique and modulating pattern of encoded data and transmitting the encoded data to a participant user device…. In some cases, the pattern is encoded as luminosity or color changes to an image displayed on a video screen that is proximate to the participant.” The frames may be modified such that a pattern comprising color changes can be applied to an image displayed on a screen, i.e., the pixels of the frames.).

However, Thubert does not explicitly disclose each of the plurality of frames comprises a first subset of pixels with a modified first color pattern and a second subset of pixels with a modified second color pattern.

LeCun discloses each of the plurality of images comprises a first subset with a modified first color pattern and a second subset with a modified second color pattern (LeCun: para.0100 “A user attempts to access a stock trading application on their mobile device. To grant access to the stock trading account of the user, the application prompts the user to position their mobile device such that the screen of the mobile device points towards their face. The application then captures image data of the user, via the front-facing camera, while simultaneously displaying a authentication pattern on the screen of the mobile device, wherein the authentication pattern comprises a plurality of images, wherein each image comprises a plurality of regions, wherein at least one of the regions varies in at least one of: brightness, position, size, shape, and color over time causing a variance of lighting effects which create highlights and shadows on the user over time, and wherein one image in the authentication pattern comprises an encoding image.” The pattern comprises a plurality of regions; each region is considered to be a pattern of its own that varies in color. For example, any of the images shown in Figs. 2A-H and 3A-B show multiple regions, such as 232 in Fig. 2H. The top half of Fig. 2H may be one color pattern, and the bottom half may be considered a second color pattern.).

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Thubert with LeCun in order to incorporate each of the plurality of images comprises a first subset with a modified first color pattern and a second subset with a modified second color pattern, and apply this concept to the pixels of each of the frames in Thubert. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved identity verification (LeCun: para.0003).

Regarding Claim 3, Thubert-LeCun discloses claim 2 as set forth above.
Thubert further discloses wherein the first subset of pixels with the modified first color pattern correspond to the one or more visual items in the modified UI (Thubert: para.0015 “In some cases, the pattern is encoded as luminosity or color changes to an image displayed on a video screen that is proximate to the participant.” The pattern that is displayed comprises changes to color on the image displayed on the video screen; therefore the pixels that are changed are the same pixels that correspond to items on the video screen, i.e., one or more visual items.).

However, Thubert does not explicitly disclose the second subset of pixels with the modified second color pattern correspond to the one or more visual items in the modified UI.

LeCun discloses the modified second color pattern (LeCun: para.0100 “A user attempts to access a stock trading application on their mobile device. To grant access to the stock trading account of the user, the application prompts the user to position their mobile device such that the screen of the mobile device points towards their face. The application then captures image data of the user, via the front-facing camera, while simultaneously displaying a authentication pattern on the screen of the mobile device, wherein the authentication pattern comprises a plurality of images, wherein each image comprises a plurality of regions, wherein at least one of the regions varies in at least one of: brightness, position, size, shape, and color over time causing a variance of lighting effects which create highlights and shadows on the user over time, and wherein one image in the authentication pattern comprises an encoding image.” The pattern comprises a plurality of regions; each region is considered to be a pattern of its own that varies in color. For example, any of the images shown in Figs. 2A-H and 3A-B show multiple regions, such as 232 in Fig. 2H. The top half of Fig. 2H may be one color pattern, and the bottom half may be considered a second color pattern.).

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Thubert with LeCun in order to incorporate the modified second color pattern. By incorporating the second pattern into a portion of the pixels in Thubert, the overall pattern that is encoded onto the screen has two pattern portions, thereby causing each pattern to correspond to, i.e., be overlaid onto, different portions of the video being displayed, thereby meeting the claim limitation. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved identity verification (LeCun: para.0003).

Regarding Claim 5, Thubert-LeCun discloses claim 2 as set forth above. Thubert further discloses wherein the plurality of frames comprises a first sub-plurality of frames and a second sub-plurality of frames (Thubert: para.0037 “In operation 520, modulated data is transmitted, via the first session, to the participant user device. The modulated data may be continuously modulated in that operation 520 encodes data that is changing at a frequency that does not fall below a predetermined frequency or frequency threshold, such as encoding different data no less than once per 0.1 seconds, 0.2 seconds, 0.3, seconds, 0.4 seconds, 0.5 seconds, 1 second, 2 seconds, 3, seconds, 4 seconds, or 5 seconds.” para.0015 “In some cases, the pattern is encoded as luminosity or color changes to an image displayed on a video screen that is proximate to the participant” The modulated data is constantly changed every x seconds; the frames for each interval for which the pattern changes are a sub-plurality of frames.)
However, Thubert does not explicitly disclose wherein colors of the first subset of pixels and the second subset of pixels in the first sub-plurality of frames are different from colors of the first subset of pixels and the second subset of pixels in the second sub-plurality of frames.

LeCun discloses wherein colors of the first subset and the second subset in the first sub-plurality of images are different from colors of the first subset and the second subset in the second sub-plurality of images (LeCun: para.0100 “A user attempts to access a stock trading application on their mobile device. To grant access to the stock trading account of the user, the application prompts the user to position their mobile device such that the screen of the mobile device points towards their face. The application then captures image data of the user, via the front-facing camera, while simultaneously displaying a authentication pattern on the screen of the mobile device, wherein the authentication pattern comprises a plurality of images, wherein each image comprises a plurality of regions, wherein at least one of the regions varies in at least one of: brightness, position, size, shape, and color over time causing a variance of lighting effects which create highlights and shadows on the user over time, and wherein one image in the authentication pattern comprises an encoding image.” The pattern comprises a plurality of regions; each region is considered to be a pattern of its own that varies in color. For example, any of the images shown in Figs. 2A-H and 3A-B show multiple regions, such as 232 in Fig. 2H. The top half of Fig. 2H may be one color pattern, and the bottom half may be considered a second color pattern. Further, each of these regions varies in color over time; therefore the image of one time period comprises sections of varying colors, and the next image has sections of varying color, i.e., different from the colors previously.).

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Thubert with LeCun in order to incorporate wherein colors of the first subset and the second subset in the first sub-plurality of images are different from colors of the first subset and the second subset in the second sub-plurality of images, and apply this concept to the pixels of each of the frames in Thubert. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved identity verification (LeCun: para.0003).

Regarding Claim 6, Thubert-LeCun discloses claim 5 as set forth above. Thubert further discloses wherein the first sub-plurality of frames is displayed for a first set of time periods and the second sub-plurality of frames is displayed for a second set of time periods (Thubert: para.0037 “In operation 520, modulated data is transmitted, via the first session, to the participant user device. The modulated data may be continuously modulated in that operation 520 encodes data that is changing at a frequency that does not fall below a predetermined frequency or frequency threshold, such as encoding different data no less than once per 0.1 seconds, 0.2 seconds, 0.3, seconds, 0.4 seconds, 0.5 seconds, 1 second, 2 seconds, 3, seconds, 4 seconds, or 5 seconds.” para.0015 “In some cases, the pattern is encoded as luminosity or color changes to an image displayed on a video screen that is proximate to the participant” The modulated data is constantly changed every x seconds, a first and second set of time periods; the frames for each interval for which the pattern changes are a sub-plurality of frames.).

Regarding Claims 14-15 and 18-19, they do not teach nor further define over the limitations of claims 2-3; therefore the supporting rationale for the rejection of claims 2-3 applies equally well to claims 14-15 and 18-19.

Claim(s) 4, 16, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Thubert et al. (hereinafter Thubert, US 2022/0368547 A1) in view of LeCun et al. (hereinafter LeCun, US 2019/0110198 A1), further in view of Sato et al. (hereinafter Sato, US 2012/0075878 A1).

Regarding Claim 4, Thubert-LeCun discloses claim 2 as set forth above. However, Thubert does not explicitly disclose wherein the modified first color pattern comprises a first red-green-blue (RGB) color, and the modified second color pattern comprises a second RGB color that is complementary to the first RGB color, and wherein the first RGB color and the second RGB color are selected such that combined color modifications maintain color neutrality of the modified UI.

LeCun discloses wherein the modified first color pattern comprises a first red-green-blue (RGB) color, and the modified second color pattern comprises a second RGB color (LeCun: para.0100 “The application then captures image data of the user, via the front-facing camera, while simultaneously displaying a authentication pattern on the screen of the mobile device, wherein the authentication pattern comprises a plurality of images, wherein each image comprises a plurality of regions, wherein at least one of the regions varies in at least one of: brightness, position, size, shape, and color over time causing a variance of lighting effects which create highlights and shadows on the user over time, and wherein one image in the authentication pattern comprises an encoding image.” para.0019 “In some embodiments, a region exhibits a color comprising … blue, … green, … red, … or any combination thereof.” The pattern comprises a plurality of regions; each region is considered to be a pattern of its own that varies in color, with each authentication pattern having different patterns and varying colors. Para.0019 shows a plurality of colors that are available, including red, green, and blue, and other colors that can be represented as RGB.)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Thubert with LeCun in order to incorporate wherein the modified first color pattern comprises a first red-green-blue (RGB) color, and the modified second color pattern comprises a second RGB color. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved identity verification (LeCun: para.0003).

However, LeCun does not explicitly disclose the modified second color pattern comprises a second RGB color that is complementary to the first RGB color, and wherein the first RGB color and the second RGB color are selected such that combined color modifications maintain color neutrality of the modified UI.

Sato discloses the modified second color pattern comprises a second RGB color that is complementary to the first RGB color, and wherein the first RGB color and the second RGB color are selected such that combined color modifications maintain color neutrality of the reference plane (Sato: para.0082 “It is a ninth aspect of the present invention that, in the three-dimensional information presentation device according to the eighth aspect of the invention, a color of a light pattern projected from one of the plurality of light projecting means is complementary to a color of a light pattern projected from another one of the plurality of light projecting means, causing light added up on the reference plane to be white and light added up on other than the reference plane to be colored.” When applying two color patterns, the colors of these patterns are chosen to be complementary to each other such that a neutral white light is applied, thereby not distorting the reference plane (para.0034)).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Thubert-LeCun with Sato in order to incorporate the modified second color pattern comprises a second RGB color that is complementary to the first RGB color, and wherein the first RGB color and the second RGB color are selected such that combined color modifications maintain color neutrality of the reference plane, and apply this concept to Thubert-LeCun, which applies colors onto a display, such that the resulting patterns that are displayed in Thubert amount to a neutral white as described in Sato. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of applying a light that does not distort the colors of the item of interest (Sato: para.0066); by doing so, the patterns would be less disruptive to the user during the video conference, which has a visual component (Thubert: para.0015).

Regarding Claims 16 and 20, they do not teach nor further define over the limitations of claim 4; therefore the supporting rationale for the rejection of claim 4 applies equally well to claims 16 and 20.

Claim(s) 7, 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Thubert et al. (hereinafter Thubert, US 2022/0368547 A1) in view of LeCun et al. (hereinafter LeCun, US 2019/0110198 A1), further in view of Tussy et al. (hereinafter Tussy, US 2023/0073410 A1).

Regarding Claim 7, Thubert-LeCun discloses claim 6 as set forth above. However, Thubert-LeCun does not explicitly disclose wherein the first set of time periods and the second set of time periods are determined using a pseudo-random sequence.

Tussy discloses wherein the first set of time periods and the second set of time periods are determined using a pseudo-random sequence (Tussy: para.0149 “As one example, when a user begins authentication, the authentication server may generate and send instructions to the user's device to display a random sequence of colors at random intervals. The authentication server stores the randomly generated sequence for later comparison with the authentication information received from the mobile device. During authentication imaging, the colors displayed by the device are projected onto the user's face and are reflected off the user's eyes (the cornea of the eyes) or any other surface that receives and reflects the light from the screen” When authenticating a user, random sequences of colors are displayed at random intervals; therefore the time periods for which each color is displayed are randomly generated).

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Thubert-LeCun with Tussy in order to incorporate wherein the first set of time periods and the second set of time periods are determined using a pseudo-random sequence. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved security for user authentication (Tussy: para.0148).

Regarding Claim 11, Thubert discloses claim 1 as set forth above. However, Thubert does not explicitly disclose wherein verifying the integrity of the video content generated by the first client device is further based on latency between causing the modified UI to be presented on the first client device and receiving, from the first client device, the video stream generated by the first client device subsequent to the presentation of the modified UI on the first client device.
Tussy discloses wherein verifying the integrity of the video content generated by the first client device is further based on latency between causing the modified UI to be presented on the first client device and receiving, from the first client device, the video stream generated by the first client device subsequent to the presentation of the modified UI on the first client device (Tussy: para.0149 “As one example, when a user begins authentication, the authentication server may generate and send instructions to the user's device to display a random sequence of colors at random intervals. The authentication server stores the randomly generated sequence for later comparison with the authentication information received from the mobile device. During authentication imaging, the colors displayed by the device are projected onto the user's face and are reflected off the user's eyes (the cornea of the eyes) or any other surface that receives and reflects the light from the screen” para.0006 “Activating the camera to obtain a video feed, monitoring for a video feed, and receiving a received video feed. Detecting a time delay between the activating the camera and the receiving the received video feed, and comparing the time delay to an expected delay or range of expected delay. If the time delay does not match an expected delay or range of expected delay, terminating the authentication session, providing a notification that the time delay to an expected delay or range of expected delay, or both.” para.0007 “In one embodiment, if the time delay does not match an expected delay or range of expected delay, designating the video feed to be a video injection. The video feed from the camera may be a video feed of the user's face.” when authenticating a user, random sequences of colors are displayed at random intervals. During this period, the system also determines the delay in which the response is received during authentication to see if this is an injected video feed.). 
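The latency-based injection check Tussy describes, comparing the measured delay between activating the camera and receiving the video feed against an expected delay or range, can be sketched as below. The function name and the expected-delay bounds are hypothetical assumptions for illustration:

```python
# Illustrative sketch (assumed names and thresholds): flag a feed as a
# possible video injection when the delay between triggering capture
# and receiving the feed falls outside an expected range, in the
# manner of Tussy para.0006-0007.
def is_injected(delay_ms: float,
                expected_min_ms: float = 50.0,
                expected_max_ms: float = 400.0) -> bool:
    """Return True when the measured delay is outside the expected range."""
    return not (expected_min_ms <= delay_ms <= expected_max_ms)
```

A delay that is too long can indicate re-encoding of a synthetic feed, while a delay that is too short can indicate a pre-recorded stream injected without a real capture path; the check above treats both as suspect.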
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Thubert with Tussy in order to incorporate wherein verifying the integrity of the video content generated by the first client device is further based on latency between causing the modified UI to be presented on the first client device and receiving, from the first client device, the video stream generated by the first client device subsequent to the presentation of the modified UI on the first client device. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved security by detecting injected videos of the user's face rather than the user him/herself (Tussy: para.0007).

Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Thubert et al. (hereinafter Thubert, US 2022/0368547 A1) in view of Maizels et al. (hereinafter Maizels, US 2024/0073219 A1).

Regarding Claim 10, Thubert discloses claim 1 as set forth above. Thubert further discloses wherein verifying the integrity of the video content generated by the first client device comprises: providing the video stream generated by the first client device and the color pattern encoding as input to an operation (Thubert: para.0043 “In some embodiments, operation 530 decodes the continuously modulated data to detect one or more timing indications (such as encoded counters as discussed above), which are then used to synchronize comparisons between the transmitted continuously modulated data and the continuously modulated data derived from the received video stream. Thus, in some embodiments, operation 530 compares, for example, data encoded with a particular time indication in both the transmitted and received modulated data.” The transmitted modulated data, i.e.
the color pattern encoding, and the received video stream are used in operation 530.); receiving an output of the operation, the output indicating a tolerance of a color pattern of the video stream corresponding to the color pattern encoding (Thubert: para.0044 “If the two sets of modulated data are within a threshold tolerance, operation 530 determines that the data match.” A tolerance of the modulated data and the received video stream is determined.); and upon determining that the tolerance satisfies a threshold criterion, confirming the integrity of the video content generated by the first client device (Thubert: para.0044 “If the two sets of modulated data are within a threshold tolerance, operation 530 determines that the data match.” para.0045 “Operation 535 maintains the participant user device in the video conference based on the conditional detecting of operation 530. For example, if operation 530 determines that the two sets of modulated data are sufficiently similar in amplitude and/or frequency, the participant user device's participation in the video conference is continued or maintained (e.g., video conference data is shared with the participant user device).” para.0023 “If the two signals are sufficiently similar, the video of the participant is considered validated, at least in some embodiments.” Based on the tolerance of the match being within a threshold, the integrity of the stream is validated.).
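The threshold-tolerance comparison in Thubert's operation 530 can be sketched as below. The representation of the modulated data as equal-length amplitude sequences, and the function name, are assumptions for illustration; Thubert does not specify a data layout:

```python
# Illustrative sketch (assumed data representation): declare a match
# when the transmitted modulated data and the data recovered from the
# received video stream agree element-wise within a threshold
# tolerance, in the manner of Thubert para.0044.
def within_tolerance(sent, received, tolerance: float) -> bool:
    """Compare two equal-length amplitude sequences element-wise."""
    if len(sent) != len(received):
        return False  # desynchronized streams cannot match
    return all(abs(s - r) <= tolerance for s, r in zip(sent, received))
```

In practice the two sequences would first be time-aligned using the encoded timing indications Thubert describes in para.0043; the sketch assumes alignment has already been done.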
However Thubert does not explicitly disclose wherein verifying the integrity of the video content generated by the first client device comprises: providing the video stream generated by the first client device and the color pattern encoding as input to a trained artificial intelligence (AI) model; receiving an output of the trained AI model, the output indicating a likelihood of a color pattern of the video stream corresponding to the color pattern encoding; and upon determining that the likelihood satisfies a threshold criterion, confirming the integrity of the video content generated by the first client device Maizels discloses wherein verifying the integrity of the video content generated by the first client device comprises: providing the video stream generated by the first client device (Maizels: para.0391 video conference, para.0419 video stream) and a reference dataset as input to a trained artificial intelligence (AI) model (Maizels: para.0007 “These embodiments may involve operating a wearable coherent light source configured to project light towards a facial region a head of an individual; operating at least one detector configured to receive coherent light reflections from the facial region and to output associated reflection signals; analyzing the reflection signals to determine specific facial skin micromovements of the individual; accessing memory correlating a plurality of facial skin micromovements with the individual; searching for match between the determined specific facial skin micromovements and at least one of the plurality of facial skin micromovements in the memory; if a match is identified, initiating a first action; and if a match is not identified, initiating a second action different from the first action.” Para.0026 “These embodiments may involve operating a wearable light source configured to project light in a graphical pattern on a facial region of an individual, wherein the graphical pattern is configured to visibly convey 
information; receiving from a sensor, output signals corresponding with a portion of the light reflected from the facial region” para.0245 “In addition, an artificial intelligence model may be employed and used to search for a match in a dataset accessible to the AI model, as described in the following paragraph. In some cases, the initiated search may be used for finding which of the plurality of facial skin micromovements was most likely generated by a same individual that generated the specific facial skin micromovements.” A reference dataset along with the received video of the light or graphical pattern projected on the user's face is obtained and input into a machine learning/AI model); receiving an output of the trained AI model, the output indicating a likelihood of a color pattern of the video stream corresponding to the color pattern encoding (Maizels: para.0245 “A likelihood level or a certainty level of a match may be determined to provide an indication of probability or degree of confidence in the determination that the identification hypothesis is correct, i.e., that a reference facial skin micromovements stored in the memory was indeed generated by a same individual that generated the specific facial skin micromovements.
In some disclosed embodiments, a match may be considered to be found when the likelihood level or the certainty level is, by way of example only, greater than 90%, greater than 95%, or greater than 99%.” A likelihood is determined for the match as to whether the same user generated the facial micromovements based on a light or pattern displayed on the face, per para.0007); and upon determining that the likelihood satisfies a threshold criterion, confirming the integrity of the video content generated by the first client device (Maizels: para.0245 “A likelihood level or a certainty level of a match may be determined to provide an indication of probability or degree of confidence in the determination that the identification hypothesis is correct, i.e., that a reference facial skin micromovements stored in the memory was indeed generated by a same individual that generated the specific facial skin micromovements. In some disclosed embodiments, a match may be considered to be found when the likelihood level or the certainty level is, by way of example only, greater than 90%, greater than 95%, or greater than 99%.” If the likelihood is above a threshold, then the integrity of the video content generated by the user is determined, i.e., the user is validated.).
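The likelihood-threshold step Maizels describes, confirming a match only when the model's likelihood or certainty level exceeds a cutoff such as 90%, 95%, or 99%, can be sketched as below. The function name and default threshold are assumptions for illustration:

```python
# Illustrative sketch (assumed names): confirm integrity only when the
# trained model's likelihood output strictly exceeds the chosen
# threshold, in the manner of Maizels para.0245 (e.g., >90%, >95%,
# or >99%).
def confirm_integrity(likelihood: float, threshold: float = 0.95) -> bool:
    """Declare a match only when likelihood strictly exceeds threshold."""
    return likelihood > threshold
```

The choice of threshold trades false accepts against false rejects; Maizels offers 90%, 95%, and 99% only as examples, so the 0.95 default here is merely a placeholder.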
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Thubert with Maizels in order to incorporate wherein verifying the integrity of the video content generated by the first client device comprises: providing the video stream generated by the first client device and a reference dataset as input to a trained artificial intelligence (AI) model; receiving an output of the trained AI model, the output indicating a likelihood of a color pattern of the video stream corresponding to the color pattern encoding; and upon determining that the likelihood satisfies a threshold criterion, confirming the integrity of the video content generated by the first client device, and to apply this concept to the video stream generated by the first client device and the color pattern encoding as described in Thubert. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of authenticating the identity of the user (Maizels: para.0009, para.0245).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Ericson, US 2021/0182371 A1 (see para.0045, Figs. 2 and 5), showing the application of multiple two-dimensional patterns onto a face for identity verification.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EUI H KIM, whose telephone number is (571) 272-8133. The examiner can normally be reached 7:30-5 M-R, M-F alternating. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kamal B Divecha, can be reached at (571) 272-5863.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EUI H KIM/
Examiner, Art Unit 2453

/KAMAL B DIVECHA/
Supervisory Patent Examiner, Art Unit 2453

Prosecution Timeline

Jan 23, 2024
Application Filed
Jan 13, 2026
Non-Final Rejection — §102, §103
Apr 07, 2026
Applicant Interview (Telephonic)
Apr 08, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12549457
CREATING DECENTRALIZED MULTI-PARTY TRACEABILITY OF SLA USING A BLOCKCHAIN
2y 5m to grant Granted Feb 10, 2026
Patent 12519859
DETERMINING DATA MIGRATION STRATEGY IN HETEROGENEOUS EDGE NETWORKS
2y 5m to grant Granted Jan 06, 2026
Patent 12506818
METHOD AND SYSTEM FOR TIME SENSITIVE PROCESSING OF TCP SEGMENTS INTO APPLICATION LAYER MESSAGES
2y 5m to grant Granted Dec 23, 2025
Patent 12483462
Cloud Network Failure Auto-Correlator
2y 5m to grant Granted Nov 25, 2025
Patent 12470606
SYSTEMS AND METHODS FOR SCHEDULING FEATURE ACTIVATION AND DEACTIVATION FOR COMMUNICATION DEVICES IN A MULTIPLE-DEVICE ACCESS ENVIRONMENT
2y 5m to grant Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
49%
Grant Probability
99%
With Interview (+52.9%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 156 resolved cases by this examiner. Grant probability derived from career allow rate.
