DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The Applicant Remarks filed 12/12/2025 have been received and considered.
The IDS filed 09/25/2025 has been received and considered.
The Terminal Disclaimer filed 12/12/2025 has been received and approved. Therefore, the Double Patenting rejection in the Non-Final mailed 09/15/2025 is hereby withdrawn.
Claims 8 and 15 have been amended.
Claims 1 – 7 remain withdrawn.
Claims 8 – 19, all of the remaining claims pending in this application, have been rejected.
Response to Applicant’s Remarks
Applicant’s remarks were filed 12/12/2025 regarding amendments to independent claims 8 and 15. Applicant’s remarks starting on Page 7 argue that the prior art Nayshtut teaches a “fundamentally different system and method which even under BRI do not render the claimed invention obvious or anticipated”. Furthermore, Applicant argues that Nayshtut is explicitly self-contained and that the Examiner identifies no teaching, suggestion, or motivation in Nayshtut toward delegating pattern generation or selection to an outside entity. Applicant contends that verification in Nayshtut compares the reflected pattern against the pattern that the same anti-spoof engine generated, and that the art does not teach obtaining an identifier. Overall, Applicant reiterates throughout that the anti-spoof engine’s architecture relies on self-contained integrity.
Examiner disagrees with the remarks made by the Applicant. Pertaining to the network, there are two options truly available. First, the network is based on internal departments in communication with each other. Second, the network is based on a third-party system in which external entities have been given permission to be on the network to communicate with each other, or to allow the third party to communicate with an entity on behalf of a different trusted entity on said network. In either case, the other machine is a different party. See especially [0020 – 0025], where [0020] specifically states “For example, the computers 106 and computer server 104 may be configured to support a multi-tenant architecture, where each tenant may implement its own secure and isolated virtual network environment. Although not illustrated in FIG. 1, the network infrastructure 100 may connect computer networks 102 to a variety of other types of computing device, such as VMs, containers, hosts, storage devices, electronic devices (e.g., wearable electronic devices), and/or any other electronic device capable of transmitting and/or receiving data over computer networks 102.” Paragraph [0026] elaborates on the different components of the invention, explaining in further detail how the components can be separated and/or at different locations, and further discloses that there may be multiple anti-spoof engines at the different locations (each having the same or different functions, i.e., receiving, generating, emitting, extracting, etc.). Moreover, Paragraph [0023] discloses the patterns and types of identifiers associated with the various anti-spoof engines and how they can communicate or provide instructions. Given these points, the Examiner does not agree that the architecture of the anti-spoof engine is self-contained, as argued by the Applicant.
Furthermore, because the Applicant’s argument relies so heavily on the anti-spoof engine being self-contained or internal, the Examiner notes that the Applicant has not provided any meaningful evidence, or pointed to any particular paragraph in the art, to support the contention that the art does not teach the limitations of the claims. Therefore, the Examiner maintains that the combined prior art referenced in the non-final mailed 09/15/2025 does indeed teach the newly added features and the original features of the claims, as detailed below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 8, 11 – 12, 15, and 18 – 19 are rejected under 35 U.S.C. 102(a)(2) as anticipated by or, in the alternative, under 35 U.S.C. 103 as obvious over Nayshtut et al. (US Publication No. 2018/0130168 A1) (hereinafter Nayshtut).
Claim 8
Regarding Claim 8, an independent method claim, Nayshtut teaches a method of authenticating video, comprising using a computing device of a receiving party, for: receiving, from a device of a sending party different from said receiving party, a video showing a light pattern being projected onto an area captured in the video (Figure 1; "The computers 106 and/or computer servers 104 may each comprise a plurality of VMs, containers, and/or other types of virtualized computing systems for processing computing instructions and transmitting and/or receiving data over computer networks 102. For example, the computers 106 and computer server 104 may be configured to support a multi-tenant architecture, where each tenant may implement its own secure and isolated virtual network environment. Although not illustrated in FIG. 1, the network infrastructure 100 may connect computer networks 102 to a variety of other types of computing device, such as VMs, containers, hosts, storage devices, electronic devices (e.g., wearable electronic devices), and/or any other electronic device capable of transmitting and/or receiving data over computer networks 102.", Paragraph [0020]…"The anti-spoof engine may provide instructions and/or modify the projection of a light source based on the selected optical patterns and/or data elements. For example, the anti-spoof engine may select a dot grid pattern for the light source to project into a scene.", Paragraph [0023]…"The anti-spoof engine may store and/or have access to multiple optical patterns (e.g., a list of optical patterns) and/or various data element information, such as timestamp information, geolocation coordinates (e.g., global positioning system (GPS) coordinates), unique computer identifiers, random numbers, and/or one time password (OTP) sequences.", [0023]; “For instances, embodiments of the present disclosure may separate out one or more of the computing system components.
For example, the projection units 212, 312, 412, and 512 and/or image capturing devices 214, 314, 414, and 514 may be separate and independent from the anti-spoof engines 206, 306, 406, and 506 that may be located on one or more remote devices. For example, the projection units 212, 312, 412, and 512 and/or image capturing devices 214, 314, 414, and 514 may be devices that are externally connected, such as externally wired and/or wireless connections, to the computing system 202, 302, 402, and 502. In this instance, the computing system 202, 302, 402, and 502 may remotely receive the captured image from the image capturing devices. Additionally, rather than using a single anti-spoof engine that manages both the emission and extraction of the optical watermark signals, other embodiments may use more than one anti-spoof engines, where each anti-spoof engine manages a portion of the optical watermark signal processing. For example, one anti-spoof engine may be configured to generate and cause emission of the optical watermark signal and a second anti-spoof engine may be configured to receive the reflected optical watermark signal and extract the watermark signal. The different anti-spoof engines may also be located on separate computing systems, for example, on separate trusted network devices.”, Paragraph [0026]);
[Image: media_image1.png, 488 × 660, greyscale]
extracting the light pattern from the received video (Figure 3; "Once the computing system receives the image source, the computing system analyzes the image source to extract the reflected optical watermark signal.", Paragraph [0018]), where, as previously mentioned, the components can also be external components, which would allow the projection and capturing to occur at a location different from that of one of the “potentially” multiple anti-spoof engines of internal or external trusted parties; and
[Image: media_image2.png, 531 × 516, greyscale]
obtaining, from a server computer of a trusted third party that is external to both the sending party and the receiving party, a reference light pattern identifier that was previously generated and stored by the trusted third party based on the time point associated with the video and on an identity of the sending party (Rejected as applied above), where the art applied to the previous limitations of the claim teaches a network having multiple tenants (each tenant may implement its own secure and isolated virtual network environment) “meaning other tenants would be external to one another”, each tenant being able to send and receive data over the network (The computers 106 and/or computer servers 104 may each comprise a plurality of VMs, containers, and/or other types of virtualized computing systems for processing computing instructions and transmitting and/or receiving data over computer networks 102.), and each engine has light pattern identifiers based on a time point associated with the video and identity of sender (The anti-spoof engine may provide instructions and/or modify the projection of a light source based on the selected optical patterns and/or data elements.
For example, the anti-spoof engine may select a dot grid pattern for the light source to project into a scene.", Paragraph [0023]…"The anti-spoof engine may store and/or have access to multiple optical patterns (e.g., a list of optical patterns) and/or various data element information, such as timestamp information, geolocation coordinates (e.g., global positioning system (GPS) coordinates), unique computer identifiers, random numbers, and/or one time password (OTP) sequences."), where the Applicant’s specification defines the light pattern identifier as “The light pattern is generated based on a light pattern identifier - say a key or index that identifies the light pattern, a file that the light pattern is encoded in, a text (say character string) that the light pattern in encrypted in, etc., or any combination thereof”;
verifying authenticity of the received video based on the extracted light pattern and on the reference light pattern identifier (Abstract; Paragraph [0069]).
While it appears that Nayshtut describes claim 8, to the extent that the identity of the parties is not entirely clear, it is routine that different entities are on a common network that is to some extent a “trusted” network, such as students at an educational institution or contractors/subcontractors on a corporate or government network. To the extent that Nayshtut is not specifying the operator of the multiple different computers in their network, it would have been obvious to one of ordinary skill in the art to allow multiple parties to share a network to facilitate communication and collaboration between related parties.
Claim 11
Regarding Claim 11, dependent on claim 8, Nayshtut teaches the invention as claimed in claim 8.
Nayshtut further teaches extracting a light pattern identifier from the extracted light pattern, and verifying that the extracted light pattern identifier is identical to the reference light pattern identifier (Figure 3, #210 "COMPARATOR"; "Additionally or alternatively, the anti-spoof engine may obtain the extracted data element information encoded within the optical pattern and/or modulated optical signal and compare the extracted data element information with the expected data element information.", Paragraph [0026]; Paragraph [0069]).
Claim 12
Regarding Claim 12, dependent on claim 8, Nayshtut teaches the invention as claimed in claim 8.
Nayshtut further teaches selecting a second light pattern from a database of light patterns, using the reference light pattern identifier, and verifying that the selected light pattern is identical to the light pattern extracted from the received video ("Similar to computing system 202, for a given visual authentication sessions, the computing system 302 may emit a one or more optical watermark signals at one or more time durations. For example, for a visual authentication session, the computing system 202 may emit a first optical watermark signal for a duration of about a second, a second optical watermark signal for a duration of about two seconds, and a third optical watermark signal for a duration of about three seconds.", Paragraph [0037]), where the extraction and verification process of the one or more watermarks would follow the same process as indicated in Figure 3.
Claim 15, an independent system claim, is rejected for the same reasons as applied to claim 8.
Claims 18 – 19, both dependent on claim 15, are rejected for the same reasons as applied to the above claims.
Claims 9, 13 – 14, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Nayshtut et al. (US Publication No. 2018/0130168 A1) (hereinafter Nayshtut) in view of Anderson et al. (US Publication No. 2021/0103647 A1) (hereinafter Anderson).
Claim 9
Regarding Claim 9, dependent on claim 8, Nayshtut teaches the invention as claimed in claim 8.
Nayshtut does not teach verifying that the time difference between the time of receipt of the video by the receiving party and the time point is within a predefined limit. Nayshtut does, however, include a timestamp in the anti-spoofing system, and a reasonable conclusion is that this information is used to detect “spoofed” video.
However, Anderson teaches (with greater detail) verifying that the time difference between the time of receipt of the video by the receiving party and the time point is within a predefined limit ("In other embodiments, the validation control 124g is configured to compare a time stamp associated with an image taken by the mobile device 140 with the current time, a time stamp associated with an endorsement request, and/or a time stamp associated with image data provided by the requesting client device 130, for example. Here, the validation control 124g can compare the two times/timestamps in order to provide some verification of the endorsement. This may be done, for example, to ensure the verification image is sufficiently recent and/or to prevent the unauthorized reuse of certain images (e.g. to prevent a user from using an old picture stored in a social media application)", Paragraph [0029]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nayshtut to incorporate the comparison of two timestamps, as disclosed by Anderson. The suggestion/motivation for doing so would have been to use the timestamps in the authentication process as a means to determine whether enough time had elapsed from the time of recording for the data to be at risk of tampering.
Claim 13
Regarding Claim 13, dependent on claim 8, Nayshtut teaches the invention as claimed in claim 8.
Although such a feature is implied in any anti-spoofing video/image authentication process and is further discussed in Paragraph [0023], Nayshtut does not explicitly disclose wherein the time point is indicated in a timestamp embedded in the received video.
However, Anderson further teaches wherein the time point is indicated in a timestamp embedded in the received video ("An endorsed electronic instrument and electronic verification information is received from the computing device, including received imagery data and a time stamp indicating when the received imagery data was captured.", Abstract). Examiner notes that this could be perceived either as a timestamp that is also projected or as a timestamp that is included, “embedded,” in the metadata of the video.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nayshtut to incorporate a timepoint being indicated in the timestamp embedded in the video, as disclosed by Anderson. The suggestion/motivation for doing so would have been to have an extra layer of security and further increase the effectiveness of the authentication process.
Claim 14
Regarding Claim 14, dependent on claim 8, Nayshtut teaches the invention as claimed in claim 8.
Nayshtut in view of Anderson further teaches wherein the time point is a time of live broadcasting of the video (Rejected as applied to claim 9).
Claim 16, dependent on claim 15, is rejected for the same reasons as applied to the above claims.
Claims 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Nayshtut et al. (US Publication No. 2018/0130168 A1) (hereinafter Nayshtut) in view of Non-Patent Literature “Real-Time Index Authentication for Event-Oriented Surveillance Video Query using Blockchain” to Nikouei et al. (hereinafter Nikouei).
Claim 10
Regarding Claim 10, dependent on claim 8, Nayshtut teaches the invention as claimed in claim 8.
Although the identity of the parties involved and the time point are taught in Paragraph [0023], Nayshtut does not explicitly teach querying a server computer of a trusted third party.
However, Nikouei teaches querying a server computer of a trusted third party (Figure 2; "In each domain, the fog device not only enforces predefined security policies to manage domain related devices and services, but also acts as an intermediate to interact with public blockchain and cloud to enable the index authentication for event-oriented surveillance video query.", Section III: Real-Time Index Authentication…"Processing the video instantly gives better understanding of the event taking place in real time. The surveillance camera captures the video and transfers it to the edge/fog devices of choice in real-time.", Section III: Real-Time Index Authentication – Part A: Smart Surveillance System and Secure Data Transfer).
[Image: media_image3.png, 295 × 479, greyscale]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nayshtut to incorporate querying a server computer of a trusted third party, as disclosed by Nikouei. The suggestion/motivation for doing so would have been to make sure that any authentication of a video/image would have a known light pattern identifier associated with it (from the party sending the video/image), along with a precise time at which the video/image was taken, to increase the chances of success of anti-spoofing processes.
Claim 17, dependent on claim 15, is rejected for the same reasons as applied to claim 10.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ronde Miller, whose telephone number is (703) 756-5686. The examiner can normally be reached Monday-Friday, 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor Gregory Morse can be reached on (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RONDE LEE MILLER/Examiner, Art Unit 2663
/GREGORY A MORSE/Supervisory Patent Examiner, Art Unit 2698