DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The Applicant Remarks filed 12/03/2025 have been received and considered.
The IDS filed 10/09/2025 has been received and considered.
Claims 1, 11, 18 – 19, 25 and 31 have been amended.
Claims 32 – 34 remain withdrawn.
Claims 1 – 31, all of the remaining claims pending in this application, have been rejected.
Response to Applicant’s Remarks
Applicant’s remarks were filed 12/03/2025 regarding amendments to independent claims 1, 11, 18 – 19, 25 and 31. Applicant’s remarks starting on Page 9 argue that the prior art Nayshtut does not teach the newly added claim limitation “receiving, from a device of a sending party different from said receiving party,” nor a verification based on a time difference between the time of receipt of the video by the receiving party and the time point at which the video was taken.
Examiner disagrees with the remarks made by the Applicant. Pertaining to the network, two configurations are available. In the first, the network is based on internal departments in communication with each other. In the second, the network is based on a third-party system in which external entities have been given permission to be on the network to communicate with each other, or in which the third party may communicate with an entity on behalf of a different trusted entity on said network. In either case the other machine is a different party; see especially [0025]. While Examiner notes that having the timestamp information may not anticipate the claim language, having the timestamp information would make obvious, to one skilled in the art, the difference between two specific timestamps (i.e., a capture time and a time at which the network received the respective video). Examiner further notes that claim 1, rejected over Nayshtut, recited the broad claim language “verifying authenticity of the received video based…on a time difference between a time of receipt of the video by the receiving party and the time point,” and knowing the timestamp information would have allowed for that. Nayshtut describes, at [0024], [0030], [0032], and [0037], aspects of the timestamp information, including that it is part of the anti-spoof engine and that multiple optical signals of varying lengths may be used during a single session. The logical conclusion to draw is that these are for detecting (continuing) liveness; otherwise, why have multiple timestamps as part of an anti-spoof engine? Anderson was referenced as prior art teaching the claim language of claim 2, “verifying that the time difference between the time of receipt of the video by the receiving party and the time point is ‘within a predefined limit’” ([0029]), which further limits the claim language of claim 1.
Therefore, while Examiner agrees with some remarks mentioned by the Applicant, the Examiner maintains that the combined prior art referenced in the non-final mailed 09/10/2025 does indeed teach the newly added features and the original features of the claims, as detailed below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 5 – 12, 15 – 17, 18, 23 – 24, 29, and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Nayshtut et al. (US Publication No. 2018/0130168 A1) (hereinafter Nayshtut) in view of Anderson et al. (US Publication No. 2021/0103647 A1) (hereinafter Anderson).
Claim 1
Regarding Claim 1, an independent method claim, Nayshtut teaches a method of authenticating video, comprising using a computing device of a receiving party, for: receiving, from a device of a sending party different from said receiving party, a video showing a light pattern being projected onto an area captured in the video, the video being associated with a time point (Figure 1; "The computers 106 and/or computer servers 104 may each comprise a plurality of VMs, containers, and/or other types of virtualized computing systems for processing computing instructions and transmitting and/or receiving data over computer networks 102. For example, the computers 106 and computer server 104 may be configured to support a multi-tenant architecture, where each tenant may implement its own secure and isolated virtual network environment. Although not illustrated in FIG. 1, the network infrastructure 100 may connect computer networks 102 to a variety of other types of computing device, such as VMs, containers, hosts, storage devices, electronic devices (e.g., wearable electronic devices), and/or any other electronic device capable of transmitting and/or receiving data over computer networks 102.", Paragraph [0020]…"The anti-spoof engine may provide instructions and/or modify the projection of a light source based on the selected optical patterns and/or data elements. For example, the anti-spoof engine may select a dot grid pattern for the light source to project into a scene.", Paragraph [0023]…"The anti-spoof engine may store and/or have access to multiple optical patterns (e.g., a list of optical patterns) and/or various data element information, such as timestamp information, geolocation coordinates (e.g., global positioning system (GPS) coordinates), unique computer identifiers, random numbers, and/or one time password (OTP) sequences.", [0023]; "For instance, embodiments of the present disclosure may separate out one or more of the computing system components.
For example, the projection units 212, 312, 412, and 512 and/or image capturing devices 214, 314, 414, and 514 may be separate and independent from the anti-spoof engines 206, 306, 406, and 506 that may be located on one or more remote devices. For example, the projection units 212, 312, 412, and 512 and/or image capturing devices 214, 314, 414, and 514 may be devices that are externally connected, such as externally wired and/or wireless connections, to the computing system 202, 302, 402, and 502. In this instance, the computing system 202, 302, 402, and 502 may remotely receive the captured image from the image capturing devices. Additionally, rather than using a single anti-spoof engine that manages both the emission and extraction of the optical watermark signals, other embodiments may use more than one anti-spoof engines, where each anti-spoof engine manages a portion of the optical watermark signal processing. For example, one anti-spoof engine may be configured to generate and cause emission of the optical watermark signal and a second anti-spoof engine may be configured to receive the reflected optical watermark signal and extract the watermark signal. The different anti-spoof engines may also be located on separate computing systems, for example, on separate trusted network devices.”, Paragraph [0026]);
[Embedded image: media_image1.png]
extracting the light pattern from the received video (Figure 3; "Once the computing system receives the image source, the computing system analyzes the image source to extract the reflected optical watermark signal.", Paragraph [0018]), where, as previously noted, the components can also be external components, which would allow for the projection and capturing to occur at a location different from one of the “potentially” multiple anti-spoof engines; and
[Embedded image: media_image2.png]
verifying authenticity of the received video based on the extracted light pattern and on a reference light pattern identifier (Rejected as applied above).
Nayshtut teaches having access to the timestamp information (which would allow one skilled in the art to determine time differences).
However, Anderson teaches (with greater detail) verifying authenticity of the received video based on a time difference between a time of receipt of the video by the receiving party and the time point ("In other embodiments, the validation control 124g is configured to compare a time stamp associated with an image taken by the mobile device 140 with the current time, a time stamp associated with an endorsement request, and/or a time stamp associated with image data provided by the requesting client device 130, for example. Here, the validation control 124g can compare the two times/timestamps in order to provide some verification of the endorsement. This may be done, for example, to ensure the verification image is sufficiently recent and/or to prevent the unauthorized reuse of certain images (e.g. to prevent a user from using an old picture stored in a social media application)", Paragraph [0029]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nayshtut to incorporate the comparison of two timestamps, as disclosed by Anderson. The suggestion/motivation for doing so would have been to use the timestamps in the authentication process as a means of determining whether enough time had elapsed from the time of record for the data to be at risk of tampering.
Claim 2
Regarding Claim 2, dependent on claim 1, Nayshtut, in view of Anderson, teaches the invention as claimed in claim 1.
Nayshtut does not explicitly teach verifying that the time difference between the time of receipt of the video by the receiving party and the time point is within a predefined limit. However, Nayshtut includes a timestamp in the anti-spoofing system, and a reasonable conclusion is therefore that this information is used to detect “spoofed” video.
Anderson teaches verifying that the time difference between the time of receipt of the video by the receiving party and the time point is within a predefined limit (Rejected as applied to claim 1).
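For illustration only, the time-difference check addressed in claims 1 and 2 (comparing a capture timestamp against the time of receipt and confirming the difference falls within a predefined limit) amounts to a simple comparison. The following sketch is not drawn from the cited references; the function name, the 30-second limit, and all other names are hypothetical:

```python
from datetime import datetime, timedelta

def verify_time_difference(capture_time: datetime,
                           receipt_time: datetime,
                           limit: timedelta = timedelta(seconds=30)) -> bool:
    """Return True if the capture-to-receipt delay is non-negative and
    within a predefined limit (the limit value is hypothetical)."""
    return timedelta(0) <= (receipt_time - capture_time) <= limit

# Example: a video received 5 seconds after capture passes the check;
# one received an hour later fails it.
t0 = datetime(2025, 9, 10, 12, 0, 0)
assert verify_time_difference(t0, t0 + timedelta(seconds=5))
assert not verify_time_difference(t0, t0 + timedelta(hours=1))
```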
Claim 5
Regarding Claim 5, dependent on claim 1, Nayshtut, in view of Anderson, teaches the invention as claimed in claim 1.
Nayshtut further teaches obtaining the reference light pattern identifier, using a light pattern identifier selector used locally by the receiving party (Figure 3).
[Embedded image: media_image3.png]
Claim 6
Regarding Claim 6, dependent on claim 1, Nayshtut, in view of Anderson, teaches the invention as claimed in claim 1.
Nayshtut further teaches obtaining the reference light pattern identifier, using a light pattern identifier selector used locally by the receiving party and data indicating the time point ("Additionally or alternatively, the optical watermark signal may be encoded with one or more various data elements, such as timestamp information, a unique computer identifier (ID), and/or randomly generated numbers.", Paragraph [0018]).
Claim 7
Regarding Claim 7, dependent on claim 1, Nayshtut, in view of Anderson, teaches the invention as claimed in claim 1.
Nayshtut further teaches extracting a light pattern identifier from the extracted light pattern, and verifying that the extracted light pattern identifier is identical to the reference light pattern identifier (Figure 3, #210 "COMPARATOR"; "Additionally or alternatively, the anti-spoof engine may obtain the extracted data element information encoded within the optical pattern and/or modulated optical signal and compare the extracted data element information with the expected data element information.", Paragraph [0026]).
[Embedded image: media_image4.png]
Claim 8
Regarding Claim 8, dependent on claim 1, Nayshtut, in view of Anderson, teaches the invention as claimed in claim 1.
Nayshtut further teaches selecting a second light pattern from a database of light patterns, using the reference light pattern identifier, and verifying that the selected light pattern is identical to the light pattern extracted from the received video ("Similar to computing system 202, for a given visual authentication sessions, the computing system 302 may emit a one or more optical watermark signals at one or more time durations. For example, for a visual authentication session, the computing system 202 may emit a first optical watermark signal for a duration of about a second, a second optical watermark signal for a duration of about two seconds, and a third optical watermark signal for a duration of about three seconds.", Paragraph [0037]), where the extraction and verification process of the one or more watermarks would follow the same process as indicated in Figure 3.
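For illustration only, the selection-and-verification step recited in claim 8 (look up the light pattern named by the reference identifier, then confirm it matches the pattern extracted from the video) can be modeled as a lookup followed by an equality check. This sketch is not drawn from Nayshtut; the database contents and all names are hypothetical:

```python
# Hypothetical pattern database keyed by light pattern identifier.
PATTERN_DB = {
    "pat-001": "dot-grid",
    "pat-002": "line-stripes",
}

def verify_pattern(reference_id: str, extracted_pattern: str) -> bool:
    """Select the light pattern named by the reference identifier and
    verify it is identical to the pattern extracted from the video."""
    expected = PATTERN_DB.get(reference_id)
    return expected is not None and expected == extracted_pattern

assert verify_pattern("pat-001", "dot-grid")        # match
assert not verify_pattern("pat-001", "line-stripes")  # mismatch
```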
Claim 9
Regarding Claim 9, dependent on claim 1, Nayshtut, in view of Anderson, teaches the invention as claimed in claim 1.
Although such a timestamp is implied in any anti-spoofing video/image authentication process, Nayshtut does not explicitly discuss wherein the time point is indicated in a timestamp embedded in the received video.
However, Anderson further teaches wherein the time point is indicated in a timestamp embedded in the received video ("An endorsed electronic instrument and electronic verification information is received from the computing device, including received imagery data and a time stamp indicating when the received imagery data was captured.", Abstract), Examiner notes that this could be perceived as a timestamp that is also projected or a timestamp that is included in the metadata of the video.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nayshtut to incorporate a time point being indicated in a timestamp embedded in the video, as disclosed by Anderson. The suggestion/motivation for doing so would have been to use the timestamps in the authentication process as a means of determining whether enough time had elapsed from the time of record for the data to be at risk of tampering.
Claim 10
Regarding Claim 10, dependent on claim 1, Nayshtut, in view of Anderson, teaches the invention as claimed in claim 1.
Nayshtut, in view of Anderson, further teaches wherein the time point is a time of live broadcasting of the video (Rejected as applied to claim 2), where the timestamp associated with the precise moment of capture is what is used in the comparison (to obtain a difference) in claim 2.
Claim 23
Regarding Claim 23, dependent on claim 19, Nayshtut teaches the invention as claimed in claim 19.
Nayshtut, in view of Anderson, further teaches obtaining the reference light pattern identifier, using a light pattern identifier selector used locally by the sending party and data identifying a time point associated with the video (Rejected as applied to claim 1), where the network described in Figure 1/Paragraph [0020] details how the process allows for the sending/receiving of data throughout the computer networks.
Claim 24
Regarding Claim 24, dependent on claim 19, Nayshtut teaches the invention as claimed in claim 19.
Nayshtut, in view of Anderson, further teaches sending the video with data identifying a time point associated with the video (Rejected as applied to claim 1), where the network described in Figure 1/Paragraph [0020] details how the process allows for the sending/receiving of data throughout the computer networks.
Claim 11, an independent system claim, is rejected for the same reasons as applied to claim 1.
Claims 12 and 15 – 17 are rejected for the same reason as applied to the above claims.
Claim 18, an independent non-transitory computer readable medium claim, is rejected for the same reasons as applied to claim 1.
Claims 29 – 30, both dependent on claim 19, are rejected as applied to the above claims.
Claims 3 – 4, 13 – 14, 20 – 21, and 26 – 27 are rejected under 35 U.S.C. 103 as being unpatentable over Nayshtut et al. (US Publication No. 2018/0130168 A1) (hereinafter Nayshtut) in view of Anderson et al. (US Publication No. 2021/0103647 A1) (hereinafter Anderson) in further view of Non-Patent Literature “Real-Time Index Authentication for Event-Oriented Surveillance Video Query using Blockchain” to Nikouei et al. (hereinafter Nikouei).
Claim 3
Regarding Claim 3, dependent on claim 1, Nayshtut, in view of Anderson, teaches the invention as claimed in claim 1.
Neither Nayshtut nor Anderson, nor the combination, explicitly teaches querying a server computer of a trusted third party.
However, Nikouei teaches querying a server computer of a trusted third party (Figure 2; "In each domain, the fog device not only enforces predefined security policies to manage domain related devices and services, but also acts as an intermediate to interact with public blockchain and cloud to enable the index authentication for event-oriented surveillance video query.", Section III: Real-Time Index Authentication…"Processing the video instantly gives better understanding of the event taking place in real time. The surveillance camera captures the video and transfers it to the edge/fog devices of choice in real-time.", Section III: Real-Time Index Authentication, Part A: Smart Surveillance System and Secure Data Transfer).
[Embedded image: media_image5.png]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the teachings of Nayshtut, in view of Anderson, to incorporate querying a server computer of a trusted third party, as disclosed by Nikouei. The suggestion/motivation for doing so would have been to ensure that any authentication of a video/image would have a known light pattern identifier associated with it (from the party sending the video/image), along with a precise time at which the video/image was taken, to increase the chances of success of anti-spoofing processes.
Claim 4
Regarding Claim 4, dependent on claim 1, Nayshtut, in view of Anderson, teaches the invention as claimed in claim 1.
Neither Nayshtut nor Anderson, nor the combination, explicitly teaches querying a server computer of a trusted third party, using data indicating the time point.
However, Nikouei teaches querying a server computer of a trusted third party, using data indicating the time point (Rejected as applied to claim 3), where the real-time transfer indicates the time of capture at that precise moment.
Claim 20
Regarding Claim 20, dependent on claim 19, Nayshtut teaches the invention as claimed in claim 19.
Nayshtut teaches obtaining a reference light pattern identifier, but does not explicitly teach querying a server computer of a trusted third party to obtain it.
However, Nikouei teaches querying a server computer of a trusted third party (Rejected as applied to claim 3), where it would be obvious to one skilled in the art to combine the two concepts, as the light pattern identifier would be needed for the authentication process.
Claim 13, dependent on claim 11, is rejected for the same reason as applied to claim 3.
Claim 14, dependent on claim 11, is rejected for the same reason as applied to claim 4.
Claim 21, dependent on claim 19, is rejected as applied to the above rejected claims.
Claims 26 – 27, dependent on claim 25, are rejected as applied to the above rejected claims.
Claims 19, 22, 25, 28, and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Nayshtut et al. (US Publication No. 2018/0130168 A1) (hereinafter Nayshtut).
Claim 19
Regarding Claim 19, an independent method claim, Nayshtut teaches a method of authenticating video, the method comprising steps performed by a computing device of a sending party, the steps comprising: obtaining a reference light pattern identifier, the identifier identifying a light pattern ("The anti-spoof engine may provide instructions and/or modify the projection of a light source based on the selected optical patterns and/or data elements. For example, the anti-spoof engine may select a dot grid pattern for the light source to project into a scene.", Paragraph [0023]…"The anti-spoof engine may store and/or have access to multiple optical patterns (e.g., a list of optical patterns)…", Paragraph [0023]);
using a light source in communication with the computing device of the sending party, generating and projecting the light pattern identified by the identifier, onto an area covered by a camera (Figure 2);
[Embedded image: media_image6.png]
using the camera, capturing the area while being projected with the light pattern in a video (Rejected as applied directly above); and
sending the video to a receiving party (Figure 1; "The computers 106 and/or computer servers 104 may each comprise a plurality of VMs, containers, and/or other types of virtualized computing systems for processing computing instructions and transmitting and/or receiving data over computer networks 102. For example, the computers 106 and computer server 104 may be configured to support a multi-tenant architecture, where each tenant may implement its own secure and isolated virtual network environment. Although not illustrated in FIG. 1, the network infrastructure 100 may connect computer networks 102 to a variety of other types of computing device, such as VMs, containers, hosts, storage devices, electronic devices (e.g., wearable electronic devices), and/or any other electronic device capable of transmitting and/or receiving data over computer networks 102.", Paragraph [0020]), where the network described in Figure 1/Paragraph [0020] details how the process allows for the sending/receiving of data throughout the computer networks.
Examiner notes that it would be obvious to one skilled in the art that the network could be configured to host external entities, allowing for the sending and receiving of data with external parties or with a third party for authentication purposes.
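For illustration only, the sending-party steps of claim 19 (obtain an identifier, project the identified pattern, capture the projected area, send the video) can be sketched as a small pipeline. This is not Nayshtut’s implementation; every name here is a hypothetical stand-in:

```python
from dataclasses import dataclass

@dataclass
class VideoPackage:
    """What the sending party transmits: video frames plus metadata."""
    frames: list
    pattern_id: str
    capture_time: float

def send_authenticated_video(pattern_id, project, capture, send, clock):
    """Project the pattern named by pattern_id, capture the scene, and
    send the video with its identifier and capture time attached."""
    project(pattern_id)            # light source projects the pattern
    frames = capture()             # camera records the projected area
    pkg = VideoPackage(frames, pattern_id, clock())
    send(pkg)                      # transmit to the receiving party
    return pkg

# Minimal stub run with placeholder callables:
pkg = send_authenticated_video(
    "pat-001",
    project=lambda pid: None,
    capture=lambda: ["frame0", "frame1"],
    send=lambda p: None,
    clock=lambda: 1000.0,
)
assert pkg.pattern_id == "pat-001" and len(pkg.frames) == 2
```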
Claim 22
Regarding Claim 22, dependent on claim 19, Nayshtut teaches the invention as claimed in claim 19.
Nayshtut further teaches obtaining the reference light pattern identifier, using a light pattern identifier selector used locally by the sending party (Rejected as applied to claim 19), where the network described in Figure 1/Paragraph [0020] details how the process allows for the sending/receiving of data throughout the computer networks.
Claim 25, an independent system claim, is rejected for the same reasons as applied to claim 19.
Claim 28, dependent on claim 25, is rejected as applied to the above claims.
Claim 31, an independent non-transitory computer readable medium claim, is rejected for the same reasons as applied to claim 19.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ronde Miller, whose telephone number is (703) 756-5686. The examiner can normally be reached Monday-Friday, 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor Gregory Morse can be reached on (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RONDE LEE MILLER/Examiner, Art Unit 2663
/GREGORY A MORSE/Supervisory Patent Examiner, Art Unit 2698