DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/06/2026 has been entered.
Response to Arguments
Applicant’s arguments with respect to the claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Hesterman, US 2018/0322197 A1 (Hesterman), has been newly applied to teach the newly added claim limitations.
The claim objection made to claim 5 in the previous Office Action has been withdrawn due to Applicant’s amendments.
Claims 1-19 are pending; claims 1, 5, and 19 have been amended.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-10 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Noy, US 2021/0148728 A1 (Noy), Nadler, US 2020/0387716 A1 (Nadler), Taheri-Shirazi, “Assisted Target Detection in Airborne Search and Rescue” (Taheri-Shirazi), and further in view of Hesterman, US 2018/0322197 A1 (Hesterman).
Regarding claim 1, Noy teaches a method comprising:
obtaining a video stream (obtaining a video) ([0090]) via a sensor mounted on a platform (imaging sensor(s) mounted on vehicular platforms) ([0002-0003]), wherein the video stream (video) ([0090]) includes at least one of
(i) at least one image frame (at least one video frame 10) (Fig. 1a; [0090]) having metadata parameters (wherein the video frames can include metadata that comprises operation conditions indicative of conditions during capturing) ([0043]) (the metadata can also be information determined by the LoS (line-of-sight) determination sensors) ([0113]) and
(ii) video stream frames (video including frames) ([0090]) from which metadata parameters are inferred (the metadata being inferred from the conditions during the time of capture) ([0043]);
wherein the metadata parameters includes a reference of the sensor (wherein the LoS can define a position and/or an orientation with respect to the fixed coordinate system established in space; i.e., from a position determination system of the sensor) ([0107]);
locating a location of interest (LOI) shown in at least one frame of the video stream or in the at least one image frame (locating a region of interest (ROI) included in the video frame) ([0090]),
wherein the LOI includes an object that is to be discriminated (wherein the ROI includes an object that is to be marked/emphasized) ([0095-0096]);
selecting at least a portion of the at least one frame of the video stream containing the LOI (selecting/obtaining an indication of a given ROI in the video) ([0118-0119]);
processing the selected portion of the at least one frame containing the LOI based on the reference of the sensor (processing the at least one frame containing the ROI based on the LoS of the sensor to identify PCVs (previously captured videos) that include the ROI) ([0120]); and
outputting, automatically, at least one resultant image in response to the processing, wherein the resultant image includes the object at the LOI to be discriminated (outputting at least one frame of a video, in response to finding a match, that includes the object in the ROI to be marked/emphasized) ([0120-0122]).
However, Noy does not explicitly teach a “geospatial” reference of the sensor.
Nadler teaches a server arrangement that acquires data from sensors and analyzes the data to determine at least one object of interest in a surveillance area (Abstract); wherein the plurality of sensors can be mounted on unmanned aerial vehicles (UAVs) and the like to acquire effective data pertaining to a particular object ([0059]); and wherein a geospatial location of each sensor is known to the server arrangement and therefore the plurality of sensors serve as reference points for determination of the geospatial location of the identified objects in the surveillance area ([0070]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Noy to include not only a fixed coordinate system established in space but also a geospatial reference, since doing so improves the detection accuracy of the system (Nadler; [0087]).
However, neither reference explicitly teaches wherein the geospatial reference includes “a frame center latitude, frame center longitude, and frame center elevation”.
Taheri-Shirazi teaches an assisted target detection (ATD) algorithm for airborne search and rescue (p. 1; Introduction); wherein the geospatial reference includes “a frame center latitude, frame center longitude, and frame center elevation” (frame center latitude, longitude, and elevation) (p. 21, Table 2.1 and p. 22, Figure 2.5).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior arts to include the frame center latitude, longitude, and elevation as data since it improves the automatic target detection system (Taheri-Shirazi; p. 2, Section 1.2, 2nd paragraph).
However, none of them explicitly teach “wherein the metadata parameters includes the sensor’s orientation, a field of view of the sensor, and a zoom level of the sensor”.
Hesterman teaches video data creation and management ([0002]); wherein videos can be generated by unmanned aerial vehicles (UAVs) ([0005-0006]); wherein the videos include rich metadata ([0006]); wherein geospatial data may be represented as, or used for the creation of a map ([0008-0009]); and wherein the metadata parameters includes the sensor’s orientation (wherein the metadata can include sensor orientation data) ([0025]), a field of view of the sensor (field of view data) ([0025]), and a zoom level of the sensor (zoom data of the sensor) ([0025]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior arts to include more metadata since having more metadata allows efficient and advantageous ways of finding video footage, and creating data, video, or image collection missions with geospatial data (Hesterman; [0007]).
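Purely for illustration, and not drawn from Noy, Nadler, Taheri-Shirazi, or Hesterman, the recited metadata parameters (frame center latitude, longitude, and elevation; sensor orientation; field of view; zoom level) and the step of processing a selected frame portion based on the sensor’s geospatial reference could be sketched as follows; every name, field, and the crude footprint test are assumptions (Python).

# Illustrative sketch only; hypothetical fields, including an assumed sensor_altitude.
import math
from dataclasses import dataclass

@dataclass
class FrameMetadata:
    frame_center_lat: float       # frame center latitude, degrees
    frame_center_lon: float       # frame center longitude, degrees
    frame_center_elev: float      # frame center elevation, meters
    sensor_orientation: tuple     # (roll, pitch, yaw), degrees
    field_of_view: float          # degrees
    zoom_level: float
    sensor_altitude: float        # meters, assumed for the footprint estimate below

def footprint_radius_m(md: FrameMetadata) -> float:
    # Very rough ground-footprint radius from the height above the frame center and half the field of view.
    height = max(md.sensor_altitude - md.frame_center_elev, 1.0)
    return height * math.tan(math.radians(md.field_of_view / 2.0))

def contains_loi(md: FrameMetadata, loi_lat: float, loi_lon: float) -> bool:
    # Crude planar check: is the location of interest inside the footprint around the frame center?
    dlat_m = (loi_lat - md.frame_center_lat) * 111_000.0
    dlon_m = (loi_lon - md.frame_center_lon) * 111_000.0 * math.cos(math.radians(md.frame_center_lat))
    return math.hypot(dlat_m, dlon_m) <= footprint_radius_m(md)

def select_frames_with_loi(frames_metadata, loi_lat, loi_lon):
    # Keep only the frames whose metadata indicates the LOI is in view.
    return [md for md in frames_metadata if contains_loi(md, loi_lat, loi_lon)]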
Regarding claim 2, Noy teaches wherein processing the selected portion of the at least one frame containing the LOI based on the reference (processing the at least one frame containing the ROI based on the LoS of the sensor to identify PCVSs (previously captured video segments) that include the ROI) (Abstract and [0120]) comprises: grouping a plurality of image frames together that depict the LOI regardless of a time at which the image frames in the plurality of image frames were obtained (wherein video frames are grouped together based on having the same ROI (ROI matching PCVs), which can be from different times) ([0130]).
However, Noy does not explicitly teach a “geospatial” reference of the sensor.
Nadler teaches a server arrangement that acquires data from sensors and analyzes the data to determine at least one object of interest in a surveillance area (Abstract); wherein the plurality of sensors can be mounted on unmanned aerial vehicles (UAVs) and the like to acquire effective data pertaining to a particular object ([0059]); and wherein a geospatial location of each sensor is known to the server arrangement and therefore the plurality of sensors serve as reference points for determination of the geospatial location of the identified objects in the surveillance area ([0070]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Noy to include not only a fixed coordinate system established in space but also a geospatial reference, since doing so improves the detection accuracy of the system (Nadler; [0087]).
Regarding claim 3, Noy teaches further comprising: determining which image frames containing the LOI from the plurality of image frames that are grouped together (wherein video frames are grouped together based on having the same ROI (ROI matching PCVs)) ([0130]) has a selected level of the metadata parameters (wherein the ROI matching PCVSs are associated with respective visibility scores higher than the visibility scores of the PCVSs excluding the displayed ROI matching PCVSs; wherein the visibility scores are part of the metadata associated with each PCVS) ([0125]) (the PCVSs can also be selected based on metadata that is associated with operation conditions or preferences selected by the operator) ([0127-0130]).
Regarding claim 4, Noy teaches further comprising: filtering the plurality of image frames that are grouped together to retain only the image frames that include the metadata parameters (wherein the ROI matching PCVSs are associated with respective visibility scores higher than the visibility scores of the PCVSs excluding the displayed ROI matching PCVSs; wherein the visibility scores are part of the metadata associated with each PCVS) ([0125]) (the PCVSs can also be selected based on metadata that is associated with operation conditions or preferences selected by the operator) ([0127-0130]).
Regarding claim 5, Noy teaches further comprising: bridging together non-sequential image frames from the plurality of image frames (the PCVSs that are selected and adjusted can be displayed to the pilot of the operating aircraft blended with the real-time video; wherein the PCVSs are selected at another time when weather conditions were better) ([0100-0103]), wherein the non-sequential image frames each depict the LOI at different times (wherein the selected PCVSs include the ROI as well as the video) ([0096], [0100-0103], and [0124]) as a condensed video stream (a condensed video using only the image frames that include the ROI and provide the best visibility) ([0100-0103] and [0124]).
Regarding claim 6, Noy teaches wherein processing the selected portion of the at least one frame containing the LOI based on the reference (processing the at least one frame containing the ROI based on the LoS of the sensor to identify PCVs (previously captured videos) that include the ROI) ([0120]) in the metadata parameters (wherein the LoS can define a position and/or an orientation with respect to the fixed coordinate system established in space; i.e., from a position determination system of the sensor) ([0107]) comprises: extracting a first plurality of image frames that depict the LOI from the video stream (extracting image frames that include the ROI from the video taken by the operating platform) ([0119]); filtering out a second plurality of image frames to retain only the image frames that depict the LOI from the video stream (finding image frames that include the ROI from the PCVSs (ROI matching PCVSs)) ([0120-0122] and [0125]); and condensing the plurality of image frames that depict the LOI that were extracted to create a condensed video stream of image frames that depict the LOI (a condensed video using only the image frames that include the ROI and provide the best visibility) ([0100-0103] and [0122-0124]).
However, Noy does not explicitly teach a “geospatial” reference of the sensor.
Nadler teaches a server arrangement that acquires data from sensors and analyzes the data to determine at least one object of interest in a surveillance area (Abstract); wherein the plurality of sensors can be mounted on unmanned aerial vehicles (UAVs) and the like to acquire effective data pertaining to a particular object ([0059]); and wherein a geospatial location of each sensor is known to the server arrangement and therefore the plurality of sensors serve as reference points for determination of the geospatial location of the identified objects in the surveillance area ([0070]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Noy to include not only a fixed coordinate system established in space but also a geospatial reference, since doing so improves the detection accuracy of the system (Nadler; [0087]).
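Purely for illustration of the extract, filter, and condense/bridge sequence discussed for claims 5 and 6, a hypothetical sketch follows; the frame record, the visibility score, and the time-ordered bridging are assumptions and are not the disclosure of any cited reference (Python).

# Illustrative sketch only; hypothetical frame records carrying a visibility score in their metadata.
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float      # capture time, seconds
    depicts_loi: bool     # whether the LOI appears in this frame
    visibility: float     # visibility score from the frame metadata, 0.0 to 1.0

def condense_stream(frames, min_visibility=0.5):
    # Extract the frames that depict the LOI.
    depicting = [f for f in frames if f.depicts_loi]
    # Filter out frames whose metadata falls below the selected visibility level.
    visible = [f for f in depicting if f.visibility >= min_visibility]
    # Bridge the surviving, possibly non-sequential frames into one condensed stream ordered by capture time.
    return sorted(visible, key=lambda f: f.timestamp)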
Regarding claim 7, Noy teaches further comprising: determining which image frames from the condensed video stream depicting the LOI has a selected level of the metadata parameters (wherein the ROI matching PCVSs are associated with respective visibility scores higher than the visibility scores of the PCVSs excluding the displayed ROI matching PCVSs; wherein the visibility scores are part of the metadata associated with each PCVS) ([0125]) (the PCVSs can also be selected based on metadata that is associated with operation conditions or preferences selected by the operator) ([0127-0130]); and identifying the images frames that have the selected level of the metadata parameters (wherein the ROI matching PCVSs are associated with respective visibility scores higher than the visibility scores of the PCVSs excluding the displayed ROI matching PCVSs; wherein the visibility scores are part of the metadata associated with each PCVS) ([0125]) (the PCVSs can also be selected based on metadata that is associated with operation conditions or preferences selected by the operator) ([0127-0130]).
Regarding claim 8, Noy teaches wherein processing the selected portion of the at least one frame containing the LOI based on the reference (processing the at least one frame containing the ROI based on the LoS of the sensor to identify PCVs (previously captured videos) that include the ROI) ([0120]) in the metadata parameters (wherein the LoS can define a position and/or an orientation with respect to the fixed coordinate system established in space; i.e., from a position determination system of the sensor) ([0107]) comprises: parsing the metadata parameters based on the reference of the sensor (grouping the metadata parameters based on the ROIs, ROI matching PCVSs, preferring PCVSs associated with LoSs (lines of sight) that are similar to the current LoS) ([0130]).
However, Noy does not explicitly teach a “geospatial” reference of the sensor.
Nadler teaches a server arrangement that acquires data from sensors and analyzes the data to determine at least one object of interest in a surveillance area (Abstract); wherein the plurality of sensors can be mounted on unmanned aerial vehicles (UAVs) and the like to acquire effective data pertaining to a particular object ([0059]); and wherein a geospatial location of each sensor is known to the server arrangement and therefore the plurality of sensors serve as reference points for determination of the geospatial location of the identified objects in the surveillance area ([0070]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Noy to include not only a fixed coordinate system established in space but also a geospatial reference, since doing so improves the detection accuracy of the system (Nadler; [0087]).
Regarding claim 9, Noy teaches further comprising: generating a cardinal coordinate representation (wherein the PCVSs that include the ROI of the operating platform have a fixed coordinate system established in space (e.g. earth coordinates)) ([0099-0100] and [0107]) associated with the at least one resultant image (outputting at least one frame of a video, in response to finding a match, that includes the object in the ROI to be marked/emphasized) ([0120-0122]), wherein portions of the cardinal coordinate representation is adapted to be selected to change a view angle of the LOI in the at least one resultant image (it is desirable to adjust any PCVS that is displayed to an operator of an operating platform to a LoS of the operating platform with respect to the fixed coordinate system established in space) ([0099-0100]). Nadler also teaches to capture a full 360 degree view of the surveillance area ([0059]); and wherein processing operations can be done on the output image(s) such as performing rotation of the image(s) ([0066]).
Regarding claim 10, Noy teaches further comprising: toggling the view angle in the at least one resultant image in response to selection of a portion of the cardinal coordinate representation (wherein the PCVS that is displayed to the operator can be adjusted as if they were captured from the same LoS of the operating platform with respect to the fixed coordinate system established in space) ([0100]). Nadler also teaches to capture a full 360 degree view of the surveillance area ([0059]); wherein processing operations can be done on the output image(s) such as performing rotation of the image(s) ([0066]); and wherein the user can make the selection and/or perform specific tasks associated with the system ([0086]).
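Purely for illustration of the cardinal coordinate representation and view-angle toggling discussed for claims 9 and 10, a hypothetical sketch follows; the bearing approximation, the eight-sector binning, and the frame fields are assumptions (Python).

# Illustrative sketch only; frames are assumed to expose frame_center_lat/frame_center_lon.
import math
from collections import defaultdict

def view_bearing_deg(frame_center_lat, frame_center_lon, loi_lat, loi_lon):
    # Approximate compass bearing from the LOI toward the frame center (the direction the LOI is viewed from).
    dy = frame_center_lat - loi_lat
    dx = (frame_center_lon - loi_lon) * math.cos(math.radians(loi_lat))
    return math.degrees(math.atan2(dx, dy)) % 360.0

def cardinal_representation(frames, loi_lat, loi_lon, sectors=8):
    # Group frames into compass sectors (N, NE, E, ...) around the LOI.
    grouped = defaultdict(list)
    for f in frames:
        bearing = view_bearing_deg(f.frame_center_lat, f.frame_center_lon, loi_lat, loi_lon)
        grouped[int(bearing // (360 / sectors))].append(f)
    return grouped

def toggle_view_angle(grouped, selected_sector):
    # Selecting a portion of the representation returns the frames for that view angle.
    return grouped.get(selected_sector, [])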
Regarding claim 19, Noy teaches a non-transitory computer program product (computer) ([0083]) including at least one non-transitory computer readable storage medium (non-transitory computer readable storage medium) ([0066]) having instructions encoded thereon that (having computer readable program code embodied therewith) ([0066]), when executed by one or more processors (executable by at least one processor of a computer) ([0066]), implement a process to organize (organize based on operating conditions and/or operator preferences) ([0125-0130]), prioritize (prioritize based on better visibility and/or the selection of preferences) ([0125-0130]), and retrieve images frames based on metadata included with the image frames (wherein the ROI matching PCVSs are associated with respective visibility scores higher than the visibility scores of the PCVSs excluding the displayed ROI matching PCVSs; wherein the visibility scores are part of the metadata associated with each PCVS) ([0125]) (the PCVSs can also be selected based on metadata that is associated with operation conditions or preferences selected by the operator) ([0127-0130]), the process comprising:
obtaining a video stream (obtaining a video) ([0090]) via at least one sensor mounted on a platform (imaging sensor(s) mounted on vehicular platforms) ([0002-0003]), wherein the video stream (video) ([0090]) includes at least one image frame (at least one video frame 10) (Fig. 1a; [0090]) having metadata parameters (wherein the video frames can include metadata that comprises operation conditions indicative of conditions during capturing) ([0043]) (the metadata can also be information determined by the LoS (line-of-sight) determination sensors) ([0113]), wherein one of the metadata parameters is a reference (wherein the LoS can define a position and/or an orientation with respect to the fixed coordinate system established in space; i.e., from a position determination system of the sensor) ([0107]);
locating a location of interest (LOI) shown in at least one frame of the video stream (locating a region of interest (ROI) included in the video frame) ([0090]), wherein the LOI includes an object that is to be discriminated (wherein the ROI includes an object that is to be marked/emphasized) ([0095-0096]);
selecting at least a portion of at least one frame containing the LOI in the video stream (selecting/obtaining an indication of a given ROI in the video) ([0118-0119]);
processing the selected portion of the at least one frame containing the LOI based on the reference (processing the at least one frame containing the ROI based on the LoS of the sensor to identify PCVs (previously captured videos) that include the ROI) ([0120]); and
automatically outputting at least one resultant image in response to the processing, wherein the resultant image includes the object at the LOI to be discriminated (outputting at least one frame of a video, in response to finding a match, that includes the object in the ROI to be marked/emphasized) ([0120-0122]).
However, Noy does not explicitly teach a “geospatial” reference of the sensor.
Nadler teaches a server arrangement that acquires data from sensors and analyzes the data to determine at least one object of interest in a surveillance area (Abstract); wherein the plurality of sensors can be mounted on unmanned aerial vehicles (UAVs) and the like to acquire effective data pertaining to a particular object ([0059]); and wherein a geospatial location of each sensor is known to the server arrangement and therefore the plurality of sensors serve as reference points for determination of the geospatial location of the identified objects in the surveillance area ([0070]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Noy to include not only a fixed coordinate system established in space but also a geospatial reference, since doing so improves the detection accuracy of the system (Nadler; [0087]).
However, neither reference explicitly teaches wherein the geospatial reference includes “a frame center latitude, frame center longitude, and frame center elevation”.
Taheri-Shirazi teaches an assisted target detection (ATD) algorithm for airborne search and rescue (p. 1; Introduction); wherein the geospatial reference includes “a frame center latitude, frame center longitude, and frame center elevation” (frame center latitude, longitude, and elevation) (p. 21, Table 2.1 and p. 22, Figure 2.5).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior arts to include the frame center latitude, longitude, and elevation as data since it improves the automatic target detection system (Taheri-Shirazi; p. 2, Section 1.2, 2nd paragraph).
However, none of them explicitly teach “wherein the metadata parameters includes the sensor’s orientation, a field of view of the sensor, and a zoom level of the sensor”.
Hesterman teaches video data creation and management ([0002]); wherein videos can be generated by unmanned aerial vehicles (UAVs) ([0005-0006]); wherein the videos include rich metadata ([0006]); wherein geospatial data may be represented as, or used for the creation of a map ([0008-0009]); and wherein the metadata parameters includes the sensor’s orientation (wherein the metadata can include sensor orientation data) ([0025]), a field of view of the sensor (field of view data) ([0025]), and a zoom level of the sensor (zoom data of the sensor) ([0025]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior arts to include more metadata since having more metadata allows efficient and advantageous ways of finding video footage, and creating data, video, or image collection missions with geospatial data (Hesterman; [0007]).
Claim(s) 11-13 and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Noy, US 2021/0148728 A1 (Noy), Nadler, US 2020/0387716 A1 (Nadler), Taheri-Shirazi, “Assisted Target Detection in Airborne Search and Rescue” (Taheri-Shirazi), Hesterman, US 2018/0322197 A1 (Hesterman), and further in view of Carbonell et al., US 2018/0191944 A1 (Carbonell).
Regarding claim 11, Noy teaches wherein each of the PCVSs is associated with metadata indicative of the LoS of a sensor ([0013]); generating a cardinal coordinate representation (wherein the PCVSs that include the ROI of the operating platform have a fixed coordinate system established in space (e.g., earth coordinates)) ([0099-0100] and [0107]); and wherein certain areas have more metadata than others based on the operation conditions (i.e., less metadata if the conditions are cloudy and thus not used) ([0017], [0125-0127], and [0130]). Nadler also teaches to capture a full 360 degree view of the surveillance area ([0059]); wherein processing operations can be done on the output image(s) such as performing rotation of the image(s) ([0066]); and wherein the user can make the selection and/or perform specific tasks associated with the system ([0086]). Taheri-Shirazi teaches an assisted target detection (ATD) algorithm for airborne search and rescue (p. 1; Introduction). Hesterman teaches video data creation and management ([0002]).
However, none of them explicitly teaches “generating the cardinal coordinate representation with a circular profile having thicker portions and thinner portions of the circular profile”.
Carbonell teaches determining a location of interest, and obtaining image data from one or more camera devices about the location of interest (Abstract); and wherein generating the cardinal coordinate representation with a circular profile (a cardinal coordinate representation with a circular geofence) (Fig. 7; [0097] and [0105]) having thicker portions and thinner portions of the circular profile (wherein images that are of more importance can have larger displayed sizes; and smaller sizes for images having less priority) ([0102]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that, although Carbonell teaches changing the sizes of image windows, a simple substitution of changing the size/thickness of the geofence would produce the same predictable result of displaying and identifying images that are of higher and lower priority.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior arts to include displaying image priority since it can increase the usefulness of the information presented to an operator (Carbonell; [0010]).
Regarding claim 12, Noy teaches are associated with image frames having higher values of the metadata parameters (wherein certain areas have more metadata than others based on the operation conditions (i.e. less metadata if the conditions are cloudy and thus not used)) ([0017], [0125-0127], and [0130]); and are associated with image frames having lower values of the metadata parameters (wherein certain areas have less metadata than others based on the operation conditions (i.e. less metadata if the conditions are cloudy and thus not used)) ([0017], [0125-0127], and [0130]). Nadler also teaches to capture a full 360 degree view of the surveillance area ([0059]); wherein processing operations can be done on the output image(s) such as performing rotation of the image(s) ([0066]); and wherein the user can make the selection and/or perform specific tasks associated with the system ([0086]). Taheri-Shirazi teaches an assisted target detection (ATD) algorithm for airborne search and rescue (p. 1; Introduction). Hesterman teaches video data creation and management ([0002]).
However, none of them explicitly teaches “wherein the thicker portions of the circular profile are associated with image frames” and “wherein the thinner portions of the circular profile are associated with image frames”.
Carbonell teaches determining a location of interest, and obtaining image data from one or more camera devices about the location of interest (Abstract); and wherein generating the cardinal coordinate representation with a circular profile (a cardinal coordinate representation with a circular geofence) (Fig. 7; [0097] and [0105]) wherein the thicker portions of the circular profile are associated with image frames and wherein the thinner portions of the circular profile are associated with image frames (wherein images that are of more importance can have larger displayed sizes; and smaller sizes for images having less priority) ([0102]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that, although Carbonell teaches changing the sizes of image windows, a simple substitution of changing the size/thickness of the geofence would produce the same predictable result of displaying and identifying images that are of higher and lower priority (such as images with more metadata and thus more useful).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior arts to include displaying image priority since it can increase the usefulness of the information presented to an operator (Carbonell; [0010]).
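Purely for illustration of a circular profile whose arc is drawn thicker where the grouped frames carry higher metadata values and thinner where they carry lower values, a hypothetical sketch follows; the per-sector score and the pixel range are assumptions (Python).

# Illustrative sketch only; sectors are assumed to map to lists of frames exposing a visibility attribute.
def sector_thickness(grouped, min_px=1.0, max_px=10.0):
    # Score each sector by the metadata carried by its frames.
    scores = {s: sum(f.visibility for f in frames) for s, frames in grouped.items()}
    top = max(scores.values(), default=0.0)
    if top == 0.0:
        return {s: min_px for s in scores}
    # Thicker arc segments for metadata-rich sectors, thinner ones for metadata-poor sectors.
    return {s: min_px + (max_px - min_px) * (score / top) for s, score in scores.items()}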
Regarding claim 13, Noy teaches wherein one of the metadata parameters is ground spatial distance (the ROI matching PCVSs are anchored to earth coordinates of the ROI; the metadata being associated with the LoS of the sensor with respect to the earth coordinates) ([0051] and [0064-0065]). Nadler teaches mapping the coordinates of an image plane to the geospatial coordinates of a ground plane or a map ([0071-0072]) and wherein the system accurately represents the spatial depth of the identified object ([0074]).
Regarding claim 15, Noy teaches further comprising: in response to processing the selected portion of the at least one frame containing the LOI based on the reference of the sensor (processing the at least one frame containing the ROI based on the LoS of the sensor to identify PCVs (previously captured videos) that include the ROI) ([0120]) in the metadata parameters (wherein the LoS can define a position and/or an orientation with respect to the fixed coordinate system established in space; i.e., from a position determination system of the sensor) ([0107]).
However, Noy does not explicitly teach “generating a graph”, “geospatial” reference, or wherein the graph comprises thicker portions and thinner portions of the graph.
Nadler teaches a server arrangement that acquires data from sensors and analyzes the data to determine at least one object of interest in a surveillance area (Abstract); wherein the plurality of sensors can be mounted on unmanned aerial vehicles (UAVs) and the like to acquire effective data pertaining to a particular object ([0059]); wherein a geospatial location of each sensor is known to the server arrangement and therefore the plurality of sensors serve as reference points for determination of the geospatial location of the identified objects in the surveillance area ([0070]); and wherein the sensor data is rendered on the user device as a graphical representation ([0086]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Noy to include not only a fixed coordinate system established in space but also a geospatial reference, since doing so improves the detection accuracy of the system (Nadler; [0087]).
Taheri-Shirazi teaches an assisted target detection (ATD) algorithm for airborne search and rescue (p. 1; Introduction). Hesterman teaches video data creation and management ([0002]). However, none of them explicitly teaches “wherein the graph comprises thicker portions and thinner portions of the graph”.
Carbonell teaches determining a location of interest, and obtaining image data from one or more camera devices about the location of interest (Abstract); and wherein generating the cardinal coordinate representation with a circular profile (a cardinal coordinate representation with a circular geofence) (Fig. 7; [0097] and [0105]); and wherein the graph comprises thicker portions and thinner portions of the graph (wherein images that are of more importance can have larger displayed sizes; and smaller sizes for images having less priority) ([0102]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that, although Carbonell teaches changing the sizes of image windows, a simple substitution of changing the size/thickness of a graph would produce the same predictable result of displaying and identifying images that are of higher and lower priority (such as images with more metadata and thus more useful).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior arts to include displaying image priority since it can increase the usefulness of the information presented to an operator (Carbonell; [0010]).
Regarding claim 16, Noy teaches are associated with image frames having higher values of the metadata parameters (wherein certain areas have more metadata than others based on the operation conditions (i.e. less metadata if the conditions are cloudy and thus not used)) ([0017], [0125-0127], and [0130]); and are associated with image frames having lower values of the metadata parameters (wherein certain areas have less metadata than others based on the operation conditions (i.e. less metadata if the conditions are cloudy and thus not used)) ([0017], [0125-0127], and [0130]). Nadler also teaches to capture a full 360 degree view of the surveillance area ([0059]); wherein processing operations can be done on the output image(s) such as performing rotation of the image(s) ([0066]); wherein the user can make the selection and/or perform specific tasks associated with the system ([0086]); and wherein the sensor data is rendered on the user device as a graphical representation ([0086]). Taheri-Shirazi teaches an assisted target detection (ATD) algorithm for airborne search and rescue (p. 1; Introduction). Hesterman teaches video data creation and management ([0002]).
However, none of them explicitly teaches “wherein the thicker portions of the graph are associated with image frames” and “wherein the thinner portions of the graph are associated with image frames”.
Carbonell teaches determining a location of interest, and obtaining image data from one or more camera devices about the location of interest (Abstract); and wherein generating the cardinal coordinate representation with a circular profile (a cardinal coordinate representation with a circular geofence) (Fig. 7; [0097] and [0105]) wherein the thicker portions of the graph are associated with image frames and wherein the thinner portions of the graph are associated with image frames (wherein images that are of more importance can have larger displayed sizes; and smaller sizes for images having less priority) ([0102]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that, although Carbonell teaches changing the sizes of image windows, a simple substitution of changing the size/thickness of the geofence (or a graph) would produce the same predictable result of displaying and identifying images that are of higher and lower priority (such as images with more metadata and thus more useful).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior arts to include displaying image priority since it can increase the usefulness of the information presented to an operator (Carbonell; [0010]).
Regarding claim 17, Noy teaches wherein one of the metadata parameters is ground spatial distance (the ROI matching PCVSs are anchored to earth coordinates of the ROI; the metadata being associated with the LoS of the sensor with respect to the earth coordinates) ([0051] and [0064-0065]). Nadler teaches mapping the coordinates of an image plane to the geospatial coordinates of a ground plane or a map ([0071-0072]) and wherein the system accurately represents the spatial depth of the identified object ([0074]).
Regarding claim 18, Noy teaches image frames in which the LOI was not visible (wherein visibility is poor, such as based on cloud coverage) ([0110]) (and does not meet a visibility criterion) ([0014-0015]). Nadler teaches wherein the sensor data is rendered on the user device as a graphical representation ([0086]). Carbonell teaches wherein the graph/representation includes spaces or gaps between portions of the graph (image data of a prioritized view transitioning from a non-displayed state to a displayed state) ([0102]), wherein the spaces or gaps represent image frames in which the LOI was not visible (image data of a prioritized view transitioning from a non-displayed state to a displayed state) ([0102]).
Claim(s) 14 is rejected under 35 U.S.C. 103 as being unpatentable over Noy, US 2021/0148728 A1 (Noy), Nadler, US 2020/0387716 A1 (Nadler), Taheri-Shirazi, “Assisted Target Detection in Airborne Search and Rescue” (Taheri-Shirazi), Hesterman, US 2018/0322197 A1 (Hesterman), and further in view of Rahnemoon et al., US 2022/0157066 A1 (Rahnemoon).
Regarding claim 14, Noy teaches further comprising: in response to processing the selected portion of the at least one frame containing the LOI based on the reference of the sensor (processing the at least one frame containing the ROI based on the LoS of the sensor to identify PCVs (previously captured videos) that include the ROI) ([0120]) in the metadata parameters (wherein the LoS can define a position and/or an orientation with respect to the fixed coordinate system established in space; i.e., from a position determination system of the sensor) ([0107]). Noy also teaches wherein objects can be emphasized by marking them within the ROI ([0095-0096]). Nadler teaches a server arrangement that acquires data from sensors and analyzes the data to determine at least one object of interest in a surveillance area (Abstract); wherein the plurality of sensors can be mounted on unmanned aerial vehicles (UAVs) and the like to acquire effective data pertaining to a particular object ([0059]); wherein a geospatial location of each sensor is known to the server arrangement and therefore the plurality of sensors serve as reference points for determination of the geospatial location of the identified objects in the surveillance area ([0070]); and mapping of the object of interest with respect to geographical and/or geospatial data ([0070]). Taheri-Shirazi teaches an assisted target detection (ATD) algorithm for airborne search and rescue (p. 1; Introduction). Hesterman teaches video data creation and management ([0002]).
However, none of them explicitly teaches “generating a heat map”.
Rahnemoon teaches a monitoring system for an aircraft that has sensors configured to sense objects around the aircraft and provide data indicative of the sensed objects (Abstract); wherein a heat map can be generated to show a likelihood of detection of the object of interest ([0040]); and wherein the heat map lays out the particular parts of the field of view of an image sensor that are more or less likely to contain an object of interest ([0064] and [0066]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of prior arts to include generating a heat map since it allows the user to determine the certainty of detection (Rahnemoon; [0040]) and allows for highly reliable processing of data from a large number of sensors (Rahnemoon; [0005]).
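Purely for illustration of generating a heat map over the area around the LOI, a hypothetical sketch follows; the grid size, cell size, and frame fields are assumptions and do not reproduce Rahnemoon’s implementation (Python).

# Illustrative sketch only; a simple coverage heat map built from frame center coordinates.
import numpy as np

def detection_heat_map(frames, loi_lat, loi_lon, grid=50, cell_deg=0.001):
    # Accumulate, per grid cell centered on the LOI, how often frames were centered there.
    heat = np.zeros((grid, grid))
    for f in frames:
        i = int((f.frame_center_lat - loi_lat) / cell_deg) + grid // 2
        j = int((f.frame_center_lon - loi_lon) / cell_deg) + grid // 2
        if 0 <= i < grid and 0 <= j < grid:
            heat[i, j] += 1.0
    peak = heat.max()
    return heat / peak if peak > 0 else heat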
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL J VANCHY JR, whose telephone number is (571) 270-1193. The examiner can normally be reached Monday - Friday 9am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL J VANCHY JR/
Primary Examiner, Art Unit 2666
Michael.Vanchy@uspto.gov