DETAILED ACTION
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over YANG (US 20210110168 A1), in view of JORQUERA (ES 2821735 T3).
Re Claim 1, YANG discloses an image acquisition method (see YANG: e.g., Fig. 1, --provide an object tracking method and an apparatus. The method includes: obtaining multiple frames of first images shot by a first camera apparatus and a first shooting moment of each frame of the first images, where the first images include a first object; obtaining multiple frames of second images shot by a second camera apparatus and a second shooting moment of each frame of the second images, where the second images include a second object; obtaining a distance between the first camera apparatus and the second camera apparatus; and judging whether the first object and the second object are the same object according to the multiple frames of the first images, the first shooting moment of each frame of the first images, the multiple frames of the second images, the second shooting moment of each frame of the second images and the distance.--, in abstract), comprising:
acquiring, from a first camera, a first target image of a target scene captured by the first camera (see YANG: e.g., -- obtaining multiple frames of first images shot by a first camera apparatus and a first shooting moment of each frame of the first images, where the first images include a first object;--, in abstract, and [0008]);
determining, based on the first target image, a probability of enabling a second camera to capture a second target image of the target scene (see YANG: e.g., [0014] determining a moving speed of the first object according to the first shooting moment of each frame of the first images;
[0015] determining a probability that the first object and the second object are the same object according to the moving speed of the first object, the distance, the first shooting moment and the second shooting moment; and
[0016] judging whether the first object and the second object are the same object according to the first similarity and the probability;--, in [0014]-[0016]; also see: [0008]-[0013] for details; and, -- [0139] Specifically, the object will have a certain speed during the movement, so the moving speed of the first object can be obtained, and then the approximate time for the first object to reach the second camera apparatus is estimated in combination with the distance between the first camera apparatus and the second camera apparatus. Finally, the second shooting moment when the second camera apparatus shoots the second image is used for judgment to see if the difference between the first shooting moment and the second shooting moment is close to the approximate time for the first object to reach the second camera apparatus. If yes, the probability that the first object and the second object are the same object is larger, and if no, the probability is smaller.--, in [0139] {YANG’s determining of the probability and judging whether the objects are the same object read on the claimed “determining…a probability of enabling”});
Although YANG discloses a threshold (see YANG: e.g., -- [0154] Then the global feature similarity, the attribute feature similarity and the probability are fused … Pv is the probability that the first object and the second object are the same object….[0156] It is determined that the first object and the second object are the same object when the fusion parameter exceeds a preset value.
[0157] The greater the fusion parameter f is, the more similar the first object and the second object are. When the fusion parameter f exceeds the preset threshold, it indicates that the first object and the second object are the same object; otherwise they are not the same object. Specifically, when the fusion parameter f exceeds the preset threshold, it indicates that the first vehicle and the second vehicle are the same vehicle; otherwise they are not the same vehicle.--, in [0154]-[0157]);
YANG, however, does not explicitly disclose enabling the second camera to capture the second target image of the target scene based on the probability of enabling exceeding a first probability threshold.
JORQUERA discloses enabling the second camera to capture the second target image of the target scene based on the probability of enabling exceeding a first probability threshold (see JORQUERA: e.g., -- The system consists of a first camera that has a wide field of vision to detect a moving object; a second camera with a large zoom; a positioner connected to the second camera to position the second camera so that it captures the image detected by the first camera; and a processor connected in operation to receive image data from the first camera, the second camera, or both to identify a moving object that is a flying bird based on the image data.--, in pages 4-5/75 of English version of ES 2821735 T3 as NPL provided with the Office Action; and,
-- Upon identification of the presence of one or more threshold identification attributes, the processor analyzes the output of the pixel subset to determine one or more avian identification parameters.
The processor can compare the output of the pixel subset to one or more reference values in a reference image database to determine whether the moving object is a flying bird, including assigning a probability that the moving object is a flying bird and / or a species of flying bird interest. In this way, resources can be appropriately prioritized for objects of highest probability.--, in page 6 of English version of ES 2821735 T3 as NPL provided with the Office Action; and, -- a) Capture images of the airspace surrounding an image capture system with a plurality of first camera systems…. e) if the moving object (230, 231, 232, 233) is a moving object of interest: place a stereo camera (620) to capture the image of the moving object (230, 231, 232, 233), the stereo camera ( 620) that has a pair of second cameras (625) that independently of each other have a high zoom; {read on claimed enabling the second camera} f) Obtain one or more avian identification parameters for the moving object of interest from the stereo camera (620)--, in claim 6, in page 27 of English version of ES 2821735 T3 as NPL provided with the Office Action);
YANG and JORQUERA are combinable because they are in the same field of endeavor: using multiple (at least two) cameras to capture images of a moving object. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify YANG’s method with JORQUERA’s teachings by enabling the second camera to capture the second target image of the target scene based on the probability of enabling exceeding a first probability threshold, as applied to YANG’s timing and distance settings for the second camera, in order to position the second camera so that it captures the image detected by the first camera, with a processor connected in operation to receive image data from the first camera, the second camera, or both, to identify a moving object that is a flying bird based on the image data (see JORQUERA: e.g., pages 4-6 of English version of ES 2821735 T3 as NPL provided with the Office Action, and claim 6 in page 27 of English version of ES 2821735 T3 as NPL provided with the Office Action).
Re Claim 2, YANG as modified by JORQUERA further discloses generating, based on the first target image, movement trajectory information of a target object in the target scene (see YANG: e.g., -- tracking a vehicle mainly includes obtaining a surveillance video of a camera, and analyzing the moving trajectory of the target vehicle according to the target vehicle captured in the surveillance video of the camera to track the target.--, in [0004], and [0090]; also see JORQUERA: e.g., -- a flying bird can be observed and values are obtained on its size, speed, wing width, wing shape, color, boundary shape, geometry, intensity, posture and typical trajectories that in turn are defined by ranges about an average value. These parameters can be obtained for a specific bird or for a plurality of birds. The reference values can be arranged in a reference image database or can be determined using one or more reference image algorithms,--, in pages 4-6 of English version of ES 2821735 T3 as NPL provided with the Office Action);
wherein the enabling the second camera step is further based on a similarity between a trajectory represented by the movement trajectory information and a trigger trajectory being greater than or equal to a predetermined similarity threshold (see YANG: e.g., -- tracking a vehicle mainly includes obtaining a surveillance video of a camera, and analyzing the moving trajectory of the target vehicle according to the target vehicle captured in the surveillance video of the camera to track the target.--, in [0004], and [0090]; -- [0154] Then the global feature similarity, the attribute feature similarity and the probability are fused … Pv is the probability that the first object and the second object are the same object….[0156] It is determined that the first object and the second object are the same object when the fusion parameter exceeds a preset value.
[0157] The greater the fusion parameter f is, the more similar the first object and the second object are. When the fusion parameter f exceeds the preset threshold, it indicates that the first object and the second object are the same object; otherwise they are not the same object. Specifically, when the fusion parameter f exceeds the preset threshold, it indicates that the first vehicle and the second vehicle are the same vehicle; otherwise they are not the same vehicle.--, in [0008]-[0016], [0139], and [0154]-[0157]; also see JORQUERA: e.g., -- a flying bird can be observed and values are obtained on its size, speed, wing width, wing shape, color, boundary shape, geometry, intensity, posture and typical trajectories that in turn are defined by ranges about an average value. These parameters can be obtained for a specific bird or for a plurality of birds. The reference values can be arranged in a reference image database or can be determined using one or more reference image algorithms,--, in pages 4-6 of English version of ES 2821735 T3 as NPL provided with the Office Action; and, -- The system consists of a first camera that has a wide field of vision to detect a moving object; a second camera with a large zoom; a positioner connected to the second camera to position the second camera so that it captures the image detected by the first camera; and a processor connected in operation to receive image data from the first camera, the second camera, or both to identify a moving object that is a flying bird based on the image data.--, in pages 4-5/75 of English version of ES 2821735 T3 as NPL provided with the Office Action; and,
-- Upon identification of the presence of one or more threshold identification attributes, the processor analyzes the output of the pixel subset to determine one or more avian identification parameters.
The processor can compare the output of the pixel subset to one or more reference values in a reference image database to determine whether the moving object is a flying bird, including assigning a probability that the moving object is a flying bird and / or a species of flying bird interest. In this way, resources can be appropriately prioritized for objects of highest probability.--, in page 6 of English version of ES 2821735 T3 as NPL provided with the Office Action; and, -- a) Capture images of the airspace surrounding an image capture system with a plurality of first camera systems…. e) if the moving object (230, 231, 232, 233) is a moving object of interest: place a stereo camera (620) to capture the image of the moving object (230, 231, 232, 233), the stereo camera ( 620) that has a pair of second cameras (625) that independently of each other have a high zoom; {read on claimed enabling the second camera} f) Obtain one or more avian identification parameters for the moving object of interest from the stereo camera (620)--, in claim 6, in page 27 of English version of ES 2821735 T3 as NPL provided with the Office Action).
Re Claim 3, YANG as modified by JORQUERA further discloses determining positional information of a target object within a shooting range of the first camera (see YANG: e.g., Fig. 1, --provide an object tracking method and an apparatus. The method includes: obtaining multiple frames of first images shot by a first camera apparatus and a first shooting moment of each frame of the first images, where the first images include a first object; obtaining multiple frames of second images shot by a second camera apparatus and a second shooting moment of each frame of the second images, where the second images include a second object; obtaining a distance between the first camera apparatus and the second camera apparatus; and judging whether the first object and the second object are the same object according to the multiple frames of the first images, the first shooting moment of each frame of the first images, the multiple frames of the second images, the second shooting moment of each frame of the second images and the distance.--, in abstract; and, -- obtaining multiple frames of first images shot by a first camera apparatus and a first shooting moment of each frame of the first images, where the first images include a first object;--, in abstract, and [0008], and, --[0014] determining a moving speed of the first object according to the first shooting moment of each frame of the first images;
[0015] determining a probability that the first object and the second object are the same object according to the moving speed of the first object, the distance, the first shooting moment and the second shooting moment; and
[0016] judging whether the first object and the second object are the same object according to the first similarity and the probability;--, in [0014]-[0016]; also see: [0008]-[0013] for details; and, -- [0139] Specifically, the object will have a certain speed during the movement, so the moving speed of the first object can be obtained, and then the approximate time for the first object to reach the second camera apparatus is estimated in combination with the distance between the first camera apparatus and the second camera apparatus. Finally, the second shooting moment when the second camera apparatus shoots the second image is used for judgment to see if the difference between the first shooting moment and the second shooting moment is close to the approximate time for the first object to reach the second camera apparatus. If yes, the probability that the first object and the second object are the same object is larger, and if no, the probability is smaller.--, in [0139]);
determining, based on the positional information, a moment of enabling the second camera; wherein enabling the second camera step is further based on the moment of enabling (see YANG: e.g., Fig. 1, --provide an object tracking method and an apparatus. The method includes: obtaining multiple frames of first images shot by a first camera apparatus and a first shooting moment of each frame of the first images, where the first images include a first object; obtaining multiple frames of second images shot by a second camera apparatus and a second shooting moment of each frame of the second images, where the second images include a second object; obtaining a distance between the first camera apparatus and the second camera apparatus; and judging whether the first object and the second object are the same object according to the multiple frames of the first images, the first shooting moment of each frame of the first images, the multiple frames of the second images, the second shooting moment of each frame of the second images and the distance.--, in abstract, and, -- obtaining multiple frames of first images shot by a first camera apparatus and a first shooting moment of each frame of the first images, where the first images include a first object;--, in abstract, and [0008]; and, --[0014] determining a moving speed of the first object according to the first shooting moment of each frame of the first images;
[0015] determining a probability that the first object and the second object are the same object according to the moving speed of the first object, the distance, the first shooting moment and the second shooting moment; and
[0016] judging whether the first object and the second object are the same object according to the first similarity and the probability;--, in [0014]-[0016]; also see: [0008]-[0013] for details; and, -- [0139] Specifically, the object will have a certain speed during the movement, so the moving speed of the first object can be obtained, and then the approximate time for the first object to reach the second camera apparatus is estimated in combination with the distance between the first camera apparatus and the second camera apparatus. Finally, the second shooting moment when the second camera apparatus shoots the second image is used for judgment to see if the difference between the first shooting moment and the second shooting moment is close to the approximate time for the first object to reach the second camera apparatus. If yes, the probability that the first object and the second object are the same object is larger, and if no, the probability is smaller.--, in [0139] {YANG’s determining of the probability and judging whether the objects are the same object read on the claimed “enabling the second camera”}).
Re Claim 4, YANG as modified by JORQUERA further discloses determining a first number of occurrences of a reference object within a shooting range of the first camera during a first time period (see YANG: e.g., --[0118] FIG. 6 is a schematic diagram of obtaining a global feature similarity according to an embodiment of the present disclosure. As shown in FIG. 6, multiple frames of first images and multiple frames of second images are included. The first images include an image A, an image B and an image C, and the second images include an image D, an image E, an image F and an image G. Then the server will select any one first image and any one second image to obtain a pair of images, and compare the global features of the objects therein to obtain the global feature similarity of this pair of images.
[0119] The combination modes in FIG. 6 include: A-D, A-E, A-F, A-G, B-D, B-E, B-F, B-G, C-D, C-E, C-F, C-G, i.e., a total of 12 pairs of images, thereby a total of 12 global feature similarities, each for one pair of images, are obtained. Then the global feature similarity is obtained by averaging these 12 global feature similarities.
[0120] The number of frames of the first images and the number of frames of the second images in FIG. 6 are only examples, and the numbers are not limited thereto. For example, when there are 10 frames of the first images and 20 frames of the second images, there are a total of 200 global feature similarities, which can be averaged to obtain the global feature similarity.
[0121] Step 53: obtaining attribute features of the first object in the multiple frames of the first images and attribute features of the second object in the multiple frames of the second images according to an attribute feature model, where the attribute feature model is obtained by training according to multiple frames of second sample images.--, in [0118]-[0121]);
determining a second number of occurrences of the reference object moving from the shooting range of the first camera into a shooting range of the second camera during a second time period, wherein the first time period comprises the second time period, wherein a duration of the second time period is less than a duration of the first time period; and wherein the first probability threshold is determined based on a quotient of the second number of occurrences and the first number of occurrences (see YANG: e.g., --[0118] FIG. 6 is a schematic diagram of obtaining a global feature similarity according to an embodiment of the present disclosure. As shown in FIG. 6, multiple frames of first images and multiple frames of second images are included. The first images include an image A, an image B and an image C, and the second images include an image D, an image E, an image F and an image G. Then the server will select any one first image and any one second image to obtain a pair of images, and compare the global features of the objects therein to obtain the global feature similarity of this pair of images.
[0119] The combination modes in FIG. 6 include: A-D, A-E, A-F, A-G, B-D, B-E, B-F, B-G, C-D, C-E, C-F, C-G, i.e., a total of 12 pairs of images, thereby a total of 12 global feature similarities, each for one pair of images, are obtained. Then the global feature similarity is obtained by averaging these 12 global feature similarities.
[0120] The number of frames of the first images and the number of frames of the second images in FIG. 6 are only examples, and the numbers are not limited thereto. For example, when there are 10 frames of the first images and 20 frames of the second images, there are a total of 200 global feature similarities, which can be averaged to obtain the global feature similarity.
[0121] Step 53: obtaining attribute features of the first object in the multiple frames of the first images and attribute features of the second object in the multiple frames of the second images according to an attribute feature model, where the attribute feature model is obtained by training according to multiple frames of second sample images.--, in [0118]-[0121]).
Re Claim 5, YANG as modified by JORQUERA further discloses, after enabling the second camera to capture the second target image of the target scene, determining whether a target object is included in the second target image captured by the second camera (see YANG: e.g., Fig. 1, --provide an object tracking method and an apparatus. The method includes: obtaining multiple frames of first images shot by a first camera apparatus and a first shooting moment of each frame of the first images, where the first images include a first object; obtaining multiple frames of second images shot by a second camera apparatus and a second shooting moment of each frame of the second images, where the second images include a second object; obtaining a distance between the first camera apparatus and the second camera apparatus; and judging whether the first object and the second object are the same object according to the multiple frames of the first images, the first shooting moment of each frame of the first images, the multiple frames of the second images, the second shooting moment of each frame of the second images and the distance.--, in abstract; and, -- obtaining multiple frames of first images shot by a first camera apparatus and a first shooting moment of each frame of the first images, where the first images include a first object;--, in abstract, and [0008], and, --[0014] determining a moving speed of the first object according to the first shooting moment of each frame of the first images;
[0015] determining a probability that the first object and the second object are the same object according to the moving speed of the first object, the distance, the first shooting moment and the second shooting moment; and
[0016] judging whether the first object and the second object are the same object according to the first similarity and the probability;--, in [0014]-[0016]; also see: [0008]-[0013] for details; and, -- [0139] Specifically, the object will have a certain speed during the movement, so the moving speed of the first object can be obtained, and then the approximate time for the first object to reach the second camera apparatus is estimated in combination with the distance between the first camera apparatus and the second camera apparatus. Finally, the second shooting moment when the second camera apparatus shoots the second image is used for judgment to see if the difference between the first shooting moment and the second shooting moment is close to the approximate time for the first object to reach the second camera apparatus. If yes, the probability that the first object and the second object are the same object is larger, and if no, the probability is smaller.--, in [0139]); and
after determining that the target object is included in the second target image, controlling the first camera to enter a standby hibernation mode (see YANG: e.g., Fig. 1, --provide an object tracking method and an apparatus. The method includes: obtaining multiple frames of first images shot by a first camera apparatus and a first shooting moment of each frame of the first images, where the first images include a first object; obtaining multiple frames of second images shot by a second camera apparatus and a second shooting moment of each frame of the second images, where the second images include a second object; obtaining a distance between the first camera apparatus and the second camera apparatus; and judging whether the first object and the second object are the same object according to the multiple frames of the first images, the first shooting moment of each frame of the first images, the multiple frames of the second images, the second shooting moment of each frame of the second images and the distance.--, in abstract; and, -- obtaining multiple frames of first images shot by a first camera apparatus and a first shooting moment of each frame of the first images, where the first images include a first object;--, in abstract, and [0008], and, --[0014] determining a moving speed of the first object according to the first shooting moment of each frame of the first images;
[0015] determining a probability that the first object and the second object are the same object according to the moving speed of the first object, the distance, the first shooting moment and the second shooting moment; and
[0016] judging whether the first object and the second object are the same object according to the first similarity and the probability;--, in [0014]-[0016]; also see: [0008]-[0013] for details; and, -- [0139] Specifically, the object will have a certain speed during the movement, so the moving speed of the first object can be obtained, and then the approximate time for the first object to reach the second camera apparatus is estimated in combination with the distance between the first camera apparatus and the second camera apparatus. Finally, the second shooting moment when the second camera apparatus shoots the second image is used for judgment to see if the difference between the first shooting moment and the second shooting moment is close to the approximate time for the first object to reach the second camera apparatus. If yes, the probability that the first object and the second object are the same object is larger, and if no, the probability is smaller.--, in [0139]).
Re Claim 6, YANG as modified by JORQUERA further discloses, after enabling the second camera to capture the second target image of the target scene, using a probability determination model to determine a target probability, wherein the target probability represents a probability that the first target image captured by the first camera and the second target image captured by the second camera contain a same target object (see YANG: e.g., -- tracking a vehicle mainly includes obtaining a surveillance video of a camera, and analyzing the moving trajectory of the target vehicle according to the target vehicle captured in the surveillance video of the camera to track the target.--, in [0004], and [0090]; -- [0154] Then the global feature similarity, the attribute feature similarity and the probability are fused … Pv is the probability that the first object and the second object are the same object….[0156] It is determined that the first object and the second object are the same object when the fusion parameter exceeds a preset value.
[0157] The greater the fusion parameter f is, the more similar the first object and the second object are. When the fusion parameter f exceeds the preset threshold, it indicates that the first object and the second object are the same object; otherwise they are not the same object. Specifically, when the fusion parameter f exceeds the preset threshold, it indicates that the first vehicle and the second vehicle are the same vehicle; otherwise they are not the same vehicle.--, in [0008]-[0016], [0139], and [0154]-[0157]; also see JORQUERA: e.g., -- a flying bird can be observed and values are obtained on its size, speed, wing width, wing shape, color, boundary shape, geometry, intensity, posture and typical trajectories that in turn are defined by ranges about an average value. These parameters can be obtained for a specific bird or for a plurality of birds. The reference values can be arranged in a reference image database or can be determined using one or more reference image algorithms,--, in pages 4-6 of English version of ES 2821735 T3 as NPL provided with the Office Action; and, -- The system consists of a first camera that has a wide field of vision to detect a moving object; a second camera with a large zoom; a positioner connected to the second camera to position the second camera so that it captures the image detected by the first camera; and a processor connected in operation to receive image data from the first camera, the second camera, or both to identify a moving object that is a flying bird based on the image data.--, in pages 4-5/75 of English version of ES 2821735 T3 as NPL provided with the Office Action; and,
-- Upon identification of the presence of one or more threshold identification attributes, the processor analyzes the output of the pixel subset to determine one or more avian identification parameters.
The processor can compare the output of the pixel subset to one or more reference values in a reference image database to determine whether the moving object is a flying bird, including assigning a probability that the moving object is a flying bird and / or a species of flying bird interest. In this way, resources can be appropriately prioritized for objects of highest probability.--, in page 6 of English version of ES 2821735 T3 as NPL provided with the Office Action; and, -- a) Capture images of the airspace surrounding an image capture system with a plurality of first camera systems…. e) if the moving object (230, 231, 232, 233) is a moving object of interest: place a stereo camera (620) to capture the image of the moving object (230, 231, 232, 233), the stereo camera ( 620) that has a pair of second cameras (625) that independently of each other have a high zoom; {read on claimed enabling the second camera} f) Obtain one or more avian identification parameters for the moving object of interest from the stereo camera (620)--, in claim 6, in page 27 of English version of ES 2821735 T3 as NPL provided with the Office Action);
and after determining that the target probability is less than a pre-determined second probability threshold and after determining that the first target image captured by the first camera and the second target image captured by the second camera comprise the same target object, employing the first target image and the second target image to continue to train the probability determination model (see YANG: e.g., -- tracking a vehicle mainly includes obtaining a surveillance video of a camera, and analyzing the moving trajectory of the target vehicle according to the target vehicle captured in the surveillance video of the camera to track the target.--, in [0004], and [0090]; -- [0154] Then the global feature similarity, the attribute feature similarity and the probability are fused … Pv is the probability that the first object and the second object are the same object….[0156] It is determined that the first object and the second object are the same object when the fusion parameter exceeds a preset value.
[0157] The greater the fusion parameter f is, the more similar the first object and the second object are. When the fusion parameter f exceeds the preset threshold, it indicates that the first object and the second object are the same object; otherwise they are not the same object. Specifically, when the fusion parameter f exceeds the preset threshold, it indicates that the first vehicle and the second vehicle are the same vehicle; otherwise they are not the same vehicle.--, in [0008]-[0016], [0139], and [0154]-[0157]; also see JORQUERA: e.g., -- a flying bird can be observed and values are obtained on its size, speed, wing width, wing shape, color, boundary shape, geometry, intensity, posture and typical trajectories that in turn are defined by ranges about an average value. These parameters can be obtained for a specific bird or for a plurality of birds. The reference values can be arranged in a reference image database or can be determined using one or more reference image algorithms,--, in pages 4-6 of English version of ES 2821735 T3 as NPL provided with the Office Action; and, -- The system consists of a first camera that has a wide field of vision to detect a moving object; a second camera with a large zoom; a positioner connected to the second camera to position the second camera so that it captures the image detected by the first camera; and a processor connected in operation to receive image data from the first camera, the second camera, or both to identify a moving object that is a flying bird based on the image data.--, in pages 4-5/75 of English version of ES 2821735 T3 as NPL provided with the Office Action; and,
-- Upon identification of the presence of one or more threshold identification attributes, the processor analyzes the output of the pixel subset to determine one or more avian identification parameters.
The processor can compare the output of the pixel subset to one or more reference values in a reference image database to determine whether the moving object is a flying bird, including assigning a probability that the moving object is a flying bird and/or a species of flying bird of interest. In this way, resources can be appropriately prioritized for objects of highest probability.--, in page 6 of English version of ES 2821735 T3 as NPL provided with the Office Action; and, -- a) Capture images of the airspace surrounding an image capture system with a plurality of first camera systems…. e) if the moving object (230, 231, 232, 233) is a moving object of interest: place a stereo camera (620) to capture the image of the moving object (230, 231, 232, 233), the stereo camera (620) that has a pair of second cameras (625) that independently of each other have a high zoom; {read on claimed enabling the second camera} f) Obtain one or more avian identification parameters for the moving object of interest from the stereo camera (620)--, in claim 6, in page 27 of English version of ES 2821735 T3 as NPL provided with the Office Action).
Re Claim 7, YANG as modified by JORQUERA further discloses after enabling the second camera, recognizing target objects in a plurality of target images captured by the first camera and the second camera (see YANG: e.g., Fig. 1, --provide an object tracking method and an apparatus. The method includes: obtaining multiple frames of first images shot by a first camera apparatus and a first shooting moment of each frame of the first images, where the first images include a first object; obtaining multiple frames of second images shot by a second camera apparatus and a second shooting moment of each frame of the second images, where the second images include a second object; obtaining a distance between the first camera apparatus and the second camera apparatus; and judging whether the first object and the second object are the same object according to the multiple frames of the first images, the first shooting moment of each frame of the first images, the multiple frames of the second images, the second shooting moment of each frame of the second images and the distance.--, in abstract);
classifying, based on the recognized target objects, the plurality of target images, wherein each class of target images comprise target images having a same target object; causing display of each class of target images (see JORQUERA: e.g., -- Avian detection systems have many applications, ranging from avian counts, classification and/or identification in a specific geographic location, to deterrence systems and countermeasures for aviation and wind production systems.--, under Foundation of the invention section, in page 3 of English version of ES 2821735 T3 as NPL provided with the Office Action; and,
--A moving object that is not changing substantially in distance from the screen may be equivalent to a subset of pixels that does not vary significantly in number over time, but will undergo a change in the location of the pixels with respect to a camera that is not moving, compared to a camera that moves back and forth…. any of the recognition algorithms can consist of a database of physical parameters associated with a flying bird species of interest, and the processor compares a specific physical parameter of the first camera or the second camera with a corresponding physical parameter of the physical parameter database to filter moving objects that are not a flying bird or are not a flying bird species of interest and/or assign probabilities. Said parameters are also known herein as "avian identification parameters". The avian identification parameter is any observable parameter useful for classifying a mobile object as an avian species, including specific avian species. Examples include physical parameters of the bird, such as size, color, shape, or other characteristic physical features. Other parameters include flight path or wing movement (or lack thereof).--, in pages 6-7 of English version of ES 2821735 T3 as NPL provided with the Office Action); and
after detecting a selection operation for a particular class of target images, splicing the particular class of target images in an order of moment of acquisition to generate a target video (see JORQUERA: e.g., -- Any of the systems and devices provided herein determine the distance of the moving object using a second camera which is a stereo camera that is positioned to capture the image of the moving object. In this way, objects that may be large, but located far away are distinguished in their position from smaller objects that are located closer to the system.
Any of the classification and / or identification steps may comprise a pattern recognition algorithm.
As used here, one or more threshold identification attributes can be selected from the group consisting of distance, path, limit parameter, limit shape, edge characteristic, pixel spacing, pixel intensity, pixel color, gradient of intensity, temporal evolution parameters, and any combination of these.
One or more threshold identification attributes is a contour or edge parameter. Accordingly, any of the methods provided herein may further comprise the step of comparing the demarcation parameters with an edge characteristic typical of a flying bird. Examples of edge or boundary characteristics of a flying bird may include shapes, colors, intensity, and relative distributions of these. For example, for a bird, the specific shapes of the wingtips, the body, the head, the tail feathers can provide data on the boundary shapes or characteristic edges, useful to be compared with the demarcation parameters obtained from the output of the pixel subset.--, in page 10 of English version of ES 2821735 T3 as NPL provided with the Office Action; also see YANG: e.g., -- [0087] The first camera apparatus 11 is configured to send multiple frames of first images to the server 10, and the second camera apparatus 12 is configured to send multiple frames of second images to the server 10. The multiple frames of the first images may be multiple frames of images shot by the first camera apparatus 11, or may be a video shot by the first camera apparatus 11, which is then converted into multiple frames of the first images by the server 10. Similarly, the multiple frames of the second images may be multiple frames of images shot by the second camera apparatus 12, or may be a video shot by the second camera apparatus 12, which is then converted into multiple frames of the second images by the server 10.--, in [0087]).
Re Claims 8-14, claims 8-14 are the corresponding system claims to claims 1-7, respectively. Therefore, claims 8-14 are rejected for similar reasons as claims 1-7, respectively. Furthermore, YANG as modified by JORQUERA further discloses an image acquisition system, comprising: a first camera; a second camera; and a server containing computer-readable instructions that, when executed by at least one processor of the server, cause the server to perform the method (see YANG: e.g., -- [0087] The first camera apparatus 11 is configured to send multiple frames of first images to the server 10, and the second camera apparatus 12 is configured to send multiple frames of second images to the server 10. The multiple frames of the first images may be multiple frames of images shot by the first camera apparatus 11, or may be a video shot by the first camera apparatus 11, which is then converted into multiple frames of the first images by the server 10. Similarly, the multiple frames of the second images may be multiple frames of images shot by the second camera apparatus 12, or may be a video shot by the second camera apparatus 12, which is then converted into multiple frames of the second images by the server 10.--, in [0087], and, --[0192] As shown in FIG. 10, the electronic device includes: one or more processors 101, a memory 102, and interfaces for connecting various components, including high-speed interfaces and low-speed interfaces. The various components are connected to each other using different buses 105 and can be installed on a common motherboard or in other ways as needed. The processors 101 can process instructions executed within the electronic device, including instructions stored in or on the memory 102 for displaying graphical information of the GUI on an external input/output apparatus such as a display device coupled to the interface. In other implementations, multiple processors 101 and/or multiple buses 105 can be used with multiple memories 102 if desired.
Similarly, multiple electronic devices can be connected, and each device provides some necessary operations (for example, as a server array, a group of blade servers, or a multi-processor system). In FIG. 10, one processor 101 is taken as an example.--, in [0192]).
Re Claims 15-20, claims 15-20 are the corresponding device claims to claims 1-6, respectively. Therefore, claims 15-20 are rejected for similar reasons as claims 1-6, respectively. Furthermore, YANG as modified by JORQUERA further discloses an electronic device, comprising: one or more processors, and memory having executable instructions that, when executed by the one or more processors, cause the electronic device to perform the method (see YANG: e.g., -- [0087] The first camera apparatus 11 is configured to send multiple frames of first images to the server 10, and the second camera apparatus 12 is configured to send multiple frames of second images to the server 10. The multiple frames of the first images may be multiple frames of images shot by the first camera apparatus 11, or may be a video shot by the first camera apparatus 11, which is then converted into multiple frames of the first images by the server 10. Similarly, the multiple frames of the second images may be multiple frames of images shot by the second camera apparatus 12, or may be a video shot by the second camera apparatus 12, which is then converted into multiple frames of the second images by the server 10.--, in [0087], and, --[0192] As shown in FIG. 10, the electronic device includes: one or more processors 101, a memory 102, and interfaces for connecting various components, including high-speed interfaces and low-speed interfaces. The various components are connected to each other using different buses 105 and can be installed on a common motherboard or in other ways as needed. The processors 101 can process instructions executed within the electronic device, including instructions stored in or on the memory 102 for displaying graphical information of the GUI on an external input/output apparatus such as a display device coupled to the interface. In other implementations, multiple processors 101 and/or multiple buses 105 can be used with multiple memories 102 if desired.
Similarly, multiple electronic devices can be connected, and each device provides some necessary operations (for example, as a server array, a group of blade servers, or a multi-processor system). In FIG. 10, one processor 101 is taken as an example.--, in [0192]; also (see JORQUERA: e.g., -- The system consists of a first camera that has a wide field of vision to detect a moving object; a second camera with a large zoom; a positioner connected to the second camera to position the second camera so that it captures the image detected by the first camera; and a processor connected in operation to receive image data from the first camera, the second camera, or both to identify a moving object that is a flying bird based on the image data.--, in pages 4-5/75 of English version of ES 2821735 T3 as NPL provided with the Office Action; and,
-- Upon identification of the presence of one or more threshold identification attributes, the processor analyzes the output of the pixel subset to determine one or more avian identification parameters.
The processor can compare the output of the pixel subset to one or more reference values in a reference image database to determine whether the moving object is a flying bird, including assigning a probability that the moving object is a flying bird and/or a species of flying bird of interest. In this way, resources can be appropriately prioritized for objects of highest probability.--, in page 6 of English version of ES 2821735 T3 as NPL provided with the Office Action; and, -- a) Capture images of the airspace surrounding an image capture system with a plurality of first camera systems…. e) if the moving object (230, 231, 232, 233) is a moving object of interest: place a stereo camera (620) to capture the image of the moving object (230, 231, 232, 233), the stereo camera (620) that has a pair of second cameras (625) that independently of each other have a high zoom; {read on claimed enabling the second camera} f) Obtain one or more avian identification parameters for the moving object of interest from the stereo camera (620)--, in claim 6, in page 27 of English version of ES 2821735 T3 as NPL provided with the Office Action).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEI WEN YANG whose telephone number is (571)270-5670. The examiner can normally be reached from 8:00 am to 5:00 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached on 571-272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WEI WEN YANG/Primary Examiner, Art Unit 2662