DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description:
Reference Numeral “322” shown in Figure 3; and
Reference Numeral “1300” shown in Figure 13.
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description:
Reference Numeral “708” mentioned in Paragraph [0045] line 7.
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference character “346” has been used to designate both the Machine Learning (ML) Module and the Report Detection Confidence Module in Figure 3 and in Paragraph [0037], lines 15-17; and reference character “1202” has been used to designate both the image generated using LWIR imaging and the image generated using polarimetric imaging in Figure 12.
Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 6, 9, and 11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Aycock et al. (U.S. Pub. No. 2016/0307053).
Re claim 1: Aycock et al. disclose an apparatus (i.e., “system 100”, Paragraph [0037]) for detection, classification, discrimination, and/or identification of objects, the apparatus comprising:
a camera system configured to capture one or more images (i.e., “polarimeter 101 comprises a polarizing imaging device for recording polarized images, such as a digital camera or thermal imager that collects images”, Paragraph [0038]; and “polarimeter 101 captures an image”, Paragraph [0062]) including capturing polarized light (i.e., “LWIR imaging polarimeter measures both a radiance image and a polarization image”, Paragraph [0065]); and
at least one processor communicatively coupled with the camera system (i.e., FIG. 1, “signal processing unit 107”, Paragraph [0037]; and “signal processing unit 107 also comprises a processor 130”, Paragraph [0044]) and configured to receive the one or more images from the camera system (i.e., “signal processing unit 107 receives the raw image data”, Paragraph [0040]) and process the images to determine or identify one or more objects in the one or more images received from the camera system based on at least the polarized light (i.e., “Note that DoLP is linear polarization. As one with skill in the art would know, in some situations polarization that is not linear (e.g., circular) may be desired. Thus in other embodiments, step 1004 may use polarization images derived from any combination of S0, S1, S2, or S3 and is not limited to DoLP”, Paragraph [0075]; “contrast enhancing algorithms that are known in the art are applied to the multidimensional image from step 1004”, Paragraph [0084]; and “object detection algorithms that are known in the art are applied to the contrast enhanced data from step 1005”, Paragraph [0085]).
Re claim 2: Aycock et al. disclose the camera system including one or more polarimetric sensors for detecting the polarized light (i.e., “focal plane array 1202 comprises an array of light sensing pixels”, Paragraph [0042]).
Re claim 6: Aycock et al. disclose the at least one processor configured to determine one or more thermal characteristics in the one or more images for differentiating the object (i.e., “Of the Stokes parameters, S0 represents the conventional LWIR thermal image with no polarization information”, Paragraph [0066]).
Re claim 9: Aycock et al. disclose a method for detection, classification, discrimination, and/or identification of objects (i.e., “method using Long Wave Infrared Imaging Polarimetry for improved mapping and perception of a roadway or path and for perceiving or detecting obstacles”, Abstract), the method comprising:
utilizing a camera system configured to capture one or more images and polarized light (i.e., “polarimeter 101 comprises a polarizing imaging device for recording polarized images, such as a digital camera or thermal imager that collects images”, Paragraph [0038]; “polarimeter 101 captures an image”, Paragraph [0062]; and “LWIR imaging polarimeter measures both a radiance image and a polarization image”, Paragraph [0065]);
processing the images to determine and/or identify one or more objects in the images received from the camera system based on at least the polarized light (i.e., “signal processing unit 107 receives the raw image data”, Paragraph [0040]; “Note that DoLP is linear polarization. As one with skill in the art would know, in some situations polarization that is not linear (e.g., circular) may be desired. Thus in other embodiments, step 1004 may use polarization images derived from any combination of S0, S1, S2, or S3 and is not limited to DoLP”, Paragraph [0075]; “contrast enhancing algorithms that are known in the art are applied to the multidimensional image from step 1004”, Paragraph [0084]; and “object detection algorithms that are known in the art are applied to the contrast enhanced data from step 1005”, Paragraph [0085]).
Re claim 11: Aycock et al. disclose the camera system including one or more polarimetric sensors for detecting the polarized light (i.e., “focal plane array 1202 comprises an array of light sensing pixels”, Paragraph [0042]).
Claims 1-3, 9, 11, and 12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Barbour et al. (U.S. Pub. No. 2023/0048725).
Re claim 1: Barbour et al. disclose an apparatus (i.e., “data management system”, Paragraph [0039]) for detection, classification, discrimination, and/or identification of objects, the apparatus comprising:
a camera system configured to (i.e., “SPI system 100”, Paragraph [0049]) capture one or more images including capturing polarized light (i.e., “SPI system 100 is sensitive to EM radiation 104 that is incident upon it”, Paragraph [0049]; “Properties of the EM radiation 104 may be altered as it interacts with the object 102 … the primary angle of the reflected linearly polarized light, which is indicated as Theta in FIG. 3A, may be mathematically related to the in-plane angle of the reflecting surface”, Paragraph [0050]; and “the SPI system 100 includes an image sensor device 106 configured to generate 3D data”, Paragraph [0051]); and
at least one processor communicatively coupled with the camera system and configured to (i.e., “one or more edge processors 108 configured to process the 3D data”, Paragraph [0051]) receive the one or more images from the camera system and process the images to determine or identify one or more objects in the one or more images received from the camera system based on at least the polarized light (i.e., “one or more edge processors 108 may also be configured to cluster similar features or information related to the object 102 … the SPI system 100 can group the scene into different object types or group the object 102 into different surfaces, thus enabling segmentation of the object 102 from a cluttered scene”, Paragraph [0054]).
Re claim 2: Barbour et al. disclose the camera system including one or more polarimetric sensors for detecting the polarized light (i.e., “the SPI sensor 106-1 includes an EM detector (e.g., including an array of radiation-sensing pixels) and a polarization structure”, Paragraph [0056]).
Re claim 3: Barbour et al. disclose the camera system including stationary, non-rotating, thermal imagers (i.e., “The second image sensor 106-2 may include … a LWIR sensor”, Paragraph [0057]; see also “if the scene and camera containing the SPI sensor are stationary”, Paragraph [0112]; and “FIG. 9 shows an example of passive ranging, where the SPI systems 100, 200 can identify, track, and analyze objects and scenes in three dimensions and at various distances”, Paragraph [0108]).
Re claim 9: Barbour et al. disclose a method for detection, classification, discrimination, and/or identification of objects (i.e., “process the captured 3D data to generate 3D surface data and 3D shape data; apply artificial intelligence (AI) to analyze the 3D surface data and 3D shape data; compare results to known parameters; and output real-time or near-time solutions”, Paragraph [0046]), the method comprising:
utilizing a camera system configured to capture one or more images and polarized light (i.e., “SPI system 100 is sensitive to EM radiation 104 that is incident upon it”, Paragraph [0049]; “Properties of the EM radiation 104 may be altered as it interacts with the object 102 … the primary angle of the reflected linearly polarized light, which is indicated as Theta in FIG. 3A, may be mathematically related to the in-plane angle of the reflecting surface”, Paragraph [0050]; and “the SPI system 100 includes an image sensor device 106 configured to generate 3D data”, Paragraph [0051]);
processing the images to determine and/or identify one or more objects in the images received from the camera system based on at least the polarized light (i.e., “one or more edge processors 108 may also be configured to cluster similar features or information related to the object 102 … the SPI system 100 can group the scene into different object types or group the object 102 into different surfaces, thus enabling segmentation of the object 102 from a cluttered scene”, Paragraph [0054]).
Re claim 11: Barbour et al. disclose the camera system including one or more polarimetric sensors for detecting the polarized light (i.e., “the SPI sensor 106-1 includes an EM detector (e.g., including an array of radiation-sensing pixels) and a polarization structure”, Paragraph [0056]).
Re claim 12: Barbour et al. disclose the camera system including stationary, non-rotating, thermal imagers (i.e., “The second image sensor 106-2 may include … a LWIR sensor”, Paragraph [0057]; see also “if the scene and camera containing the SPI sensor are stationary”, Paragraph [0112]; and “FIG. 9 shows an example of passive ranging, where the SPI systems 100, 200 can identify, track, and analyze objects and scenes in three dimensions and at various distances”, Paragraph [0108]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Aycock et al. in view of Wenger et al. (U.S. Pub. No. 2018/0163700). The teachings of Aycock et al. have been discussed above.
As to claim 4, Aycock et al. teach the at least one processor further configured to differentiate the object from among a plurality of objects (i.e., “objects 102”, Paragraph [0041]).
However, Aycock et al. do not explicitly disclose the plurality of objects including a fixed wing aircraft, a bird or other biologic creature, or a drone device.
Wenger et al. teach a plurality of objects including a fixed wing aircraft, a bird or other biologic creature, or a drone device (See for example, “airborne objects may include flying or gliding objects or animals such as birds, bats, insects, other types of mammals, other types of birds, drones, aircraft, projectiles, other types of airborne objects”, Paragraph [0040]; “At block 205, the method 200 may detect a flying object”, Paragraph [0095]; and “At block 210, the high resolution imaging systems may use multiple techniques to classify the object”, Paragraph [0096]).
Aycock et al. and Wenger et al. are analogous art because they are from the field of digital image processing for object detection.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Aycock et al. by incorporating the plurality of objects including a fixed wing aircraft, a bird or other biologic creature, or a drone device, as taught by Wenger et al.
The suggestion/motivation for doing so would have been to mitigate the risks involved with airborne objects.
Therefore, it would have been obvious to combine Wenger et al. with Aycock et al. to obtain the invention as specified in claim 4.
As to claim 13, Aycock et al. teach differentiating the object from among a plurality of objects (i.e., “objects 102”, Paragraph [0041]).
However, Aycock et al. do not explicitly disclose the plurality of objects including a fixed wing aircraft, a bird or other biologic creature, or a drone device.
Wenger et al. teach a plurality of objects including a fixed wing aircraft, a bird or other biologic creature, or a drone device (See for example, “airborne objects may include flying or gliding objects or animals such as birds, bats, insects, other types of mammals, other types of birds, drones, aircraft, projectiles, other types of airborne objects”, Paragraph [0040]; “At block 205, the method 200 may detect a flying object”, Paragraph [0095]; and “At block 210, the high resolution imaging systems may use multiple techniques to classify the object”, Paragraph [0096]).
Therefore, in view of Wenger et al., it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Aycock et al. by incorporating the plurality of objects including a fixed wing aircraft, a bird or other biologic creature, or a drone device, as taught by Wenger et al., in order to mitigate the risks involved with airborne objects.
As to claim 14, Aycock et al. teach determining one or more thermal characteristics in the one or more images for differentiating the object (i.e., “Of the Stokes parameters, S0 represents the conventional LWIR thermal image with no polarization information”, Paragraph [0066]).
Claims 5 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Barbour et al. in view of Seeber et al. (U.S. Pub. No. 2018/0129881). The teachings of Barbour et al. have been discussed above.
As to claim 5, Barbour et al. do not explicitly disclose the at least one processor configured to implement: a first operation mode comprising passive detection, tracking, and object classification; and a second mode of operation performing long range interrogation once an object of potential interest is detected during the first mode of operation.
Seeber et al. teach at least one processor (i.e., “The management module 104 may include hardware components such as a processor”, Paragraph [0061]) configured to implement: a first operation mode comprising passive detection, tracking, and object classification (i.e., “systems, methods, apparatuses, and devices for identifying, tracking, and managing unmanned aerial vehicles (UAVs) using a plurality of sensors, hardware, and software”, Paragraph [0054]; “it may be desirable to be able to distinguish malicious UAVs (UAVs for spying, trespassing, etc.) from benign UAVs (UAVs for delivering consumer goods, etc.)”, Paragraph [0060]; and “monitoring the surroundings”, Paragraph [0100]); and a second mode of operation performing long range interrogation once an object of potential interest is detected during the first mode of operation (i.e., Paragraph [0064]; and “an action regarding the UAV may be made based on a predetermined rule set. This decision could include a wide variety of actions, such as ignore the UAV because it is known to be trusted, or attempt to locate the controller of the UAV because it is identified as a threat”, Paragraph [0100]).
Barbour et al. and Seeber et al. are analogous art because they are from the field of digital image processing for object detection.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Barbour et al. by incorporating the at least one processor configured to implement: a first operation mode comprising passive detection, tracking, and object classification, and a second mode of operation performing long range interrogation once an object of potential interest is detected during the first mode of operation, as taught by Seeber et al.
The suggestion/motivation for doing so would have been to be able to distinguish malicious UAVs from benign UAVs in buildings and structures that require a safe and monitored airspace.
Therefore, it would have been obvious to combine Seeber et al. with Barbour et al. to obtain the invention as specified in claim 5.
As to claim 10, Barbour et al. teach the use of stationary, non-rotating, thermal imagers with polarimetric sensors (See for example, “The second image sensor 106-2 may include … a LWIR sensor”, Paragraph [0057]; “if the scene and camera containing the SPI sensor are stationary”, Paragraph [0112]; and “FIG. 9 shows an example of passive ranging, where the SPI systems 100, 200 can identify, track, and analyze objects and scenes in three dimensions and at various distances”, Paragraph [0108]).
However, Barbour et al. do not explicitly disclose operating according to a first mode including passive detection, tracking, and UAV classification; and operating according to a second mode including long range interrogation after an object of potential interest is detected using the first mode.
Seeber et al. teach operating according to a first mode including passive detection, tracking, and UAV classification (i.e., “systems, methods, apparatuses, and devices for identifying, tracking, and managing unmanned aerial vehicles (UAVs) using a plurality of sensors, hardware, and software”, Paragraph [0054]; “it may be desirable to be able to distinguish malicious UAVs (UAVs for spying, trespassing, etc.) from benign UAVs (UAVs for delivering consumer goods, etc.)”, Paragraph [0060]; and “monitoring the surroundings”, Paragraph [0100]); and
operating according to a second mode including long range interrogation after an object of potential interest is detected using the first mode (i.e., Paragraph [0064]; and “an action regarding the UAV may be made based on a predetermined rule set. This decision could include a wide variety of actions, such as ignore the UAV because it is known to be trusted, or attempt to locate the controller of the UAV because it is identified as a threat”, Paragraph [0100]).
Therefore, in view of Seeber et al., it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Barbour et al. by incorporating operating according to a first mode including passive detection, tracking, and UAV classification, and operating according to a second mode including long range interrogation after an object of potential interest is detected using the first mode, as taught by Seeber et al., in order to be able to distinguish malicious UAVs from benign UAVs in buildings and structures that require a safe and monitored airspace.
Claims 7, 8, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Barbour et al. in view of Haskin et al. (U.S. Pub. No. 2024/0020968). The teachings of Barbour et al. have been discussed above.
As to claim 7, Barbour et al. do not explicitly disclose a low latency network system architecture.
Haskin et al. teach a low latency network system architecture (i.e., “The system uses an efficient network architecture and a set of two hyper-parameters in order to build very small, low latency models that can be easily matched to the design requirements for mobile and embedded vision applications”, Paragraph [0173]; and “detecting, classifying, or recognizing herein, may comprises, may use, or may be based on, a method, scheme or architecture such as MobileNet, for example MobileNetV1, MobileNetV2, or MobileNetV3”, Paragraph [0449]).
Barbour et al. and Haskin et al. are analogous art because they are from the field of digital image processing for identifying objects in image data.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Barbour et al. by incorporating the low latency network system architecture, as taught by Haskin et al.
The suggestion/motivation for doing so would have been to build lightweight deep neural networks that are specifically tailored for mobile and resource-constrained environments.
Therefore, it would have been obvious to combine Haskin et al. with Barbour et al. to obtain the invention as specified in claim 7.
As to claim 8, Barbour et al. do not explicitly disclose the low latency network system architecture comprising UDP protocols in combination with a data transmission pipeline configured to ensure lower system latency.
Haskin et al. teach the low latency network system architecture comprising UDP protocols in combination with a data transmission pipeline configured to ensure lower system latency (See for example, Paragraph [0112]; and “Common features typically supported by operating systems include process management, interrupts handling, memory management, file system, device drivers, networking (such as TCP/IP and UDP)”, Paragraph [0475]).
Therefore, in view of Haskin et al., it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Barbour et al. by incorporating the low latency network system architecture comprising UDP protocols in combination with a data transmission pipeline configured to ensure lower system latency, as taught by Haskin et al., in order to build lightweight deep neural networks that are specifically tailored for mobile and resource-constrained environments.
As to claim 15, Barbour et al. do not explicitly disclose employing a low latency network system architecture.
Haskin et al. teach employing a low latency network system architecture (i.e., “The system uses an efficient network architecture and a set of two hyper-parameters in order to build very small, low latency models that can be easily matched to the design requirements for mobile and embedded vision applications”, Paragraph [0173]; and “detecting, classifying, or recognizing herein, may comprises, may use, or may be based on, a method, scheme or architecture such as MobileNet, for example MobileNetV1, MobileNetV2, or MobileNetV3”, Paragraph [0449]).
Therefore, in view of Haskin et al., it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Barbour et al. by employing a low latency network system architecture, as taught by Haskin et al., in order to build lightweight deep neural networks that are specifically tailored for mobile and resource-constrained environments.
As to claim 16, Barbour et al. do not explicitly disclose the low latency network system architecture comprising UDP protocols in combination with a data transmission pipeline configured to ensure lower system latency.
Haskin et al. teach the low latency network system architecture comprising UDP protocols in combination with a data transmission pipeline configured to ensure lower system latency (See for example, Paragraph [0112]; and “Common features typically supported by operating systems include process management, interrupts handling, memory management, file system, device drivers, networking (such as TCP/IP and UDP)”, Paragraph [0475]).
Therefore, in view of Haskin et al., it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Barbour et al. by incorporating the low latency network system architecture comprising UDP protocols in combination with a data transmission pipeline configured to ensure lower system latency, as taught by Haskin et al., in order to build lightweight deep neural networks that are specifically tailored for mobile and resource-constrained environments.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSE M TORRES whose telephone number is (571)270-1356. The examiner can normally be reached Monday through Friday, 10:00 AM to 6:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSE M TORRES/Examiner, Art Unit 2664 01/28/2026
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664