DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “A detection system, comprising: at least one light detection and ranging (LIDAR) module configured to scan…” in claim 1 and “A detection system, comprising: at least one controller; at least one sensing assembly including: at least one sensor device configured to scan…” in claim 13 and “A swarm detection and countermeasure system, comprising: at least one controller; at least one sensing assembly including: at least one sensor device configured to scan…” in claim 20.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 1 (and, by dependency, claims 2-12) is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The claim recites “…one light detection and ranging (LIDAR) module configured to scan… at least one image processing module configured to process…”. The published specification states “Any of the processors disclosed herein can be part of or in communication with a machine (e.g., a computer device, a logic device, a circuit, an operating module (hardware, software, and/or firmware), etc.). The processor can be hardware (e.g., processor, integrated circuit, central processing unit, microprocessor, core processor, computer device, etc.), firmware, software, etc. configured to perform operations by execution of instructions embodied in computer program code, algorithms, program logic, control logic, data processing program logic, artificial intelligence programming, machine learning programming, artificial neural network programming, automated reasoning programming, etc.” ([0018]) and “The processor can include one or more processing or operating modules. A processing or operating module can be a software or firmware operating module configured to implement any of the functions disclosed herein. The processing or operating module can be embodied as software and stored in memory, the memory being operatively associated with the processor. A processing module can be embodied as a web application, a desktop application, a console application, etc.” ([0020]). In the case of a software-only embodiment, there would be no hardware to carry out the recited acts, rendering the scope of the claimed modules unclear.
Claim 13 (and, by dependency, claims 14-19) is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The claim recites “…at least one controller; at least one sensing assembly including: at least one sensor device configured to scan…”. The published specification states “Any of the processors disclosed herein can be part of or in communication with a machine (e.g., a computer device, a logic device, a circuit, an operating module (hardware, software, and/or firmware), etc.). The processor can be hardware (e.g., processor, integrated circuit, central processing unit, microprocessor, core processor, computer device, etc.), firmware, software, etc. configured to perform operations by execution of instructions embodied in computer program code, algorithms, program logic, control logic, data processing program logic, artificial intelligence programming, machine learning programming, artificial neural network programming, automated reasoning programming, etc.” ([0018]) and “The processor can include one or more processing or operating modules. A processing or operating module can be a software or firmware operating module configured to implement any of the functions disclosed herein. The processing or operating module can be embodied as software and stored in memory, the memory being operatively associated with the processor. A processing module can be embodied as a web application, a desktop application, a console application, etc.” ([0020]). In the case of a software-only embodiment, there would be no hardware to carry out the recited acts, rendering the scope of the claimed modules unclear.
Claim 20 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The claim recites “…swarm detection and countermeasure system, comprising: at least one controller; at least one sensing assembly including: at least one sensor device configured to scan…”. The published specification states “Any of the processors disclosed herein can be part of or in communication with a machine (e.g., a computer device, a logic device, a circuit, an operating module (hardware, software, and/or firmware), etc.). The processor can be hardware (e.g., processor, integrated circuit, central processing unit, microprocessor, core processor, computer device, etc.), firmware, software, etc. configured to perform operations by execution of instructions embodied in computer program code, algorithms, program logic, control logic, data processing program logic, artificial intelligence programming, machine learning programming, artificial neural network programming, automated reasoning programming, etc.” ([0018]) and “The processor can include one or more processing or operating modules. A processing or operating module can be a software or firmware operating module configured to implement any of the functions disclosed herein. The processing or operating module can be embodied as software and stored in memory, the memory being operatively associated with the processor. A processing module can be embodied as a web application, a desktop application, a console application, etc.” ([0020]). In the case of a software-only embodiment, there would be no hardware to carry out the recited acts, rendering the scope of the claimed modules unclear.
Examiner recommends amending the claims either to remove the language “module configured to” or, for each recitation of “module configured to,” to indicate a hardware structure that would be used to perform the recited act.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Robinson et al. (US 12374073 B1) in view of Lohr et al. (US 20210222383 A1).
Regarding claims 1 and 13, Robinson et al. disclose a detection system, comprising: at least one controller (col. 10, lines 10-40); at least one sensing assembly (col. 10, lines 40-55) including: at least one light detection and ranging (LIDAR) module/device configured to scan a swarm of objects to generate image data of a first object associated with a swarm (While an example embodiment disclosed herein may employ a modality, such as a video color (VC), thermal, or multispectral (e.g., 450 nm, 550 nm, 650 nm, 750 nm, 850 nm, and 950 nm) modality, it should be understood that such an example embodiment is not limited to the number of modalities or the types of modalities employed. For non-limiting example, a depth, laser identification detection and ranging (LiDAR), radio detection and ranging (RADAR), or another modality may be employed, col. 4, lines 41-58, detect and track a drone and/or drone swarms flying over critical areas for non-limiting example, col. 5, line 58 - col. 6, line 2); and at least one image processing module configured to process the image data and control the at least one LIDAR module (MatrixSpace's 360 LiDAR tracking system to provide motion analysis and tracking with potential to later use the tracks to refine and improve object detection classification, col. 16, lines 22-33); wherein the at least one image processing module is configured to: detect presence of a first object (data-driven defeat sUAS (DS2) may be used to detect and track a drone and/or drone swarms flying over critical areas for non-limiting example, col. 5, line 58 - col. 6, line 2, detecting and tracking SUAS, col. 6, lines 15-25); detect a feature of a first object for which the presence has been detected (training the multi-modal system 112 or another multi-modal system to detect and track features in scenes, such as the feature 102 in the scene 104 that may be included in the training data 124, col. 6, line 60 - col. 7, line 10, three-stage detection, col. 7, lines 35-53, information provided by the other modalities regarding the same object, col. 7, lines 54-65, detects key features of drones, col. 25, lines 8-26, generate the respective bounded region for the potential regions identified, col. 11, lines 10-25, matches feature maps from different modalities with similar sizes, col. 13, lines 30-40, In the video color image, there is a flying object, yet the colors do not provide enough information to exactly classify the object. In the corresponding thermal image, thermal contrast is only distinguishable from background noise if it is already known that there is probability for an object to be at that exact location—this can be learned from the video color and vice versa such that combined information reinforces object detection, col. 13, lines 39-55, Motion Analysis is a field of Computer Vision used for processing many sequential frames of data to highlight motion between the frames. The tasks of motion analysis and capture include a) initialization; b) tracking; and c) recognition, col. 15, line 59 - col. 16, line 10, Feature extraction is performed on each modality's input image using the modality specific FPN backbone, col. 18, lines 50-60, detect the presence of hostile or unauthorized UAS, col. 29, lines 28-29); characterize, using image processing, a feature of a first object (Region Proposal Networks (RPNs) not only improve model performance by optimizing the way information is shared to make a final classification, col. 7, lines 54-65, Features are recommended to be efficient, robust and physically interpretable so as to obtain a machine processable data representation containing the key properties of the target. Discrimination from similar objects (e.g. birds, ‘small birds’, ‘big birds’, balloons etc.) is facilitated with data from known models, such as available via ImageNet, and computer-implemented methods can be further developed to discern different types of motion, col. 11, line 60 - col. 12, line 15, “Techniques from motion detection should be paired with such Deep Learning methods to allow approximations of complex functions (Chalapathy and Chawla, (2019). “Deep Learning for Anomaly Detection: A Survey,” ArXiv.org, 23 January. Retrieved from arxiv.org/abs/1901.03407). To evaluate motion/tracking, an example embodiment uses sequential data. Motion analysis techniques, such as background subtracted images disclosed below with regard to FIG. 5—or stacked images, can then test the model's ability to track objects that become occluded and to distinguish between types of moving objects, i.e., a drone or background motion”, col. 16, lines 10-21, The weights discovered in the ‘pretrained’ trials are taken from the best individual modality runs, disclosed in the table 1700 of FIG. 17, which incorporate ‘imageNet’ pretrained weights that are adept at detecting and classifying images of drones within their respective modality space, col. 23, line 55 - col. 24, line 5, fusion model is initialized with a backbone that accurately detects key features of drones, col. 25, lines 8-26, feature map, produce a proposal 2154-4 classifying a type 2164-4 of drone in the FOV, col. 28, lines 12-20); and initiate, based on the characterization of a feature, the at least one LIDAR module to any one or combination of track a first object for which the presence has been detected or scan a swarm to generate image data of a second object associated with a swarm (extracting features in the scene by applying the FPN to the sequence of observations, identifying potential regions of interest by applying the RPN to the features extracted, col. 2, lines 7-21, As a countermeasure, an example embodiment of a multi-modal, data-driven defeat sUAS (DS2) may be used to detect and track a drone and/or drone swarms flying over critical areas for non-limiting example, col. 5, line 58 - col. 6, line 2, tracking targets in real-time, col. 14, lines 45-57).
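For illustration of the background-subtraction motion analysis quoted above (Robinson et al., col. 15, line 59 - col. 16, line 21), the following is a minimal sketch only; the frame shapes, threshold, and use of a per-pixel median background are assumptions of this example and are not taken from the reference.

```python
import numpy as np

def moving_object_mask(frames, threshold=25.0):
    """Background-subtraction motion analysis over sequential frames.

    frames: (T, H, W) array of grayscale frames from one modality.
    The background is estimated as the per-pixel median of the earlier
    frames; pixels in the latest frame that differ from that background
    by more than `threshold` are flagged as moving.
    """
    frames = np.asarray(frames, dtype=np.float32)
    background = np.median(frames[:-1], axis=0)   # static-scene estimate
    diff = np.abs(frames[-1] - background)        # background-subtracted image
    return diff > threshold

# Hypothetical 5-frame sequence in which one small bright region appears.
rng = np.random.default_rng(0)
seq = rng.normal(100.0, 2.0, size=(5, 64, 64))
seq[-1, 30:34, 40:44] += 80.0                     # simulated drone-sized blob
print("moving pixels:", int(moving_object_mask(seq).sum()))
```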
To the extent the limitation “initiate, based on the characterization of a feature, the at least one LIDAR module to any one or combination of track a first object for which the presence has been detected or scan a swarm to generate image data of a second object associated with a swarm” is not explicit, another reference is hereby provided.
Lohr et al. teach at least one light detection and ranging (LIDAR) module configured to scan a swarm of objects to generate image data of a first object associated with a swarm (video view or video monitoring, sensors, radar, lidar or ladar, [0081], FIG. 17 shows an intelligent barricade, equipped with video monitoring technology, high-resolution, ultra-high-resolution etc. digital cameras, IP-supported, thermal imaging cameras, camera domes, headlights, infrared illuminators and microphones. The cameras are day vision enabled and night vision enabled. Furthermore, the barricade can be equipped with lidar, ladar and radar systems, in order to localize objects, persons and their movements with meter precision and track them, [0090]); and at least one image processing module configured to process the image data and control the at least one LIDAR module (command-issuing remote control system can be triangulated in the neural network, The system can, when required, automatically orient cameras toward the drone, [0092]); wherein the at least one image processing module is configured to: detect presence of a first object (area in which, for example, movement is to be detected, alarm “drone detected”, [0090], When a drone is detected, a popup window in the management system will present all the further available steps, [0092]); detect a feature of a first object for which the presence has been detected (localize objects, persons and their movements, [0090]); characterize, using image processing, a feature of a first object (By using databases which contain image material and sound material as well as characteristic RF and HF signals as comparison variables it is possible, for example, to detect drones automatically, identify them and track them in real time, [0092]); and initiate, based on the characterization of a feature, the at least one LIDAR module to any one or combination of track a first object for which the presence has been detected or scan a swarm to generate image data of a second object associated with a swarm (By using a plurality of HF sensors, also with 2D and/or 3D antennas or directional antennas, both third-party drones and the persons with the command-issuing remote control system can be triangulated in the neural network, The system can, when required, automatically orient cameras toward the drone, perform evaluation and execute further measures and suitable countermeasures, such as for example bringing the third-party drone into the failsafe mode, which is also possible manually. By using databases which contain image material and sound material as well as characteristic RF and HF signals as comparison variables it is possible, for example, to detect drones automatically, identify them and track them in real time, detect, track and represent multiple objects such as, for example, flying cars, air taxis and microdrones with swarm intelligence, [0092]).
Robinson et al. and Lohr et al. are in the same art of drone/swarm detection (Robinson et al., col. 5, line 58 - col. 6, line 2; Lohr et al., [0092]). The combination of Lohr et al. with Robinson et al. will enable initiating based on the characterization of a feature, tracking a first object. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the responsive tracking of Lohr et al. with the invention of Robinson et al. as this was known at the time of filing, the combination would have predictable results, and as Lohr et al. indicate, “By using these devices it is possible to carry out geofencing, that is to say to detect third-party drones which penetrate a protected area over water, land, or in the air, and also to triangulate the radio remote control of the controlling person, track it and disrupt it. By using databases which contain image material and sound material as well as characteristic RF and HF signals as comparison variables it is possible, for example, to detect drones automatically, identify them and track them in real time. Further data from the database makes available, through constant agreement with the drone manufacturers, continuously updated codes and commands for taking over drones, controlling and landing them, activating failsafe functions, etc. These devices are false alarm-proof, even if further HF signals occur. A plurality of drones, irrespective of whether they are of the same or different types or models, can also be safely detected, taken over and controlled. These devices can also be implemented in the neural network of the intelligent barricades. Their use, also geodata-supported, at strategic or necessary locations results in an uninterrupted detection screen with a large range and high level of precision. As described, the system can also be incorporated into the management system. Further measures and countermeasures can be initiated or executed manually or automatically, as described. The system can also detect, track and represent multiple objects such as, for example, flying cars, air taxis and microdrones with swarm intelligence and can prepare and/or execute autonomous and/or manually started countermeasures” ([0092]) thereby providing a safety motivation and commercial application in policing and military applications.
Regarding claim 2, Robinson et al. and Lohr et al. disclose the detection system of claim 1. Robinson et al. and Lohr et al. further indicate the swarm of objects includes an aerial object, a ground object, and/or a marine object (Robinson et al., data-driven defeat sUAS (DS2) may be used to detect and track a drone and/or drone swarms flying over critical areas for non-limiting example, col. 5, line 58 - col. 6, line 2; Lohr et al., detect drones automatically, identify them and track them in real time, [0092]).
Regarding claim 3, Robinson et al. and Lohr et al. disclose the detection system of claim 1. Robinson et al. and Lohr et al. further indicate the detection system is configured to: scan an area to detect a swarm (Robinson et al., multi-modal, data-driven defeat sUAS (DS2) may be used to detect and track a drone and/or drone swarms flying over critical areas, col. 5, line 58 - col. 6, line 2; Lohr et al., monitoring region, The system can also detect, track and represent multiple objects such as, for example, flying cars, air taxis and microdrones with swarm intelligence, [0092]); receive a signal that a swarm has been detected and to begin scanning the swarm and/or receive a signal directing it to begin scanning an area to detect a swarm (Robinson et al., The method uses a projection function that synergistically shares information between or among the at least two modes to generate a corresponding fused predicted existence of the feature for each of the at least two modes of each observation, thereby maximizing the benefits of multi-modality. The feature may be an object of interest that may be occluded and moving within the environment. The predicted existence enables the object to be detected, tracked, and/or identified, abstract, extracting features in the scene by applying the FPN to the sequence of observations, identifying potential regions of interest by applying the RPN to the features extracted, col. 2, lines 7-21, The system 312 further comprises a controller 378 configured to operate the system 312 based on parameters 380. The controller is further configured to update the parameters 380 based on each observation of the sequence of observations as a function of: (i) the predicted existence (e.g., 375-1, 375-2) of the feature, col. 10, lines 10-40; Lohr et al., The system can, when required, automatically orient cameras toward the drone, perform evaluation and execute further measures and suitable countermeasures, such as for example bringing the third-party drone into the failsafe mode, which is also possible manually. By using databases which contain image material and sound material as well as characteristic RF and HF signals as comparison variables it is possible, for example, to detect drones automatically, identify them and track them in real time, detect, track and represent multiple objects such as, for example, flying cars, air taxis and microdrones with swarm intelligence, [0092]).
Regarding claim 4, Robinson et al. and Lohr et al. disclose the detection system of claim 1. Robinson et al. further indicate the at least one LIDAR module is configured to generate image data as three-dimensional (3-D) point cloud data (LiDAR and RADAR can be used to temporally visualize the same 3D space, col. 12, lines 45-61, “LIDAR-based 3D Object Perception,” col. 16, lines 35-65).
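For clarity regarding the three-dimensional point cloud data of claim 4, each LIDAR return (range, azimuth, elevation) maps to a Cartesian point by a standard spherical-to-Cartesian conversion; the sketch below is illustrative only, and its angle conventions are assumptions rather than teachings of the cited references.

```python
import numpy as np

def returns_to_point_cloud(ranges, azimuths, elevations):
    """Convert LIDAR returns to an (N, 3) array of x, y, z points.

    ranges     : distances in meters
    azimuths   : horizontal angles in radians (0 along the +x axis)
    elevations : vertical angles in radians (0 is horizontal)
    """
    r = np.asarray(ranges, dtype=float)
    az = np.asarray(azimuths, dtype=float)
    el = np.asarray(elevations, dtype=float)
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)
    return np.stack([x, y, z], axis=1)

cloud = returns_to_point_cloud([50.0, 52.0], [0.10, 0.12], [0.05, 0.05])
print(cloud.shape)  # (2, 3)
```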
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Robinson et al. (US 12374073 B1) and Lohr et al. (US 20210222383 A1) as applied to claim 1 above, further in view of Kurtz et al. (US 20230090281 A1).
Regarding claim 5, Robinson et al. and Lohr et al. disclose the detection system of claim 1. Robinson et al. and Lohr et al. do not disclose the at least one LIDAR module is a solid-state LIDAR device, the solid-state LIDAR device comprising: a microelectromechanical (MEM) control or a photonic control configured to direct an optical pulse at a swarm.
Kurtz et al. teach at least one LIDAR module is a solid-state LIDAR device, the solid-state LIDAR device comprising: a microelectromechanical (MEM) control or a photonic control configured to direct an optical pulse at a swarm (optical sensors, detecting moving vehicles, such as airborne or ground based cars, aircraft, drones, missiles, [0141], A MEMS or OPA LIDAR scanning system, as illustrated conceptually in FIG. 15A and FIGS. 16A and 16B, can be designed as shown conceptually in FIG. 16C. A LIDAR laser 1100 can provide light via beam shaping optics 470B and a mirror 480 so as to work with relay optical elements 435 to focus laser light near the objective lens aperture stop 355, such that nominally collimated light beams can then emerge from the objective lens 320 with a beam waist at, or near, or somewhat beyond the outer surface of the outermost compressor lens element of the objective lens 320. As a result, the LIDAR sub-system, through the objective lens, scans an environment, such that a single pulse represents a single chief ray. A mask at the aperture stop 355 of the objective lens 320 or the relay optical system (a secondary aperture stop 455) can also be “color” dependent, using spatially variant filters to provide a different stop diameter for IR light than for visible light”, [0159]).
Robinson et al. and Lohr et al. and Kurtz et al. are in the same art of drone/swarm detection (Robinson et al., col. 5, line 58 - col. 6, line 2; Lohr et al., [0092]; Kurtz et al., [0141]). The combination of Kurtz et al. with Robinson et al. and Lohr et al. will enable using a solid-state LIDAR device. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the solid-state LIDAR device of Kurtz et al. with the invention of Robinson et al. and Lohr et al. as this was known at the time of filing, the combination would have predictable results, solid-state LIDAR devices are one of a limited number of types of LIDAR devices, and as MEMS LIDAR devices are known in the art as being typically smaller and cheaper, providing a financial benefit to combining inventions.
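As an illustrative sketch of how a MEMS- or photonics-steered (solid-state) LIDAR directs optical pulses at a swarm, the following generates a raster of beam-steering directions over an angular window centered on a detected swarm; the window size and step are hypothetical and are not taken from Kurtz et al.

```python
def raster_scan_directions(center_az, center_el, half_width, half_height, step):
    """Yield (azimuth, elevation) beam-steering angles, in degrees, for a
    raster scan of a rectangular angular window centered on a target region
    (e.g., a detected swarm); one optical pulse is fired per direction."""
    steps_az = int(2 * half_width / step) + 1
    steps_el = int(2 * half_height / step) + 1
    for i in range(steps_az):
        for j in range(steps_el):
            yield (center_az - half_width + i * step,
                   center_el - half_height + j * step)

# Hypothetical 2 deg x 1 deg window around a swarm at azimuth 45 deg, elevation 10 deg.
directions = list(raster_scan_directions(45.0, 10.0, 1.0, 0.5, 0.25))
print(len(directions), "pulse directions")   # 9 x 5 = 45
```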
Claims 6-12 and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Robinson et al. (US 12374073 B1) and Lohr et al. (US 20210222383 A1) as applied to claims 1 and 13 above, further in view of Droz et al. (US 20210173047 A1).
Regarding claim 6, Robinson et al. and Lohr et al. disclose the detection system of claim 1. Robinson et al. and Lohr et al. do not disclose the at least one LIDAR module and the at least one image processing module are configured as a unitary sensor device, comprising: plural sensor devices, the plural sensor devices including at least a first sensor device configured to scan a first sector of a swarm and at least a second sensor device configured to scan a second sector of a swarm; and wherein a portion of a first sector overlaps a portion of a second sector.
Droz et al. teach at least one LIDAR module and the at least one image processing module are configured as a unitary sensor device, comprising: plural sensor devices, the plural sensor devices including at least a first sensor device configured to scan a first sector of a swarm and at least a second sensor device configured to scan a second sector of a swarm; and wherein a portion of a first sector overlaps a portion of a second sector (“In example embodiments, lidar devices may include one or more transmitters (e.g., laser diodes) and one or more receivers (e.g., light detectors). For example, an example lidar device may include an array of laser transmitters and a corresponding array of light detectors. Such arrays may illuminate objects in the scene and receive reflected light from objects in the scene so as to collect data that may be used to generate a point cloud for a particular angular field of view relative to the lidar device. Further, to generate a point cloud with an enhanced field of view (e.g., a complete 360° field of view), the array of transmitters and the corresponding array of receivers may send and receive light at predetermined times and/or locations within that enhanced field of view. For example, the lidar may arrange the array of transmitters and the corresponding array of receivers around the vertical axis such that light is transmitted and received in multiple directions around the 360° field of view simultaneously. As another example, a lidar may be rotated about a central axis to transmit/receive multiple sets of data. The data can be used to form point clouds that can be composited to generate the enhanced field of view. In some embodiments, the arrays of transmitters/corresponding receivers may not have uniform density. For example, central portions of the arrays might have an increased density of transmitters/receivers when compared to the periphery of the arrays. This may allow for increased resolution in certain portions of the point cloud of the field of view (e.g., a central region of the point cloud may have higher density than peripheral regions of the point cloud). The increased resolution may correspond to regions of interest of a surrounding scene. For example, in a vehicle operating in an autonomous mode, objects in front of the vehicle may be of increased interest when compared to objects above or below the vehicle. As another example, objects at certain elevations (e.g., along the horizon) or specific locations on a predetermined map (e.g., near a street corner that includes a pedestrian crosswalk), or specific locations relative to the car may be of increased interest”, [0028], The first and second radar units 208, 210 and/or the first and second lidar units 204, 206 can actively scan the surrounding environment for the presence of potential obstacles and can be similar to the radar 126 and/or laser rangefinder/lidar 128 in the vehicle 100, [0066], “The sensor unit 202 is mounted atop the vehicle 200 and includes one or more sensors configured to detect information about an environment surrounding the vehicle 200, and output indications of the information. For example, sensor unit 202 can include any combination of cameras, radars, lidars, range finders, inertial sensors, humidity sensors, and acoustic sensors. The sensor unit 202 can include one or more movable mounts that could be operable to adjust the orientation of one or more sensors in the sensor unit 202. 
In one embodiment, the movable mount could include a rotating platform that could scan sensors so as to obtain information from each direction around the vehicle 200. In another embodiment, the movable mount of the sensor unit 202 could be movable in a scanning fashion within a particular range of angles and/or azimuths and/or elevations”, [0067]) [by rotating around, each view will partly overlap the previous view] [Also, Robinson et al. indicate overlapping sensor FOVs: “One of the biggest problems with multi-modality is spatial asynchrony within data sources. Neural network (NN) fusion can help with this by using temporally aligned (synchronously collected) data sharing a common field of view (FOV)”, col. 5, line 59 - col. 5, line 2].
Robinson et al. and Droz et al. are in the same art of detection using LIDAR (Robinson et al., col. 4, lines 41-58; Droz et al., abstract). The combination of Droz et al. with Robinson et al. and Lohr et al. will enable using plural LIDAR sensor devices. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the plural sensor devices of Droz et al. with the invention of Robinson et al. and Lohr et al. as this was known at the time of filing, the combination would have predictable results, and as Droz et al. indicate “The disclosure relates to a pulse energy plan for lidar devices based on areas of interest and thermal budgets. As a lidar device scans an environment (e.g., to generate a point cloud), the lidar device may generate excess heat (e.g., from inefficiencies in the light emitters of the lidar device). If too much excess heat is produced, such heat can have detrimental effects on the lidar device components. In order to prevent potential degradation to the lidar device, power provided to the light emitters in the lidar device may be allocated according to a pulse energy plan, thereby limiting the amount of excess heat produced. One method of allocating the amount of power provided to the light emitters is to identify regions of interest in the environment surrounding the lidar device and then provide greater power to the light emitters when the lidar device is scanning those regions of interest” [0004] and “In one aspect, a lidar device is provided. The lidar device includes a plurality of light emitters configured to emit light pulses into an environment of the lidar device in a plurality of different emission directions. The lidar device also includes circuitry configured to power the plurality of light emitters. Further, the lidar device includes a plurality of detectors. Each detector in the plurality of detectors is configured to detect reflections of light pulses emitted by a corresponding light emitter in the plurality of light emitters and received from the environment of the lidar device. Additionally, the lidar device includes a controller configured to (i) determine a pulse energy plan based on one or more regions of interest in the environment of the lidar device and a thermal budget. The pulse energy plan specifies a pulse energy level for each light pulse emitted by each light emitter in the plurality of light emitters and (ii) control the circuitry based on the pulse energy plan” ([0005]) thereby suggesting how areas that need to be imaged can be balanced by a proper energy plan when the inventions are combined.
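The overlapping-sector limitation of claim 6 reduces to whether two angular fields of view intersect; the following minimal sketch (with hypothetical, non-wrapping sector bounds) illustrates that check and is not taken from Droz et al.

```python
def sectors_overlap(start_a, end_a, start_b, end_b):
    """Return True if two azimuth sectors, each given as (start, end) in
    degrees with start <= end (no wrap-around handling), share any portion
    of their fields of view."""
    return max(start_a, start_b) <= min(end_a, end_b)

# Hypothetical sectors: first sensor scans 0-100 deg, second scans 80-180 deg.
print(sectors_overlap(0, 100, 80, 180))   # True  (80-100 deg is shared)
print(sectors_overlap(0, 100, 120, 180))  # False (no shared coverage)
```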
Regarding claim 7, Robinson et al. and Lohr et al. and Droz et al. disclose the detection system of claim 6. Robinson et al. and Lohr et al. and Droz et al. further indicate at least one controller in communication with the plural sensor devices, the at least one controller configured to coordinate scanning and image data generation performed by the plural sensor devices; and/or at least one controller in communication with at least one sensor device and at least one other sensor device (Robinson et al., The projection function 376 shares information between or among modes to generate a corresponding fused predicted existence (377-1, 377-2) of the feature for each of the at least two modes of each observation. The system 312 further comprises a controller 378 configured to operate the system 312 based on parameters 380, col. 10, lines 10-40; Lohr et al., The mobile barricade can also be operated, steered and moved from the remote control center since it optionally has corresponding sensor systems and actuators, such as for example all-round video view or video monitoring with evaluations of events, persons, objects, movement, microphones, loudspeakers for generating signals and for reproducing speech, distant sensor systems, such as for example ultrasonic sensors and infrared sensors, radar, lidar or ladar, GPS, electromagnetic compass, smoke detectors for self-protection and for monitoring the surroundings, temperature sensors, headlights, infrared illuminators and high-resolution daylight cameras and thermal imaging cameras, [0081], An HF sensor receives control commands and video feeds which are exchanged between a third-party drone and its remote control, [0092]; Droz et al., “Further, the lidar device includes a plurality of detectors configured to detect reflections of light pulses emitted by the plurality of light emitters. In addition, the lidar device includes a controller configured to (i) determine a pulse energy plan based on one or more regions of interest in the environment and a thermal budget and (ii) control the circuitry based on the pulse energy plan. The pulse energy plan specifies a pulse energy level for each light pulse emitted by each light emitter in the plurality of light emitters”, abstract, “Further, to generate a point cloud with an enhanced field of view (e.g., a complete 360° field of view), the array of transmitters and the corresponding array of receivers may send and receive light at predetermined times and/or locations within that enhanced field of view. 
For example, the lidar may arrange the array of transmitters and the corresponding array of receivers around the vertical axis such that light is transmitted and received in multiple directions around the 360° field of view simultaneously”, [0028], Additionally or alternatively, if the lidar device is rotated about an axis, there may be specific regions of interest where an increased range is desired (e.g., when the array of transmitters is oriented such that a region in front or to the sides of a vehicle is being illuminated, as opposed to a region behind a vehicle), [0030], control one or more of propulsion system 102, sensor system 104, control system 106, and peripherals 108, [0056], dynamically controlling various scanning parameters of the first lidar unit 204, [0108]), the at least one other sensor device including any one or combination of a vibrational sensor, a pressure sensor, a motion sensor, a radio detection and ranging (RADAR) sensor, an acoustic sensor, a magnetic sensor, an accelerometer, an electric sensor, or an optical sensor (Robinson et al., fused predicted existence (377-1, 377-2) of the feature for each of the at least two modes of each observation, col. 10, lines 10-40, multi-modal sensors (multispectral, RGB, video, thermal, potentially LiDAR and RADAR) can be used, col. 12, lines 45-61; Lohr et al., The mobile barricade can also be operated, steered and moved from the remote control center since it optionally has corresponding sensor systems and actuators, such as for example all-round video view or video monitoring with evaluations of events, persons, objects, movement, microphones, loudspeakers for generating signals and for reproducing speech, distant sensor systems, such as for example ultrasonic sensors and infrared sensors, radar, lidar or ladar, GPS, electromagnetic compass, smoke detectors for self-protection and for monitoring the surroundings, temperature sensors, headlights, infrared illuminators and high-resolution daylight cameras and thermal imaging cameras, [0081], The system includes HF and RF sensors, microphones, networked remote video cameras, etc., [0092]; Droz et al., lidar, [0030], For example, sensor unit 202 can include any combination of cameras, radars, lidars, range finders, inertial sensors, humidity sensors, and acoustic sensors, [0067]); the at least one controller being part of the detection system or a separate component of the detection system (Lohr et al., FIG. 17 shows an intelligent barricade, equipped with video monitoring technology, high-resolution, ultra-high-resolution etc. digital cameras, IP-supported, thermal imaging cameras, camera domes, headlights, infrared illuminators and microphones, pivotably mounted cameras [0090], Commands can also be sent and telemetry data received from the center, i.e. a control center as well as from further mobile command stations such as, for example, networked vehicles and/or container offices in which the home base together with a drone can be mounted and managed or kept in constant standby mode, which is added to the management system and the management can be managed centrally and/or managed in a decentralized fashion, [0101], Droz et al., “Additionally, the sensors of sensor unit 202 could be distributed in different locations and need not be collocated in a single location. Some possible sensor types and mounting locations include the two additional locations 216, 218. 
Furthermore, each sensor of sensor unit 202 can be configured to be moved or scanned independently of other sensors of sensor unit 202”, [0068]).
Regarding claim 8, Robinson et al. and Lohr et al. and Droz et al. disclose the detection system of claim 7. Robinson et al. and Lohr et al. and Droz et al. further indicate the image processing modules of the plural sensor devices are configured to generate movement data of the objects and the at least one controller is configured to coordinate scanning and image data generation performed by the plural sensor devices based on movement data (Lohr et al., FIG. 17 shows an intelligent barricade, equipped with video monitoring technology, high-resolution, ultra-high-resolution etc. digital cameras, IP-supported, thermal imaging cameras, camera domes, headlights, infrared illuminators and microphones. The cameras are day vision enabled and night vision enabled. Furthermore, the barricade can be equipped with lidar, ladar and radar systems, in order to localize objects, persons and their movements with meter precision and track them, The area in which, for example, movement is to be detected, can be configured by software and limited. As can also the free rotational radius of the pivotably mounted cameras [0090]; Droz et al., “It is understood that the regions of interest may be identified in a variety of ways in addition to or instead of using previous measurements (e.g., from auxiliary sensors of the vehicle 500). In some embodiments, control systems of the vehicle 500 may indicate the location of one or more regions of interest. For example, if the vehicle 500 is turning left or changing lanes, the regions of interest may be adjusted to accommodate such a maneuver (e.g., additional regions of interest may be allocated to the left side of the vehicle 500). Similarly, when a vehicle 500 is driving reverse instead of forward, the number of and/or angular range of regions of interest may change”, [0116]).
Regarding claim 9, Robinson et al. and Lohr et al. and Droz et al. disclose the detection system of claim 8. Droz et al. further indicate the at least one controller is configured to coordinate scanning and image data generation performed by the plural sensor devices based on movement data and tracking data of the first object (Sensor fusion algorithm 138 may include a Kalman filter, Bayesian network, or other algorithms that can process data from sensor system 104. In some embodiments, sensor fusion algorithm 138 may provide assessments based on incoming sensor data, such as evaluations of individual objects and/or features, evaluations of a particular situation, and/or evaluations of potential impacts within a given situation, [0049], navigation/pathing system 142 may use data from sensor fusion algorithm 138, GPS 122, and maps, among other sources to navigate vehicle 100, evaluate potential obstacles based on sensor data and cause systems of vehicle 100 to avoid or otherwise negotiate the potential obstacles, [0051], vehicle 500 may use the lidar device 520 to scan a surrounding environment to perform object detection and avoidance, As the actuator(s) rotate the light emitters and detectors, distances to different regions of the surrounding environment may be determined by emitting and detecting a series of light signals. Such distances may be amalgamated into a point cloud, [0111], “The pulse energy plan may incorporate one or more regions of interest in the environment surrounding the lidar device 520. For example, based on previous measurements (e.g., from auxiliary sensors of the vehicle 500), it may be determined that only specific regions of the scene surrounding the vehicle 500 include objects separated from the lidar device 520 by more than a threshold distance. Those specific regions may constitute identified regions of interest”, [0115]).
Regarding claim 10, Robinson et al. and Lohr et al. and Droz et al. disclose the detection system of claim 7. Robinson et al. and Droz et al. further indicate the at least one controller is configured to process data from the plural sensor devices and/or the at least one other sensor device using a sensor fusion technique, wherein the data from the plural sensor devices and/or the at least one other sensor device is raw data, processed data, or a combination thereof (Robinson et al., One of the biggest problems with multi-modality is spatial asynchrony within data sources. Neural network (NN) fusion can help with this by using temporally aligned (synchronously collected) data sharing a common field of view (FOV), col. 5, line 59 - col. 5, line 2, A hypothesis herein is that when used together, each sensor can feasibly improve object detection in a complementary manner, where signatures in one mode may augment low, confused, or absent signatures in other modes (Himmelsbach et al., 2008, “LIDAR-based 3D Object Perception,” Velodyne Lidar, p. 1-7.; Cho et al., 2014, “A multi-sensor fusion system for moving object detection and tracking in urban driving environments,” Proceedings of the 4 IEEE International Conference on Robotics and Automation (ICRA); Hong Kong, China. 31 May-7 Jun. 2014; pp. 1836-1843) affording continued drone detection irrespective of extenuating circumstances, col. 16, lines 35-65; Droz et al., sensor fusion, [0048]-[0051]).
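The sensor fusion technique cited above (e.g., a Kalman filter combining estimates from different sensors, Droz et al., [0048]-[0051]) can be illustrated by a one-dimensional variance-weighted measurement update; the sketch and its numbers are hypothetical.

```python
def fuse(estimate_a, var_a, estimate_b, var_b):
    """Variance-weighted fusion of two independent estimates of the same
    quantity (the measurement-update step of a one-dimensional Kalman filter)."""
    gain = var_a / (var_a + var_b)                 # weight given to the second estimate
    fused = estimate_a + gain * (estimate_b - estimate_a)
    fused_var = (1.0 - gain) * var_a
    return fused, fused_var

# Hypothetical values: LIDAR range 120.0 m (variance 0.5) fused with
# RADAR range 121.0 m (variance 2.0).
print(fuse(120.0, 0.5, 121.0, 2.0))                # (120.2, 0.4)
```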
Regarding claims 11 and 16, Robinson et al. and Lohr et al. and Droz et al. disclose the detection system of claims 7 and 14. Lohr et al. and Droz et al. further indicate the at least one controller is in communication with the plural sensor devices via one or more of a distributed network architecture or a centralized network architecture; and/or the at least one controller is in communication with at least one sensor device and at least one other sensor device via one or more of a distributed network architecture or a centralized network architecture (Lohr et al., a communication network can be set up between the individual obstacles, [0025], All the elements are supplied both with voltage and with communication channels, regardless of whether they are securely anchored or interlinked. The possibility of radio networking, either as a local network or through a public cell phone network, is redundantly present as a fallback level, [0075], The system includes HF and RF sensors, microphones, networked remote video cameras, etc. An HF sensor receives control commands and video feeds which are exchanged between a third-party drone and its remote control, [0092]; Droz et al., Wireless communication system 146 may wirelessly communicate with one or more devices directly or via a communication network, [0053], computing devices that may serve to control individual components or subsystems of vehicle 100 in a distributed fashion, [0055], FIG. 3 is a conceptual illustration of wireless communication between various computing systems related to an autonomous vehicle, according to example embodiments. In particular, wireless communication may occur between remote computing system 302 and vehicle 200 via network 304. Wireless communication may also occur between server computing system 306 and remote computing system 302, and between server computing system 306 and vehicle 200, [0074], The point cloud may be generated on-board by the lidar device 520 or another on-board computing device from the determined distances, [0111]).
Regarding claims 12 and 17, Robinson et al. and Lohr et al. and Droz et al. disclose the detection system of claims 7 and 14. Robinson et al. and Lohr et al. and Droz et al. further indicate the at least one controller and/or the plural sensor devices is configured to perform scanning and image data generation via one or more of a centralized data processing technique or a decentralized data processing technique; and/or the at least one controller, the at least one sensor device, and/or the at least one other sensor device is configured to perform scanning and image data generation via one or more of a centralized data processing technique or a decentralized data processing technique (Lohr et al., a communication network can be set up between the individual obstacles, [0025], All the elements are supplied both with voltage and with communication channels, regardless of whether they are securely anchored or interlinked. The possibility of radio networking, either as a local network or through a public cell phone network, is redundantly present as a fallback level, [0075], The system includes HF and RF sensors, microphones, networked remote video cameras, etc. An HF sensor receives control commands and video feeds which are exchanged between a third-party drone and its remote control, [0092]; Droz et al., Wireless communication system 146 may wirelessly communicate with one or more devices directly or via a communication network, [0053], computing devices that may serve to control individual components or subsystems of vehicle 100 in a distributed fashion, [0055], “FIG. 3 is a conceptual illustration of wireless communication between various computing systems related to an autonomous vehicle, according to example embodiments. In particular, wireless communication may occur between remote computing system 302 and vehicle 200 via network 304. Wireless communication may also occur between server computing system 306 and remote computing system 302, and between server computing system 306 and vehicle 200”, [0074], The point cloud may be generated on-board by the lidar device 520 or another on-board computing device from the determined distances. Additionally or alternatively, the point cloud may be generated using a separate computing device (e.g., a networked computing device, such as a server device) from the determined distances, [0111]) [server vs networked computing = centralized and decentralized options].
Regarding claim 14, Robinson et al. and Lohr et al. disclose the detection system of claim 13.
Lohr et al. further indicate the at least one controller is configured to coordinate scanning and image data generation performed by the at least one sensing assembly (Lohr et al., FIG. 17 shows an intelligent barricade, equipped with video monitoring technology, high-resolution, ultra-high-resolution etc. digital cameras, IP-supported, thermal imaging cameras, camera domes, headlights, infrared illuminators and microphones. The cameras are day vision enabled and night vision enabled. Furthermore, the barricade can be equipped with lidar, ladar and radar systems, in order to localize objects, persons and their movements with meter precision and track them, The area in which, for example, movement is to be detected, can be configured by software and limited. As can also the free rotational radius of the pivotably mounted cameras [0090]).
Robinson et al. and Lohr et al. do not explicitly disclose the at least one sensing assembly includes plural LIDAR sensor devices.
Droz et al. teach at least one sensing assembly includes plural LIDAR sensor devices (“In some embodiments, computer system 112 may make a determination about various objects based on data that is provided by systems other than the radio system. For example, vehicle 100 may have lasers or other optical sensors configured to sense objects in a field of view of the vehicle. Computer system 112 may use the outputs from the various sensors to determine information about objects in a field of view of the vehicle, and may determine distance and direction information to the various objects. Computer system 112 may also determine whether objects are desirable or undesirable based on the outputs from the various sensors”, [0062], “The example vehicle 200 includes a sensor unit 202, a first lidar unit 204, a second lidar unit 206, a first radar unit 208, a second radar unit 210, a first lidar/radar unit 212, a second lidar/radar unit 214, and two additional locations 216, 218 at which a radar unit, lidar unit, laser rangefinder unit, and/or other type of sensor or sensor(s) could be located on the vehicle 200. Each of the first lidar/radar unit 212 and the second lidar/radar unit 214 can take the form of a lidar unit, a radar unit, or both”, [0065], “For example, to apply a different respective refresh rate of the first lidar unit 204, in the first pointing direction (i.e., contour 404) relative to the second pointing direction (i.e., contour 406), the first lidar unit 204 can emit one light pulse in the first pointing direction for every complete rotation of the first lidar unit 204 about axis 232, and for every two complete rotations of the first lidar unit 204 about axis 232. By doing so, for instance, the first pointing direction can be assigned a higher refresh rate than the second pointing direction. As another example, to apply a different respective horizontal scanning resolution, the first lidar unit 204 can be configured to emit light pulses at a different pulse rate (e.g., number of pulses per second) when the first lidar unit 204 is oriented in the first pointing direction than a pulse rate applied when the first lidar unit 204 is oriented in the second pointing direction”, [0110], “Such a lidar device 520 may include an array of light emitters and a corresponding array of detectors, where the light emitters emit light signals toward an environment surrounding the lidar device 520 and the detectors detect reflections of the emitted light signals from objects in the environment surrounding the lidar device 520. Based on the time delay between the emission time and the detection time, a distance to an object in the environment may be determined. In order to identify the distance to multiple objects in the environment, the lidar device 520 may include one or more actuators that are configured to rotate the light emitters and detectors (e.g., in an azimuthal direction and/or elevation direction). 
For example, the actuators may azimuthally rotate the light emitters and detectors such that a 360° field of view is observed”, [0111], light emitters 522 alternate between a first pulse energy level 612 and a second pulse energy level 614 with respect to elevation angle according to the pulse energy plan, a continuum of pulse energy levels could also be used in the pulse energy plan, a continuum could allow for enhanced precision when it comes to tuning the range probed by the lidar device 520 for different regions of interest, [0124]); and the at least one controller is configured to coordinate scanning and image data generation performed by the at least one sensing assembly (It is understood that the regions of interest may be identified in a variety of ways in addition to or instead of using previous measurements (e.g., from auxiliary sensors of the vehicle 500). In some embodiments, control systems of the vehicle 500 may indicate the location of one or more regions of interest. For example, if the vehicle 500 is turning left or changing lanes, the regions of interest may be adjusted to accommodate such a maneuver (e.g., additional regions of interest may be allocated to the left side of the vehicle 500). Similarly, when a vehicle 500 is driving reverse instead of forward, the number of and/or angular range of regions of interest may change, [0116]).
Robinson et al. and Droz et al. are in the same art of detection using LIDAR (Robinson et al., col. 4, lines 41-58; Droz et al., abstract). The combination of Droz et al. with Robinson et al. and Lohr et al. will enable using plural LIDAR sensor devices. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the plural sensor devices of Droz et al. with the invention of Robinson et al. and Lohr et al. as this was known at the time of filing, the combination would have predictable results, and as Droz et al. indicate “The disclosure relates to a pulse energy plan for lidar devices based on areas of interest and thermal budgets. As a lidar device scans an environment (e.g., to generate a point cloud), the lidar device may generate excess heat (e.g., from inefficiencies in the light emitters of the lidar device). If too much excess heat is produced, such heat can have detrimental effects on the lidar device components. In order to prevent potential degradation to the lidar device, power provided to the light emitters in the lidar device may be allocated according to a pulse energy plan, thereby limiting the amount of excess heat produced. One method of allocating the amount of power provided to the light emitters is to identify regions of interest in the environment surrounding the lidar device and then provide greater power to the light emitters when the lidar device is scanning those regions of interest” [0004] and “In one aspect, a lidar device is provided. The lidar device includes a plurality of light emitters configured to emit light pulses into an environment of the lidar device in a plurality of different emission directions. The lidar device also includes circuitry configured to power the plurality of light emitters. Further, the lidar device includes a plurality of detectors. Each detector in the plurality of detectors is configured to detect reflections of light pulses emitted by a corresponding light emitter in the plurality of light emitters and received from the environment of the lidar device. Additionally, the lidar device includes a controller configured to (i) determine a pulse energy plan based on one or more regions of interest in the environment of the lidar device and a thermal budget. The pulse energy plan specifies a pulse energy level for each light pulse emitted by each light emitter in the plurality of light emitters and (ii) control the circuitry based on the pulse energy plan” ([0005]) thereby suggesting how areas that need to be imaged can be balanced by a proper energy plan when the inventions are combined.
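The pulse energy plan quoted from Droz et al. ([0004]-[0005]) allocates greater emitter power toward regions of interest subject to a thermal budget; the following sketch illustrates one such allocation for explanation only, with hypothetical energy values and budget.

```python
def pulse_energy_plan(directions, regions_of_interest, base_energy, roi_energy, thermal_budget):
    """Assign a pulse energy to each emission direction: directions covering a
    region of interest receive roi_energy, the rest receive base_energy; if the
    total exceeds the thermal budget, all energies are scaled down uniformly."""
    plan = {d: (roi_energy if d in regions_of_interest else base_energy)
            for d in directions}
    total = sum(plan.values())
    if total > thermal_budget:
        scale = thermal_budget / total
        plan = {d: e * scale for d, e in plan.items()}
    return plan

# Hypothetical example: eight azimuth bins, bins 0, 1 and 7 are regions of interest.
plan = pulse_energy_plan(range(8), {0, 1, 7}, base_energy=1.0, roi_energy=3.0, thermal_budget=10.0)
print(plan)
```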
Regarding claim 15, Robinson et al. and Lohr et al. and Droz et al. disclose the detection system of claim 14. Robinson et al. and Droz et al. further indicate the at least one controller is configured to process data from the plural LIDAR sensor devices and/or the at least one sensing device using a sensor fusion technique (Robinson et al., One of the biggest problems with multi-modality is spatial asynchrony within data sources. Neural network (NN) fusion can help with this by using temporally aligned (synchronously collected) data sharing a common field of view (FOV), col. 5, line 59 - col. 6, line 2, A hypothesis herein is that when used together, each sensor can feasibly improve object detection in a complementary manner, where signatures in one mode may augment low, confused, or absent signatures in other modes (Himmelsbach et al., 2008, “LIDAR-based 3D Object Perception,” Velodyne Lidar, p. 1-7.; Cho et al., 2014, “A multi-sensor fusion system for moving object detection and tracking in urban driving environments,” Proceedings of the 4 IEEE International Conference on Robotics and Automation (ICRA); Hong Kong, China. 31 May-7 Jun. 2014; pp. 1836-1843) affording continued drone detection irrespective of extenuating circumstances, col. 16, lines 35-65; Droz et al., sensor fusion, [0048]-[0051]).
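As a non-limiting illustration of the complementary fusion characterized above, and not as code taken from Robinson et al. or Droz et al., a simple confidence-weighted combination over spatially aligned detections is sketched below; the modality names, scores, and equal default weights are assumptions made only for the example.

# Illustrative sketch only (hypothetical weighting); not code from the references.
# Combines per-modality detection confidences for the same candidate region so
# that a weak signature in one modality can be reinforced by stronger signatures
# in the other modalities.
def fuse_confidences(per_modality_scores, weights=None):
    """per_modality_scores: dict such as {'video': 0.3, 'thermal': 0.8, 'lidar': 0.7}."""
    if weights is None:
        weights = {m: 1.0 for m in per_modality_scores}
    total_weight = sum(weights[m] for m in per_modality_scores)
    fused = sum(per_modality_scores[m] * weights[m] for m in per_modality_scores)
    return fused / total_weight

# Example: thermal and lidar channels lift a low-confidence video detection.
score = fuse_confidences({'video': 0.30, 'thermal': 0.80, 'lidar': 0.70})
is_detection = score > 0.5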
Claim(s) 18-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Robinson et al. (US 12374073 B1) and Lohr et al. (US 20210222383 A1) as applied to claim 13 above, further in view of Ray (US 20220051576 A1).
Regarding claim 18, Robinson et al. and Lohr et al. disclose the detection system of claim 13. Robinson et al. and Lohr et al. do not explicitly disclose the at least one controller is configured to process movement data from the plural LIDAR sensor devices to: identify a formation; and predict, based at least in part on an identified formation, behavior of at least one object associated with the swarm, a subset of objects associated with a swarm, and/or all of the objects associated with a swarm.
Ray teaches at least one controller is configured to process movement data from the plural LIDAR sensor devices to: identify a formation; and predict, based at least in part on an identified formation, behavior of at least one object associated with the swarm, a subset of objects associated with a swarm, and/or all of the objects associated with a swarm ((a) processing the frames to determine whether the pixel data is indicative of the presence of a swarm of flying objects in the volume of space or not; (b) determining a current angular velocity and a current angular position of the swarm based on the pixel data in response to a determination in processing operation (a) that a swarm is present; (c) extrapolating a future angular position of the swarm based on the current angular velocity and current angular position of the swarm, [0008], extrapolate the swarm angular position a given number of seconds s into the future (time t+s) assuming the swarm's current angular position g and current angular velocity v, [0040], In accordance with one proposed implementation, the machine vision processing unit 22 comprises a computer system that executes software configured to process successive video frames. More specifically, the current frame is gridded into an array of subarrays which are roughly the angular size of a typical swarm. Each subarray extracted from array and then processed in sequence using the swarm angle tracking algorithm to detect when a swarm has appeared in the image and then determine the current angular position and angular velocity of that swarm. The machine vision processing unit 22 is further configured to extrapolate the swarm angular position at a future time based on the swarm's current angular position and angular velocity and then determine whether this extrapolated angular position is within an angular collision region, [0056]).
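The extrapolation Ray describes amounts to a constant-angular-velocity prediction followed by a collision-region test; the following sketch, with hypothetical names and values, illustrates that step only and is not code taken from the reference.

# Illustrative sketch only (hypothetical names); not code from Ray.
# Extrapolates the swarm's angular position s seconds ahead assuming its current
# angular velocity, then checks whether the predicted position falls inside an
# angular collision region.
def extrapolate_swarm_angle(current_angle_deg, angular_velocity_deg_s, s):
    return current_angle_deg + angular_velocity_deg_s * s

def in_collision_region(angle_deg, region_deg):
    lo, hi = region_deg
    return lo <= angle_deg <= hi

# Example: swarm at 10 degrees drifting at 2 deg/s, evaluated 5 s into the future.
predicted = extrapolate_swarm_angle(10.0, 2.0, 5.0)   # 20 degrees
alert = in_collision_region(predicted, (15.0, 25.0))  # True -> cue avoidance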
Robinson et al. and Lohr et al. and Ray are in the same art of drone/swarm detection (Robinson et al., col. 5, line 58 - col. 6, line 2; Lohr et al., [0092]; Ray, abstract). The combination of Ray with Robinson et al. and Lohr et al. will enable predicting, based at least in part on an identified formation, behavior of at least one object associated with the swarm. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the predicting of Ray with the invention of Robinson et al. and Lohr et al. as this was known at the time of filing, the combination would have predictable results, and as Ray indicates “The camera-based collision avoidance system is configured to detect and angularly track small objects flying in a swarm at sufficiently far ranges to cue or alert the flight control system onboard an autonomous or piloted aircraft to avoid the swarm. A swarm angle tracking algorithm is used to recognize swarms of flying objects that move in unison in a certain direction by detecting a consistent characteristic of the pixel values in captured images which is indicative of swarm motion” (abstract) thereby improving safety by better avoiding collisions.
Regarding claim 19, Robinson et al. and Lohr et al. and Ray disclose the detection system of claim 18. Robinson et al. further indicate the at least one controller processes the movement data via one or more of a multivariant analysis, a neural network analysis, or a Bayesian network analysis (controller, col. 10, lines 10-40, “Motion Analysis is a field of Computer Vision used for processing many sequential frames of data to highlight motion between the frames. The tasks of motion analysis and capture include a) initialization; b) tracking; and c) recognition (Moeslund, et al., (2001). “A Survey of Computer Vision-Based Human Motion Capture,” ELSEVIER, 81 (3), 231-268. doi: https://doi.org/10.1006/cviu.2000.0897). Briefly, initialization describes the first exposure to data, tracking the prediction of motion, and recognition the final motion classification. Traditional object detection methods, such as YOLO (Redmon, et al., (2016). “You Only Look Once: Unified, Real-Time Object Detection,” Retrieved from https://arxiv.org/abs/1506.02640) and Faster RCNN (Ren, et al., (2016). “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” Retrieved from https://arxiv.org/abs/1506.01497), developed exclusively for object detection, do not consider previous data when making real-time decisions”, col. 15, line 59 - col. 16, line 10).
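As a non-limiting illustration of motion analysis that, unlike single-frame detection, carries previous data into the current decision, an assumed alpha-beta style tracker is sketched below; it is not code from Robinson et al., and the gains and measurements shown are arbitrary.

# Illustrative sketch only; an assumed alpha-beta style tracker, not code from
# Robinson et al. Each new measurement updates a position/velocity state, so the
# current decision depends on previous frames, unlike single-frame YOLO or
# Faster R-CNN style detection.
def alpha_beta_track(measurements, dt=1.0, alpha=0.85, beta=0.005):
    x, v = measurements[0], 0.0          # initialization
    track = [x]
    for z in measurements[1:]:
        x_pred = x + v * dt              # tracking: predict from the previous state
        residual = z - x_pred
        x = x_pred + alpha * residual    # correct with the new measurement
        v = v + (beta / dt) * residual
        track.append(x)
    return track                         # smoothed positions over the sequence

# Example: a noisy, steadily moving target observed across five frames.
smoothed = alpha_beta_track([0.0, 1.2, 1.9, 3.1, 4.0])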
Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Robinson et al. (US 12374073 B1) in view of Lohr et al. (US 20210222383 A1) in view of Alzahrani (US 20220324572 A1) in view of Droz et al. (US 20210173047 A1).
Regarding claim 20, Robinson et al. disclose swarm detection and countermeasure system, comprising: at least one controller; at least one sensing assembly including: at least one sensor device configured to scan an area to detect a swarm of objects and transmit a swarm detection signal to the at least one controller (detect and track a drone and/or drone swarms flying over critical areas for non-limiting example, col. 5, line 58 - col. 6, line 2, MatrixSpace's 360 LiDAR tracking system to provide motion analysis and tracking with potential to later use the tracks to refine and improve object detection classification, col. 16, lines 22-33); plural light detection and ranging (LIDAR) sensor devices, at least one LIDAR sensor device configured to receive a control signal from the at least one controller to direct an optical pulse at a swarm based on a swarm detection signal (MatrixSpace's 360 LiDAR tracking system to provide motion analysis and tracking with potential to later use the tracks to refine and improve object detection classification, col. 16, lines 22-33), wherein the at least one LIDAR sensor device is configured to: scan a swarm to generate image data of a first object associated with a swarm (While an example embodiment disclosed herein may employ a modality, such as a video color (VC), thermal, or multispectral (e.g., 450 nm, 550 nm, 650 nm, 750 nm, 850 nm, and 950 nm) modality, it should be understood that such an example embodiment is not limited to the number of modalities or the types of modalities employed. For non-limiting example, a depth, laser identification detection and ranging (LiDAR), radio detection and ranging (RADAR), or another modality may be employed, col. 4, lines 41-58, detect and track a drone and/or drone swarms flying over critical areas for non-limiting example, col. 5, line 58 - col. 6, line 2); detect presence of a first object (data-driven defeat sUAS (DS2) may be used to detect and track a drone and/or drone swarms flying over critical areas for non-limiting example, col. 5, line 58 - col. 6, line 2, detecting and tracking SUAS, col. 6, lines 15-25); detect a feature of a first object (training the multi-modal system 112 or another multi-modal system to detect and track features in scenes, such as the feature 102 in the scene 104 that may be included in the training data 124, col. 6, line 60 - col. 7, line 10, three-stage detection, col. 7, lines 35-53, information provided by the other modalities regarding the same object, col. 7, lines 54-65, generate the respective bounded region for the potential regions identified, col. 11, lines 10-25, matches feature maps from different modalities with similar sizes, col. 13, lines 30-40, In the video color image, there is a flying object, yet the colors do not provide enough information to exactly classify the object. In the corresponding thermal image, thermal contrast is only distinguishable from background noise if it is already known that there is probability for an object to be at that exact location—this can be learned from the video color and vice versa such that combined information reinforces object detection, col. 13, lines 39-55, Motion Analysis is a field of Computer Vision used for processing many sequential frames of data to highlight motion between the frames. The tasks of motion analysis and capture include a) initialization; b) tracking; and c) recognition, col. 15, line 59 - col. 
16, line 10, Feature extraction is performed on each modality's input image using the modality specific FPN backbone, col. 18, lines 50-60, detect the presence of hostile or unauthorized UAS, col. 29, lines 28-29); characterize, using image processing, a feature of a first object (Region Proposal Networks RPNs (RPNs) not only improves model performance by optimizing the way information is shared to make a final classification, col. 7, lines 54-65, Features are recommended to be efficient, robust and physically interpretable so as to obtain a machine processable data representation containing the key properties of the target. Discrimination from similar objects (e.g. birds, “small birds′, ‘big birds’, balloons etc.) is facilitated with data from known models, such as available via ImageNet, and computer-implemented methods can be further developed to discern different types of motion, col. 11, line 60 - col. 12, line 15, “Techniques from motion detection should be paired with such Deep Learning methods to allow approximations of complex functions (Chalapathy and Chawla, (2019). “Deep Learning for Anomaly Detection: A Survey,” ArXiv.org, 23 January. Retrieved from arxiv.org/abs/1901.03407). To evaluate motion/tracking, an example embodiment uses sequential data. Motion analysis techniques, such as background subtracted images disclosed below with regard to FIG. 5—or stacked images, can then test the model's ability to track objects that become occluded and to distinguish between types of moving objects, i.e., a drone or background motion”, col. 16, lines 10-21, The weights discovered in the ‘pretrained’ trials are taken from the best individual modality runs, disclosed in the table 1700 of FIG. 17, which incorporate ‘imageNet’ pretrained weights that are adept at detecting and classifying images of drones within their respective modality space., col. 23, line 55 - col. 24, line 5, fusion model is initialized with a backbone that accurately detects key features of drones, col. 25, lines 8 - 26, feature map, produce a proposal 2154-4 classifying a type 2164-4 of drone in the FOV, col. 28, lines 12-20); and based on the characterization of the feature, track a first object or scan a swarm to generate image data of a second object associated with a swarm (As a countermeasure, an example embodiment of a multi-modal, data-driven defeat sUAS (DS2) may be used to detect and track a drone and/or drone swarms flying over critical areas for non-limiting example, col. 5, line 58 - col. 6, line 2, tracking targets in real-time, col. 14, lines 45-57); wherein the at least one controller is configured to process movement data from the plural LIDAR sensor devices to: identify a formation (The speed and varying shapes of such drones makes discovery of a sUAS a complex and difficult goal, col. 6, lines 3-15, discern different types of motion, col. 11, line 60 - col. 12, line 15).
Robinson et al. do not make explicit plural light detection and ranging (LIDAR) sensor devices, at least one LIDAR sensor device configured to receive a control signal from the at least one controller to direct an optical pulse at a swarm based on a swarm detection signal; based on the characterization of the feature, track a first object or scan a swarm to generate image data of a second object associated with a swarm, and predict behavior of at least one object, a subset of objects associated with a swarm, and/or all of the objects associated with a swarm; wherein the at least one controller is configured, via an automated reasoning technique, to develop a countermeasure that will disrupt a formation and/or a predicted behavior.
Lohr et al. teach at least one controller; at least one sensing assembly including: at least one sensor device configured to scan an area to detect a swarm of objects and transmit a swarm detection signal to the at least one controller (video view or video monitoring, sensors, radar, lidar or ladar, [0081], FIG. 17 shows an intelligent barricade, equipped with video monitoring technology, high-resolution, ultra-high-resolution etc. digital cameras, IP-supported, thermal imaging cameras, camera domes, headlights, infrared illuminators and microphones. The cameras are day vision enabled and night vision enabled. Furthermore, the barricade can be equipped with lidar, ladar and radar systems, in order to localize objects, persons and their movements with meter precision and track them, [0090], command-issuing remote control system can be triangulated in the neural network, The system can, when required, automatically orient cameras toward the drone, [0092]); plural light detection and ranging (LIDAR) sensor devices, at least one LIDAR sensor device configured to receive a control signal from the at least one controller to direct an optical pulse at a swarm based on a swarm detection signal, wherein the at least one LIDAR sensor device is configured to: scan a swarm to generate image data of a first object associated with a swarm (video view or video monitoring, sensors, radar, lidar or ladar, [0081], FIG. 17 shows an intelligent barricade, equipped with video monitoring technology, high-resolution, ultra-high-resolution etc. digital cameras, IP-supported, thermal imaging cameras, camera domes, headlights, infrared illuminators and microphones. The cameras are day vision enabled and night vision enabled. Furthermore, the barricade can be equipped with lidar, ladar and radar systems, in order to localize objects, persons and their movements with meter precision and track them, [0090]); detect presence of a first object (area in which, for example, movement is to be detected, alarm “drone detected”, [0090], When a drone is detected, a popup window in the management system will present all the further available steps, [0092]); detect a feature of a first object (localize objects, persons and their movements, [0090]); characterize, using image processing, a feature of a first object (By using databases which contain image material and sound material as well as characteristic RF and HF signals as comparison variables it is possible, for example, to detect drones automatically, identify them and track them in real time, [0092]); and based on the characterization of the feature, track a first object or scan a swarm to generate image data of a second object associated with a swarm (By using a plurality of HF sensors, also with 2D and/or 3D antennas or directional antennas, both third-party drones and the persons with the command-issuing remote control system can be triangulated in the neural network, The system can, when required, automatically orient cameras toward the drone, perform evaluation and execute further measures and suitable countermeasures, such as for example bringing the third-party drone into the failsafe mode, which is also possible manually. 
By using databases which contain image material and sound material as well as characteristic RF and HF signals as comparison variables it is possible, for example, to detect drones automatically, identify them and track them in real time, detect, track and represent multiple objects such as, for example, flying cars, air taxis and microdrones with swarm intelligence, [0092]); wherein the at least one controller is configured to process movement data from the plural LIDAR sensor devices to: identify a formation (barricade can be equipped with lidar, ladar and radar systems, in order to localize objects, persons and their movements with meter precision and track them, [0090]); wherein the at least one controller is configured, via an automated reasoning technique, to develop a countermeasure that will disrupt a formation and/or a predicted behavior (“The received data, such as for example coordinates, make, type, flying direction, etc. can be compared with further sensor data, such as for example radar data and infrared sensors, in the management system, combined and provided in a visualized fashion. The system can, when required, automatically orient cameras toward the drone, perform evaluation and execute further measures and suitable countermeasures, such as for example bringing the third-party drone into the failsafe mode, which is also possible manually. The use of a jammer is therefore automatically or manually possible. A limited irradiation angle of the jammer is extended by a plurality of these devices which are installed in a circular shape, for example on a mast or telescopic mast, so that 360° coverage can be implemented. The devices can where necessary be activated in a directional fashion or activated as a grouping. When a drone is detected, a popup window in the management system will present all the further available steps. It is therefore automatically possible to start an interception drone which is already on continuous standby itself, in order to identify the third-party drone”, [0092]).
Robinson et al. and Lohr et al. are in the same art of drone/swarm detection (Robinson et al., col. 5, line 58 - col. 6, line 2; Lohr et al., [0092]). The combination of Lohr et al. with Robinson et al. will enable initiating based on the characterization of a feature, tracking a first object. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the responsive tracking of Lohr et al. with the invention of Robinson et al. as this was known at the time of filing, the combination would have predictable results, and as Lohr et al. indicate, “By using these devices it is possible to carry out geofencing, that is to say to detect third-party drones which penetrate a protected area over water, land, or in the air, and also to triangulate the radio remote control of the controlling person, track it and disrupt it. By using databases which contain image material and sound material as well as characteristic RF and HF signals as comparison variables it is possible, for example, to detect drones automatically, identify them and track them in real time. Further data from the database makes available, through constant agreement with the drone manufacturers, continuously updated codes and commands for taking over drones, controlling and landing them, activating failsafe functions, etc. These devices are false alarm-proof, even if further HF signals occur. A plurality of drones, irrespective of whether they are of the same or different types or models, can also be safely detected, taken over and controlled. These devices can also be implemented in the neural network of the intelligent barricades. Their use, also geodata-supported, at strategic or necessary locations results in an uninterrupted detection screen with a large range and high level of precision. As described, the system can also be incorporated into the management system. Further measures and countermeasures can be initiated or executed manually or automatically, as described. The system can also detect, track and represent multiple objects such as, for example, flying cars, air taxis and microdrones with swarm intelligence and can prepare and/or execute autonomous and/or manually started countermeasures” ([0092]) thereby providing a safety motivation and commercial application in policing and military applications.
Robinson et al. and Lohr et al. do not disclose plural light detection and ranging (LIDAR) sensor devices, at least one LIDAR sensor device configured to receive a control signal from the at least one controller to direct an optical pulse at a swarm based on a swarm detection signal, and predict behavior of at least one object, a subset of objects associated with a swarm, and/or all of the objects associated with a swarm.
Alzahrani teaches at least one sensor device configured to scan an area to detect a swarm of objects and transmit a swarm detection signal to the at least one controller (deploying an example Family of Unmanned Aircraft Systems (FoUAS) platform in C-UAS operations against a swarm of LSS UAS threats, detection of approaching swarm of UAS threats 140 a-b-c-d in real-time via the use of onboard low-cost 3D airborne radar 107e, EO/IR camera 107c, stereo vision sensors 107b, 3D LiDAR 107d, [0086]); plural light detection and ranging (LIDAR) sensor devices, at least one LIDAR sensor device configured to receive a control signal from the at least one controller to direct an optical pulse at a swarm based on a swarm detection signal (“One aspect of the present disclosure relates to an apparatus of a low-cost, attritable, and agile Small Tactical Unmanned Aerial System (STUAS) mothership capable of carrying and launching payloads while being autonomously guided via GPS with way-point navigation or semi-autonomously operated via a single or multiple Pilots in Command (PICs) with real-time command and control (C2) link and Satellite Communication (SATCOM). In the terminal phase of the flight, the STUAS mothership could be autonomously guided via multiple on-board internal payloads including Electro-Optical/Infra-Red (EO/IR) camera or seekers, stereo vision sensors, 2D and 3D Light Detection and Ranging (LiDAR), low-cost and lightweight 3D airborne radar”, [0017], 4A illustrates an example apparatus of a Family of Unmanned Aircraft Systems (FoUAS) platform that includes an example attritable Small Tactical Unmanned Aircraft System (STUAS) mothership equipped with internal payloads including EO/IR camera, stereo vision sensors, 2D LiDAR, low-cost 3D airborne radar, and lightweight SATCOM, [0048]), wherein the at least one LIDAR sensor device is configured to: scan a swarm to generate image data of a first object associated with a swarm (camera, LIDAR, images, [0017], swarm, [0014], [0086]); detect presence of a first object (detection, tracking, and classification of LSS UASs, [0022], [0080], Light Detection and Ranging (LiDAR) 202c used for detecting and tracking the heat signature of approaching LSS UAS threats autonomously, [0083]); detect a feature of a first object (localize, and track approaching LSS UAS threats, [0080], Light Detection and Ranging (LiDAR) 202c used for detecting and tracking the heat signature of approaching LSS UAS threats autonomously, [0083], localizing swarm of UAS threats 140 a-b-c-d, [0086]); characterize, using image processing, a feature of a first object (2D LiDAR; used for terminal homing guidance, searching, identifying, acquiring, attacking, and destroying multiple targets autonomously, [0018], (LiDAR) 202c used for identifying multiple targets autonomously, [0065]); and based on the characterization of the feature, track a first object or scan a swarm to generate image data of a second object associated with a swarm (localizing and tracking the swarm of UAS threats 140 a-b-c-d, [0086]); wherein the at least one controller is configured to process movement data from the plural LIDAR sensor devices to: identify a formation (localizing and tracking approaching LSS UAS threat, [0084], airborne the detection of approaching swarm of UAS threats 140 a-b-c-d, [0086]); and predict behavior of at least one object, a subset of objects associated with a swarm, and/or all of the objects associated with a swarm (expected flight path or trajectory of approaching multiple UAS 
threats 140 a-b-c-d, [0086], expected flight path or trajectory of approaching swarm of UAS threats, [0087]); wherein the at least one controller is configured, via an automated reasoning technique, to develop a countermeasure that will disrupt a formation and/or a predicted behavior (“In an operational scenario against a single UAS threat, the FoUAS platform may be operated as a single agent whereas a STUAS mothership equipped with EO/IR camera, stereo vision sensors, 2D LiDAR, and 3D airborne radar payloads; may be launched by a mobile or non-mobile GCS via a single or multiple PICs simultaneously with real-time C2 link or SATCOM, and flown directly into the expected flight path or trajectory of approaching UAS threat after detecting it by the ground-based air surveillance radars and the on-board 3D airborne radar”, [0022], at least two STUAS 101a-b being guided via multiple PICs 121a-b from a mobile Ground Control Station (GCS) 108b in real-time within a range up to 250 km or Beyond-Line-of-sight (BLOS) using SATCOM link depending on terrain; flown directly into the expected flight path or trajectory of approaching multiple UAS threats 140 a-b-c-d after detecting them, intercept the swarm of UAS threats 140 a-b-c-d and destruct them mid-air effectively, [0086], second STUAS mothership is directly flown into the expected flight path or trajectory of approaching swarm of UAS threats, LMs of second STUAS mothership flying autonomously to detect and intercept swarm of UAS threats, [0087]).
Robinson et al. and Lohr et al. and Alzahrani are in the same art of drone/swarm detection (Robinson et al., col. 5, line 58 - col. 6, line 2; Lohr et al., [0092]; Alzahrani, [0086]). The combination of Alzahrani with Robinson et al. and Lohr et al. will enable predicting behavior of at least one object, a subset of objects associated with a swarm, and/or all of the objects associated with a swarm. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the predicting of Alzahrani with the invention of Robinson et al. and Lohr et al. as this was known at the time of filing, the combination would have predictable results, and as Alzahrani indicates “The fourth phase may include the guidance of four LMs 102 a-b-c-d by onboard companion computer; having built-in Reinforcement Learning (RL) and Machine Vision (MV) algorithms processing in real-time, then rely on their onboard EO/IR seekers, stereo vision sensors, and 2D LiDAR sensors to intercept the swarm of UAS threats 140 a-b-c-d and destruct them mid-air effectively” ([0086]) thereby demonstrating a defense benefit to the combination of inventions.
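As a non-limiting illustration of flying an interceptor into the expected flight path, the prediction step can be sketched as below under an assumed constant-velocity model; the function name, coordinates, and timing are hypothetical and the sketch is not drawn from Alzahrani.

# Illustrative sketch only (hypothetical names and values); not code from Alzahrani.
# Predicts the swarm's position t seconds ahead under a constant-velocity assumption
# and returns it as a waypoint toward which the intercepting platform is flown.
def predicted_intercept_point(swarm_position, swarm_velocity, t):
    """Positions and velocities are (x, y, z) tuples in consistent units."""
    return tuple(p + v * t for p, v in zip(swarm_position, swarm_velocity))

# Example: a swarm 2 km north closing at 20 m/s; intercept planned 30 s out.
waypoint = predicted_intercept_point((0.0, 2000.0, 150.0),
                                     (0.0, -20.0, 0.0), 30.0)
# -> (0.0, 1400.0, 150.0), the point toward which the interceptor is directed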
Robinson et al. and Lohr et al. and Alzahrani do not make explicit plural light detection and ranging (LIDAR) sensor devices, at least one LIDAR sensor device configured to receive a control signal from the at least one controller to direct an optical pulse at a swarm based on a swarm detection signal.
Droz et al. teach plural light detection and ranging (LIDAR) sensor devices, at least one LIDAR sensor device configured to receive a control signal from the at least one controller to direct an optical pulse at a swarm based on a swarm detection signal (“In some embodiments, computer system 112 may make a determination about various objects based on data that is provided by systems other than the radio system. For example, vehicle 100 may have lasers or other optical sensors configured to sense objects in a field of view of the vehicle. Computer system 112 may use the outputs from the various sensors to determine information about objects in a field of view of the vehicle, and may determine distance and direction information to the various objects. Computer system 112 may also determine whether objects are desirable or undesirable based on the outputs from the various sensors”, [0062], “The example vehicle 200 includes a sensor unit 202, a first lidar unit 204, a second lidar unit 206, a first radar unit 208, a second radar unit 210, a first lidar/radar unit 212, a second lidar/radar unit 214, and two additional locations 216, 218 at which a radar unit, lidar unit, laser rangefinder unit, and/or other type of sensor or sensor(s) could be located on the vehicle 200. Each of the first lidar/radar unit 212 and the second lidar/radar unit 214 can take the form of a lidar unit, a radar unit, or both”, [0065], “For example, to apply a different respective refresh rate of the first lidar unit 204, in the first pointing direction (i.e., contour 404) relative to the second pointing direction (i.e., contour 406), the first lidar unit 204 can emit one light pulse in the first pointing direction for every complete rotation of the first lidar unit 204 about axis 232, and for every two complete rotations of the first lidar unit 204 about axis 232. By doing so, for instance, the first pointing direction can be assigned a higher refresh rate than the second pointing direction. As another example, to apply a different respective horizontal scanning resolution, the first lidar unit 204 can be configured to emit light pulses at a different pulse rate (e.g., number of pulses per second) when the first lidar unit 204 is oriented in the first pointing direction than a pulse rate applied when the first lidar unit 204 is oriented in the second pointing direction”, [0110], “Such a lidar device 520 may include an array of light emitters and a corresponding array of detectors, where the light emitters emit light signals toward an environment surrounding the lidar device 520 and the detectors detect reflections of the emitted light signals from objects in the environment surrounding the lidar device 520. Based on the time delay between the emission time and the detection time, a distance to an object in the environment may be determined. In order to identify the distance to multiple objects in the environment, the lidar device 520 may include one or more actuators that are configured to rotate the light emitters and detectors (e.g., in an azimuthal direction and/or elevation direction). 
For example, the actuators may azimuthally rotate the light emitters and detectors such that a 360° field of view is observed”, [0111], light emitters 522 alternate between a first pulse energy level 612 and a second pulse energy level 614 with respect to elevation angle according to the pulse energy plan, a continuum of pulse energy levels could also be used in the pulse energy plan, a continuum could allow for enhanced precision when it comes to tuning the range probed by the lidar device 520 for different regions of interest, [0124]); and based on the characterization of the feature, track a first object or scan a swarm to generate image data of a second object associated with a swarm (“Such a lidar device 520 may include an array of light emitters and a corresponding array of detectors, where the light emitters emit light signals toward an environment surrounding the lidar device 520 and the detectors detect reflections of the emitted light signals from objects in the environment surrounding the lidar device 520. Based on the time delay between the emission time and the detection time, a distance to an object in the environment may be determined. In order to identify the distance to multiple objects in the environment, the lidar device 520 may include one or more actuators that are configured to rotate the light emitters and detectors (e.g., in an azimuthal direction and/or elevation direction). For example, the actuators may azimuthally rotate the light emitters and detectors such that a 360° field of view is observed”, [0111]).
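As a non-limiting illustration of the per-direction refresh behavior quoted above from Droz et al., a simple firing schedule is sketched below; the every-other-rotation rule and the names are assumptions made only for the example and are not the reference's implementation.

# Illustrative sketch only (hypothetical names); not code from Droz et al.
# Emits a pulse toward a high-priority pointing direction on every rotation and
# toward a low-priority direction only on every second rotation, giving the first
# direction an effectively higher refresh rate.
def firing_schedule(num_rotations, high_priority_dirs, low_priority_dirs):
    schedule = []
    for rotation in range(num_rotations):
        fire = list(high_priority_dirs)        # refreshed every rotation
        if rotation % 2 == 0:
            fire.extend(low_priority_dirs)     # refreshed every other rotation
        schedule.append(sorted(fire))
    return schedule

# Example: the 0-degree (forward) direction refreshed twice as often as 180 degrees.
plan = firing_schedule(4, high_priority_dirs=[0], low_priority_dirs=[180])
# -> [[0, 180], [0], [0, 180], [0]]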
Robinson et al. and Droz et al. are in the same art of detection using LIDAR (Robinson et al., col. 4, lines 41-58; Droz et al., abstract). The combination of Droz et al. with Robinson et al. and Lohr et al. and Alzahrani will enable using plural LIDAR sensor devices. It would have been obvious at the time of filing to one of ordinary skill in the art to combine the plural sensor devices of Droz et al. with the invention of Robinson et al. and Lohr et al. and Alzahrani as this was known at the time of filing, the combination would have predictable results, and as Droz et al. indicate “The disclosure relates to a pulse energy plan for lidar devices based on areas of interest and thermal budgets. As a lidar device scans an environment (e.g., to generate a point cloud), the lidar device may generate excess heat (e.g., from inefficiencies in the light emitters of the lidar device). If too much excess heat is produced, such heat can have detrimental effects on the lidar device components. In order to prevent potential degradation to the lidar device, power provided to the light emitters in the lidar device may be allocated according to a pulse energy plan, thereby limiting the amount of excess heat produced. One method of allocating the amount of power provided to the light emitters is to identify regions of interest in the environment surrounding the lidar device and then provide greater power to the light emitters when the lidar device is scanning those regions of interest” ([0004]) and “In one aspect, a lidar device is provided. The lidar device includes a plurality of light emitters configured to emit light pulses into an environment of the lidar device in a plurality of different emission directions. The lidar device also includes circuitry configured to power the plurality of light emitters. Further, the lidar device includes a plurality of detectors. Each detector in the plurality of detectors is configured to detect reflections of light pulses emitted by a corresponding light emitter in the plurality of light emitters and received from the environment of the lidar device. Additionally, the lidar device includes a controller configured to (i) determine a pulse energy plan based on one or more regions of interest in the environment of the lidar device and a thermal budget. The pulse energy plan specifies a pulse energy level for each light pulse emitted by each light emitter in the plurality of light emitters and (ii) control the circuitry based on the pulse energy plan” ([0005]) thereby suggesting how areas that need to be imaged can be balanced by a proper energy plan when the inventions are combined.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE M ENTEZARI HAUSMANN whose telephone number is (571)270-5084. The examiner can normally be reached 10-7 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent M Rudolph can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHELLE M ENTEZARI HAUSMANN/Primary Examiner, Art Unit 2671