Prosecution Insights
Last updated: April 19, 2026
Application No. 19/112,138

TECHNOLOGY CONFIGURED TO ENABLE CAPTURE AND/OR IDENTIFICATION OF INSECTS AND OTHER CREATURES

Non-Final OA §102
Filed: Mar 14, 2025
Examiner: YANG, WEI WEN
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: Jasgo R&D Pty Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 82% (above average; 539 granted / 657 resolved; +20.0% vs TC avg)
Interview Lift: +10.9% (moderate lift on resolved cases with an interview)
Typical Timeline: 2y 8m average prosecution; 34 applications currently pending
Career History: 691 total applications across all art units

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§103: 72.5% (+32.5% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 657 resolved cases

Office Action

§102
DETAILED ACTION

In response to the Restriction Requirement, Applicant, on 12/19/2025, elected without traverse Group I (claims 1-19) for examination; the non-elected claims in Group II (claims 20-22) are withdrawn from consideration.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Patch (US 2022/0361471 A1, filed May 11, 2022).

Re Claim 1, Patch discloses a trap device configured to facilitate identification and capture of insects (see Patch: e. g., Fig. 1, --An intelligent insect trap and identification system is disclosed. The intelligent insect trap and identification system can include an insect imaging chamber and identification system.--, in abstract), the device including: a trap assembly, the trap assembly (see Patch: e. g., Fig. 1, --The intelligent insect trap and identification system can include an insect imaging chamber and identification system. The chamber can include a first cell for accepting insects, a second cell, a first reflector in the second cell, and a first imaging device in the second cell for recording one or more first visual images of the one or more insects in the first cell. Based on the image, the insect imaging chamber can detect and identify the insects.--, in abstract) including: (i) one or more passageways through which an insect is able to crawl, each aperture having an external opening and an internal opening (see Patch: e. g., Fig.
1, -- the insect guide tube 108 can include two openings 110, 112, an elongated tube 114, and/or a trapdoor 116. In some scenarios, a first opening 110 of the insect guide tube 108 can be close to the chamber opening 106….. a second opening 112 of the insect guide tube 108 can be an exit or and entrance of an insect. In some examples, a clear lid 118 can be kept on top of the second opening 112 of the imaging insect guide tube 108 to incentive an insect to leave the imaging chamber 100--; in [0050]-[0052]); (ii) a cavity into which the internal openings feed (see Patch: e. g., Fig. 1, -- the insect guide tube 108 can include two openings 110, 112, an elongated tube 114, and/or a trapdoor 116. In some scenarios, a first opening 110 of the insect guide tube 108 can be close to the chamber opening 106….. a second opening 112 of the insect guide tube 108 can be an exit or and entrance of an insect. In some examples, a clear lid 118 can be kept on top of the second opening 112 of the imaging insect guide tube 108 to incentive an insect to leave the imaging chamber 100--; in [0050]-[0052]); and (iii) a pitfall trap arrangement within the cavity, such that an insect that crawls through one of the passageways and egresses through the internal opening of that passageway is transported into a capture zone of the pitfall trap arrangement (see Patch: e. g., Fig. 1, --example insect traps or imaging chambers are disclosed. An insect imaging chamber includes: a first cell for accepting one or more insects; a second cell, the second cell being separated from the first cell; a first reflector in the second cell;--, and, -- The example insect trap is time-efficient, non-lethal, and user-friendly. In addition, the example insect imaging chamber in the present disclosure can be non-lethal traps for insect biodiversity monitoring. The example insect trap and identification system can preprocess collected images (video frames, films, photographs, etc.) 
to reduce the number of images and only use part of the image (bounding boxes) for insect trap and identification. Accordingly, the example insect trap and identification system can effectively and efficiently detect and classify insects using an AI system.--, in [0004], and [0017]; -- the insect guide tube 108 can include two openings 110, 112, an elongated tube 114, and/or a trapdoor 116. In some scenarios, a first opening 110 of the insect guide tube 108 can be close to the chamber opening 106….. a second opening 112 of the insect guide tube 108 can be an exit or and entrance of an insect. In some examples, a clear lid 118 can be kept on top of the second opening 112 of the imaging insect guide tube 108 to incentive an insect to leave the imaging chamber 100--; in [0050]-[0052]); a monitoring unit, the monitoring unit including: (i) an image capture module configured to capture image data for a field of view that includes the capture zone (see Patch: e. g., Fig. 1, --[0017] Example intelligent insect trap and identification system (e.g., example insect imaging chambers and systems to process insect images) in the present disclosure can use a sophisticated camera system paired with novel software for insect detection and identification. The example insect trap is time-efficient, non-lethal, and user-friendly. In addition, the example insect imaging chamber in the present disclosure can be non-lethal traps for insect biodiversity monitoring. The example insect trap and identification system can preprocess collected images (video frames, films, photographs, etc.) to reduce the number of images and only use part of the image (bounding boxes) for insect trap and identification. 
Accordingly, the example insect trap and identification system can effectively and efficiently detect and classify insects using an AI system.--, in [0017]; and, -- [0034] The example insect trap and identification system can include two main components: an example insect imaging chamber and an example insect identification application. The insect trap and identification system can collect data. The example insect identification application can extract meaningful information from the data. [0035] Hardware Stack: The example insect imaging chamber can represent a minimal baseline for collection of environmental and video data. The example insect imaging chamber can include a tube with two imaging devices (e.g., 12.3 megapixel HD cameras or any other suitable cameras) positioned such that they will capture the top and side views of insects as they crawl through the trap.--, in [0034]-[0035]); (ii) a communications module (see Patch: e. g., Fig. 1, -- In some embodiments, in the second cell 104 one or more visual images (e.g., photograph, film, video, etc.) are recorded, stored, and/or transmitted via an imaging device and/or a controller. For example, the second cell 104 can include a first reflector 120 and a first imaging device 122 for recording one or more first visual images (e.g., photograph, film, video, etc.) of the insect…. the first imaging device 122 can be a digital camera, a video recording device, a camcorder, a motion picture camera; or any other suitable device capable of recording, storing, or transmitting visual images (e.g., photographs or videos) of insects.--, in [0055]-[0057], and [0063]; and, -- the imaging chamber 100 including one or more imaging devices can record the multiple images of the one or more objects and store the multiple images in a non-transitory computer-readable medium. 
In a non-limiting examples, the system can receive the multiple images stored in the non-transitory computer-readable medium via any suitable communication network or combination of communication networks (e.g., a Wi-Fi network, a peer-to-peer network, a cellular network, a wired network, etc.).--, in [0067]); and, (iii) a processing unit that is configured to execute logical instructions thereby to cause the image capture module to capture images in accordance with a predefined capture protocol, and the communications module to communicate resultant image data to a remote processing system in accordance with a predefined transmission protocol (see Patch: e. g., Fig. 1, -- In some embodiments, in the second cell 104 one or more visual images (e.g., photograph, film, video, etc.) are recorded, stored, and/or transmitted via an imaging device and/or a controller. For example, the second cell 104 can include a first reflector 120 and a first imaging device 122 for recording one or more first visual images (e.g., photograph, film, video, etc.) of the insect…. the first imaging device 122 can be a digital camera, a video recording device, a camcorder, a motion picture camera; or any other suitable device capable of recording, storing, or transmitting visual images (e.g., photographs or videos) of insects.--, in [0055]-[0057], and, --the imaging chamber 100 can further include the power supply and voltage converter to power all imaging devices 122, 130, 142 and computer components. In even further examples, the imaging chamber 100 can further include a non-transitory computer readable medium (e.g., memory, solid-state hard drive, etc.) to store the visual images. In even further examples, the imaging chamber 100 can include a transceiver to transmit the visual images to a server and/or receive weather data to synchronize with the captured data. 
In even further examples, the imaging chamber 100 may include a processor with a memory to transmit data to another imaging chamber or any other suitable remote location. In some examples, the processor in the imaging chamber 100 can provide classification results to other imaging chambers with different deep learning models to identify insects. This can allow for easy remote monitoring of several imaging chambers deployed in close proximity and sending out unknown images for prompt human intervention. --, in [0063]; and, -- the imaging chamber 100 including one or more imaging devices can record the multiple images of the one or more objects and store the multiple images in a non-transitory computer-readable medium. In a non-limiting examples, the system can receive the multiple images stored in the non-transitory computer-readable medium via any suitable communication network or combination of communication networks (e.g., a Wi-Fi network, a peer-to-peer network, a cellular network, a wired network, etc.).--, in [0067]); wherein the remote processing system is configured to: (i) receive a data transmission including at least one instance of image data transmitted by the trap device processing unit (see Patch: e. g., Fig. 1, -- In some embodiments, in the second cell 104 one or more visual images (e.g., photograph, film, video, etc.) are recorded, stored, and/or transmitted via an imaging device and/or a controller. For example, the second cell 104 can include a first reflector 120 and a first imaging device 122 for recording one or more first visual images (e.g., photograph, film, video, etc.) of the insect…. 
the first imaging device 122 can be a digital camera, a video recording device, a camcorder, a motion picture camera; or any other suitable device capable of recording, storing, or transmitting visual images (e.g., photographs or videos) of insects.--, in [0055]-[0057], and, --the imaging chamber 100 can further include the power supply and voltage converter to power all imaging devices 122, 130, 142 and computer components. In even further examples, the imaging chamber 100 can further include a non-transitory computer readable medium (e.g., memory, solid-state hard drive, etc.) to store the visual images. In even further examples, the imaging chamber 100 can include a transceiver to transmit the visual images to a server and/or receive weather data to synchronize with the captured data. In even further examples, the imaging chamber 100 may include a processor with a memory to transmit data to another imaging chamber or any other suitable remote location. In some examples, the processor in the imaging chamber 100 can provide classification results to other imaging chambers with different deep learning models to identify insects. This can allow for easy remote monitoring of several imaging chambers deployed in close proximity and sending out unknown images for prompt human intervention. --, in [0063]; and, -- the imaging chamber 100 including one or more imaging devices can record the multiple images of the one or more objects and store the multiple images in a non-transitory computer-readable medium. In a non-limiting examples, the system can receive the multiple images stored in the non-transitory computer-readable medium via any suitable communication network or combination of communication networks (e.g., a Wi-Fi network, a peer-to-peer network, a cellular network, a wired network, etc.).--, in [0067]); (ii) determine a unique identifier representative of the processing unit (see Patch: e. g., Fig. 
1, -- In some embodiments, in the second cell 104 one or more visual images (e.g., photograph, film, video, etc.) are recorded, stored, and/or transmitted via an imaging device and/or a controller. For example, the second cell 104 can include a first reflector 120 and a first imaging device 122 for recording one or more first visual images (e.g., photograph, film, video, etc.) of the insect…. the first imaging device 122 can be a digital camera, a video recording device, a camcorder, a motion picture camera; or any other suitable device capable of recording, storing, or transmitting visual images (e.g., photographs or videos) of insects.--, in [0055]-[0057], and, --the imaging chamber 100 can further include the power supply and voltage converter to power all imaging devices 122, 130, 142 and computer components. In even further examples, the imaging chamber 100 can further include a non-transitory computer readable medium (e.g., memory, solid-state hard drive, etc.) to store the visual images. In even further examples, the imaging chamber 100 can include a transceiver to transmit the visual images to a server and/or receive weather data to synchronize with the captured data. In even further examples, the imaging chamber 100 may include a processor with a memory to transmit data to another imaging chamber or any other suitable remote location. In some examples, the processor in the imaging chamber 100 can provide classification results to other imaging chambers with different deep learning models to identify insects. This can allow for easy remote monitoring of several imaging chambers deployed in close proximity and sending out unknown images for prompt human intervention. --, in [0063]; and, -- the imaging chamber 100 including one or more imaging devices can record the multiple images of the one or more objects and store the multiple images in a non-transitory computer-readable medium. 
In a non-limiting examples, the system can receive the multiple images stored in the non-transitory computer-readable medium via any suitable communication network or combination of communication networks (e.g., a Wi-Fi network, a peer-to-peer network, a cellular network, a wired network, etc.).--, in [0067]); (iii) process the instance of image data via a classifier module, thereby to define detection data representative of identified presence of one or more insects of known insect types (see Patch: e. g., Fig. 1, --[0043] Object Classification: An example current classifier model can discriminate between four insect orders: Coleoptera, Diptera, Hymenoptera and Lepidoptera (beetles, flies, wasps/bees, butterflies/moths) at above 90% accuracy. The example classifier model is a VGG16 that has been pretrained on ImageNet. In some examples, the output layer can be modified to have 13 output classes, each denoting a taxonomic label (e.g. Formicidae) within a taxonomic level (e.g. family), and then fine-tuned the network on 82,000 images of local insects obtained from iNaturalist and GBIF. In further examples, Tensorflow and Keras can be used to train and classify images.--, in [0043] -- In some embodiments, in the second cell 104 one or more visual images (e.g., photograph, film, video, etc.) are recorded, stored, and/or transmitted via an imaging device and/or a controller. For example, the second cell 104 can include a first reflector 120 and a first imaging device 122 for recording one or more first visual images (e.g., photograph, film, video, etc.) of the insect…. 
the first imaging device 122 can be a digital camera, a video recording device, a camcorder, a motion picture camera; or any other suitable device capable of recording, storing, or transmitting visual images (e.g., photographs or videos) of insects.--, in [0055]-[0057], and, --the imaging chamber 100 can further include the power supply and voltage converter to power all imaging devices 122, 130, 142 and computer components. In even further examples, the imaging chamber 100 can further include a non-transitory computer readable medium (e.g., memory, solid-state hard drive, etc.) to store the visual images. In even further examples, the imaging chamber 100 can include a transceiver to transmit the visual images to a server and/or receive weather data to synchronize with the captured data. In even further examples, the imaging chamber 100 may include a processor with a memory to transmit data to another imaging chamber or any other suitable remote location. In some examples, the processor in the imaging chamber 100 can provide classification results to other imaging chambers with different deep learning models to identify insects. This can allow for easy remote monitoring of several imaging chambers deployed in close proximity and sending out unknown images for prompt human intervention. --, in [0063]; and, -- the imaging chamber 100 including one or more imaging devices can record the multiple images of the one or more objects and store the multiple images in a non-transitory computer-readable medium. In a non-limiting examples, the system can receive the multiple images stored in the non-transitory computer-readable medium via any suitable communication network or combination of communication networks (e.g., a Wi-Fi network, a peer-to-peer network, a cellular network, a wired network, etc.).--, in [0067], and, --to perform a cyclical or self-learning approach at step 514 to providing additional classification power to the algorithm/process. 
In some embodiments, a regression or other statistical method may be performed to associate factors like time of day, temperature, time from/to sunrise/sunset, etc. with high confidence-score classifications of insects. Thus, a weighting factor can be associated with any of these factors. Then, when an image is analyzed by the deep learning model and an insufficiently-high confidence score (or two or more similar confidence scores) is returned for a given classification, the presence of additional factors at the time the image(s) were acquired can be considered. If, e.g., time of day, season, or temperature might highly correlate with a given species among the possible classifications, but none of the others, the algorithm can give a preliminary or tentative classification of the given species and notify a user of the factor that was used to supplement the prediction. Or, if an insect is able to climb a specific type of surface, which other likely insects are not, this information could rule out some results of the classifier model. In other embodiments, the additional/environmental data can be combined with image data and provided to the deep learning model as training data, such that a new or re-tuned model can be generated directly from the additional/environmental data. In a non-limiting scenario, the system can receive environmental information from the user, any suitable device (e.g., insect imaging chamber 100, etc.) and/or any suitable database (e.g., National Oceanic and Atmospheric Administration (NOAA) Online Weather Database, etc.).--, in [0093]); and, (iv) record the detection data in a database such that the detection data is associated with the processing unit via the unique identifier of the processing unit (see Patch: e. g., Fig. 
1, --[0043] Object Classification: An example current classifier model can discriminate between four insect orders: Coleoptera, Diptera, Hymenoptera and Lepidoptera (beetles, flies, wasps/bees, butterflies/moths) at above 90% accuracy. The example classifier model is a VGG16 that has been pretrained on ImageNet. In some examples, the output layer can be modified to have 13 output classes, each denoting a taxonomic label (e.g. Formicidae) within a taxonomic level (e.g. family), and then fine-tuned the network on 82,000 images of local insects obtained from iNaturalist and GBIF. In further examples, Tensorflow and Keras can be used to train and classify images.--, in [0043] -- In some embodiments, in the second cell 104 one or more visual images (e.g., photograph, film, video, etc.) are recorded, stored, and/or transmitted via an imaging device and/or a controller. For example, the second cell 104 can include a first reflector 120 and a first imaging device 122 for recording one or more first visual images (e.g., photograph, film, video, etc.) of the insect…. the first imaging device 122 can be a digital camera, a video recording device, a camcorder, a motion picture camera; or any other suitable device capable of recording, storing, or transmitting visual images (e.g., photographs or videos) of insects.--, in [0055]-[0057], and, --the imaging chamber 100 can further include the power supply and voltage converter to power all imaging devices 122, 130, 142 and computer components. In even further examples, the imaging chamber 100 can further include a non-transitory computer readable medium (e.g., memory, solid-state hard drive, etc.) to store the visual images. In even further examples, the imaging chamber 100 can include a transceiver to transmit the visual images to a server and/or receive weather data to synchronize with the captured data. 
In even further examples, the imaging chamber 100 may include a processor with a memory to transmit data to another imaging chamber or any other suitable remote location. In some examples, the processor in the imaging chamber 100 can provide classification results to other imaging chambers with different deep learning models to identify insects. This can allow for easy remote monitoring of several imaging chambers deployed in close proximity and sending out unknown images for prompt human intervention. --, in [0063]; and, -- the imaging chamber 100 including one or more imaging devices can record the multiple images of the one or more objects and store the multiple images in a non-transitory computer-readable medium. In a non-limiting examples, the system can receive the multiple images stored in the non-transitory computer-readable medium via any suitable communication network or combination of communication networks (e.g., a Wi-Fi network, a peer-to-peer network, a cellular network, a wired network, etc.).--, in [0067], and, --to perform a cyclical or self-learning approach at step 514 to providing additional classification power to the algorithm/process. In some embodiments, a regression or other statistical method may be performed to associate factors like time of day, temperature, time from/to sunrise/sunset, etc. with high confidence-score classifications of insects. Thus, a weighting factor can be associated with any of these factors. Then, when an image is analyzed by the deep learning model and an insufficiently-high confidence score (or two or more similar confidence scores) is returned for a given classification, the presence of additional factors at the time the image(s) were acquired can be considered. 
If, e.g., time of day, season, or temperature might highly correlate with a given species among the possible classifications, but none of the others, the algorithm can give a preliminary or tentative classification of the given species and notify a user of the factor that was used to supplement the prediction. Or, if an insect is able to climb a specific type of surface, which other likely insects are not, this information could rule out some results of the classifier model. In other embodiments, the additional/environmental data can be combined with image data and provided to the deep learning model as training data, such that a new or re-tuned model can be generated directly from the additional/environmental data. In a non-limiting scenario, the system can receive environmental information from the user, any suitable device (e.g., insect imaging chamber 100, etc.) and/or any suitable database (e.g., National Oceanic and Atmospheric Administration (NOAA) Online Weather Database, etc.).--, in [0093]).

Re Claim 2, Patch further discloses one or more infrared lights configured to illuminate the capture zone during image capture (see Patch: e. g., in [0090]).

Re Claim 3, Patch further discloses wherein the classifier module is a trained classifier module, and wherein the classifier module is trained using labelled images of insects of one or more insect types captured under infrared illumination (see Patch: e. g., -- [0043] Object Classification: An example current classifier model can discriminate between four insect orders: Coleoptera, Diptera, Hymenoptera and Lepidoptera (beetles, flies, wasps/bees, butterflies/moths) at above 90% accuracy. The example classifier model is a VGG16 that has been pretrained on ImageNet. In some examples, the output layer can be modified to have 13 output classes, each denoting a taxonomic label (e.g. Formicidae) within a taxonomic level (e.g.
family), and then fine-tuned the network on 82,000 images of local insects obtained from iNaturalist and GBIF. In further examples, Tensorflow and Keras can be used to train and classify images. [0044] Data Sources: When it comes to machine learning and AI, data is important. Data in the example insect trap and identification system can include images of insects. There could be two resources of data for training and evaluating our models: 1) iNaturalist, and 2) insect imaging chamber. [0045] All the training images can be collected from the iNaturalist webpage (e.g., through an aggregator called GBIF). iNaturalist can provide images of animals and plants which are captured and annotated by the public. Since multiple people corroborated the identification through iNaturalist, identification using the training images from iNaturalist can be highly accurate. Additionally, insect images previously collected from the imaging chamber can be used for the model training.--, in [0043]-[0045]).

Re Claim 4, Patch further discloses wherein the capture zone includes a capture surface on which the insect is maintained following transportation into the capture zone, wherein the surface is configured to absorb infrared light (see Patch: e.g., -- the elongated tube 114 could be a cuboid shape. However, the elongated tube 114 could be any other suitable shape (e.g., a cubic shape, a cylindrical shape, etc.). In a non-limiting example, the size of the insect entrance of the elongated tube 114 can be designed for an insect to move in a limited and predicted way. The insect entrance of the elongated tube 114 can be a part of the elongated tube 114, which is connected to the first opening 110. An imaging device can record one or more visual images (e.g., photograph, film, video, or any other suitable image) of an expected position of the insect when the insect moves through the elongated tube 114.
It should be appreciated that the size of the insect entrance of the elongated tube 114 can be big enough to accept multiple insects at the same time. In another example, the size of the insect entrance of the elongated tube 114 can be designed based on the sizes of insects that the user wants to analyze. Movement and positioning of an insect can also be influenced by alternative surface composition within the insect guide tube 108. For example, one inner surface of the elongated tube 114 can include a rough or adhesive surface for an insect to walk on the rough surface. However, other inner surfaces of the elongated tube 114 can include smooth surfaces for an insect not to be able to walk on the sooth surfaces. In some examples, rough surfaces can be used to orient the insect to an imaging device and to move the insect to the second opening 112 or to the third cell 106. Other surfaces, for example, can be coated in non-stick polytetrafluoroethylene (PTFE, Fluon or Teflon) coatings to prevent the insect from adhering and walking on those surfaces. Thus, the imaging device in the imaging chamber 100 can capture an expected position or side of the insect. In some examples, the elongated tube 114 can include a scent, food, artificial light, natural light from the outside of the imaging chamber 100, or any other suitable means for an insect to pass through the elongated tube 114.--, in [0050], and, -- Yet further, traps may also have additional sensors such as weather sensors, daylight sensors, pollen sensors, dust sensors, air quality analyzers, GPS, or other sensors to collect and record additional data as described herein. And, other embodiments of a trap may have insect-specific adaptations, such as various colors inside the chamber (for contrast with the colors of various insects), various surfaces (e.g., sticky, rough, smooth, slanted, etc.), and various attractants (e.g., scents, CO.sub.2, UV light, etc) that can be alternatingly used.--, in [0094]). 
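The classification pipeline the examiner cites from Patch [0043]-[0045] (a VGG16 pretrained on ImageNet, its output layer replaced with 13 taxonomic classes, fine-tuned in TensorFlow/Keras) can be sketched as follows. This is a minimal sketch under stated assumptions: the head layers, optimizer, and learning rate are illustrative, not details from Patch or the Office Action, and `weights=None` is used here so the sketch builds without downloading the ImageNet weights that [0043] describes (substitute `weights="imagenet"` for the pretrained backbone).

```python
# Sketch of the fine-tuning setup described in Patch [0043]: VGG16 with
# its classifier head replaced by 13 taxonomic output classes. Head size,
# optimizer, and learning rate below are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 13  # taxonomic labels, e.g. Formicidae ([0043])

# weights="imagenet" would give the pretrained backbone per [0043];
# weights=None avoids the download for a quick structural check.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone; fine-tune only the new head

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),             # assumed head size
    layers.Dense(NUM_CLASSES, activation="softmax"),  # 13 taxa
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Training would then call `model.fit` on labelled insect images such as the iNaturalist/GBIF data that [0044]-[0045] describe.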
Re Claim 5, Patch further discloses wherein the capture surface is textured to absorb infrared light (see Patch: e.g., --the elongated tube 114 could be a cuboid shape. However, the elongated tube 114 could be any other suitable shape (e.g., a cubic shape, a cylindrical shape, etc.). In a non-limiting example, the size of the insect entrance of the elongated tube 114 can be designed for an insect to move in a limited and predicted way. The insect entrance of the elongated tube 114 can be a part of the elongated tube 114, which is connected to the first opening 110. An imaging device can record one or more visual images (e.g., photograph, film, video, or any other suitable image) of an expected position of the insect when the insect moves through the elongated tube 114. It should be appreciated that the size of the insect entrance of the elongated tube 114 can be big enough to accept multiple insects at the same time. In another example, the size of the insect entrance of the elongated tube 114 can be designed based on the sizes of insects that the user wants to analyze. Movement and positioning of an insect can also be influenced by alternative surface composition within the insect guide tube 108. For example, one inner surface of the elongated tube 114 can include a rough or adhesive surface for an insect to walk on the rough surface. However, other inner surfaces of the elongated tube 114 can include smooth surfaces for an insect not to be able to walk on the smooth surfaces. In some examples, rough surfaces can be used to orient the insect to an imaging device and to move the insect to the second opening 112 or to the third cell 106. Other surfaces, for example, can be coated in non-stick polytetrafluoroethylene (PTFE, Fluon or Teflon) coatings to prevent the insect from adhering and walking on those surfaces. Thus, the imaging device in the imaging chamber 100 can capture an expected position or side of the insect. In some examples, the elongated tube 114 can include a scent, food, artificial light, natural light from the outside of the imaging chamber 100, or any other suitable means for an insect to pass through the elongated tube 114.--, in [0050], and, --Yet further, traps may also have additional sensors such as weather sensors, daylight sensors, pollen sensors, dust sensors, air quality analyzers, GPS, or other sensors to collect and record additional data as described herein. And, other embodiments of a trap may have insect-specific adaptations, such as various colors inside the chamber (for contrast with the colors of various insects), various surfaces (e.g., sticky, rough, smooth, slanted, etc.), and various attractants (e.g., scents, CO.sub.2, UV light, etc.) that can be alternatingly used.--, in [0094]).

Re Claim 6, Patch further discloses wherein the capture surface is formed of a textured plastic (see Patch: e.g., the same passages quoted above for claim 5, in [0050] and [0094]).

Re Claim 7, Patch further discloses wherein the trap assembly includes a base and a sidewall assembly upwardly extending from the base to a sidewall assembly top edge, wherein the one or more passageways are formed through the sidewall assembly (see Patch: e.g., Fig.
1, --the insect guide tube 108 can include two openings 110, 112, an elongated tube 114, and/or a trapdoor 116. In some scenarios, a first opening 110 of the insect guide tube 108 can be close to the chamber opening 106…. a second opening 112 of the insect guide tube 108 can be an exit or an entrance of an insect. In some examples, a clear lid 118 can be kept on top of the second opening 112 of the insect guide tube 108 to incentivize an insect to leave the imaging chamber 100--, in [0050]-[0052]).

Re Claim 8, Patch further discloses wherein the one or more passageways are formed by gaps between the sidewall assembly top edge and a base of the monitoring unit (see Patch: e.g., Fig. 1, --In some embodiments, in the second cell 104 one or more visual images (e.g., photograph, film, video, etc.) are recorded, stored, and/or transmitted via an imaging device and/or a controller. For example, the second cell 104 can include a first reflector 120 and a first imaging device 122 for recording one or more first visual images (e.g., photograph, film, video, etc.) of the insect…. the first imaging device 122 can be a digital camera, a video recording device, a camcorder, a motion picture camera, or any other suitable device capable of recording, storing, or transmitting visual images (e.g., photographs or videos) of insects.--, in [0043]-[0045], [0050]-[0052], and [0055]-[0057], and, --the imaging chamber 100 can further include the power supply and voltage converter to power all imaging devices 122, 130, 142 and computer components. In even further examples, the imaging chamber 100 can further include a non-transitory computer readable medium (e.g., memory, solid-state hard drive, etc.) to store the visual images. In even further examples, the imaging chamber 100 can include a transceiver to transmit the visual images to a server and/or receive weather data to synchronize with the captured data.
In even further examples, the imaging chamber 100 may include a processor with a memory to transmit data to another imaging chamber or any other suitable remote location. In some examples, the processor in the imaging chamber 100 can provide classification results to other imaging chambers with different deep learning models to identify insects. This can allow for easy remote monitoring of several imaging chambers deployed in close proximity and sending out unknown images for prompt human intervention.--, in [0063]; and, --the imaging chamber 100 including one or more imaging devices can record the multiple images of the one or more objects and store the multiple images in a non-transitory computer-readable medium. In a non-limiting example, the system can receive the multiple images stored in the non-transitory computer-readable medium via any suitable communication network or combination of communication networks (e.g., a Wi-Fi network, a peer-to-peer network, a cellular network, a wired network, etc.).--, in [0067]).

Re Claim 9, Patch further discloses wherein the monitoring unit is housed in a monitoring unit body of the trap assembly, the monitoring unit body mounted to the sidewall assembly top edge via one or more connector members, wherein the one or more connector members define the openings such that each opening is bound at vertical sides thereof by edges of adjacent connector members, and at horizontal sides thereof by the sidewall assembly and the monitoring unit (see Patch: e.g., Fig. 1, the same passages quoted above for claim 8, in [0043]-[0045], [0050]-[0052], [0055]-[0057], [0063], and [0067]).

Re Claim 10, Patch further discloses wherein the monitoring unit body covers a top of the cavity (see Patch: e.g., Fig.
1, Fig.4, --[0017] Example intelligent insect trap and identification system (e.g., example insect imaging chambers and systems to process insect images) in the present disclosure can use a sophisticated camera system paired with novel software for insect detection and identification. The example insect trap is time-efficient, non-lethal, and user-friendly. In addition, the example insect imaging chamber in the present disclosure can be non-lethal traps for insect biodiversity monitoring. The example insect trap and identification system can preprocess collected images (video frames, films, photographs, etc.) to reduce the number of images and only use part of the image (bounding boxes) for insect trap and identification. Accordingly, the example insect trap and identification system can effectively and efficiently detect and classify insects using an AI system.--, in [0017]; and, -- [0034] The example insect trap and identification system can include two main components: an example insect imaging chamber and an example insect identification application. The insect trap and identification system can collect data. The example insect identification application can extract meaningful information from the data. [0035] Hardware Stack: The example insect imaging chamber can represent a minimal baseline for collection of environmental and video data. The example insect imaging chamber can include a tube with two imaging devices (e.g., 12.3 megapixel HD cameras or any other suitable cameras) positioned such that they will capture the top and side views of insects as they crawl through the trap.--, in [0034]-[0035], and, --The second side image of the one or more insects can be different than the first side image captured from the first imaging device 122. 
For example, the first imaging device 122 can capture a top-view image of the insect based on the first reflector 120 while the second imaging device 130 can capture a side-view image of the insect based on the second reflector 128.--, in [0058]) . Re Claim 11, Patch further discloses wherein the monitoring unit body includes a base surface configured to enable: (i) downward image capture by the image capture module through the cavity toward the capture zone; and (ii) illumination of the capture zone by one or more infrared lights (see Patch: e. g., Fig. 1, --[0043] Object Classification: An example current classifier model can discriminate between four insect orders: Coleoptera, Diptera, Hymenoptera and Lepidoptera (beetles, flies, wasps/bees, butterflies/moths) at above 90% accuracy. The example classifier model is a VGG16 that has been pretrained on ImageNet. In some examples, the output layer can be modified to have 13 output classes, each denoting a taxonomic label (e.g. Formicidae) within a taxonomic level (e.g. family), and then fine-tuned the network on 82,000 images of local insects obtained from iNaturalist and GBIF. In further examples, Tensorflow and Keras can be used to train and classify images.--, in [0043] -- In some embodiments, in the second cell 104 one or more visual images (e.g., photograph, film, video, etc.) are recorded, stored, and/or transmitted via an imaging device and/or a controller. For example, the second cell 104 can include a first reflector 120 and a first imaging device 122 for recording one or more first visual images (e.g., photograph, film, video, etc.) of the insect…. 
the first imaging device 122 can be a digital camera, a video recording device, a camcorder, a motion picture camera; or any other suitable device capable of recording, storing, or transmitting visual images (e.g., photographs or videos) of insects.--, in [0055]-[0057], and, --the imaging chamber 100 can further include the power supply and voltage converter to power all imaging devices 122, 130, 142 and computer components. In even further examples, the imaging chamber 100 can further include a non-transitory computer readable medium (e.g., memory, solid-state hard drive, etc.) to store the visual images. In even further examples, the imaging chamber 100 can include a transceiver to transmit the visual images to a server and/or receive weather data to synchronize with the captured data. In even further examples, the imaging chamber 100 may include a processor with a memory to transmit data to another imaging chamber or any other suitable remote location. In some examples, the processor in the imaging chamber 100 can provide classification results to other imaging chambers with different deep learning models to identify insects. This can allow for easy remote monitoring of several imaging chambers deployed in close proximity and sending out unknown images for prompt human intervention. --, in [0063]; and, -- the imaging chamber 100 including one or more imaging devices can record the multiple images of the one or more objects and store the multiple images in a non-transitory computer-readable medium. In a non-limiting examples, the system can receive the multiple images stored in the non-transitory computer-readable medium via any suitable communication network or combination of communication networks (e.g., a Wi-Fi network, a peer-to-peer network, a cellular network, a wired network, etc.).--, in [0067], and, --to perform a cyclical or self-learning approach at step 514 to providing additional classification power to the algorithm/process. 
In some embodiments, a regression or other statistical method may be performed to associate factors like time of day, temperature, time from/to sunrise/sunset, etc. with high confidence-score classifications of insects. Thus, a weighting factor can be associated with any of these factors. Then, when an image is analyzed by the deep learning model and an insufficiently-high confidence score (or two or more similar confidence scores) is returned for a given classification, the presence of additional factors at the time the image(s) were acquired can be considered. If, e.g., time of day, season, or temperature might highly correlate with a given species among the possible classifications, but none of the others, the algorithm can give a preliminary or tentative classification of the given species and notify a user of the factor that was used to supplement the prediction. Or, if an insect is able to climb a specific type of surface, which other likely insects are not, this information could rule out some results of the classifier model. In other embodiments, the additional/environmental data can be combined with image data and provided to the deep learning model as training data, such that a new or re-tuned model can be generated directly from the additional/environmental data. In a non-limiting scenario, the system can receive environmental information from the user, any suitable device (e.g., insect imaging chamber 100, etc.) and/or any suitable database (e.g., National Oceanic and Atmospheric Administration (NOAA) Online Weather Database, etc.).--, in [0093]) Re Claim 12, Patch further discloses wherein the sidewall assembly tapers inward between the base and the sidewall assembly top edge (see Patch: e. g., Fig. 1, -- In some embodiments, in the second cell 104 one or more visual images (e.g., photograph, film, video, etc.) are recorded, stored, and/or transmitted via an imaging device and/or a controller. 
For example, the second cell 104 can include a first reflector 120 and a first imaging device 122 for recording one or more first visual images (e.g., photograph, film, video, etc.) of the insect…. the first imaging device 122 can be a digital camera, a video recording device, a camcorder, a motion picture camera; or any other suitable device capable of recording, storing, or transmitting visual images (e.g., photographs or videos) of insects.--, in [0043]-[0045], [0050]-[0052], and [0055]-[0057], and, --the imaging chamber 100 can further include the power supply and voltage converter to power all imaging devices 122, 130, 142 and computer components. In even further examples, the imaging chamber 100 can further include a non-transitory computer readable medium (e.g., memory, solid-state hard drive, etc.) to store the visual images. In even further examples, the imaging chamber 100 can include a transceiver to transmit the visual images to a server and/or receive weather data to synchronize with the captured data. In even further examples, the imaging chamber 100 may include a processor with a memory to transmit data to another imaging chamber or any other suitable remote location. In some examples, the processor in the imaging chamber 100 can provide classification results to other imaging chambers with different deep learning models to identify insects. This can allow for easy remote monitoring of several imaging chambers deployed in close proximity and sending out unknown images for prompt human intervention. --, in [0063]; and, -- the imaging chamber 100 including one or more imaging devices can record the multiple images of the one or more objects and store the multiple images in a non-transitory computer-readable medium. 
In a non-limiting examples, the system can receive the multiple images stored in the non-transitory computer-readable medium via any suitable communication network or combination of communication networks (e.g., a Wi-Fi network, a peer-to-peer network, a cellular network, a wired network, etc.).--, in [0067]). Re Claim 13, Patch further discloses wherein the cavity is encircled by a cavity sidewall, and wherein the cavity sidewall is configured to inhibit upward crawling by an insect (see Patch: e. g., Fig. 1, -- In some embodiments, in the second cell 104 one or more visual images (e.g., photograph, film, video, etc.) are recorded, stored, and/or transmitted via an imaging device and/or a controller. For example, the second cell 104 can include a first reflector 120 and a first imaging device 122 for recording one or more first visual images (e.g., photograph, film, video, etc.) of the insect…. the first imaging device 122 can be a digital camera, a video recording device, a camcorder, a motion picture camera; or any other suitable device capable of recording, storing, or transmitting visual images (e.g., photographs or videos) of insects.--, in [0043]-[0045], [0050]-[0052], and [0055]-[0057], and, --the imaging chamber 100 can further include the power supply and voltage converter to power all imaging devices 122, 130, 142 and computer components. In even further examples, the imaging chamber 100 can further include a non-transitory computer readable medium (e.g., memory, solid-state hard drive, etc.) to store the visual images. In even further examples, the imaging chamber 100 can include a transceiver to transmit the visual images to a server and/or receive weather data to synchronize with the captured data. In even further examples, the imaging chamber 100 may include a processor with a memory to transmit data to another imaging chamber or any other suitable remote location. 
In some examples, the processor in the imaging chamber 100 can provide classification results to other imaging chambers with different deep learning models to identify insects. This can allow for easy remote monitoring of several imaging chambers deployed in close proximity and sending out unknown images for prompt human intervention. --, in [0063]; and, -- the imaging chamber 100 including one or more imaging devices can record the multiple images of the one or more objects and store the multiple images in a non-transitory computer-readable medium. In a non-limiting examples, the system can receive the multiple images stored in the non-transitory computer-readable medium via any suitable communication network or combination of communication networks (e.g., a Wi-Fi network, a peer-to-peer network, a cellular network, a wired network, etc.).--, in [0067]). Re Claim 14, Patch further discloses wherein the cavity sidewall is formed of a smooth material configured to inhibit upward crawling by an insect (see Patch: e. g., Fig. 1, Fig.4, --[0017] Example intelligent insect trap and identification system (e.g., example insect imaging chambers and systems to process insect images) in the present disclosure can use a sophisticated camera system paired with novel software for insect detection and identification. The example insect trap is time-efficient, non-lethal, and user-friendly. In addition, the example insect imaging chamber in the present disclosure can be non-lethal traps for insect biodiversity monitoring. The example insect trap and identification system can preprocess collected images (video frames, films, photographs, etc.) to reduce the number of images and only use part of the image (bounding boxes) for insect trap and identification. 
Accordingly, the example insect trap and identification system can effectively and efficiently detect and classify insects using an AI system.--, in [0017]; and, -- [0034] The example insect trap and identification system can include two main components: an example insect imaging chamber and an example insect identification application. The insect trap and identification system can collect data. The example insect identification application can extract meaningful information from the data. [0035] Hardware Stack: The example insect imaging chamber can represent a minimal baseline for collection of environmental and video data. The example insect imaging chamber can include a tube with two imaging devices (e.g., 12.3 megapixel HD cameras or any other suitable cameras) positioned such that they will capture the top and side views of insects as they crawl through the trap.--, in [0034]-[0035], and, --The second side image of the one or more insects can be different than the first side image captured from the first imaging device 122. For example, the first imaging device 122 can capture a top-view image of the insect based on the first reflector 120 while the second imaging device 130 can capture a side-view image of the insect based on the second reflector 128.--, in [0058]). Re Claim 15, Patch further discloses wherein the cavity sidewall tapers inwardly from a top edge to a bottom edge (see Patch: e. g., Fig. 1, Fig.4, --[0017] Example intelligent insect trap and identification system (e.g., example insect imaging chambers and systems to process insect images) in the present disclosure can use a sophisticated camera system paired with novel software for insect detection and identification. The example insect trap is time-efficient, non-lethal, and user-friendly. In addition, the example insect imaging chamber in the present disclosure can be non-lethal traps for insect biodiversity monitoring. 
The example insect trap and identification system can preprocess collected images (video frames, films, photographs, etc.) to reduce the number of images and only use part of the image (bounding boxes) for insect trap and identification. Accordingly, the example insect trap and identification system can effectively and efficiently detect and classify insects using an AI system.--, in [0017]; and, -- [0034] The example insect trap and identification system can include two main components: an example insect imaging chamber and an example insect identification application. The insect trap and identification system can collect data. The example insect identification application can extract meaningful information from the data. [0035] Hardware Stack: The example insect imaging chamber can represent a minimal baseline for collection of environmental and video data. The example insect imaging chamber can include a tube with two imaging devices (e.g., 12.3 megapixel HD cameras or any other suitable cameras) positioned such that they will capture the top and side views of insects as they crawl through the trap.--, in [0034]-[0035], and, --The second side image of the one or more insects can be different than the first side image captured from the first imaging device 122. For example, the first imaging device 122 can capture a top-view image of the insect based on the first reflector 120 while the second imaging device 130 can capture a side-view image of the insect based on the second reflector 128.--, in [0058]). Re Claim 16, Patch further discloses wherein the bottom edge adjoins the capture zone (see Patch: e. g., Fig. 1, Fig.4, --[0017] Example intelligent insect trap and identification system (e.g., example insect imaging chambers and systems to process insect images) in the present disclosure can use a sophisticated camera system paired with novel software for insect detection and identification. 
The example insect trap is time-efficient, non-lethal, and user-friendly. In addition, the example insect imaging chamber in the present disclosure can be non-lethal traps for insect biodiversity monitoring. The example insect trap and identification system can preprocess collected images (video frames, films, photographs, etc.) to reduce the number of images and only use part of the image (bounding boxes) for insect trap and identification. Accordingly, the example insect trap and identification system can effectively and efficiently detect and classify insects using an AI system.--, in [0017]; and, -- [0034] The example insect trap and identification system can include two main components: an example insect imaging chamber and an example insect identification application. The insect trap and identification system can collect data. The example insect identification application can extract meaningful information from the data. [0035] Hardware Stack: The example insect imaging chamber can represent a minimal baseline for collection of environmental and video data. The example insect imaging chamber can include a tube with two imaging devices (e.g., 12.3 megapixel HD cameras or any other suitable cameras) positioned such that they will capture the top and side views of insects as they crawl through the trap.--, in [0034]-[0035], and, --The second side image of the one or more insects can be different than the first side image captured from the first imaging device 122. For example, the first imaging device 122 can capture a top-view image of the insect based on the first reflector 120 while the second imaging device 130 can capture a side-view image of the insect based on the second reflector 128.--, in [0058]). Re Claim 17, Patch further discloses a removable lure module mounted proximate to the capture zone (see Patch: e. g., Fig. 
1, Fig.4, --[0017] Example intelligent insect trap and identification system (e.g., example insect imaging chambers and systems to process insect images) in the present disclosure can use a sophisticated camera system paired with novel software for insect detection and identification. The example insect trap is time-efficient, non-lethal, and user-friendly. In addition, the example insect imaging chamber in the present disclosure can be non-lethal traps for insect biodiversity monitoring. The example insect trap and identification system can preprocess collected images (video frames, films, photographs, etc.) to reduce the number of images and only use part of the image (bounding boxes) for insect trap and identification. Accordingly, the example insect trap and identification system can effectively and efficiently detect and classify insects using an AI system.--, in [0017]; and, -- [0034] The example insect trap and identification system can include two main components: an example insect imaging chamber and an example insect identification application. The insect trap and identification system can collect data. The example insect identification application can extract meaningful information from the data. [0035] Hardware Stack: The example insect imaging chamber can represent a minimal baseline for collection of environmental and video data. The example insect imaging chamber can include a tube with two imaging devices (e.g., 12.3 megapixel HD cameras or any other suitable cameras) positioned such that they will capture the top and side views of insects as they crawl through the trap.--, in [0034]-[0035], --[0049] In further examples, the first cell 102 can include an insect guide tube 108 for placing the insect to be recorded by an imaging device, which can be disposed in the second cell 104. In some scenarios, the insect guide tube 108 can be removable. 
For example, the insect guide tube 108 can be removed from the imaging chamber 100 and replaced with a different insect guide tube 108 having a different size depending on expected subjects to be placed in the tube 108. In further examples, the insect guide tube 108 can be transparent such that the insect in the insect guide tube 108 can be seen from the outside of the insect guide tube 108 and recorded by an imaging device from the outside of the insect guide tube 108.--, in [0049]; and, --The second side image of the one or more insects can be different than the first side image captured from the first imaging device 122. For example, the first imaging device 122 can capture a top-view image of the insect based on the first reflector 120 while the second imaging device 130 can capture a side-view image of the insect based on the second reflector 128.--, in [0058]). Re Claim 18, Patch further discloses wherein the capture zone of the trap assembly includes a porous base to enable dissemination of scent from a lure positioned underneath the capture zone (see Patch: e. g., Fig. 1, Fig.4, --[0017] Example intelligent insect trap and identification system (e.g., example insect imaging chambers and systems to process insect images) in the present disclosure can use a sophisticated camera system paired with novel software for insect detection and identification. The example insect trap is time-efficient, non-lethal, and user-friendly. In addition, the example insect imaging chamber in the present disclosure can be non-lethal traps for insect biodiversity monitoring. The example insect trap and identification system can preprocess collected images (video frames, films, photographs, etc.) to reduce the number of images and only use part of the image (bounding boxes) for insect trap and identification. 
Accordingly, the example insect trap and identification system can effectively and efficiently detect and classify insects using an AI system.--, in [0017]; and, -- [0034] The example insect trap and identification system can include two main components: an example insect imaging chamber and an example insect identification application. The insect trap and identification system can collect data. The example insect identification application can extract meaningful information from the data. [0035] Hardware Stack: The example insect imaging chamber can represent a minimal baseline for collection of environmental and video data. The example insect imaging chamber can include a tube with two imaging devices (e.g., 12.3 megapixel HD cameras or any other suitable cameras) positioned such that they will capture the top and side views of insects as they crawl through the trap.--, in [0034]-[0035], --[0049] In further examples, the first cell 102 can include an insect guide tube 108 for placing the insect to be recorded by an imaging device, which can be disposed in the second cell 104. In some scenarios, the insect guide tube 108 can be removable. For example, the insect guide tube 108 can be removed from the imaging chamber 100 and replaced with a different insect guide tube 108 having a different size depending on expected subjects to be placed in the tube 108. In further examples, the insect guide tube 108 can be transparent such that the insect in the insect guide tube 108 can be seen from the outside of the insect guide tube 108 and recorded by an imaging device from the outside of the insect guide tube 108.--, in [0049]; and, --The second side image of the one or more insects can be different than the first side image captured from the first imaging device 122. 
For example, the first imaging device 122 can capture a top-view image of the insect based on the first reflector 120 while the second imaging device 130 can capture a side-view image of the insect based on the second reflector 128.--, in [0058]).

Re Claim 19, Patch further discloses a battery power supply, and wherein the image capture protocol and the data transmission protocol are configured to optimize battery power conservation (see Patch: e.g., --the first imaging device 122 can be a digital camera, a video recording device, a camcorder, a motion picture camera; or any other suitable device capable of recording, storing, or transmitting visual images (e.g., photographs or videos) of insects. In addition, the first imaging device 122 can further include a motion sensor such that the first imaging device 122 records the visual images when the motion sensor detects movement of the insect in the elongated tube 114 to increase battery life if the first imaging device 122 is an battery powered device and save memory space by only storing insect images in the memory. In further examples, the first imaging device 122 or a controller can dynamically reduce a frame rate of the visual images (e.g., videos) to save battery usage.--, in [0057]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEI WEN YANG whose telephone number is (571)270-5670. The examiner can normally be reached on 8:00 - 5:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini, can be reached on 571-272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR.
Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WEI WEN YANG/
Primary Examiner, Art Unit 2662

Prosecution Timeline

Mar 14, 2025
Application Filed
Apr 05, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602789
ENDOSCOPIC IMAGE SEGMENTATION METHOD BASED ON SINGLE IMAGE AND DEEP LEARNING NETWORK
2y 5m to grant Granted Apr 14, 2026
Patent 12586413
METHOD FOR RECOGNIZING ACTIVITIES USING SEPARATE SPATIAL AND TEMPORAL ATTENTION WEIGHTS
2y 5m to grant Granted Mar 24, 2026
Patent 12582359
IMAGE DISPLAY METHOD, STORAGE MEDIUM, AND IMAGE DISPLAY DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12573034
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND PROGRAM, AND IMAGE PROCESSING SYSTEM
2y 5m to grant Granted Mar 10, 2026
Patent 12567168
DATA PROCESSING METHOD AND APPARATUS, DEVICE, AND READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 03, 2026
Based on this examiner's 5 most recent grants in similar technology.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
93%
With Interview (+10.9%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 657 resolved cases by this examiner. Grant probability derived from career allow rate.
