Prosecution Insights
Last updated: April 19, 2026
Application No. 18/751,548

MULTISENSORY IMAGING METHODS AND APPARATUS FOR CONTROLLED ENVIRONMENT HORTICULTURE USING IRRADIATORS AND CAMERAS AND/OR SENSORS

Final Rejection — §102, §103

Filed: Jun 24, 2024
Examiner: MESSMORE, JONATHAN R
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: Agnetix, Inc.
OA Round: 2 (Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 76% — above average (375 granted / 491 resolved; +18.4% vs TC avg)
Interview Lift: +9.3% (moderate) — among resolved cases with an interview
Avg Prosecution: 2y 11m typical timeline (40 currently pending)
Total Applications: 531 across all art units
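
For readers who want the arithmetic behind these cards, here is a minimal sketch assuming per-case records with granted and interview flags; the record shape and field names are hypothetical, not the product's actual schema:

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # allowed (375 of 491 here) vs. abandoned
    had_interview: bool  # at least one examiner interview on record

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Career allow rate: granted / resolved (375 / 491 ≈ 76.4%, shown as 76%)."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Percentage-point gap in allow rate between resolved cases with and
    without an interview (reported above as +9.3%)."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)
```

Note that the lift is a raw gap between subgroups, not a causal estimate; cases that reach an interview can differ systematically from those that do not.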

Statute-Specific Performance

Statute   Rate     vs TC Avg
§101      4.0%     -36.0%
§103      46.5%    +6.5%
§102      27.0%    -13.0%
§112      13.4%    -26.6%
Tech Center averages are estimates • Based on career data from 491 resolved cases
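
As a rough illustration of how per-statute figures like these can be tabulated, here is a minimal sketch assuming each office action is recorded as the set of statutes it raises; the sample records and tech-center averages below are placeholders, not this examiner's actual data:

```python
from collections import Counter

# Placeholder inputs: one set of cited statutes per office action.
EXAMINER_OAS = [{"103"}, {"102", "103"}, {"103", "112"}, {"101"}]
TC_AVG = {"101": 0.40, "102": 0.40, "103": 0.40, "112": 0.40}  # illustrative

def statute_rates(oas: list[set[str]]) -> dict[str, float]:
    """Share of this examiner's office actions raising each statute."""
    counts = Counter(s for oa in oas for s in oa)
    return {s: n / len(oas) for s, n in counts.items()}

def vs_tc_avg(rates: dict[str, float]) -> dict[str, float]:
    """Percentage-point delta against the tech-center average estimate."""
    return {s: rate - TC_AVG[s] for s, rate in rates.items()}
```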

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 24 January 2026 have been fully considered but they are not persuasive. Applicant argues the primary reference, Barbour, does not disclose “a reference condition library”, “a plurality of labeled feature sets corresponding to reference conditions”, and “wherein at least one reference labeled feature set of the plurality of labeled feature sets includes a plurality of reference values, each reference value of the plurality of reference values corresponding to a unique measurable condition of the at least two different measurable conditions.” Examiner respectfully disagrees and respectfully directs Applicant to the cited portions of Barbour below.

Examiner respectfully submits “a reference condition library” appears to be a broad term which may be considered data in an electronic format, where the data refers to a characteristic of some kind. Examiner respectfully directs Applicant’s attention to Barbour: ¶ [0126]: library routines… AI methodologies generate an output that can be used in segmentations, identification, and characterization… techniques and analytics… use vectors and surface information over scalar and point values, which discloses what may be considered a “reference condition library”.

Examiner respectfully submits “a plurality of labeled feature sets corresponding to reference conditions” appears broad and may be considered electronic data that relates in any way to a characteristic of a data set. Examiner respectfully directs Applicant’s attention to Barbour: ¶ [0093]: thus informing expected shape, deviation from shape, and estimation of surface roughness; ¶ [0095]: degree of change of curvature around a cluster of pixels; ¶ [0096]: surface roughness estimation can utilize polarization information; as well as the entirety of ¶ [0109], such as “the muscle conditions in surfacing whales”.

Examiner respectfully submits “labeled feature sets… including a plurality of reference values” appears broad and may be considered data with regard to references. Examiner respectfully directs Applicant to Barbour: ¶ [0047]: re-describe objects and scenes in their FOVs in terms of spatial phase data… the shape of the object, the type of material from which it is made, the orientation of the object relative to the observer, etc., affect the spatial phase of the EM radiation incident upon the SPI systems.

Applicant’s arguments, see Response to Office Action mailed 24 July 2025, filed 24 January 2026, with respect to Claim Rejections under 35 USC §112 have been fully considered and are persuasive. The Claim Rejection under 35 USC §112 of Claim 2 has been withdrawn.

Applicant’s arguments, see Response to Office Action mailed 24 July 2025, filed 24 January 2026, with respect to Claim Objections have been fully considered and are persuasive. The Claim Objections of Claims 2, 8, and 11 have been withdrawn.

Claim Rejections - 35 USC § 102

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claim(s) 11-16 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Barbour et al. (US 2023/0048725 A1).
Regarding Claim 11, Barbour discloses a multisensory imaging system, comprising: a spatial arrangement of sensors to sense [Barbour: ¶ [0049]: Additionally or alternatively, the EM radiation 104 may include EM energy that is projected onto the object 102 by an EM energy source (e.g., projected light 104b) and reflected off, emitted from surfaces of the object 102 or transmitted through the object 102], within a field of view of the spatial arrangement of sensors, a plurality of measurable conditions within the field of view [Barbour: ¶ [0109]: The example applications discussed above are merely some of a large, broad set of application areas for the SPI systems 100, 200. Other example of applications for the SPI systems 100, 200 include metrology, inspection, maintenance, navigation, facial recognition, security, situational awareness, entertainment, 3D printing, autonomy, healthcare, wound care, tracking, ranging to name a few. For example, the SPI systems 100, 200 may be used for: astronomy; research; nuclear analysis; material integrity analysis (e.g. to detect cracks and other material defects or anomalies); foreign object detection (e.g., to detect foreign objects that should not exist in specific materials); unique pattern identification (e.g., fingerprint matching or to identify other unique patterns on objects or areas of interest); material wear and tear (e.g., to analyze material surface condition); materials discrimination (e.g., to determine material properties and verification of finite element analysis); optical 3D dimensional deformation detection (e.g., to monitor real-time vehicle roof deformation that occurs in a high impact collision); bruise damage measurement and analysis, ice detection (e.g., to detect ice at various distances over various weather conditions); extended range 3D facial recognition; assessment of body conditions (e.g., to visualize fundamental aspects of muscle conditions in surfacing whales); identification of infrastructure states (e.g., to identify degradation of infrastructure components such as pipes, bridges, and rails); 3D volumetric body motion analysis (e.g., to map the trajectory of areas of the body without tagging); quality control and inspection of aircraft parts (e.g., to determine defects, wear and tear of parts, and preventative maintenance); determining angle of incidence on missile targets (e.g., to accurately determine measured difference between weapon body axes and the target axes of impact); scattering media visualization (e.g., to image under poor environmental conditions such as fog and haze); terrain navigation of unmanned vehicles (e.g., in complex terrain and urban environments where access to GPS and communications may be limited); face muscle tracking (e.g., for facial gesture recognition and tracking); camouflage discrimination (e.g., to discern camouflaged targets from scene surroundings); metal loss calculation (e.g., where a region of interest is identified, area and depth calculations are made, and comparison with ground truth results are within 98% of each other); corrosion blister calculations; surface profile calculations; etc.], the spatial arrangement of sensors including a one-dimensional (1D), two-dimensional (2D) [Barbour: ¶ [0061]: The imaging wafer 202 includes an array of integrated image sensors 204. The image sensors 204 can be mixed or similar imager types, such as visible, NIR, Si SWIR, SWIR, MWIR, LWIR, UV, THz, X-ray, depth, spectral (Single, Multi, hyper), etc. As described in further detail below in FIGS. 4A and 4B, each integrated image sensor 204 includes an EM detector (e.g., including an array of radiation-sensing pixels) and a polarization structure. In some implementations, each integrated image sensor 204 can include additional layers, examples being color, multispectral, hyperspectral, polarization, lenslets, multiple types of other depth pixels or imagers, etc. In some implementations, the polarization structure is disposed over (e.g., placed on) the array of radiation-sensing pixels, while in other implementations (e.g., backside illuminated image sensors), the polarization structure is integrated into radiation-sensing pixels (e.g., at the anode or cathode level of the radiation-sensing pixels)], or three-dimensional array of sensor nodes [Barbour: ¶ [0071]: Forming the polarization sensitive photodiode yields the advantage of increasing the effective angular signal from the surface and the accuracy of the 3D measurements due to the elimination of noise in the SPI sensor], at least some sensor nodes of the plurality of sensor nodes including at least one sensor such that the at least some sensor nodes sense at least two different measurable conditions of the plurality of measurable conditions [Barbour: ¶ [0049]]; and an image processor, coupled to the spatial arrangement of sensors, to process a plurality of mono-sensory images respectively corresponding to the at least two different measurable conditions [Barbour: ¶ [0098]: For example, FIG. 6 shows an example of images obtained from EM radiation passing through a gradient layer. Specifically, the images shown in FIG. 6 depict the infra-red, polarized, and fused images of a land mine (the object of interest) buried in 4 to 6 inches of soil], wherein: each mono-sensory image of the plurality of mono-sensory images includes a plurality of pixels collectively representing a unique measurable condition of the at least two measurable conditions [Barbour: ¶ [0098]: the normal to the surface and the frequency of the spectrum are captured for each pixel in the SPI systems 100, 200, thus creating a frequency distribution as well as the normal distribution]; and each pixel of the plurality of pixels is digitally represented by: pixel coordinates representing a spatial position in the field of view at which the unique measurable condition of the at least two different measurable conditions is sensed by the at least one sensor [Barbour: ¶ [0138]]; and a measurement value representing the unique measurable condition of the at least two different measurable conditions [Barbour: ¶ [0047]; and ¶ [0138]], at the spatial position in the field of view, wherein the image processor is configured to process the plurality of mono-sensory images, based at least in part on a reference condition library [Barbour: ¶ [0126]] comprising a plurality of labeled feature sets corresponding to reference conditions [Barbour: ¶ [0093]-[0096]; and ¶ [0109]], to estimate or determine at least one environmental condition at respective spatial positions in the field of view [Barbour: ¶ [0109]], wherein at least one labeled feature set of the plurality of labeled feature sets includes a plurality of reference values, each reference value of the plurality of reference values corresponding to a unique measurable condition of the at least two different measurable conditions [Barbour: ¶ [0047]].

Regarding Claim 12, Barbour discloses all the limitations of Claim 11, and is analyzed as previously discussed with respect to that claim.
Furthermore, Barbour discloses wherein the at least one environmental condition estimated or determined by the image processor at the respective spatial positions in the field of view include at least one of: one or more states or conditions of one or more objects at the respective spatial positions [Barbour: ¶ [0100]]; identification of substances or compounds present in the one or more objects at the respective spatial positions [Barbour: ¶ [0106]]; or one or more ambient conditions at the respective spatial positions.

Regarding Claim 13, Barbour discloses all the limitations of Claim 12, and is analyzed as previously discussed with respect to that claim. Furthermore, Barbour discloses wherein respective ones of the one or more objects at the respective spatial positions include at least one of: a plurality of plants; a single plant; or a particular part of a plant [Barbour: ¶ [0108]].

Regarding Claim 14, Barbour discloses all the limitations of Claim 11, and is analyzed as previously discussed with respect to that claim. Furthermore, Barbour discloses wherein the at least two measurable conditions include at least two of: visible radiation; near infrared radiation; infrared radiation [Barbour: ¶ [0049]]; air temperature; relative humidity; carbon dioxide; or distance.

Regarding Claim 15, Barbour discloses all the limitations of Claim 14, and is analyzed as previously discussed with respect to that claim. Furthermore, Barbour discloses wherein the spatial arrangement of sensors includes the two-dimensional (2D) array of sensor nodes [Barbour: ¶ [0108]].

Regarding Claim 16, Barbour discloses all the limitations of Claim 14, and is analyzed as previously discussed with respect to that claim. Furthermore, Barbour discloses wherein the spatial arrangement of sensors includes the three-dimensional (3D) array of sensor nodes [Barbour: ¶ [0014]; and ¶ [0041]].

Claim Rejections - 35 USC § 103

Claim(s) 2-5, 7-8, and 17-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Barbour et al. (US 2023/0048725 A1) in view of Ortiz et al. (US 2020/0228753 A1).

Regarding Claims 2, 8, and 17, Barbour discloses a method using an imaging system, comprising: irradiate at least one object with the source radiation having the different irradiation wavelengths [Barbour: ¶ [0049]: Additionally or alternatively, the EM radiation 104 may include EM energy that is projected onto the object 102 by an EM energy source (e.g., projected light 104b) and reflected off, emitted from surfaces of the object 102 or transmitted through the object 102]; at least one sensor to sense, within a field of view of the at least one sensor, reflected or emitted radiation reflected or emitted by the at least one object in response to irradiation of the at least one object by the source radiation having the different irradiation wavelengths, the at least one sensor generating a plurality of narrowband images respectively corresponding to the different irradiation wavelengths [Barbour: ¶ [0047]: The SPI systems 100, 200 include image sensors which function as shape-based sensors that are configured to passively capture spatial phase and radiometric information of the EM radiation that is collected by the SPI systems 100, 200. In a general aspect, the SPI systems 100, 200 are sensitive to spatial phase of the EM radiation incident upon it. The SPI systems 100, 200 re-describe objects and scenes in their FOVs in terms of spatial phase data. In particular, the spatial phase of EM radiation emanating from the surfaces of objects and scenes, whether it is emitted, transmitted, or reflected, has a measurable spatial phase. Thus, the shape of the object, the type of material from which it is made, the orientation of the object relative to the observer, etc., affect the spatial phase of the EM radiation incident upon the SPI systems 100, 200. As a result, each feature of the object 102 has a distinct spatial phase signature. Consequently, the EM radiation incident upon the SPI systems 100, 200 contains information indicative of the interaction of EM energy with objects and scenes in their FOVs], wherein: each narrowband image of the plurality of narrowband images includes a plurality of pixels [Barbour: ¶ [0046]: In a general aspect, the SPI systems 100, 200 are lensed 3D data acquisition sensor and analytics platforms that are configured to capture 3D data at the pixel level and across the electromagnetic (EM) spectrum]; and each pixel of the plurality of pixels is digitally represented by: pixel coordinates representing a spatial position in the field of view at which the at least one sensor sensed the reflected or emitted radiation [Barbour: ¶ [0138]: The shape operating system 1900 includes a storage engine 1902. In some examples, the input to the shape operating system 1900 includes the digital forms of the rich data set (e.g., CMOS images and pXSurface and pXShape pixel surface). The input data is received by the storage engine 1902 and stored in a record format with an associated suffix (e.g., “.pXSx”). In addition to the actual surface image (which, for example, includes the normal to the surface and the frequency of light for each pixel), the following can also be stored by the storage engine 1902: the date/time of acquisition; location in a specified coordinate system; sensor type; and other relevant meta data appropriate to the acquired signal]; and a radiation value representing an amount of the reflected or emitted radiation sensed by the at least one sensor at the spatial position in the field of view [Barbour: ¶ [0047]; and ¶ [0138]]; and an image processor to process the plurality of narrowband images [Barbour: ¶ [0068]: Therefore, the SPI systems 100, 200 may be configured to single or multiple wavelengths or wavebands (e.g., including various separations of specular and diffuse bands) to determine the various features of the object 102], based at least in part on a reference condition library [Barbour: ¶ [0126]: In some examples, the SPI systems 100, 200 are configured to perform hybrid AI analytics that are a combination of first-generation AI methodologies, second-generation AI methodologies, and third-generation AI methodologies. In some examples, first-generation AI methodologies can include existing algorithms, library routines, and first principle analytics gleaned from physics] comprising a plurality of labeled feature sets [Barbour: ¶ [0044]: The methods and systems presented here are based on directionality of all light sources (e.g., global light map), yielding a rich set of attributes including angle, edges, slope, rates of slope, objects, and sub-sections of objects, and another attribute. Segmentation of a scene is performed by examining similar or dissimilar values of these attributes providing an ability to examine surfaces and sub-surfaces based on these attribute sets such as angles, index of refraction, etc. in addition to the intensity and RGB values] corresponding to reference conditions of the at least one object [Barbour: ¶ [0090]: With each pixel now being able to represent various attributes of the object 102 (e.g., as expressed through the first- and second-order primitives), the edge processors 108, 212 can cluster pixels having similar attributes into panels and can segment these panels from other dissimilar pixels or panels. In some implementations, a pixel cluster is identified by clustering pixels having attribute values in a predetermined interval of values. For example, pixels having attribute values within a 10 percent variation from a pre-determined value or a mean value, etc. may be clustered in some instances. This specific pixel cluster can be used to define the surface of an object by grouping multiple pixels with attribute values in a predefined interval. The clustered and segmented parameters form a family of representations called pXSurface and pXShape, where “X” defines the attribute type. In the example discussed above, the attribute set associated with the normals to the surfaces of the object 102 and the corresponding frequency broadband distribution captured for each pixel is denoted as a pNSurface and pNShape description of the object 102, where “N” denotes the surface normal vectors or orientations], to estimate or determine at least one actual condition of the at least one object at respective spatial positions in the field of view of the at least one sensor [Barbour: ¶ [0109]: The example applications discussed above are merely some of a large, broad set of application areas for the SPI systems 100, 200. Other example of applications for the SPI systems 100, 200 include metrology, inspection, maintenance, navigation, facial recognition, security, situational awareness, entertainment, 3D printing, autonomy, healthcare, wound care, tracking, ranging to name a few. For example, the SPI systems 100, 200 may be used for: astronomy; research; nuclear analysis; material integrity analysis (e.g. to detect cracks and other material defects or anomalies); foreign object detection (e.g., to detect foreign objects that should not exist in specific materials); unique pattern identification (e.g., fingerprint matching or to identify other unique patterns on objects or areas of interest); material wear and tear (e.g., to analyze material surface condition); materials discrimination (e.g., to determine material properties and verification of finite element analysis); optical 3D dimensional deformation detection (e.g., to monitor real-time vehicle roof deformation that occurs in a high impact collision); bruise damage measurement and analysis, ice detection (e.g., to detect ice at various distances over various weather conditions); extended range 3D facial recognition; assessment of body conditions (e.g., to visualize fundamental aspects of muscle conditions in surfacing whales); identification of infrastructure states (e.g., to identify degradation of infrastructure components such as pipes, bridges, and rails); 3D volumetric body motion analysis (e.g., to map the trajectory of areas of the body without tagging); quality control and inspection of aircraft parts (e.g., to determine defects, wear and tear of parts, and preventative maintenance); determining angle of incidence on missile targets (e.g., to accurately determine measured difference between weapon body axes and the target axes of impact); scattering media visualization (e.g., to image under poor environmental conditions such as fog and haze); terrain navigation of unmanned vehicles (e.g., in complex terrain and urban environments where access to GPS and communications may be limited); face muscle tracking (e.g., for facial gesture recognition and tracking); camouflage discrimination (e.g., to discern camouflaged targets from scene surroundings); metal loss calculation (e.g., where a region of interest is identified, area and depth calculations are made, and comparison with ground truth results are within 98% of each other); corrosion blister calculations; surface profile calculations; etc.], wherein at least one labeled feature set of the plurality of labeled feature sets includes a plurality of reference values [Barbour: ¶ [0047]], each reference value of the plurality of reference values corresponding to a unique irradiation wavelength of the different irradiation wavelengths [Barbour: ¶ [0047]].

Barbour may not explicitly disclose a plurality of narrowband irradiators to respectively emit source radiation having different irradiation wavelengths; a flash controller to sequentially control the plurality of narrowband irradiators to successively irradiate at least one object with the source radiation having the different irradiation wavelengths. However, Ortiz discloses a plurality of narrowband irradiators to respectively emit source radiation having different irradiation wavelengths [Ortiz: ¶ [0013]: FIG. 1 schematically shows a camera system 100 configured to automatically monitor a retail shopping area in order to identify interactions between human subjects and objects for purchase in the retail shopping area. The camera system 100 comprises one or more cameras 102. In one example, the camera(s) 102 may be ambient invariant depth+multi-spectral cameras. Each camera 102 includes a sensor array 104, an infrared (IR) illuminator 106, and a plurality of spectral illuminators 108. The IR illuminator 106 may be configured to emit active IR light in an IR light sub-band. Each spectral illuminator 108 may be configured to emit active spectral light in a different spectral light sub-band]; a flash controller to sequentially control the plurality of narrowband irradiators to successively irradiate at least one object with the source radiation having the different irradiation wavelengths [Ortiz: ¶ [0074]: In some examples, the spectral controller machine 630 may change (e.g., tune) the transmission wavelength of the tunable optical filter 624 to sequentially select multiple narrow sub-bands that are within the emission band or spectrum of the blue spectral illuminator to acquire spectral data for the different narrow sub-bands]; at least one sensor to sense, within a field of view of the at least one sensor, reflected or emitted radiation reflected or emitted by the at least one object in response to irradiation of the at least one object by the radiation having the different irradiation wavelengths [Ortiz: ¶ [0013]], the at least one sensor generating a plurality of narrowband images respectively corresponding to the different irradiation wavelengths [Ortiz: ¶ [0013]], wherein: each narrowband image of the plurality of narrowband images includes a plurality of pixels [Ortiz: ¶ [0015]: The camera system 100 may be configured to computer analyze the ambient-light images 112 to identify an above-threshold motion. For example, a camera 102 may operate in the low power mode when below-threshold motion or no motion is detected in a field of view of the camera. In some implementations, each camera 102 may include an on-board motion detection machine 110 configured to detect the above-threshold motion in the ambient-light images 112. For example, the motion detection machine 110 may be configured to perform a comparison of different ambient-light images acquired at different times (e.g., a sequence of ambient-light images) to identify an above-threshold motion. In some implementations, the threshold for identifying motion may correspond to a number of pixels changing from frame to frame. For example, above threshold motion may be triggered if contiguous pixels occupying at least 3% of a field of view change by more than 5% from frame to frame. However, this is just an example, and other parameters/threshold may be used. In some examples, above-threshold motion may correspond to a human subject entering or moving in a field of view of the camera 112]; and each pixel of the plurality of pixels is digitally represented by: pixel coordinates representing a spatial position in the field of view at which the at least one sensor sensed the reflected or emitted radiation [Ortiz: ¶ [0087]: The spectral controller machine 630 may be configured to normalize the spectral light measurements in the different spectral light sub-bands based on one or more of the measured depth and the surface normal. This provides a position—and ambient light-invariant spectral signature of an imaged scene or subject]; and a radiation value representing an amount of the reflected or emitted radiation sensed by the at least one sensor at the spatial position in the field of view [Ortiz: ¶ [0086]: The term ‘spectral light image’ refers to a matrix of pixels registered to corresponding regions (X.sub.i, Y.sub.i) of an imaged scene, with a spectral value (SV) indicating, for each pixel, the spectral signature of the corresponding region in the particular spectral light sub-band. For acquiring the spectral light images in each of the sub-bands (e.g., for a multi-spectral light image), the spectral controller machine 130 is configured to determine a spectral value for each of the differential sensors based on the depth value and a differential measurement of active spectral light and ambient light for the differential sensor].

It would have been obvious to one having ordinary skill in the art before the effective filing date to combine the light source of Ortiz with the image processing of Barbour in order to provide controlled illumination, thereby reducing variables and improving analysis accuracy.

Regarding Claim 3, Barbour in view of Ortiz disclose(s) all the limitations of Claim 2, and is/are analyzed as previously discussed with respect to that claim. Furthermore, Barbour in view of Ortiz discloses wherein at least some radiation values, in at least one narrowband image corresponding to one of the different irradiation wavelengths, are at a same wavelength as the one of the different irradiation wavelengths [Barbour: ¶ [0047]].

Regarding Claim 4, Barbour in view of Ortiz disclose(s) all the limitations of Claim 2, and is/are analyzed as previously discussed with respect to that claim. Furthermore, Barbour in view of Ortiz discloses wherein at least some radiation values, in at least one narrowband image corresponding to one of the different irradiation wavelengths, are at a longer wavelength than the one of the different irradiation wavelengths [Barbour: ¶ [0068]].

Regarding Claim 5, Barbour in view of Ortiz disclose(s) all the limitations of Claim 2, and is/are analyzed as previously discussed with respect to that claim. Furthermore, Barbour in view of Ortiz discloses further comprising a thermal sensor to sense thermal radiation of the at least one object within the field of view [Barbour: ¶ [0145]: The output of the analytics engine 1910 can enable multiple data views, which include, but are not limited to, the following:… LWIR data (which provide thermal information contributing to analysis and calculations)], and to generate a thermal image having a plurality of thermal pixels, each thermal pixel in the plurality of thermal pixels digitally represented by: pixel coordinates representing a spatial position in the field of view at which the thermal sensor sensed the thermal radiation [Barbour: ¶ [0138]: In addition to the actual surface image (which, for example, includes the normal to the surface and the frequency of light for each pixel), the following can also be stored by the storage engine 1902: the date/time of acquisition; location in a specified coordinate system; sensor type; and other relevant meta data appropriate to the acquired signal]; and a thermal radiation value representing an amount of the thermal radiation sensed by the thermal sensor at the spatial position in the field of view [Barbour: ¶ [0145]].

Regarding Claim 7, Barbour in view of Ortiz disclose(s) all the limitations of Claim 2, and is/are analyzed as previously discussed with respect to that claim. Furthermore, Barbour in view of Ortiz discloses wherein the at least one sensor includes at least one camera responsive to a spectrum of radiation including ultraviolet, visible, and infrared radiation [Barbour: ¶ [0049]; and Ortiz: ¶ [0060]: Such operation allows for the same sensor array to be used to measure active light across a broad spectrum including ultraviolet, visible, NIR, and IR light].
Regarding Claim 10, Barbour in view of Ortiz disclose(s) all the limitations of Claim 8, and is/are analyzed as previously discussed with respect to that claim. Furthermore, Barbour in view of Ortiz discloses wherein the at least one object is at least one plant [Barbour: ¶ [0108]: Other sensor data & analytics can also be included such as passive hyperspectral (active: laser & radar), as well as analytics from agriculture sector: terrain solutions, plant identifications, etc.; wherein the specific object being imaged does not appear to hold patentable weight as it would be an obvious variant].

Regarding Claim 18, Barbour in view of Ortiz disclose(s) all the limitations of Claim 17, and is/are analyzed as previously discussed with respect to that claim. Furthermore, Barbour in view of Ortiz discloses wherein: the spatial arrangement of sensors further includes at least one thermal sensor to sense thermal radiation of the at least one object within the field of view; and the plurality of mono-sensory images includes at least one thermal image [Barbour: ¶ [0145]].

Regarding Claim 19, Barbour in view of Ortiz disclose(s) all the limitations of Claim 18, and is/are analyzed as previously discussed with respect to that claim. Furthermore, Barbour in view of Ortiz discloses wherein the at least two measurable conditions include visible radiation, infrared radiation and at least one of: near infrared radiation; air temperature; relative humidity; carbon dioxide; or distance [Barbour: ¶ [0049] and Ortiz: ¶ [0060]].

Regarding Claim 20, Barbour in view of Ortiz disclose(s) all the limitations of Claim 19, and is/are analyzed as previously discussed with respect to that claim. Furthermore, Barbour in view of Ortiz discloses wherein the spatial arrangement of sensors includes the two-dimensional (2D) array of sensor nodes [Barbour: ¶ [0108]].

Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Barbour in view of Ortiz as applied to claim 5 above, and further in view of Naimi et al. (US 2011/0243409 A1).

Regarding Claim 6, Barbour in view of Ortiz disclose(s) all the limitations of Claim 5, and is/are analyzed as previously discussed with respect to that claim. Furthermore, Barbour in view of Ortiz discloses wherein: the image processor processes the thermal image, based at least in part on the reference condition library, along with the plurality of narrowband images, to estimate or determine the at least one actual condition of the at least one object at respective spatial positions in the field of view; the reference condition library includes at least one first labeled feature set of the plurality of labeled feature sets that includes a first plurality of reference values [Barbour: ¶ [0047]-[0049]; and ¶ [0126]]. Barbour in view of Ortiz may not explicitly disclose at least one reference value of the first plurality of reference values corresponds to a thermal radiation reference value. However, Naimi discloses at least one reference value of the first plurality of reference values corresponds to a thermal radiation reference value [Naimi: ¶ [0140]: The method can also compare the thermal signature to a reference thermal signature. The reference thermal signature generally corresponds to a reference thermospatial representation, which can be obtained from a library or can be constructed by the method of the present embodiments].
It would have been obvious to one having ordinary skill in the art before the effective filing date to combine the thermal image of Naimi with the multi-spectral image processing of Barbour in view of Ortiz in order to provide more spectra to improve accuracy.

Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Barbour as applied to claim 8 above, and further in view of Dai et al. (US 2014/0078277 A1).

Regarding Claim 9, Barbour disclose(s) all the limitations of Claim 8, and is/are analyzed as previously discussed with respect to that claim. Barbour may not explicitly disclose further comprising deactivating illumination sources in an environment of the at least one object such that the environment of the at least one object is dark prior to sequentially irradiating the at least one object and during the sensing of the reflected or emitted radiation. However, Dai discloses further comprising deactivating illumination sources in an environment of the at least one object such that the environment of the at least one object is dark prior to sequentially irradiating the at least one object and during the sensing of the reflected or emitted radiation [Dai: ¶ [0055]: controlling the light source to provide strobed light. The strobed light may substantially illuminate the otherwise dark environment only within sequential vertical blanking periods, but not substantially illuminate the dark environment within video image frames during readout of rows of pixels between the vertical blanking periods]. It would have been obvious to one having ordinary skill in the art before the effective filing date to combine Dai's illumination of an otherwise dark environment in order to provide the most contrast available, improving accuracy.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN R MESSMORE whose telephone number is (571) 272-2773. The examiner can normally be reached Monday-Friday, 9-5 EST/EDT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Kelley, can be reached at 571-272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JONATHAN R MESSMORE/
Primary Examiner, Art Unit 2482
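
A note to make the disputed claim structure concrete: claim 11 recites each pixel digitally represented by pixel coordinates plus a measurement value, and a "reference condition library" of labeled feature sets, each holding one reference value per measurable condition. The sketch below is a minimal Python illustration of that structure, with a toy nearest-reference match standing in for the claimed estimation step; all names are hypothetical, and this is neither Agnetix's claimed implementation nor Barbour's disclosure.

```python
from dataclasses import dataclass

@dataclass
class Pixel:
    x: int          # pixel coordinates: spatial position in the field of view
    y: int
    value: float    # measurement value for one measurable condition

@dataclass
class MonoSensoryImage:
    condition: str          # e.g., "near infrared radiation" or "air temperature"
    pixels: list[Pixel]

@dataclass
class LabeledFeatureSet:
    label: str                          # a reference condition (hypothetical example: "leaf water stress")
    reference_values: dict[str, float]  # one reference value per measurable condition

# The claimed "reference condition library": label -> labeled feature set.
ReferenceConditionLibrary = dict[str, LabeledFeatureSet]

def estimate_condition(measurements: dict[str, float],
                       library: ReferenceConditionLibrary) -> str:
    """Toy stand-in for the image processor: pick the reference condition
    whose reference values sit closest to the values sensed at one position."""
    def dist(fs: LabeledFeatureSet) -> float:
        shared = measurements.keys() & fs.reference_values.keys()
        return sum((measurements[k] - fs.reference_values[k]) ** 2 for k in shared)
    return min(library, key=lambda label: dist(library[label]))
```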

Prosecution Timeline

Jun 24, 2024 — Application Filed
Jul 22, 2025 — Non-Final Rejection (§102, §103)
Jan 24, 2026 — Response Filed
Feb 17, 2026 — Final Rejection (§102, §103)
Mar 06, 2026 — Applicant Interview (Telephonic)
Mar 06, 2026 — Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598306 — PARSING FRIENDLY AND ERROR RESILIENT MERGE FLAG CODING
2y 5m to grant • Granted Apr 07, 2026

Patent 12587680 — Attribute Layers And Signaling In Point Cloud Coding
2y 5m to grant • Granted Mar 24, 2026

Patent 12581073 — VIDEO ENCODING AND DECODING
2y 5m to grant • Granted Mar 17, 2026

Patent 12556683 — INTRA BLOCK COPY WITH TEMPLATE MATCHING FOR VIDEO ENCODING AND DECODING
2y 5m to grant • Granted Feb 17, 2026

Patent 12556663 — GAMING TABLE EVENTS DETECTING AND PROCESSING
2y 5m to grant • Granted Feb 17, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 86% (+9.3%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate

Based on 491 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month