DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This action is responsive to the request for continued examination (RCE), amendments and remarks received 14 January 2026. Claims 1 - 20 are currently pending.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 14 January 2026 has been entered.
Claim Objections
The objection to claim 20, due to a minor informality, is hereby withdrawn in view of the amendments and remarks received 14 January 2026.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1 - 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 3, 6, 10 - 13, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Gurzoni, Jr. et al. U.S. Publication No. 2021/0133443 A1 in view of Gowda et al. U.S. Patent No. 11,640,692.
- With regards to claim 1, Gurzoni, Jr. et al. disclose a computer-implemented method (Gurzoni, Jr. et al., Figs. 1 - 5, Pg. 2 ¶ 0025 - Pg. 3 ¶ 0031, Pg. 6 ¶ 0058 - 0066) comprising: capturing a point cloud using a stereo camera; (Gurzoni et al., Abstract, Figs. 1, 2, 4 & 5, Pg. 1 ¶ 0005 and 0010, Pg. 3 ¶ 0033 - 0034, Pg. 4 ¶ 0036 and 0040 - 0041, Pg. 7 ¶ 0069 - 0070 and 0073 - 0077 [“Alternative range sensors have also been used including ultrasound, structured light, and stereo vision”, “collecting, with the inspection system, area data comprising laser-scan data (e.g., LiDAR scan data, stereoscopic cameras, etc.)”, “a stereoscopic camera” and “laser scan data 120 may comprise range data collected from one or more 2D laser scanners, 3D scanners, other type of LiDAR (“Light Detection and Ranging”) scanners, or other type of range measurement devices. The range data may comprise 3D coordinate data (x, y, z coordinates) relative to the 3D scanner, or some other suitable point of origin”]) determining a region of interest using the point cloud; (Gurzoni, Jr. et al., Abstract, Fig. 11, Pg. 7 ¶ 0076 - Pg. 8 ¶ 0078, Pg. 9 ¶ 0096, Pg. 10 ¶ 0099 - 0102) differentiating, using a semantic detector, material components from a two-dimensional image; (Gurzoni et al., Figs. 5, 7 & 11, Pg. 7 ¶ 0068, Pg. 7 ¶ 0073 - Pg. 8 ¶ 0078, Pg. 9 ¶ 0087, 0090 and 0096) producing a filtered point cloud, after the capture of the point cloud, by (i) applying a filter to the point cloud; (Gurzoni et al., Fig. 11, Pg. 7 ¶ 0076 - Pg. 8 ¶ 0078, Pg. 9 ¶ 0096, Pg. 10 ¶ 0099 - 0102) and (ii) excluding points from the point cloud using the region of interest; (Gurzoni et al., Fig. 11, Pg. 5 ¶ 0045 - 0047, Pg. 7 ¶ 0068 - 0070, Pg. 7 ¶ 0076 - Pg. 8 ¶ 0078, Pg. 9 ¶ 0094 - 0097, Pg. 10 ¶ 0099 - 0102) and generating a material density estimate, for the material components, using the filtered point cloud. (Gurzoni et al., Figs. 9 & 10, Pg. 1 ¶ 0006, Pg. 5 ¶ 0047 - 0049, Pg. 7 ¶ 0076 - Pg. 8 ¶ 0078, Pg. 
8 ¶ 0085 - 0086, Pg. 10 ¶ 0098 - 0102 [“map 114 may be presented on a graphical user interface including graphical representations (e.g., 2D and/or 3D) graphical representations, indicating within the map 114 and/or modeling the various properties within the surveyed plant area. The various properties may include growth stage, estimated crop yield, estimated plant health, disease estimate, nutrient deficiency estimate, fruit quantity, flower quantity, fruit size, fruit diameter, tree or vine height, trunk or stem diameter, trunk or stem circumference, tree or vine volume, canopy volume, color, leaf color, leaf density… or other suitable properties” and “datacenter 108 uses the segmented image data, 3D point cloud data, and KB data sources 190 to generate analysis results 126, 420. The datacenter 108 combines the processed and segmented image data and the 3D point cloud data to generate a 3D model of each plant 422. The datacenter 108 combines the geo-reference data 124, the 3D model of each plant, and the analysis results 126 to generate a 3D map 114 of the plant area 424. The dashboard 110 displays the 3D map 114 and the analysis results 126 to the user 426.”]) Gurzoni, Jr. et al. fail to disclose expressly using a semantic detector to produce a filter; and applying the filter [produced using the semantic detector] to the point cloud. Pertaining to analogous art, Gowda et al. disclose capturing a point cloud using a stereo camera; (Gowda et al., Figs. 4 & 6, Col. 1 Lines 51 - 59, Col. 2 Lines 17 - 29 and 58 - 60, Col. 6 Lines 24 - 34, Col. 9 Lines 9 - 19, Col. 10 Lines 4 - 16, Col. 10 Line 54 - Col. 11 Line 12, Col. 13 Line 47 - Col. 14 Line 6) differentiating, using a semantic detector, material components from a two-dimensional image to produce a filter; (Gowda et al., Abstract, Figs. 4, 5B & 6, Col. 2 Lines 1 - 16 and 30 - 39, Col. 3 Lines 13 - 31, Col. 6 Line 35 - Col. 7 Line 18, Col. 9 Line 20 - Col. 
10 Line 16) and producing a filtered point cloud, after the capture of the point cloud, by (i) applying the filter to the point cloud. (Gowda et al., Figs. 5B & 6, Col. 6 Line 50 - Col. 7 Line 31, Col. 9 Lines 9 - 19, Col. 9 Line 51 - Col. 10 Line 16, Col. 13 Line 47 - Col. 14 Line 6, Col. 15 Lines 15 - 58) Gurzoni, Jr. et al. and Gowda et al. are combinable because they are both directed towards systems and methods that acquire, analyze, filter and segment point clouds. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Gurzoni, Jr. et al. with the teachings of Gowda et al. This modification would have been prompted in order to enhance the base device of Gurzoni, Jr. et al. with the well-known and applicable technique Gowda et al. applied to a similar device. Applying a filter produced using the semantic detector to the point cloud, as taught by Gowda et al., would enhance the base device of Gurzoni, Jr. et al. by improving its ability to accurately, reliably and efficiently identify which points of the point cloud(s) belong to its various regions of interest and are relevant for determining its various vegetation statistics and/or properties since extraneous and/or unimportant points of the point cloud(s) would be removed so as to reduce the number of points that need to be evaluated, analyzed and/or processed to identify the points that belong to its various regions of interest and/or are relevant for determining its various vegetation statistics and/or properties and thereby enhancing its overall operational speed and efficiency. Furthermore, this modification would enhance the base device of Gurzoni, Jr. et al. by improving its ability to generate the 3D map of the plant area and enhancing the quality of the 3D map of the plant area since extraneous and/or unimportant points of the point cloud(s) would not be included in the 3D map of the plant area. 
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a filter produced using the semantic detector would be applied to the point cloud when producing the filtered point cloud so as to improve the ability of the base device of Gurzoni, Jr. et al. to accurately, reliably and efficiently identify which points of the point cloud(s) belong to its various regions of interest and are relevant for determining its various vegetation statistics and/or properties by removing extraneous and/or unimportant points of the point cloud(s) and preventing them from undergoing any further processing. Therefore, it would have been obvious to combine Gurzoni, Jr. et al. with Gowda et al. to obtain the invention as specified in claim 1.
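For illustration only, the filtering technique at issue in claim 1 — a filter produced by a 2D semantic detector and applied to a co-registered stereo point cloud — might be sketched as follows. The function name, the per-point pixel-of-origin bookkeeping, and the toy data are assumptions for the sketch, not taken from Gurzoni, Jr. et al. or Gowda et al.:

```python
def filter_point_cloud(points, pixel_of_origin, semantic_mask):
    """Apply a 2D semantic filter to a 3D point cloud.

    points          : list of (x, y, z) tuples from the stereo camera
    pixel_of_origin : list of (row, col) image pixels, one per point
                      (each stereo point derives from an image pixel)
    semantic_mask   : 2D list of booleans produced by the semantic
                      detector; True where the pixel was labeled a
                      material component
    """
    # Keep only points whose source pixel passed the semantic filter.
    return [p for p, (r, c) in zip(points, pixel_of_origin)
            if semantic_mask[r][c]]

# Toy example: only the top row of a 2x2 image is labeled as material.
mask = [[True, True], [False, False]]
pts = [(0.0, 0.0, 1.0), (1.0, 0.0, 2.0), (0.0, 1.0, 3.0)]
idx = [(0, 0), (0, 1), (1, 0)]
filtered = filter_point_cloud(pts, idx, mask)  # drops the third point
```

Extraneous points are removed before any downstream processing, which is the efficiency rationale relied upon above.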
- With regards to claim 3, Gurzoni, Jr. et al. in view of Gowda et al. disclose the computer-implemented method of claim 1, wherein: the two-dimensional image is captured by the stereo camera. (Gurzoni et al., Abstract, Figs. 1, 2, 4 & 5, Pg. 1 ¶ 0005 and 0010, Pg. 3 ¶ 0033 - 0034, Pg. 4 ¶ 0036 and 0040 - 0041, Pg. 7 ¶ 0069 - 0070 and 0073 - 0077)
- With regards to claim 6, Gurzoni, Jr. et al. in view of Gowda et al. disclose the computer-implemented method of claim 1, further comprising: combining the material density estimate with a localization to create a density map. (Gurzoni, Jr. et al., Abstract, Figs. 1, 2, 4, 5 & 9 - 11, Pg. 1 ¶ 0006 and 0008 - 0010, Pg. 3 ¶ 0033 - 0034, Pg. 5 ¶ 0047 - 0049, Pg. 7 ¶ 0068 - 0070, Pg. 8 ¶ 0085 - 0086, Pg. 9 ¶ 0096 - Pg. 10 ¶ 0102)
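For illustration only, the combining step recited in claim 6 — merging a material density estimate with a localization to create a density map — might be realized as accumulation into a spatially keyed map. The grid-cell keying, the averaging, and all names here are assumptions for the sketch, not taken from the references:

```python
def update_density_map(density_map, localization, density_estimate,
                       cell_size=1.0):
    """Accumulate per-capture density estimates into a spatial map.

    density_map      : dict mapping grid cell -> list of estimates
    localization     : (x, y) position of the capture (e.g., from
                      GPS or odometry)
    density_estimate : scalar material density for that capture
    """
    cell = (int(localization[0] // cell_size),
            int(localization[1] // cell_size))
    density_map.setdefault(cell, []).append(density_estimate)
    return density_map

dmap = {}
update_density_map(dmap, (0.4, 0.7), 12.5)  # two captures, same cell
update_density_map(dmap, (0.9, 0.2), 11.5)
update_density_map(dmap, (3.1, 0.0), 4.0)   # a different cell
# Reduce each cell to its mean density to form the density map.
avg = {cell: sum(v) / len(v) for cell, v in dmap.items()}
```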
- With regards to claim 10, Gurzoni, Jr. et al. in view of Gowda et al. disclose the computer-implemented method of claim 1, wherein: the material components are foliage components; (Gurzoni, Jr. et al., Pg. 1 ¶ 0006, Pg. 7 ¶ 0068 and 0073 - 0074, Pg. 7 ¶ 0076 - Pg. 8 ¶ 0080, Pg. 8 ¶ 0084 - Pg. 9 ¶ 0087, Pg. 9 ¶ 0090) and the material density estimate is a foliage density estimate. (Gurzoni, Jr. et al., Figs. 9 & 10, Pg. 1 ¶ 0006, Pg. 5 ¶ 0048 - 0049, Pg. 7 ¶ 0076 - Pg. 8 ¶ 0078, Pg. 8 ¶ 0085, Pg. 9 ¶ 0096, Pg. 10 ¶ 0098 - 0102)
- With regards to claim 11, Gurzoni, Jr. et al. in view of Gowda et al. disclose the computer-implemented method of claim 10, wherein: the foliage components are leaves and petioles but not stems. (Gurzoni, Jr. et al., Pg. 5 ¶ 0048, Pg. 7 ¶ 0068 and 0073 - 0074, Pg. 8 ¶ 0085, Pg. 9 ¶ 0087 and 0090, Pg. 10 ¶ 0100 - 0102 [The Examiner asserts that petioles are components of leaves and thus the leaves class of components of Gurzoni, Jr. et al. includes petioles.])
- With regards to claim 12, Gurzoni, Jr. et al. in view of Gowda et al. disclose the computer-implemented method of claim 1, wherein: capturing the point cloud is based on the two-dimensional image. (Gurzoni et al., Abstract, Figs. 1, 2, 4 & 5, Pg. 1 ¶ 0005 and 0010, Pg. 3 ¶ 0033 - 0034, Pg. 4 ¶ 0036 and 0040 - 0041, Pg. 7 ¶ 0069 - 0070 and 0073 - 0077)
- With regards to claim 13, Gurzoni, Jr. et al. disclose one or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause the one or more processors to conduct a method (Gurzoni, Jr. et al., Pg. 2 ¶ 0027 - Pg. 3 ¶ 0031, Pg. 6 ¶ 0059 - 0060 and 0064 - 0066) comprising: capturing a point cloud using a stereo camera; (Gurzoni et al., Abstract, Figs. 1, 2, 4 & 5, Pg. 1 ¶ 0005 and 0010, Pg. 3 ¶ 0033 - 0034, Pg. 4 ¶ 0036 and 0040 - 0041, Pg. 7 ¶ 0069 - 0070 and 0073 - 0077 [“Alternative range sensors have also been used including ultrasound, structured light, and stereo vision”, “collecting, with the inspection system, area data comprising laser-scan data (e.g., LiDAR scan data, stereoscopic cameras, etc.)”, “a stereoscopic camera” and “laser scan data 120 may comprise range data collected from one or more 2D laser scanners, 3D scanners, other type of LiDAR (“Light Detection and Ranging”) scanners, or other type of range measurement devices. The range data may comprise 3D coordinate data (x, y, z coordinates) relative to the 3D scanner, or some other suitable point of origin”]) determining a region of interest using the point cloud; (Gurzoni, Jr. et al., Abstract, Fig. 11, Pg. 7 ¶ 0076 - Pg. 8 ¶ 0078, Pg. 9 ¶ 0096, Pg. 10 ¶ 0099 - 0102) differentiating, using a semantic detector, material components from a two-dimensional image; (Gurzoni et al., Figs. 5, 7 & 11, Pg. 7 ¶ 0068, Pg. 7 ¶ 0073 - Pg. 8 ¶ 0078, Pg. 9 ¶ 0087, 0090 and 0096) producing a filtered point cloud, after the capture of the point cloud, by (i) filtering the point cloud using a filter; (Gurzoni et al., Fig. 11, Pg. 7 ¶ 0076 - Pg. 8 ¶ 0078, Pg. 9 ¶ 0096, Pg. 10 ¶ 0099 - 0102) and (ii) excluding points from the point cloud using the region of interest; (Gurzoni et al., Fig. 11, Pg. 5 ¶ 0045 - 0047, Pg. 7 ¶ 0068 - 0070, Pg. 7 ¶ 0076 - Pg. 8 ¶ 0078, Pg. 9 ¶ 0094 - 0097, Pg. 
10 ¶ 0099 - 0102) and generating a material density estimate, for the material components, using the filtered point cloud. (Gurzoni et al., Figs. 9 & 10, Pg. 1 ¶ 0006, Pg. 5 ¶ 0047 - 0049, Pg. 7 ¶ 0076 - Pg. 8 ¶ 0078, Pg. 8 ¶ 0085 - 0086, Pg. 10 ¶ 0098 - 0102 [“map 114 may be presented on a graphical user interface including graphical representations (e.g., 2D and/or 3D) graphical representations, indicating within the map 114 and/or modeling the various properties within the surveyed plant area. The various properties may include growth stage, estimated crop yield, estimated plant health, disease estimate, nutrient deficiency estimate, fruit quantity, flower quantity, fruit size, fruit diameter, tree or vine height, trunk or stem diameter, trunk or stem circumference, tree or vine volume, canopy volume, color, leaf color, leaf density… or other suitable properties” and “datacenter 108 uses the segmented image data, 3D point cloud data, and KB data sources 190 to generate analysis results 126, 420. The datacenter 108 combines the processed and segmented image data and the 3D point cloud data to generate a 3D model of each plant 422. The datacenter 108 combines the geo-reference data 124, the 3D model of each plant, and the analysis results 126 to generate a 3D map 114 of the plant area 424. The dashboard 110 displays the 3D map 114 and the analysis results 126 to the user 426.”]) Gurzoni, Jr. et al. fail to disclose expressly using a semantic detector to produce a filter; and filtering the point cloud using the filter [produced using the semantic detector]. Pertaining to analogous art, Gowda et al. disclose capturing a point cloud using a stereo camera; (Gowda et al., Figs. 4 & 6, Col. 1 Lines 51 - 59, Col. 2 Lines 17 - 29 and 58 - 60, Col. 6 Lines 24 - 34, Col. 9 Lines 9 - 19, Col. 10 Lines 4 - 16, Col. 10 Line 54 - Col. 11 Line 12, Col. 13 Line 47 - Col. 
14 Line 6) differentiating, using a semantic detector, material components from a two-dimensional image to produce a filter; (Gowda et al., Abstract, Figs. 4, 5B & 6, Col. 2 Lines 1 - 16 and 30 - 39, Col. 3 Lines 13 - 31, Col. 6 Line 35 - Col. 7 Line 18, Col. 9 Line 20 - Col. 10 Line 16) and producing a filtered point cloud, after the capture of the point cloud, by (i) filtering the point cloud using the filter. (Gowda et al., Figs. 5B & 6, Col. 6 Line 50 - Col. 7 Line 31, Col. 9 Lines 9 - 19, Col. 9 Line 51 - Col. 10 Line 16, Col. 13 Line 47 - Col. 14 Line 6, Col. 15 Lines 15 - 58) Gurzoni, Jr. et al. and Gowda et al. are combinable because they are both directed towards systems and methods that acquire, analyze, filter and segment point clouds. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Gurzoni, Jr. et al. with the teachings of Gowda et al. This modification would have been prompted in order to enhance the base device of Gurzoni, Jr. et al. with the well-known and applicable technique Gowda et al. applied to a similar device. Applying a filter produced using the semantic detector to the point cloud, as taught by Gowda et al., would enhance the base device of Gurzoni, Jr. et al. by improving its ability to accurately, reliably and efficiently identify which points of the point cloud(s) belong to its various regions of interest and are relevant for determining its various vegetation statistics and/or properties since extraneous and/or unimportant points of the point cloud(s) would be removed so as to reduce the number of points that need to be evaluated, analyzed and/or processed to identify the points that belong to its various regions of interest and/or are relevant for determining its various vegetation statistics and/or properties and thereby enhancing its overall operational speed and efficiency. 
Furthermore, this modification would enhance the base device of Gurzoni, Jr. et al. by improving its ability to generate the 3D map of the plant area and enhancing the quality of the 3D map of the plant area since extraneous and/or unimportant points of the point cloud(s) would not be included in the 3D map of the plant area. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a filter produced using the semantic detector would be applied to the point cloud when producing the filtered point cloud so as to improve the ability of the base device of Gurzoni, Jr. et al. to accurately, reliably and efficiently identify which points of the point cloud(s) belong to its various regions of interest and are relevant for determining its various vegetation statistics and/or properties by removing extraneous and/or unimportant points of the point cloud(s) and preventing them from undergoing any further processing. Therefore, it would have been obvious to combine Gurzoni, Jr. et al. with Gowda et al. to obtain the invention as specified in claim 13.
- With regards to claim 17, Gurzoni, Jr. et al. in view of Gowda et al. disclose the one or more non-transitory computer-readable media of claim 13, the method further comprising: combining the material density estimate with a localization to create a density map. (Gurzoni, Jr. et al., Abstract, Figs. 1, 2, 4, 5 & 9 - 11, Pg. 1 ¶ 0006 and 0008 - 0010, Pg. 3 ¶ 0033 - 0034, Pg. 5 ¶ 0047 - 0049, Pg. 7 ¶ 0068 - 0070, Pg. 8 ¶ 0085 - 0086, Pg. 9 ¶ 0096 - Pg. 10 ¶ 0102)
- With regards to claim 20, Gurzoni, Jr. et al. disclose a stereo imaging system for detecting physical objects (Gurzoni, Jr. et al., Abstract, Pg. 1 ¶ 0005 - 0008 and 0010, Pg. 2 ¶ 0025 - 0028, Pg. 3 ¶ 0033 - Pg. 4 ¶ 0036, Pg. 4 ¶ 0040 - 0041, Pg. 6 ¶ 0056, Pg. 7 ¶ 0073 - 0074, Pg. 9 ¶ 0087 - 0092) comprising: a pair of imagers; (Gurzoni, Jr. et al., Abstract, Pg. 1 ¶ 0005 and 0010, Pg. 4 ¶ 0036 and 0040 - 0041, Pg. 6 ¶ 0056 [“collecting, with the inspection system, area data comprising laser-scan data (e.g., LiDAR scan data, stereoscopic cameras, etc.)”]) one or more processors; (Gurzoni, Jr. et al., Figs. 3 & 4, Pg. 2 ¶ 0025 - 0028, Pg. 6 ¶ 0058 - 0060 and 0064 - 0066) and one or more non-transitory computer-readable media storing instructions that, when executed by the one or more processors, cause the stereo imaging system to conduct a method (Gurzoni, Jr. et al., Abstract, Pg. 1 ¶ 0005 - 0008 and 0010, Pg. 2 ¶ 0025 - Pg. 3 ¶ 0031, Pg. 3 ¶ 0033 - Pg. 4 ¶ 0036, Pg. 4 ¶ 0040 - 0041, Pg. 6 ¶ 0056, 0059 - 0060 and 0064 - 0066, Pg. 7 ¶ 0073 - 0074, Pg. 9 ¶ 0087 - 0092) comprising: capturing a point cloud using a stereo camera; (Gurzoni et al., Abstract, Figs. 1, 2, 4 & 5, Pg. 1 ¶ 0005 and 0010, Pg. 3 ¶ 0033 - 0034, Pg. 4 ¶ 0036 and 0040 - 0041, Pg. 7 ¶ 0069 - 0070 and 0073 - 0077 [“Alternative range sensors have also been used including ultrasound, structured light, and stereo vision”, “collecting, with the inspection system, area data comprising laser-scan data (e.g., LiDAR scan data, stereoscopic cameras, etc.)”, “a stereoscopic camera” and “laser scan data 120 may comprise range data collected from one or more 2D laser scanners, 3D scanners, other type of LiDAR (“Light Detection and Ranging”) scanners, or other type of range measurement devices. The range data may comprise 3D coordinate data (x, y, z coordinates) relative to the 3D scanner, or some other suitable point of origin”]) determining a region of interest using the point cloud; (Gurzoni, Jr. 
et al., Abstract, Fig. 11, Pg. 7 ¶ 0076 - Pg. 8 ¶ 0078, Pg. 9 ¶ 0096, Pg. 10 ¶ 0099 - 0102) differentiating, using a semantic detector, material components from a two-dimensional image; (Gurzoni et al., Figs. 5, 7 & 11, Pg. 7 ¶ 0068, Pg. 7 ¶ 0073 - Pg. 8 ¶ 0078, Pg. 9 ¶ 0087, 0090 and 0096) producing a filtered point cloud, after the capture of the point cloud, by (i) applying a filter to the point cloud; (Gurzoni et al., Fig. 11, Pg. 7 ¶ 0076 - Pg. 8 ¶ 0078, Pg. 9 ¶ 0096, Pg. 10 ¶ 0099 - 0102) and (ii) excluding points from the point cloud using the region of interest; (Gurzoni et al., Fig. 11, Pg. 5 ¶ 0045 - 0047, Pg. 7 ¶ 0068 - 0070, Pg. 7 ¶ 0076 - Pg. 8 ¶ 0078, Pg. 9 ¶ 0094 - 0097, Pg. 10 ¶ 0099 - 0102) and generating a material density estimate, for the material components, using the filtered point cloud. (Gurzoni et al., Figs. 9 & 10, Pg. 1 ¶ 0006, Pg. 5 ¶ 0047 - 0049, Pg. 7 ¶ 0076 - Pg. 8 ¶ 0078, Pg. 8 ¶ 0085 - 0086, Pg. 10 ¶ 0098 - 0102 [“map 114 may be presented on a graphical user interface including graphical representations (e.g., 2D and/or 3D) graphical representations, indicating within the map 114 and/or modeling the various properties within the surveyed plant area. The various properties may include growth stage, estimated crop yield, estimated plant health, disease estimate, nutrient deficiency estimate, fruit quantity, flower quantity, fruit size, fruit diameter, tree or vine height, trunk or stem diameter, trunk or stem circumference, tree or vine volume, canopy volume, color, leaf color, leaf density… or other suitable properties” and “datacenter 108 uses the segmented image data, 3D point cloud data, and KB data sources 190 to generate analysis results 126, 420. The datacenter 108 combines the processed and segmented image data and the 3D point cloud data to generate a 3D model of each plant 422. 
The datacenter 108 combines the geo-reference data 124, the 3D model of each plant, and the analysis results 126 to generate a 3D map 114 of the plant area 424. The dashboard 110 displays the 3D map 114 and the analysis results 126 to the user 426.”]) Gurzoni, Jr. et al. fail to disclose expressly using a semantic detector to produce a filter; and applying the filter [produced using the semantic detector] to the point cloud. Pertaining to analogous art, Gowda et al. disclose capturing a point cloud using a stereo camera; (Gowda et al., Figs. 4 & 6, Col. 1 Lines 51 - 59, Col. 2 Lines 17 - 29 and 58 - 60, Col. 6 Lines 24 - 34, Col. 9 Lines 9 - 19, Col. 10 Lines 4 - 16, Col. 10 Line 54 - Col. 11 Line 12, Col. 13 Line 47 - Col. 14 Line 6) differentiating, using a semantic detector, material components from a two-dimensional image to produce a filter; (Gowda et al., Abstract, Figs. 4, 5B & 6, Col. 2 Lines 1 - 16 and 30 - 39, Col. 3 Lines 13 - 31, Col. 6 Line 35 - Col. 7 Line 18, Col. 9 Line 20 - Col. 10 Line 16) and producing a filtered point cloud, after the capture of the point cloud, by (i) applying the filter to the point cloud. (Gowda et al., Figs. 5B & 6, Col. 6 Line 50 - Col. 7 Line 31, Col. 9 Lines 9 - 19, Col. 9 Line 51 - Col. 10 Line 16, Col. 13 Line 47 - Col. 14 Line 6, Col. 15 Lines 15 - 58) Gurzoni, Jr. et al. and Gowda et al. are combinable because they are both directed towards systems and methods that acquire, analyze, filter and segment point clouds. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Gurzoni, Jr. et al. with the teachings of Gowda et al. This modification would have been prompted in order to enhance the base device of Gurzoni, Jr. et al. with the well-known and applicable technique Gowda et al. applied to a similar device. 
Applying a filter produced using the semantic detector to the point cloud, as taught by Gowda et al., would enhance the base device of Gurzoni, Jr. et al. by improving its ability to accurately, reliably and efficiently identify which points of the point cloud(s) belong to its various regions of interest and are relevant for determining its various vegetation statistics and/or properties since extraneous and/or unimportant points of the point cloud(s) would be removed so as to reduce the number of points that need to be evaluated, analyzed and/or processed to identify the points that belong to its various regions of interest and/or are relevant for determining its various vegetation statistics and/or properties and thereby enhancing its overall operational speed and efficiency. Furthermore, this modification would enhance the base device of Gurzoni, Jr. et al. by improving its ability to generate the 3D map of the plant area and enhancing the quality of the 3D map of the plant area since extraneous and/or unimportant points of the point cloud(s) would not be included in the 3D map of the plant area. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a filter produced using the semantic detector would be applied to the point cloud when producing the filtered point cloud so as to improve the ability of the base device of Gurzoni, Jr. et al. to accurately, reliably and efficiently identify which points of the point cloud(s) belong to its various regions of interest and are relevant for determining its various vegetation statistics and/or properties by removing extraneous and/or unimportant points of the point cloud(s) and preventing them from undergoing any further processing. Therefore, it would have been obvious to combine Gurzoni, Jr. et al. with Gowda et al. to obtain the invention as specified in claim 20.
Claims 2 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Gurzoni, Jr. et al. U.S. Publication No. 2021/0133443 A1 in view of Gowda et al. U.S. Patent No. 11,640,692 as applied to claims 1 and 13 above, and further in view of Rentz et al. U.S. Publication No. 2024/0112462 A1.
- With regards to claims 2 and 14, Gurzoni, Jr. et al. in view of Gowda et al. disclose the computer-implemented method and one or more non-transitory computer-readable media of claims 1 and 13, respectively, wherein: generating the material density estimate uses the filtered point cloud of the region of interest. (Gurzoni, Jr. et al., Figs. 9 & 10, Pg. 1 ¶ 0006, Pg. 5 ¶ 0047 - 0049, Pg. 7 ¶ 0076 - Pg. 8 ¶ 0078, Pg. 8 ¶ 0085 - 0086, Pg. 10 ¶ 0098 - 0102) Gurzoni, Jr. et al. fail to disclose explicitly using a voxel map of the region of interest. Pertaining to analogous art, Rentz et al. disclose wherein: generating the material density estimate uses a voxel map of the region of interest. (Rentz et al., Pg. 1 ¶ 0013, Pg. 2 ¶ 0018 - 0019 and 0021 - 0022, Pg. 3 ¶ 0024, Pg. 4 ¶ 0040 [“density of the LiDAR point clouds (e.g., quantity of points per volume) can be employed to determine a particular species of vegetation. That is, volume of a vegetation can be used to determine that a particular instance of vegetation is a tree, whereas density can be used to determine the species of the tree (e.g., conifer, palm, pine).”]) Gurzoni, Jr. et al. in view of Gowda et al. and Rentz et al. are combinable because they are all directed towards systems and methods that acquire and analyze point clouds and, similar to Gurzoni, Jr. et al., Rentz et al. is also directed towards analyzing point clouds of vegetation to determine various statistics, properties, characteristics and/or attributes of the vegetation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Gurzoni, Jr. et al. in view of Gowda et al. with the teachings of Rentz et al. This modification would have been prompted in order to substitute the volume, i.e., voxel map, based density estimation technique of Rentz et al. for the density estimation process of Gurzoni, Jr. et al. 
The volume, i.e., voxel map, based density estimation technique of Rentz et al. could be substituted in place of the density estimation process of Gurzoni, Jr. et al. using well-known techniques in the art and would likely yield predictable results, in that, in the combination, the volume, i.e., voxel map, based density estimation technique of Rentz et al. would be utilized to generate the material density estimate of the combined base device. Therefore, it would have been obvious to combine Gurzoni, Jr. et al. in view of Gowda et al. with Rentz et al. to obtain the invention as specified in claims 2 and 14.
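For illustration only, a voxel-map density computation of the kind suggested by the quoted passage of Rentz et al. ("quantity of points per volume") might be sketched as follows. The voxel keying and all names are assumptions for the sketch, not drawn from the reference:

```python
from collections import Counter

def voxel_density(points, voxel_size):
    """Count points per cubic voxel of a region of interest; each
    voxel's density estimate is its point count divided by the voxel
    volume, i.e., quantity of points per volume.

    points     : iterable of (x, y, z) coordinates inside the region
    voxel_size : edge length of each cubic voxel
    """
    counts = Counter(
        (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        for x, y, z in points
    )
    volume = voxel_size ** 3
    return {voxel: n / volume for voxel, n in counts.items()}

# Four points: three in one voxel, one in a neighbor (voxel_size = 1.0).
pts = [(0.1, 0.1, 0.1), (0.2, 0.3, 0.4), (0.9, 0.9, 0.9), (1.5, 0.0, 0.0)]
densities = voxel_density(pts, 1.0)
```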
Claims 4, 5, 9, 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Gurzoni, Jr. et al. U.S. Publication No. 2021/0133443 A1 in view of Gowda et al. U.S. Patent No. 11,640,692 as applied to claims 1 and 13 above, and further in view of Tham et al. U.S. Publication No. 2020/0066034 A1.
- With regards to claims 4 and 15, Gurzoni, Jr. et al. in view of Gowda et al. disclose the computer-implemented method and one or more non-transitory computer-readable media of claims 1 and 13, respectively. Gurzoni, Jr. et al. fail to disclose explicitly converting the point cloud to a height map; wherein the determining of the region of interest uses the height map. Pertaining to analogous art, Tham et al. disclose converting the point cloud to a height map; (Tham et al., Figs. 5 - 7, Pg. 5 ¶ 0082 - 0083 and 0086 - 0088, Pg. 7 ¶ 0108 [“A height map is generated 550, in one embodiment by dividing the point cloud into 2D cells along a plane, and finding a median distance from each point cloud point in the corresponding cell to the plane”]) wherein the determining of the region of interest uses the height map. (Tham et al., Figs. 3, 5 - 7 & 11, Pg. 5 ¶ 0082 - 0083, Pg. 5 ¶ 0086 - Pg. 6 ¶ 0090, Pg. 7 ¶ 0106 - 0111, Pg. 9 ¶ 0148 - 0152, Pg. 12 ¶ 0197 - 0198 and 0202) Gurzoni, Jr. et al. in view of Gowda et al. and Tham et al. are combinable because they are all directed towards systems and methods that acquire and analyze point clouds and, similar to Gurzoni, Jr. et al., Tham et al. is also directed towards analyzing point clouds of vegetation to determine various statistics, properties, characteristics and/or attributes of the vegetation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Gurzoni, Jr. et al. in view of Gowda et al. with the teachings of Tham et al. This modification would have been prompted in order to enhance the combined base device of Gurzoni, Jr. et al. in view of Gowda et al. with the well-known and applicable technique Tham et al. applied to a comparable device. 
Converting the point cloud to a height map and using the height map when determining the region of interest, as taught by Tham et al., would enhance the combined base device by improving its ability to accurately and reliably identify which points of the point cloud(s) belong to its various regions of interest, such as plant, flower, tree, leaf, fruit and canopy regions, and are relevant for determining its various vegetation statistics and/or properties, such as fruit size, tree or vine height and/or volume, and/or canopy volume, since it will be able to utilize the height map to easily and efficiently determine the heights of various vegetation features of interest. Furthermore, this modification would have been prompted by the teachings and suggestions of Gurzoni, Jr. et al. that the various vegetation properties that may be determined include tree or vine height and tree or vine volume, see at least page 8 paragraph 0085 of Gurzoni, Jr. et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the point cloud would be converted to a height map and the height map would be used when determining the region of interest so as to improve the ability of the combined base device to accurately and reliably identify which points of the point cloud(s) belong to its various regions of interest and are relevant for determining its various vegetation statistics and/or properties since it would be able to utilize the height map to easily and efficiently determine heights of various vegetation features of interest. Therefore, it would have been obvious to combine Gurzoni, Jr. et al. in view of Gowda et al. with Tham et al. to obtain the invention as specified in claims 4 and 15.
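For illustration only, the height-map generation step quoted from Tham et al. — dividing the point cloud into 2D cells along a plane and taking a median distance per cell — might be sketched as follows, assuming the x-y ground plane so that the distance to the plane is simply z; the names and toy data are assumptions for the sketch:

```python
from statistics import median

def height_map(points, cell_size):
    """Convert a point cloud to a height map by dividing it into 2D
    cells along the x-y plane and taking the median height (z) of the
    points falling in each cell.
    """
    cells = {}
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        cells.setdefault(key, []).append(z)
    return {cell: median(zs) for cell, zs in cells.items()}

pts = [(0.2, 0.1, 1.0), (0.4, 0.3, 3.0), (0.6, 0.5, 2.0),  # one cell
       (1.5, 0.0, 0.5)]                                    # a second cell
hmap = height_map(pts, 1.0)
```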
- With regards to claims 5 and 16, Gurzoni, Jr. et al. in view of Gowda et al. disclose the computer-implemented method and one or more non-transitory computer-readable media of claims 1 and 13, respectively, further comprising: differentiating, using a second semantic detector, region of interest components from the two-dimensional image to identify region of interest points in the two-dimensional image; (Gurzoni, Jr. et al., Figs. 5, 7 & 11, Pg. 7 ¶ 0068 and 0073 - 0076, Pg. 9 ¶ 0087, 0090 - 0092 and 0096) and projecting the region of interest points into the point cloud to produce a set of projected region of interest points; (Gurzoni, Jr. et al., Pg. 7 ¶ 0076 - Pg. 8 ¶ 0078, Pg. 8 ¶ 0084 - 0085, Pg. 9 ¶ 0087, 0090 - 0092 and 0096, Pg. 10 ¶ 0099 - 0102) wherein the determining of the region of interest uses the set of projected region of interest points. (Gurzoni, Jr. et al., Pg. 7 ¶ 0076 - Pg. 8 ¶ 0078, Pg. 8 ¶ 0084 - 0085, Pg. 9 ¶ 0087, 0090 - 0092 and 0096, Pg. 10 ¶ 0099 - 0102) Gurzoni, Jr. et al. fail to disclose explicitly extending the region of interest from the ground up towards the set of projected region of interest points. Pertaining to analogous art, Tham et al. disclose differentiating, using a second semantic detector, region of interest components from the two-dimensional image to identify region of interest points in the two-dimensional image; (Tham et al., Pg. 4 ¶ 0069 - 0070, Pg. 6 ¶ 0090 - 0091, Pg. 7 ¶ 0108 and 0116, Pg. 12 ¶ 0197 - 0202) and projecting the region of interest points into the point cloud to produce a set of projected region of interest points; (Tham et al., Pg. 5 ¶ 0088 - Pg. 6 ¶ 0091, Pg. 7 ¶ 0108, Pg. 12 ¶ 0197 - 0202) wherein the determining of the region of interest uses the set of projected region of interest points (Tham et al., Pg. 5 ¶ 0088 - Pg. 6 ¶ 0091, Pg. 7 ¶ 0108, Pg. 12 ¶ 0197 - 0202) and involves extending the region of interest from the ground up towards the set of projected region of interest points. (Tham et al., Pg.
5 ¶ 0086 - Pg. 6 ¶ 0091, Pg. 7 ¶ 0108, Pg. 9 ¶ 0148 - 0151, Pg. 10 ¶ 0155, Pg. 12 ¶ 0197 - 0202 [“The height of a tree can then be determined as the point on an infinite cylinder where it touches the height map up until the point in any image with a known camera position where a slice in the cylinder as projected in said image only contains sky pixels.”]) Gurzoni, Jr. et al. in view of Gowda et al. and Tham et al. are combinable because they are all directed towards systems and methods that acquire and analyze point clouds and, similar to Gurzoni, Jr. et al., Tham et al. is also directed towards analyzing point clouds of vegetation to determine various statistics, properties, characteristics and/or attributes of the vegetation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Gurzoni, Jr. et al. in view of Gowda et al. with the teachings of Tham et al. This modification would have been prompted in order to enhance the combined base device of Gurzoni, Jr. et al. in view of Gowda et al. with the well-known and applicable technique Tham et al. applied to a comparable device. Extending the region of interest from the ground up towards the set of projected region of interest points, as taught by Tham et al., would enhance the combined base device by improving its ability to accurately and reliably identify which points of the point cloud(s) belong to its various regions of interest and are relevant for determining its various vegetation statistics and/or properties, such as fruit size, tree or vine height and/or volume, and/or canopy volume, see at least page 8 paragraph 0085 of Gurzoni, Jr. et al., since regions of interest that completely encompass its various vegetation features of interest from the ground to their respective tops would be determined, thereby allowing for height and/or volume statistics and/or properties of its various vegetation features of interest to be easily, efficiently and accurately determined. Furthermore, this modification would have been prompted by the teachings and suggestions of Gurzoni, Jr. et al. that 3D image patches of the various relevant objects of interest may be extracted, that the various vegetation properties that may be determined include canopy volume, tree or vine height and tree or vine volume and that 3D tree images, 3D vine images and 3D fruit images may be output, see at least page 7 paragraph 0074 - page 8 paragraph 0078, page 8 paragraph 0085 and page 10 paragraphs 0099 - 0101 of Gurzoni, Jr. et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the region of interest would be extended from the ground up towards the set of projected region of interest points so as to improve the ability of the combined base device to accurately and reliably identify which points of the point cloud(s) belong to its various regions of interest and are relevant for determining its various vegetation statistics and/or properties since regions of interest that completely encompass its various vegetation features of interest from the ground to their respective tops would be determined, thereby allowing for height and/or volume statistics and/or properties of its various vegetation features of interest to be easily, efficiently and accurately determined. Therefore, it would have been obvious to combine Gurzoni, Jr. et al. in view of Gowda et al. with Tham et al. to obtain the invention as specified in claims 5 and 16.
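For illustration only, the claimed sequence of projecting region of interest points into the point cloud and extending the region of interest from the ground up may be sketched as follows; the `to_pixel` callable stands in for camera calibration, and all names and the Python rendering are illustrative assumptions rather than any reference's actual implementation:

```python
def determine_roi(points, roi_pixels, to_pixel):
    # Keep 3D points whose image projection (via the supplied
    # to_pixel mapping) lands on a 2D region-of-interest pixel,
    # then extend the region of interest from the ground (height 0)
    # up towards the tallest kept point.
    kept = [p for p in points if to_pixel(p) in roi_pixels]
    top = max(z for _, _, z in kept) if kept else 0.0
    return kept, (0.0, top)  # vertical extent: ground up to top
```

For example, with a trivial mapping that drops each point onto integer pixel coordinates, only the points whose projections fall on region-of-interest pixels are kept, and the vertical extent spans from the ground to the tallest of them.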
- With regards to claim 9, Gurzoni, Jr. et al. in view of Gowda et al. disclose the computer-implemented method of claim 1. Gurzoni, Jr. et al. fail to disclose explicitly converting the point cloud to a height map; wherein determining the region of interest comprises determining where entries in the height map exceed a threshold. Pertaining to analogous art, Tham et al. disclose converting the point cloud to a height map; (Tham et al., Figs. 5 - 7, Pg. 5 ¶ 0082 - 0083 and 0086 - 0088, Pg. 7 ¶ 0108 [“A height map is generated 550, in one embodiment by dividing the point cloud into 2D cells along a plane, and finding a median distance from each point cloud point in the corresponding cell to the plane”]) wherein determining the region of interest comprises determining where entries in the height map exceed a threshold. (Tham et al., Figs. 5 - 7 & 11, Pg. 5 ¶ 0086 - Pg. 6 ¶ 0090, Pg. 7 ¶ 0106 - 0111, Pg. 9 ¶ 0150 - 0152, Pg. 12 ¶ 0202) Gurzoni, Jr. et al. in view of Gowda et al. and Tham et al. are combinable because they are all directed towards systems and methods that acquire and analyze point clouds and, similar to Gurzoni, Jr. et al., Tham et al. is also directed towards analyzing point clouds of vegetation to determine various statistics, properties, characteristics and/or attributes of the vegetation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Gurzoni, Jr. et al. in view of Gowda et al. with the teachings of Tham et al. This modification would have been prompted in order to enhance the combined base device of Gurzoni, Jr. et al. in view of Gowda et al. with the well-known and applicable technique Tham et al. applied to a comparable device.
Converting the point cloud to a height map and determining where entries in the height map exceed a threshold when determining the region of interest, as taught by Tham et al., would enhance the combined base device by improving its ability to accurately and reliably identify which points of the point cloud(s) belong to its various regions of interest, such as plant, flower, tree, leaf, fruit and canopy regions, and are relevant for determining its various vegetation statistics and/or properties, such as fruit size, tree or vine height and/or volume, and/or canopy volume, since it will be able to utilize the height map and threshold to easily and efficiently identify canopy regions. Furthermore, this modification would have been prompted by the teachings and suggestions of Gurzoni, Jr. et al. that the various vegetation properties that may be determined include tree or vine height and tree or vine volume, see at least page 8 paragraph 0085 of Gurzoni, Jr. et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the point cloud would be converted to a height map and entries in the height map that exceed a threshold would be utilized when determining the region of interest so as to improve the ability of the combined base device to accurately and reliably identify which points of the point cloud(s) belong to its various regions of interest and are relevant for determining its various vegetation statistics and/or properties since it would be able to utilize the height map and threshold to easily and efficiently identify canopy regions. Therefore, it would have been obvious to combine Gurzoni, Jr. et al. in view of Gowda et al. with Tham et al. to obtain the invention as specified in claim 9.
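For illustration only, the thresholding of height-map entries recited in claim 9 may be sketched as follows; the threshold value and the dictionary representation of the height map are illustrative assumptions:

```python
def roi_from_height_map(height_map, threshold):
    # Select the cells whose height entries exceed the threshold,
    # e.g. to separate canopy cells from near-ground cells when
    # determining the region of interest.
    return {cell for cell, h in height_map.items() if h > threshold}
```

Cells whose median heights fall at or below the threshold (e.g. bare ground) are excluded from the region of interest.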
Claims 7, 8, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Gurzoni, Jr. et al. U.S. Publication No. 2021/0133443 A1 in view of Gowda et al. U.S. Patent No. 11,640,692 as applied to claims 1 and 13 above, and further in view of Ampatzidis et al. U.S. Publication No. 2022/0250108 A1.
- With regards to claims 7 and 18, Gurzoni, Jr. et al. in view of Gowda et al. disclose the computer-implemented method and one or more non-transitory computer-readable media of claims 1 and 13, respectively. Gurzoni, Jr. et al. fail to disclose explicitly sending a control signal to an actuator based on the material density estimate. Pertaining to analogous art, Ampatzidis et al. disclose sending a control signal to an actuator based on the material density estimate. (Ampatzidis et al., Abstract, Figs. 2, 5A, 13, 16, 17, 22 & 24A - 25B, Pg. 1 ¶ 0005, Pg. 11 ¶ 0099 and 0103, Pg. 12 ¶ 0106, 0108 and 0111 - 0112, Pg. 13 ¶ 0118 - 0119, Pg. 14 ¶ 0128 - Pg. 15 ¶ 0130, Pg. 16 ¶ 0135 - 0138 [“in particular embodiments, the process flow 2400 involves the tree health status module grading the health of the tree based at least in part on height, canopy size, canopy (leaf) density, canopy color, and/or the like. This grade is then used in some embodiments to control the spraying flow” and “the tree health classification AI may be configured to process a color analysis of the canopy and/or a height for the tree, along with the tree leaf density and/or size and classification of leaves, to generate the tree health status 255 (may also be referred to as tree health grade). As previously noted, the tree health status 255 may be used in particular embodiments to generate a tree health classification for the tree, as well as control the spraying flow for spraying the tree”]) Gurzoni, Jr. et al. in view of Gowda et al. and Ampatzidis et al. are combinable because they are all directed towards systems and methods that acquire and analyze point clouds and, similar to Gurzoni, Jr. et al., Ampatzidis et al. is also directed towards analyzing point clouds of vegetation to determine various statistics, properties, characteristics and/or attributes of the vegetation.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Gurzoni, Jr. et al. in view of Gowda et al. with the teachings of Ampatzidis et al. This modification would have been prompted in order to enhance the combined base device of Gurzoni, Jr. et al. in view of Gowda et al. with the well-known and applicable technique Ampatzidis et al. applied to a comparable device. Sending a control signal to an actuator to perform an agricultural action based on the material density estimate, as taught by Ampatzidis et al., would enhance the combined base device by allowing for it to utilize the vegetation statistics and/or properties it determined to perform further agricultural related functions and/or operations in order to expand the number and variety of applications in which it may be utilized and increase its overall appeal and usefulness to potential end-users. Furthermore, this modification would have been prompted by the teachings and suggestions of Gurzoni, Jr. et al. that various recommendations, such as adjustments to amounts of pesticides, fertilizers, water, and/or other suitable adjustments, may be generated and displayed for a user based on analyzing the determined properties of the vegetation, see at least page 3 paragraph 0034, page 8 paragraphs 0084 - 0085, page 9 paragraph 0096 and page 10 paragraph 0103 of Gurzoni, Jr. et al. 
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a control signal to perform an agricultural action would be sent to an actuator based on the material density estimate so as to enable the combined base device to utilize the vegetation statistics and/or properties, such as the material density estimate, it determined to perform additional agricultural related functions and/or operations in order to expand the number and variety of applications in which it may be utilized and increase its overall appeal and usefulness to potential end-users. Therefore, it would have been obvious to combine Gurzoni, Jr. et al. in view of Gowda et al. with Ampatzidis et al. to obtain the invention as specified in claims 7 and 18.
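For illustration only, the density-based spray-flow control quoted above from Ampatzidis et al. may be sketched as a simple mapping from a density estimate to an actuator command; the clamped linear mapping, the [0, 1] density scale and the flow units are illustrative assumptions, not the reference's actual control scheme:

```python
def spray_flow_signal(density_estimate, max_flow=1.0):
    # Clamp the material (e.g. canopy/leaf) density estimate to
    # [0, 1] and scale it linearly to a spray-flow command:
    # denser canopy yields a higher commanded flow.
    density = min(max(density_estimate, 0.0), 1.0)
    return density * max_flow
```

The returned value would then be sent to the spraying actuator as the control signal.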
- With regards to claims 8 and 19, Gurzoni, Jr. et al. in view of Gowda et al. and further in view of Ampatzidis et al. disclose the computer-implemented method and one or more non-transitory computer-readable media of claims 7 and 18, respectively. Gurzoni, Jr. et al. fail to disclose explicitly wherein: the actuator performs an agricultural action in response to the control signal. Pertaining to analogous art, Ampatzidis et al. disclose wherein: the actuator performs an agricultural action in response to the control signal. (Ampatzidis et al., Figs. 2, 5A, 13, 16, 17, 24A - 24C, 25A & 25B, Pg. 1 ¶ 0002 - 0005, Pg. 11 ¶ 0099 and 0103, Pg. 12 ¶ 0106, 0108 - 0109 and 0111 - 0112, Pg. 13 ¶ 0118 - 0121, Pg. 14 ¶ 0128, Pg. 15 ¶ 0130, Pg. 16 ¶ 0136 - 0139)
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Lakshmi Narayanan et al. U.S. Publication No. 2019/0287254 A1, which is directed towards a system and method for filtering out noise from a point cloud, wherein a two-dimensional image and a point cloud are obtained, semantic segmentation is performed on the two-dimensional image and the point cloud is projected onto the semantically segmented image to determine points of the point cloud that correspond to noise and are to be removed from the point cloud.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC RUSH whose telephone number is (571) 270-3017. The examiner can normally be reached 9am - 5pm Monday - Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ERIC RUSH/Primary Examiner, Art Unit 2677