Prosecution Insights
Last updated: April 19, 2026
Application No. 18/232,965

AUTONOMOUS SOLAR INSTALLATION USING ARTIFICIAL INTELLIGENCE

Status: Final Rejection (§103)
Filed: Aug 11, 2023
Examiner: ABUELHAWA, MOHAMMED YOUSEF
Art Unit: 3656
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: The AES Corporation
OA Round: 2 (Final)

Grant Probability: 81% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (above average; 54 granted / 67 resolved; +28.6% vs TC avg)
Interview Lift: strong, +20.1% among resolved cases with an interview vs. without
Typical Timeline: 2y 10m average prosecution; 37 applications currently pending
Career History: 104 total applications across all art units
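
As a quick sanity check, the headline figures above follow from simple arithmetic on the displayed counts. The sketch below is illustrative only: the function names and the 99% cap on the with-interview figure are assumptions, not the tool's documented methodology.

```python
# Hypothetical sketch of how the headline examiner metrics above could be
# reproduced from the underlying counts; names and the cap are illustrative
# assumptions, not the analytics vendor's actual API or model.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_rate: float, interview_lift: float, cap: float = 99.0) -> float:
    """Apply the observed interview lift to the base rate, capped (assumption)."""
    return min(base_rate + interview_lift, cap)

base = allow_rate(granted=54, resolved=67)            # ~80.6%, displayed as 81%
boosted = with_interview(base, interview_lift=20.1)   # ~99% once capped
implied_tc_avg = base - 28.6                          # ~52% TC average implied by the +28.6% delta

print(f"career allow rate: {base:.1f}%")
print(f"with interview:    {boosted:.1f}%")
print(f"implied TC avg:    {implied_tc_avg:.1f}%")
```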

Statute-Specific Performance

§101: 6.4% (-33.6% vs TC avg)
§103: 49.6% (+9.6% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)
TC averages are estimates; based on career data from 67 resolved cases.
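
If useful, the implied TC baseline can be recovered from each pair of numbers by subtraction. A small illustrative Python snippet follows; the dictionary simply restates the figures shown above and nothing in it comes from the analytics tool itself.

```python
# Illustrative only: recover the implied Tech Center average from each
# per-statute rate and its "vs TC avg" delta (percentage points).
rates = {
    "§101": (6.4, -33.6),
    "§103": (49.6, +9.6),
    "§102": (22.8, -17.2),
    "§112": (16.6, -23.4),
}

for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta  # e.g. §103: 49.6 - 9.6 = 40.0
    print(f"{statute}: examiner {rate:.1f}% vs TC avg ~{tc_avg:.1f}% ({delta:+.1f} pts)")
```

On these figures, all four deltas point to the same ~40% baseline, which suggests a single Tech Center average estimate rather than per-statute baselines.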

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) was filed on 08/04/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

The amendment filed on 11/03/2025, in response to the Non-Final Office Action dated 07/01/2025, has been received and made of record. Claims 1, 6, 8-11 and 19-32 are pending in the current application. Claims 2-5, 7, and 12-18 have been cancelled. Claims 21-32 have been newly added.

Response to Arguments

Applicant’s arguments filed on 11/03/2025 have been fully considered. In the Arguments/Remarks:

Re: Rejection of the Claims Under 35 U.S.C. 101. The rejection of the claims under 35 U.S.C. 101 has been withdrawn in view of applicant’s amendments.

Re: Rejection of the Claims Under 35 U.S.C. 103. Applicant’s arguments are directed to the newly amended limitations and features within the claims. Examiner has augmented the most current rejection in light of the applicant’s amendments (see rationale below).

Claim Objections

Claims 1 and 19 are objected to because of the following informalities: Claim 1 recites “wherein the the first set of sensor data” in lines 7-8. Claim 19 recites “wherein the the first set of sensor data” in line 9. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 6, 8-11, 19, 21, 24-29, and 31-32 are rejected under 35 U.S.C. 103 as being unpatentable over Schneider (US 2021/0379757 A1) in view of Tighe (US 10,157,452 B2).
Regarding claim 1, Schneider teaches a method for autonomous solar panel installation of a respective solar panel by iteratively generating a plurality of panel poses of an installation structure and one or more solar panels already installed on the installation structure based on a plurality of different sets of sensor data obtained as approaching the installation structure, the method comprising: obtaining a first set of sensor data, of the plurality of different sets of sensor data, that is obtained at a first position during installation, wherein the the first set of sensor data comprises at least one image of the one or more solar panels already installed on the installation structure and the installation structure [(see at least paragraphs 44-50) As in 47 “Within system 200, the sensors 260 operate as perception sensors to assist one of the various vehicles and the robotic manipulator in delivering and installing the PV modules on racking systems. The system 200, for example one of the control systems 216 and 232, will receive and process a combination of 2D and 31) sensor data to map the environment. For 2D sensing high resolution color imagery can be captured with an optical camera, such as optical camera 262. The 2D data can then be correlated with concurrently captured 3D depth and point cloud data, captured from sensors such as multiple IR cameras 263, a lidar sensor 266 and/or rangefinder sensors 267 (such as time-of-flight sensors). The 3D data can also include optical camera data captured by a pair of stereoscopic optical cameras, which allows for a processors to triangulate objects within the scene. As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed). This allows determination of the 3D position and orientation of the structure with respect to the sensor, the robotic system, and also the grasped PV modules. 
This information informs the manipulator, such as robotic manipulator 230, how to pose in order to place the panel correctly with respect to the racking system orientation.”]; Schneider teaches detecting the one or more solar panels by inputting the at least one image into one or more networks that are configured to detect solar panels [(see at least paragraph 25) “O-AMPP—The inventors' solution to the problems discussed herein, titled “Outdoor Autonomous Manipulation of Photovoltaic Panels (O-AMPP)”, fuses computer vision and machine learning algorithms, a customized sensor solution, customized End-Of-Arm Tooling (EOAT), industry-leading power-dense, outdoor-rated robotic arm technology, and best-in-class outdoor unmanned ground vehicles (UGVs), to produce an innovative solution that will reduce the cost and time for producing a new solar plant.”]; a first post-processing to compute a first iteration of a panel pose of the one or more solar panels already installed based on an output of the one or more networks [(see at least Fig.3B, paragraphs 25,47) As in 47 “The 3D data can also include optical camera data captured by a pair of stereoscopic optical cameras, which allows for a processors to triangulate objects within the scene. As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed).”]; obtaining a second set of sensor data, of the plurality of different sets of sensor data, that is obtained during installation at a second position different from the first position [(see at least paragraph 44) “In this example, the robotic manipulator 230 can include a control system 232 that can also be in communication with the linear slide 220 and the EOAT 240 to coordinate and control movement of these devices as a unit. The control system 232 can also access or include at least some of sensors 260 to evaluate the environment and manipulate the PV modules. In an example, the control system 232 accesses sensors 260, such as optical cameras 262 and IR cameras 263 to locate PV modules delivered by the ADV 130. The control system 232 can received a two dimensional (2D) image from an optical camera 262 for processing to roughly locate a PV module using common machine vision techniques such as edge detection or blob analysis. The controls system 232 can also (or alternatively) receive three dimensional (3D) data from a stereo pair of optical cameras, such as optical cameras 262, or IR cameras, such as IR cameras 263. The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location. 
The control system 232 can operate as a perception system for the robotic manipulator 230 to assist in picking up and placing a PV module on a racking system as discussed herein. The perception system (e.g., control system 232 in this example) can access other sensors, such as a force sensor 264, a lidar sensor 266 and/or a rangefinder sensor 267 to provide additional 2D and 3D information about the surroundings and operation of the linear slide 220, EMT 240, and the robotic manipulator 230.”]; using the second set of sensor data in order to generate a second iteration of the panel pose of the one or more solar panels already installed [(see at least paragraph 47) “The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed). This allows determination of the 3D position and orientation of the structure with respect to the sensor, the robotic system, and also the grasped PV modules. This information informs the manipulator, such as robotic manipulator 230, how to pose in order to place the panel correctly with respect to the racking system orientation.”]; generating control signals, based on the second iteration of the panel pose, for operating a robotic controller for installing the respective solar panel [(see at least Fig.3B, paragraphs 45-50) As in 47 “The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed). This allows determination of the 3D position and orientation of the structure with respect to the sensor, the robotic system, and also the grasped PV modules. This information informs the manipulator, such as robotic manipulator 230, how to pose in order to place the panel correctly with respect to the racking system orientation.”]; and operating, using the control signals, at least one robotic system in order to install the respective solar panel relative to the one or more solar panels already installed [(see at least Fig.3A, paragraph 50) “FIG. 3A illustrates the PV installation system 200 retrieving PV module 120A from a stack of PV modules 150 delivered by ADV 130A. The EOAT 240 is using a series of vacuum cups to grab the PV module 120A. In this example, the robotic arm 230 has been positioned over the ADV 130A by the AWP 210. The AWP 210 roughly positions the linear slide 220 and robotic arm 230 using a combination of macro movements of the entire device coupled with positioning of the articulated arm 212 and parallel lift mechanism 214. 
Depending on the racking system configuration, the AWP 210 will either remain in a fixed position while the linear slide 220 and robotic arm 230 components install the PV module 120A, or the AWP 210 can manipulate the articulated arm 212 and/or parallel lift mechanism 214 in coordination with movements of the linear slide 220 and robotic arm 230. For example, in situations where the racking system is designed to hold multiple rows of PV modules, the AWP 210 may need to manipulate the articulated arm 212 and/or parallel lift mechanism 214 to provide additional vertical and/or horizontal reach for the system.”] Schneider does not explicitly teach pre-processing the at least one image including one or more of compensating for camera intrinsics or distortions, rectifying the at least one, and determining depth information. However, Tighe teaches pre-processing the at least one image including one or more of compensating for camera intrinsics or distortions, rectifying the at least one, and determining depth information [(see at least Col.3 lines 29-41) “The image rectification process may utilize perspective transformation data to map or associate coordinates of pixels in the acquired image data to new coordinates in the rectified image data. In some implementations, the perspective transformation data may comprise a 3×3 homography matrix. The image rectification process may be dependent on a relative height of the items, or a distance between the camera and the items in the acquired image data. For example, if the camera is very close to the tops of the items at the inventory location, the image will appear different from when the camera is farther from the tops of the items due to perspective effects. The relative height of the items may be described using an item plane.”] It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Schneider to incorporate the teachings of Tighe of pre-processing the at least one image including one or more of compensating for camera intrinsics or distortions, rectifying the at least one, and determining depth information in order to use the rectified image data to determine output data such as a count/position of items at an inventory location. [(Tighe Col.1 line 41)] Regarding claim 6, In view of the above combination of references, Schneider further teaches wherein the output of the one or more networks comprises panel segmentation. [(see at least paragraphs 25,44) As in 25 “O-AMPP—The inventors' solution to the problems discussed herein, titled “Outdoor Autonomous Manipulation of Photovoltaic Panels (O-AMPP)”, fuses computer vision and machine learning algorithms, a customized sensor solution, customized End-Of-Arm Tooling (EOAT), industry-leading power-dense, outdoor-rated robotic arm technology, and best-in-class outdoor unmanned ground vehicles (UGVs), to produce an innovative solution that will reduce the cost and time for producing a new solar plant.” As in 44 “The control system 232 can also access or include at least some of sensors 260 to evaluate the environment and manipulate the PV modules. In an example, the control system 232 accesses sensors 260, such as optical cameras 262 and IR cameras 263 to locate PV modules delivered by the ADV 130. 
The control system 232 can received a two dimensional (2D) image from an optical camera 262 for processing to roughly locate a PV module using common machine vision techniques such as edge detection or blob analysis. The controls system 232 can also (or alternatively) receive three dimensional (3D) data from a stereo pair of optical cameras, such as optical cameras 262, or IR cameras, such as IR cameras 263. The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location. The control system 232 can operate as a perception system for the robotic manipulator 230 to assist in picking up and placing a PV module on a racking system as discussed herein.”] Regarding claim 8, In view of the above combination of references, Schneider further teaches wherein the one or more networks is trained to output bounding boxes, segmentation, keypoints, depth and/or a 6DoF pose. [(see at least paragraphs 25, 47) As in 47 “The system 200, for example one of the control systems 216 and 232, will receive and process a combination of 2D and 31) sensor data to map the environment. For 2D sensing high resolution color imagery can be captured with an optical camera, such as optical camera 262. The 2D data can then be correlated with concurrently captured 3D depth and point cloud data, captured from sensors such as multiple IR cameras 263, a lidar sensor 266 and/or rangefinder sensors 267 (such as time-of-flight sensors).”] Regarding claim 9, Modified Schneider has all of the elements of claim 1 as discussed above. Schneider does not explicitly teach wherein the pre-processing comprises compensating for a camera distortion, rectifying the image, and/or determining depth information based on a single-baseline stereo camera, a multi-baseline stereo camera, a time-of-flight sensor, or a LiDAR sensor. However, Tighe teaches wherein the pre-processing comprises compensating for a camera distortion, rectifying the image, and/or determining depth information based on a single-baseline stereo camera, a multi-baseline stereo camera, a time-of-flight sensor, or a LiDAR sensor. [(see at least Col.17 lines 26-41) “One or more depth sensors 108(2) may also be included in the sensors 108. The depth sensors 108(2) are configured to acquire spatial or three-dimensional (3D) data, such as depth information, about objects within a FOV 202. The depth sensors 108(2) may include range cameras, lidar systems, sonar systems, radar systems, structured light systems, stereo vision systems, optical interferometry systems, and so forth. The inventory management system 110 may use the 3D data acquired by the depth sensors 108(2) to identify objects, determine a location of an object in 3D real space, and so forth. In some implementations, the depth sensors 108(2) may provide data that is used to generate or select the perspective transformation data 120. 
For example, the 3D data may be used to determine the height Z1 of the items 106.”] It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of modified Schneider to further incorporate the teachings of Tighe of wherein the pre-processing comprises compensating for a camera distortion, rectifying the image, and/or determining depth information based on a single-baseline stereo camera, a multi-baseline stereo camera, a time-of-flight sensor, or a LiDAR sensor in order to identify objects, determine a location of an object in 3D real space, and so forth. [(Tighe Col.17 line 36)] Regarding claim 10, In view of the above combination of references, Schneider further teaches wherein the first post-processing comprises one or more computer vision algorithms for processing the output of the one or more networks based on invariant structures in the at least one image to determine locations of panel keypoints. [(see at least paragraphs 25, 44) As in 44 “The control system 232 can received a two dimensional (2D) image from an optical camera 262 for processing to roughly locate a PV module using common machine vision techniques such as edge detection or blob analysis. The controls system 232 can also (or alternatively) receive three dimensional (3D) data from a stereo pair of optical cameras, such as optical cameras 262, or IR cameras, such as IR cameras 263. The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location. The control system 232 can operate as a perception system for the robotic manipulator 230 to assist in picking up and placing a PV module on a racking system as discussed herein. The perception system (e.g., control system 232 in this example) can access other sensors, such as a force sensor 264, a lidar sensor 266 and/or a rangefinder sensor 267 to provide additional 2D and 3D information about the surroundings and operation of the linear slide 220, EMT 240, and the robotic manipulator 230.”] Regarding claim 11, In view of the above combination of references, Schneider further teaches wherein the first post-processing further comprises solving for Perspective-n-Point based on panel dimensions and panel keypoints. [(see at least paragraph 47) “The 3D data can also include optical camera data captured by a pair of stereoscopic optical cameras, which allows for a processors to triangulate objects within the scene. As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. 
The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed).”] Regarding claim 19, Schneider teaches a system for installing a respective solar panel by iteratively generating a plurality of panel poses of an installation structure and one or more solar panels already installed on the installation structure based on a plurality of different sets of sensor data obtained as approaching the installation structure, the system comprising: at least one sensor configured to generate a first set of sensor data, of the plurality of different sets of sensor data, that is obtained during installation at a first position and a second set of sensor data, of the plurality of different sets of sensor data, that is obtained during installation at a second position different from the first position, wherein the the first set of sensor data comprises at least one image of the one or more solar panels already installed on the installation structure and the installation structure [(see at least paragraphs 44-50) As in 47 “Within system 200, the sensors 260 operate as perception sensors to assist one of the various vehicles and the robotic manipulator in delivering and installing the PV modules on racking systems. The system 200, for example one of the control systems 216 and 232, will receive and process a combination of 2D and 31) sensor data to map the environment. For 2D sensing high resolution color imagery can be captured with an optical camera, such as optical camera 262. The 2D data can then be correlated with concurrently captured 3D depth and point cloud data, captured from sensors such as multiple IR cameras 263, a lidar sensor 266 and/or rangefinder sensors 267 (such as time-of-flight sensors). The 3D data can also include optical camera data captured by a pair of stereoscopic optical cameras, which allows for a processors to triangulate objects within the scene. As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed). This allows determination of the 3D position and orientation of the structure with respect to the sensor, the robotic system, and also the grasped PV modules. 
This information informs the manipulator, such as robotic manipulator 230, how to pose in order to place the panel correctly with respect to the racking system orientation.”]; Schneider teaches estimating panel poses for the one or more solar panels, based on the solar panel segments [(see at least paragraph 47) “As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed). This allows determination of the 3D position and orientation of the structure with respect to the sensor”]; (ii) detecting the one or more solar panels based on the one or more images [(see at least paragraph 25) “O-AMPP—The inventors' solution to the problems discussed herein, titled “Outdoor Autonomous Manipulation of Photovoltaic Panels (O-AMPP)”, fuses computer vision and machine learning algorithms, a customized sensor solution, customized End-Of-Arm Tooling (EOAT), industry-leading power-dense, outdoor-rated robotic arm technology, and best-in-class outdoor unmanned ground vehicles (UGVs), to produce an innovative solution that will reduce the cost and time for producing a new solar plant.”]; and (iii) a first post-processing to compute a first iteration of a panel pose of the one or more solar panels already installed based on an output of one or more networks [(see at least Fig.3B, paragraphs 25,47) As in 47 “The 3D data can also include optical camera data captured by a pair of stereoscopic optical cameras, which allows for a processors to triangulate objects within the scene. As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed).”]; (iv) a second post-processing to use the second set of sensor data in order to generate a second iteration of the panel pose of the one or more solar panels already installed [(see at least paragraph 47) “The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. 
Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed). This allows determination of the 3D position and orientation of the structure with respect to the sensor, the robotic system, and also the grasped PV modules. This information informs the manipulator, such as robotic manipulator 230, how to pose in order to place the panel correctly with respect to the racking system orientation.”]; and a controller for generating control signals, based on the second iteration of the panel pose, for operating a robotic controller for installing the respective solar panel [(see at least Fig.3B, paragraphs 45-50) As in 47 “The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed). This allows determination of the 3D position and orientation of the structure with respect to the sensor, the robotic system, and also the grasped PV modules. This information informs the manipulator, such as robotic manipulator 230, how to pose in order to place the panel correctly with respect to the racking system orientation.”] Schneider does not explicitly teach one or more devices for: (i) pre-processing the one or more images including one or more of compensating for camera intrinsics or distortions, rectifying the at least one image, and determining depth information. However, Tighe teaches one or more devices for: (i) pre-processing the one or more images including one or more of compensating for camera intrinsics or distortions, rectifying the at least one image, and determining depth information [(see at least Col.3 lines 29-41) “The image rectification process may utilize perspective transformation data to map or associate coordinates of pixels in the acquired image data to new coordinates in the rectified image data. In some implementations, the perspective transformation data may comprise a 3×3 homography matrix. The image rectification process may be dependent on a relative height of the items, or a distance between the camera and the items in the acquired image data. For example, if the camera is very close to the tops of the items at the inventory location, the image will appear different from when the camera is farther from the tops of the items due to perspective effects. The relative height of the items may be described using an item plane.”] It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Schneider to incorporate the teachings of Tighe of pre-processing the one or more images including one or more of compensating for camera intrinsics or distortions, rectifying the at least one image, and determining depth information in order to use the rectified image data to determine output data such as a count/position of items at an inventory location. 
[(Tighe Col.1 line 41)] Regarding claim 21, In view of the above combination of references, Schneider further teaches wherein the second iteration of the panel pose corrects the first iteration of the panel pose. [(see at least paragraph 47) “The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed). This allows determination of the 3D position and orientation of the structure with respect to the sensor, the robotic system, and also the grasped PV modules. This information informs the manipulator, such as robotic manipulator 230, how to pose in order to place the panel correctly with respect to the racking system orientation.”] Regarding claim 24, In view of the above combination of references, Schneider further teaches wherein the at least one robotic system is controlled using control signals based on both the first iteration of the panel pose and the second iteration of the panel pose. [(see at least paragraph 44) “The control system 232 can received a two dimensional (2D) image from an optical camera 262 for processing to roughly locate a PV module using common machine vision techniques such as edge detection or blob analysis. The controls system 232 can also (or alternatively) receive three dimensional (3D) data from a stereo pair of optical cameras, such as optical cameras 262, or IR cameras, such as IR cameras 263. The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location. The control system 232 can operate as a perception system for the robotic manipulator 230 to assist in picking up and placing a PV module on a racking system as discussed herein. The perception system (e.g., control system 232 in this example) can access other sensors, such as a force sensor 264, a lidar sensor 266 and/or a rangefinder sensor 267 to provide additional 2D and 3D information about the surroundings and operation of the linear slide 220, EMT 240, and the robotic manipulator 230.”] Regarding claim 25, In view of the above combination of references, Schneider further teaches wherein at least the first iteration of the panel pose, the second iteration of the panel pose and a third iteration of the panel pose for the one or more solar panels already installed are generated in order to install the respective solar panel relative to the one or more solar panels already installed. [(see at least Fig.4, paragraphs 44-47 and 53-57) As in 44 “The control system 232 can received a two dimensional (2D) image from an optical camera 262 for processing to roughly locate a PV module using common machine vision techniques such as edge detection or blob analysis. The controls system 232 can also (or alternatively) receive three dimensional (3D) data from a stereo pair of optical cameras, such as optical cameras 262, or IR cameras, such as IR cameras 263. The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location. The control system 232 can operate as a perception system for the robotic manipulator 230 to assist in picking up and placing a PV module on a racking system as discussed herein. 
The perception system (e.g., control system 232 in this example) can access other sensors, such as a force sensor 264, a lidar sensor 266 and/or a rangefinder sensor 267 to provide additional 2D and 3D information about the surroundings and operation of the linear slide 220, EMT 240, and the robotic manipulator 230.” As in 47 “The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed). This allows determination of the 3D position and orientation of the structure with respect to the sensor, the robotic system, and also the grasped PV modules. This information informs the manipulator, such as robotic manipulator 230, how to pose in order to place the panel correctly with respect to the racking system orientation.”] Regarding claim 26, In view of the above combination of references, Schneider further teaches wherein the second iteration of the panel pose is based on the first iteration of the panel pose. [(see at least Fig.4, paragraphs 47,53) As in 47 “The 3D data can also include optical camera data captured by a pair of stereoscopic optical cameras, which allows for a processors to triangulate objects within the scene. As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed). This allows determination of the 3D position and orientation of the structure with respect to the sensor, the robotic system, and also the grasped PV modules. This information informs the manipulator, such as robotic manipulator 230, how to pose in order to place the panel correctly with respect to the racking system orientation.” As in 53 “In this example, the technique 400 can include operations performed by a PV module installation system such as positioning an AWP at 410, delivering PV modules at 420, picking up a PV module at 430, and placing the PV module at 440. The system can include a perception system performing related and/or supporting operations 450, the operations can include receiving sensor data at 451, identifying a target object at 452, extracting features of a target object at 453, determining a location at 454, and determining an orientation at 455. 
In some examples, the target object can include a PV module, a racking system, and an installation location on the racking system.”] Regarding claim 27, In view of the above combination of references, Schneider further teaches wherein the first iteration of the panel pose and the second iteration of the panel pose determine one or more keypoints of the one or more solar panels already installed; and wherein the one or more keypoints comprise a center point of the one or more solar panels already installed. [(see at least paragraph 54) “The technique 400 can include various optional additional operations associated with operation 410. For example, operation 410 can include the AWP receiving GPS location data at 411 to assist in navigating into position. The GPS data can include position data that can then be compared against waypoints or other navigation aids. At 412, the technique 400 can include the AWP 110, 210 navigating the solar generation plant. At 413, the AWP 110, 210 can utilize sensors 260 to locate and orient one or more racking systems. Finally, positioning the AWP at 410 can include positioning the articulating boom 212 at 414. In some examples, the AWP positioning adjacent a racking system at 410 can involve a perception system software operating on a controller, such as control system 216) performing some or all of the operations 450. For example, the AWP will receive sensor data at 451, such as GPS data to assist in navigation. The AWP can identity a target option at 452, such as identifying an open position on a PV racking system, or identifying the ADV loaded with PV modules for mounting. The AWP can extract features of a structure at 453, such as the structure of a mounting position on a racking system. The AWP may also perform operations 454 and 455 to determine location and orientation of an object involved in the positioning of the AWP.”] Regarding claim 28, In view of the above combination of references, Schneider further teaches wherein the first set of sensor data and the second set of sensor data are obtained from different sensors. [(see at least paragraph 42) “The system 200 can also access a variety of sensors, such as sensors 260. In an example, sensors 260 can optionally include sensors such as a global positioning sensor (GPS) 261, an optical camera 262, an infra-red (IIS) camera 263, a pressure or force sensor 264, an inertia measurement unit (IMU) 265, a lidar sensor 266, and/or a rangefinder sensor 267. In some examples, each of the AWP 210, the robotic manipulator 230, the system controller 250, and the ADV 130 can include some or all of sensors 260. In certain examples, some or all of the sensors 260 are embedded into each of the AWP 210, the robotic manipulator 230, and the ADV 130 with the system controller 250 able to communicate with each vehicle and receive some or all of the data from sensors 260.”] Regarding claim 29, In view of the above combination of references, Schneider further teaches wherein the second iteration of the panel pose is configured to correct the first iteration of the panel pose. [(see at least paragraph 47) “The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed). 
This allows determination of the 3D position and orientation of the structure with respect to the sensor, the robotic system, and also the grasped PV modules. This information informs the manipulator, such as robotic manipulator 230, how to pose in order to place the panel correctly with respect to the racking system orientation.”] Regarding claim 31, In view of the above combination of references, Schneider further teaches wherein one or more devices are configured to generate at least the first iteration of the panel pose, the second iteration of the panel pose and a third iteration of the panel pose for the one or more solar panels already installed in order to install the respective solar panel relative to the one or more solar panels already installed. [(see at least Fig.4, paragraphs 44-47 and 53-57) As in 44 “The control system 232 can received a two dimensional (2D) image from an optical camera 262 for processing to roughly locate a PV module using common machine vision techniques such as edge detection or blob analysis. The controls system 232 can also (or alternatively) receive three dimensional (3D) data from a stereo pair of optical cameras, such as optical cameras 262, or IR cameras, such as IR cameras 263. The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location. The control system 232 can operate as a perception system for the robotic manipulator 230 to assist in picking up and placing a PV module on a racking system as discussed herein. The perception system (e.g., control system 232 in this example) can access other sensors, such as a force sensor 264, a lidar sensor 266 and/or a rangefinder sensor 267 to provide additional 2D and 3D information about the surroundings and operation of the linear slide 220, EMT 240, and the robotic manipulator 230.” As in 47 “The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed). This allows determination of the 3D position and orientation of the structure with respect to the sensor, the robotic system, and also the grasped PV modules. This information informs the manipulator, such as robotic manipulator 230, how to pose in order to place the panel correctly with respect to the racking system orientation.”] Regarding claim 32, In view of the above combination of references, Schneider further teaches wherein at least one sensor comprises different sensors configured to generate the first set of sensor data and the second set of sensor data. [(see at least paragraph 42) “The system 200 can also access a variety of sensors, such as sensors 260. In an example, sensors 260 can optionally include sensors such as a global positioning sensor (GPS) 261, an optical camera 262, an infra-red (IIS) camera 263, a pressure or force sensor 264, an inertia measurement unit (IMU) 265, a lidar sensor 266, and/or a rangefinder sensor 267. In some examples, each of the AWP 210, the robotic manipulator 230, the system controller 250, and the ADV 130 can include some or all of sensors 260. 
In certain examples, some or all of the sensors 260 are embedded into each of the AWP 210, the robotic manipulator 230, and the ADV 130 with the system controller 250 able to communicate with each vehicle and receive some or all of the data from sensors 260.”] Allowable Subject Matter Claims 20, 22, 23 and 30 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Examiner notes that the prior art of record fails to disclose “wherein the one or more devices for a second post-processing comprises one or more homography transforms to obtain a second panel pose for the one or more solar panels, based on the first panel pose, wherein the second post-processing compensates or corrects for inaccuracies in the first panel pose based on visual patterns or fiducials on a solar panel”. Schneider (US 2021/0379757 A1) discloses a robotic PV module installation system for populating a solar generation plant with PV modules with minimal human intervention. The PV module installation system can include an aerial work platform (AWP), a linear slide, and a robotic arm. Tighe (US 10,157,452 B1) discloses techniques to use rectified images for further processing such as object detection, object identification, and so forth. In one implementation, an image from a camera is processed to produce a rectified image. However, the prior art of record fails to disclose further comprising: a second post-processing comprising one or more homography transforms to obtain a second panel pose for the one or more solar panels, based on the first panel pose, wherein the second post-processing compensates or corrects for inaccuracies in the first panel pose based on visual patterns or fiducials on a solar panel”. The references fail to disclose and teach all of the features AND a suitable motivation to combine and add these missing features. The Examiner has cited particular paragraphs or columns and line numbers in the references applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested of the Applicant in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2141.02 [R-07.2015] VI. A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed Invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert, denied, 469 U.S. 851 (1984). See also MPEP §2123. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED YOUSEF ABUELHAWA whose telephone number is (571)272-3219. The examiner can normally be reached Monday-Friday 8:30-5:00 with flex. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wade Miles can be reached at 571-270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MOHAMMED YOUSEF ABUELHAWA/Examiner, Art Unit 3656 /WADE MILES/Supervisory Patent Examiner, Art Unit 3656
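
The rejection turns on two image-geometry techniques recited in the claims: rectifying imagery with a 3×3 homography (the passage cited from Tighe) and solving Perspective-n-Point from panel dimensions and detected keypoints (claims 10-11). For reference only, the following is a minimal OpenCV sketch of those generic techniques; it is not the applicant's claimed implementation or code from either reference, and the intrinsics, panel dimensions, and "detected" keypoints are made-up illustration values.

```python
# Generic sketch of homography-based rectification and PnP pose estimation,
# as discussed in the rejection. All numeric values are illustrative assumptions.

import cv2
import numpy as np

# Assumed pinhole intrinsics and lens distortion coefficients.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.12, 0.03, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

# Stand-in frame (in practice: an image of panels already installed on the rack).
frame = np.zeros((720, 1280, 3), dtype=np.uint8)

# 1) Pre-processing: compensate for lens distortion using the intrinsics.
undistorted = cv2.undistort(frame, K, dist)

# "Detected" corner keypoints of one installed panel, in pixel coordinates
# (in practice these would come from a segmentation / keypoint network).
img_pts = np.array([[420.0, 210.0],
                    [860.0, 230.0],
                    [850.0, 520.0],
                    [410.0, 500.0]], dtype=np.float32)

# 2) Rectify the panel region with a 3x3 homography to a fronto-parallel view.
panel_w_px, panel_h_px = 800, 480
dst_pts = np.array([[0, 0], [panel_w_px, 0],
                    [panel_w_px, panel_h_px], [0, panel_h_px]], dtype=np.float32)
H = cv2.getPerspectiveTransform(img_pts, dst_pts)
rectified = cv2.warpPerspective(undistorted, H, (panel_w_px, panel_h_px))

# 3) Pose: solve PnP from assumed physical panel dimensions (metres) and the
#    same four keypoints to get a 6DoF panel pose in the camera frame.
panel_w, panel_h = 2.0, 1.0
obj_pts = np.array([[0.0, 0.0, 0.0],
                    [panel_w, 0.0, 0.0],
                    [panel_w, panel_h, 0.0],
                    [0.0, panel_h, 0.0]], dtype=np.float32)
ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the estimated panel pose

print("rectified view:", rectified.shape)
print("PnP solved:", ok)
print("translation (panel origin in camera frame):", tvec.ravel())
```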

Prosecution Timeline

Aug 11, 2023
Application Filed
Jun 27, 2025
Non-Final Rejection — §103
Nov 03, 2025
Response Filed
Jan 31, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598706
Method of inserting an electronic components in through-hole technology, THT, into a printed circuit board, PCB, by an industrial robot
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12558786
RESTRICTING MOVEMENT OF A MOBILE ROBOT
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12552031
WORK MANAGEMENT SYSTEM
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12533813
ROBOT, SYSTEM COMPRISING ROBOT AND USER DEVICE AND CONTROLLING METHOD THEREOF
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12472641
GENERATING REFERENCES FOR ROBOT-CARRIED OBJECTS AND RELATED TECHNOLOGY
Granted Nov 18, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 81%
With Interview: 99% (+20.1%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 67 resolved cases by this examiner; grant probability is derived from the career allow rate.
