DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed on 1/26/2026 regarding claims 1, 6, 9, 11, 16, 19, 21, and 31-44 have been fully considered, but they are not persuasive and/or are moot in view of the new grounds of rejection provided below, which were necessitated by Applicant's amendments to the claims. The new grounds of rejection for the independent claims are based on Schneider in view of Janthori.
The same reasoning as applied to the independent claims above also applies to their corresponding dependent claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Schneider et al. (US 2021/0379757 A1) (Hereinafter Schneider) in view of Janthori et al. (T. Janthori, B. Lertpornsuksawat and T. Sapsaman, "Automatic Data Collection for Ariel Thermographic Inspections of Photovoltaic Modules," 2022 Research, Invention, and Innovation Congress: Innovative Electricals and Electronics (RI2C), Bangkok, Thailand, 2022, pp. 278-283) (Hereinafter Janthori).
Regarding Claim 1, Schneider teaches a method for an at least partly autonomous solar installation (See at least Para [0002] “This document pertains generally, but not by way of limitation, to robotically assisted assembly of photovoltaic (PV) modules for solar power generation plant. More particularly, this disclosure relates to, but not by way of limitation, a mobile robotic manipulator for PV modules…”, Para [0015] “…the present application discusses an autonomous working vehicle with a robotic arm for installing PV modules…”), the method comprising:
automatically obtaining one or more images of an in-progress solar installation, wherein the one or more images include one or more solar panels and installation structure (Para [0043] “… The control system 216 can operate the AWP 210 autonomously or receive inputs from a manual control interface to allow an operate to manually manipulate the AWP 210.”, Para [0054] “…The AWP can extract features of a structure at 453, such as the structure of a mounting position on a racking system. The AWP may also perform operations 454 and 455 to determine location and orientation of an object involved in the positioning of the AWP. “, Para [0044] “…The control system 232 can received a two dimensional (2D) image from an optical camera 262 for processing to roughly locate a PV module using common machine vision techniques such as edge detection or blob analysis. The controls system 232 can also (or alternatively) receive three dimensional (3D) data from a stereo pair of optical cameras, such as optical cameras 262, or IR cameras, such as IR cameras 263 . The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location. The control system 232 can operate as a perception system for the robotic manipulator 230 to assist in picking up and placing a PV module on a racking system as discussed herein. 
The perception system (e.g., control system 232 in this example) can access other sensors, such as a force sensor 264 , a lidar sensor 266 and/or a rangefinder sensor 267 to provide additional 2D and 3D information about the surroundings and operation of the linear slide 220 , EMT 240 , and the robotic manipulator 230 …”, discloses additional 2D and 3D information about the surroundings, which is construed as an image of torque tubes, Para [0081] “In example 6, the subject matter of any one of examples 1-5 includes, wherein the end of arm tooling includes a device used to secure a PV module to the racking system. In some examples, the device is a torquing device.”, discloses a torquing device, which is considered a torque tube);
automatically detecting the one or more solar panels within the one or more images (See at least Para [0047] “… As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed)…”, Para [0025] “O-AMPP—The inventors' solution to the problems discussed herein, titled “Outdoor Autonomous Manipulation of Photovoltaic Panels (O-AMPP)”, fuses computer vision and machine learning algorithms, a customized sensor solution, customized End-Of-Arm Tooling (EOAT), industry-leading power-dense, outdoor-rated robotic arm technology, and best-in-class outdoor unmanned ground vehicles (UGVs), to produce an innovative solution that will reduce the cost and time for producing a new solar plant.”);
responsive to automatically detecting the one or more solar panels, automatically making an initial determination using at least one image as to at least one aspect of the one or more solar panels (See at least Para [0047] “… As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed)…”); …
automatically estimating based on the subsequent determination as to the at least one aspect, one or more panel poses for the one or more solar panels (See at least Para [0025] “O-AMPP—The inventors' solution to the problems discussed herein, titled “Outdoor Autonomous Manipulation of Photovoltaic Panels (O-AMPP)”, fuses computer vision and machine learning algorithms, a customized sensor solution, customized End-Of-Arm Tooling (EOAT), industry-leading power-dense, outdoor-rated robotic arm technology, and best-in-class outdoor unmanned ground vehicles (UGVs), to produce an innovative solution that will reduce the cost and time for producing a new solar plant.”, Para [0044] “... The control system 232 can also access or include at least some of sensors 260 to evaluate the environment and manipulate the PV modules. In an example, the control system 232 accesses sensors 260 , such as optical cameras 262 and IR cameras 263 to locate PV modules delivered by the ADV 130 . The control system 232 can received a two dimensional (2D) image from an optical camera 262 for processing to roughly locate a PV module using common machine vision techniques such as edge detection or blob analysis. The controls system 232 can also (or alternatively) receive three dimensional (3D) data from a stereo pair of optical cameras, such as optical cameras 262 , or IR cameras, such as IR cameras 263 . The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location. 
The control system 232 can operate as a perception system for the robotic manipulator 230 to assist in picking up and placing a PV module on a racking system as discussed herein…”, discloses that 3D data can be analyzed to map the precise location and orientation of the PV modules, which is construed as estimating panel poses for the one or more solar panels, Para [0033] “… To pick up PV modules first perception sensors and computer vision is used to detect and locate key objects of interest including the modules and the racking system 160 A … When that motion is complete, fine manipulator arm motion is used to operate the robotic arm 230 to grasp PV modules, such as PV module 120 A, with the EOAT 240 . The process continues in reference to FIGS. 3B and 3C, with the robotic arm 230 positioning and installing the PV module 120 A onto racking system 160 A”, Fig. 3B shows end of arm tooling 240 handling solar panel segment 120 A); and
automatically operating at least one robot using one or more control signals generated based on the estimated one or more panel poses in order to position the one or more solar panels for installation (See at least Para [0044] “In this example, the robotic manipulator 230 can include a control system 232 that can also be in communication with the linear slide 220 and the EOAT 240 to coordinate and control movement of these devices as a unit. The control system 232 can also access or include at least some of sensors 260 to evaluate the environment and manipulate the PV modules. In an example, the control system 232 accesses sensors 260 , such as optical cameras 262 and IR cameras 263 to locate PV modules delivered by the ADV 130 . The control system 232 can received a two dimensional (2D) image from an optical camera 262 for processing to roughly locate a PV module using common machine vision techniques such as edge detection or blob analysis. The controls system 232 can also (or alternatively) receive three dimensional (3D) data from a stereo pair of optical cameras, such as optical cameras 262, or IR cameras, such as IR cameras 263. The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location. The control system 232 can operate as a perception system for the robotic manipulator 230 to assist in picking up and placing a PV module on a racking system as discussed herein…”).
However, Schneider does not explicitly disclose …
responsive to making the initial determination as to the at least one aspect of the one or more solar panels, automatically making a subsequent determination using at least one different image as to the at least one aspect of the one or more solar panels in order to reduce error, wherein the at least one image is different from the at least one different image and wherein the at least one image and the at least one different image are taken as part of the in-progress solar installation.
Janthori teaches …
responsive to making the initial determination as to the at least one aspect of the one or more solar panels, automatically making a subsequent determination using at least one different image as to the at least one aspect of the one or more solar panels in order to reduce error, wherein the at least one image is different from the at least one different image and wherein the at least one image and the at least one different image are taken as part of the in-progress solar installation (See at least Fig. 1, which shows a pattern saved from image processing, which is construed as making the determination of the aspect through different images as shown in Fig. 4, Page 279 Col 1 “A. Capturing - An image is captured in every time interval instead of a video to reduce memory and processing power requirement from the SBC. Sample images are shown in Fig. 4.”, Fig. 9 shows the PV array detection result, and Fig. 4 shows the different captured images).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Janthori and include the feature of, responsive to making the initial determination as to the at least one aspect of the one or more solar panels, automatically making a subsequent determination using at least one different image as to the at least one aspect of the one or more solar panels in order to reduce error, wherein the at least one image is different from the at least one different image and wherein the at least one image and the at least one different image are taken as part of the in-progress solar installation, thereby providing improved PV detection accuracy and improving the overall quality control of photovoltaic installations (See at least Page 283 “VI. CONCLUSION … Moreover, it is possible to reduce the labor, cost, and time of the inspection compared to the previous inspection method and improve the overall quality control of photovoltaic installations.”).
Regarding Claim 11, Schneider teaches a system for installing solar panels (See at least Para [0002] “This document pertains generally, but not by way of limitation, to robotically assisted assembly of photovoltaic (PV) modules for solar power generation plant. More particularly, this disclosure relates to, but not by way of limitation, a mobile robotic manipulator for PV modules…”, Para [0015] “…the present application discusses an autonomous working vehicle with a robotic arm for installing PV modules…”), the system comprising:
a camera system configured to automatically obtain one or more images of an in-progress solar installation, wherein the one or more images are of one or more solar panels and installation structure (Para [0043] “… The control system 216 can operate the AWP 210 autonomously or receive inputs from a manual control interface to allow an operate to manually manipulate the AWP 210.”, Para [0054] “…The AWP can extract features of a structure at 453, such as the structure of a mounting position on a racking system. The AWP may also perform operations 454 and 455 to determine location and orientation of an object involved in the positioning of the AWP. “, Para [0044] “…The control system 232 can received a two dimensional (2D) image from an optical camera 262 for processing to roughly locate a PV module using common machine vision techniques such as edge detection or blob analysis. The controls system 232 can also (or alternatively) receive three dimensional (3D) data from a stereo pair of optical cameras, such as optical cameras 262, or IR cameras, such as IR cameras 263 . The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location. The control system 232 can operate as a perception system for the robotic manipulator 230 to assist in picking up and placing a PV module on a racking system as discussed herein. 
The perception system (e.g., control system 232 in this example) can access other sensors, such as a force sensor 264 , a lidar sensor 266 and/or a rangefinder sensor 267 to provide additional 2D and 3D information about the surroundings and operation of the linear slide 220 , EMT 240 , and the robotic manipulator 230 …”, discloses additional 2D and 3D information about the surroundings, which is construed as an image of torque tubes, Para [0081] “In example 6, the subject matter of any one of examples 1-5 includes, wherein the end of arm tooling includes a device used to secure a PV module to the racking system. In some examples, the device is a torquing device.”, discloses a torquing device, which is considered a torque tube);
at least one robot (See at least Para [0016] “Manipulator a robotic arm or device used to move objects in space.”); and
at least one controller (See at least Para [0035] discloses using controller to coordinate movements of the entire PV panel installation system) configured to:
automatically detect the one or more solar panels within the one or more images (See at least Para [0047] “… As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed)…”, Para [0025] “O-AMPP—The inventors' solution to the problems discussed herein, titled “Outdoor Autonomous Manipulation of Photovoltaic Panels (O-AMPP)”, fuses computer vision and machine learning algorithms, a customized sensor solution, customized End-Of-Arm Tooling (EOAT), industry-leading power-dense, outdoor-rated robotic arm technology, and best-in-class outdoor unmanned ground vehicles (UGVs), to produce an innovative solution that will reduce the cost and time for producing a new solar plant.”);
responsive to automatically detecting the one or more solar panels, automatically make an initial determination using at least one image as to at least one aspect of the one or more solar panels (See at least Para [0047] “… As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed)…”); …
automatically estimate, based on the subsequent determination as to the at least one aspect, one or more panel poses for the one or more solar panels (See at least Para [0025] “O-AMPP—The inventors' solution to the problems discussed herein, titled “Outdoor Autonomous Manipulation of Photovoltaic Panels (O-AMPP)”, fuses computer vision and machine learning algorithms, a customized sensor solution, customized End-Of-Arm Tooling (EOAT), industry-leading power-dense, outdoor-rated robotic arm technology, and best-in-class outdoor unmanned ground vehicles (UGVs), to produce an innovative solution that will reduce the cost and time for producing a new solar plant.”, Para [0044] “... The control system 232 can also access or include at least some of sensors 260 to evaluate the environment and manipulate the PV modules. In an example, the control system 232 accesses sensors 260 , such as optical cameras 262 and IR cameras 263 to locate PV modules delivered by the ADV 130 . The control system 232 can received a two dimensional (2D) image from an optical camera 262 for processing to roughly locate a PV module using common machine vision techniques such as edge detection or blob analysis. The controls system 232 can also (or alternatively) receive three dimensional (3D) data from a stereo pair of optical cameras, such as optical cameras 262 , or IR cameras, such as IR cameras 263 . The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location. 
The control system 232 can operate as a perception system for the robotic manipulator 230 to assist in picking up and placing a PV module on a racking system as discussed herein…”, discloses that 3D data can be analyzed to map the precise location and orientation of the PV modules, which is construed as estimating panel poses for the one or more solar panels, Para [0033] “… To pick up PV modules first perception sensors and computer vision is used to detect and locate key objects of interest including the modules and the racking system 160 A … When that motion is complete, fine manipulator arm motion is used to operate the robotic arm 230 to grasp PV modules, such as PV module 120 A, with the EOAT 240 . The process continues in reference to FIGS. 3B and 3C, with the robotic arm 230 positioning and installing the PV module 120 A onto racking system 160 A”, Fig. 3B shows end of arm tooling 240 handling solar panel segment 120 A); and
based on the estimated one or more panel poses, generate one or more control signals to control the at least one robot in order to position the one or more solar panels for installation (See at least Para [0044] “In this example, the robotic manipulator 230 can include a control system 232 that can also be in communication with the linear slide 220 and the EOAT 240 to coordinate and control movement of these devices as a unit. The control system 232 can also access or include at least some of sensors 260 to evaluate the environment and manipulate the PV modules. In an example, the control system 232 accesses sensors 260, such as optical cameras 262 and IR cameras 263 to locate PV modules delivered by the ADV 130. The control system 232 can received a two dimensional (2D) image from an optical camera 262 for processing to roughly locate a PV module using common machine vision techniques such as edge detection or blob analysis. The controls system 232 can also (or alternatively) receive three dimensional (3D) data from a stereo pair of optical cameras, such as optical cameras 262 , or IR cameras, such as IR cameras 263 . The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location. The control system 232 can operate as a perception system for the robotic manipulator 230 to assist in picking up and placing a PV module on a racking system as discussed herein…”); and
wherein, responsive to receiving the one or more control signals, the at least one robot is configured to automatically operate in order to position the one or more solar panels for installation (See at least Para [0044] “In this example, the robotic manipulator 230 can include a control system 232 that can also be in communication with the linear slide 220 and the EOAT 240 to coordinate and control movement of these devices as a unit. The control system 232 can also access or include at least some of sensors 260 to evaluate the environment and manipulate the PV modules. In an example, the control system 232 accesses sensors 260, such as optical cameras 262 and IR cameras 263 to locate PV modules delivered by the ADV 130. The control system 232 can received a two dimensional (2D) image from an optical camera 262 for processing to roughly locate a PV module using common machine vision techniques such as edge detection or blob analysis. The controls system 232 can also (or alternatively) receive three dimensional (3D) data from a stereo pair of optical cameras, such as optical cameras 262 , or IR cameras, such as IR cameras 263 . The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location. The control system 232 can operate as a perception system for the robotic manipulator 230 to assist in picking up and placing a PV module on a racking system as discussed herein…”).
However, Schneider does not explicitly disclose …
responsive to making the initial determination as to the at least one aspect of the one or more solar panels, automatically make a subsequent determination using at least one different image as to the at least one aspect of the one or more solar panels in order to reduce error, wherein the at least one image is different from the at least one different image;
Janthori teaches …
responsive to making the initial determination as to the at least one aspect of the one or more solar panels, automatically make a subsequent determination using at least one different image as to the at least one aspect of the one or more solar panels in order to reduce error, wherein the at least one image is different from the at least one different image (See at least Fig. 1, which shows a pattern saved from image processing, which is construed as making the determination of the aspect through different images as shown in Fig. 4, Page 279 Col 1 “A. Capturing - An image is captured in every time interval instead of a video to reduce memory and processing power requirement from the SBC. Sample images are shown in Fig. 4.”, Fig. 9 shows the PV array detection result, and Fig. 4 shows the different captured images).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Janthori and include the feature of, responsive to making the initial determination as to the at least one aspect of the one or more solar panels, automatically making a subsequent determination using at least one different image as to the at least one aspect of the one or more solar panels in order to reduce error, wherein the at least one image is different from the at least one different image, thereby providing improved PV detection accuracy and improving the overall quality control of photovoltaic installations (See at least Page 283 “VI. CONCLUSION … Moreover, it is possible to reduce the labor, cost, and time of the inspection compared to the previous inspection method and improve the overall quality control of photovoltaic installations.”).
Claim(s) 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Schneider et al. (US 2021/0379757 A1) (Hereinafter Schneider) in view of Janthori et al. (T. Janthori, B. Lertpornsuksawat and T. Sapsaman, "Automatic Data Collection for Ariel Thermographic Inspections of Photovoltaic Modules," 2022 Research, Invention, and Innovation Congress: Innovative Electricals and Electronics (RI2C), Bangkok, Thailand, 2022, pp. 278-283) (Hereinafter Janthori), and further in view of Takai et al. (US 2008/0096291 A1) (Hereinafter Takai).
Regarding Claim 6, modified Schneider teaches all the elements of claim 1.
However, Schneider does not explicitly disclose the method of claim 1, wherein the obtaining the one or more images includes using one or more filters for avoiding direct sun glare for detecting End-of-Arm Tooling (EOAT).
Takai teaches the method of claim 1, wherein the obtaining the one or more images includes using one or more filters for avoiding direct sun glare for detecting End-of-Arm Tooling (EOAT) (See at least Para [0106] “FIG. 9 shows a measurement system used to examine formation of polysilane in exhaust gas. FIG. 9 shows a CCD camera 901, a bandpass filter 902…”, Fig 9).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Schneider with the teachings of Takai and include the feature of obtaining the one or more images using one or more filters for avoiding direct sun glare for detecting End-of-Arm Tooling (EOAT), thereby precisely detecting the End-of-Arm Tooling (EOAT) and reducing cost (See at least Para [0043] “Another object of the present invention is to reduce manufacturing costs of semiconductor devices…”).
Regarding Claim 16, modified Schneider teaches all the elements of claim 11.
However, Schneider does not explicitly disclose the system of claim 11, wherein the camera system is configured to obtain the one or more images using one or more filters for avoiding direct sun glare for detecting End-of-Arm Tooling (EOAT).
Takai teaches the system of claim 11, wherein the camera system is configured to obtain the one or more images using one or more filters for avoiding direct sun glare for detecting End-of-Arm Tooling (EOAT) (See at least Para [0106] “FIG. 9 shows a measurement system used to examine formation of polysilane in exhaust gas. FIG. 9 shows a CCD camera 901, a bandpass filter 902…”, Fig 9).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Takai and include the feature of obtaining the one or more images using one or more filters for avoiding direct sun glare for detecting End-of-Arm Tooling (EOAT), thereby precisely detecting the End-of-Arm Tooling (EOAT) and reducing cost (See at least Para [0043] “Another object of the present invention is to reduce manufacturing costs of semiconductor devices…”).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Schneider et al. (US 2021/0379757 A1) (Hereinafter Schneider) in view of Janthori et al. (T. Janthori, B. Lertpornsuksawat and T. Sapsaman, "Automatic Data Collection for Ariel Thermographic Inspections of Photovoltaic Modules," 2022 Research, Invention, and Innovation Congress: Innovative Electricals and Electronics (RI2C), Bangkok, Thailand, 2022, pp. 278-283) (Hereinafter Janthori), Takai et al. (US 2008/0096291 A1) (Hereinafter Takai), and further in view of Tokiwa (US 6247999 B1).
Regarding Claim 9, modified Schneider teaches all the elements of claim 1. Schneider further
discloses … and panel pose estimation using predetermined 3D panel geometry and corner locations (See at least Para [0096] “In example 22, the subject matter of any one of examples 8-21 includes identifying the PV module by detecting corners of the PV module within a 2D image or a 3D image.”, [0047] “…As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module…”) …
wherein the installation structure comprises one or more torque tubes (See at least Para [0034] “… Securing the PV module can include operating a tool to torque clamps into a secure position….”, Para [0081] “In example 6, the subject matter of any one of examples 1-5 includes, wherein the end of arm tooling includes a device used to secure a PV module to the racking system. In some examples, the device is a torquing device.”);
wherein obtaining the one or more images includes using a high-resolution camera with laser line generation for identifying the one or more torque tubes and/or a clamp position, and the computer vision pipeline locates the one or more torque tubes and/or the clamp position to estimate the panel poses (See at least Para [0038] “…The perception system can include sensors such as, optical cameras (monocular and stereo), infrared (IR) sensors, pressure sensors, inertial measurement units (IMUs), LIDAR cameras, and rangefinder sensors (sonar, laser, structured light), among others…”, Para [0047] “…The system 200, for example one of the control systems 216 and 232, will receive and process a combination of 2D and 3D sensor data to map the environment. For 2D sensing high resolution color imagery can be captured with an optical camera, such as optical camera 262. The 2D data can then be correlated with concurrently captured 3D depth and point cloud data, captured from sensors such as multiple IR cameras 263, a lidar sensor 266 and/or rangefinder sensors 267 (such as time-of-flight sensors)… This allows determination of the 3D position and orientation of the structure with respect to the sensor, the robotic system, and also the grasped PV modules. This information informs the manipulator, such as robotic manipulator 230, how to pose in order to place the panel correctly with respect to the racking system orientation.”, Para [0053] “…The system can include a perception system performing related and/or supporting operations 450, the operations can include receiving sensor data at 451, identifying a target object at 452, extracting features of a target object at 453, determining a location at 454, and determining an orientation at 455.
In some examples, the target object can include a PV module, a racking system, and an installation location on the racking system.”, Para [0098] “In example 24, the subject matter of any one of examples 8-23 includes identifying the installation location by detecting a position and orientation of a structural feature of a PV module racking system.”), and …
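As an illustrative aside on the cited corner-to-plane projection (Schneider Para [0047]), once four 3D corner points of a panel are available, the panel plane and center follow from elementary vector geometry. The following is a minimal sketch under assumed inputs; the function name and example data are hypothetical and do not come from any cited reference.

```python
# Minimal sketch: panel pose (center + plane normal) from four detected
# 3D corner points. Illustrative only; names and data are hypothetical.

def panel_pose(corners):
    """Return (center, unit normal) of the plane through four corners."""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = corners
    # Center: arithmetic mean of the four corners.
    center = ((x0 + x1 + x2 + x3) / 4,
              (y0 + y1 + y2 + y3) / 4,
              (z0 + z1 + z2 + z3) / 4)
    # Two edge vectors emanating from corner 0.
    u = (x1 - x0, y1 - y0, z1 - z0)
    v = (x3 - x0, y3 - y0, z3 - z0)
    # Plane normal = u x v (cross product), then normalize to unit length.
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    mag = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return center, (n[0] / mag, n[1] / mag, n[2] / mag)

# Example: a flat 2 m x 1 m panel lying in the z = 0.5 plane.
corners = [(0, 0, 0.5), (2, 0, 0.5), (2, 1, 0.5), (0, 1, 0.5)]
center, normal = panel_pose(corners)
```

With predetermined panel geometry, the recovered center and normal suffice to pose a manipulator relative to the panel's top surface.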
However, Schneider does not explicitly spell out the method of claim 1, wherein automatically detecting uses a trained neural network comprises (i) a model for semantic segmentation for identifying a solar panel segment, and (ii) a model for instance segmentation for identifying a plurality of solar panel segments,
wherein the trained neural network uses a Mask R-CNN framework for instance segmentation,
wherein the initial determination and the subsequent determination use computer vision that comprises one or more computer vision algorithms for post-processing, Hough transform, filtering and segmentation of Hough lines, finding horizontal and/or vertical Hough line intersections, …
wherein the one or more images is of a clamp and/or a center structure for the in-progress solar installation, and the computer vision locates the clamps and/or the center structures to estimate the panel poses,
wherein obtaining the one or more images includes using one or more filters for avoiding direct sun glare for detecting End-of-Arm Tooling (EOAT),
…
wherein obtaining the one or more images includes using a ring light for locating a nut, and the
computer vision locates the nut.
Janthori teaches … the method of claim 1, wherein automatically detecting uses a trained neural network comprises (i) a model for semantic segmentation for identifying a solar panel segment (See at least Page 280 Col 1 “D. Panel Segmentation Most industrial photovoltaic panels are designed with a frame, commonly in white color. Taking advantage of this characteristic, the lines of frames can be detected from the images using Hough Line Transformation…”), and (ii) a model for instance segmentation for identifying a plurality of solar panel segments (See at least Page 280 Col 1 “C. Array Segmentation Utilizing mask region-based Convolutional Neural Network (Mask-R-CNN) [4], the photovoltaic panel array, Fig 6, is segmented from the background for further process.”),
wherein the trained neural network uses a Mask R-CNN framework for instance segmentation (See at least Page 280 Col 1 “C. Array Segmentation Utilizing mask region-based Convolutional Neural Network (Mask-R-CNN) [4], the photovoltaic panel array, Fig 6, is segmented from the background for further process.”),
wherein the initial determination and the subsequent determination use computer vision that comprises one or more computer vision algorithms for post-processing, Hough transform, filtering and segmentation of Hough lines, finding horizontal and/or vertical Hough line intersections (See at least Page 280 Col 1 “D. Panel Segmentation Most industrial photovoltaic panels are designed with a frame, commonly in white color. Taking advantage of this characteristic, the lines of frames can be detected from the images using Hough Line Transformation…”, Fig. 1 shows using computer vision for image processing during automatic data collection process), …
wherein the one or more images is of a clamp and/or a center structure for the in-progress solar installation, and the computer vision locates the clamps and/or the center structures to estimate the panel poses (See at least Page 280 Col 2 “B. Photovoltaic Array Detection The object detection, YOLOv5 [5], is used for detection of the photovoltaic array. The detection visualizes as a region of interest of the object, Fig. 9, that can be modify for other usage, such as finding area of the region or position of the center of contour.”),
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Janthori and include the features of automatically detecting using a trained neural network that comprises (i) a model for semantic segmentation for identifying a solar panel segment and (ii) a model for instance segmentation for identifying a plurality of solar panel segments, wherein the trained neural network uses a Mask R-CNN framework for instance segmentation; wherein the initial determination and the subsequent determination use computer vision that comprises one or more computer vision algorithms for post-processing, Hough transform, filtering and segmentation of Hough lines, and finding horizontal and/or vertical Hough line intersections; and wherein the one or more images is of a clamp and/or a center structure for the in-progress solar installation, and the computer vision locates the clamps and/or the center structures to estimate the panel poses, thereby providing precise solar panel detection, which reduces labor cost and installation time and improves the overall quality control of photovoltaic installations (See at least Page 283 Col 2 “VI. CONCLUSION - … reduce the labor, cost, and time of the inspection compared to the previous inspection method and improve the overall quality control of photovoltaic installations.”).
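For illustration, the recited corner-finding step (intersecting horizontal and vertical Hough lines) reduces to solving a 2x2 linear system per line pair. The following is a minimal sketch, assuming lines are already available in rho-theta normal form (rho = x·cos θ + y·sin θ); the function name and example values are hypothetical and are not taken from Janthori.

```python
import math

# Illustrative sketch: corner as the intersection of two Hough lines
# given in (rho, theta) normal form. Hypothetical names and data.

def intersect(l1, l2):
    """Intersect two lines (rho, theta); return (x, y), or None if parallel."""
    (r1, t1), (r2, t2) = l1, l2
    a1, b1 = math.cos(t1), math.sin(t1)
    a2, b2 = math.cos(t2), math.sin(t2)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:            # near-parallel lines: no stable intersection
        return None
    # Cramer's rule on: a1*x + b1*y = r1 ; a2*x + b2*y = r2
    x = (r1 * b2 - r2 * b1) / det
    y = (a1 * r2 - a2 * r1) / det
    return x, y

# A vertical line x = 3 (theta = 0) and a horizontal line y = 5 (theta = pi/2).
vertical = (3.0, 0.0)
horizontal = (5.0, math.pi / 2)
corner = intersect(vertical, horizontal)
```

Collecting all such horizontal/vertical intersections yields the candidate panel corners used downstream for pose estimation.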
Takai teaches wherein obtaining the one or more images includes using one or more filters for avoiding direct sun glare for detecting End-of-Arm Tooling (EOAT) (See at least Para [0106] “FIG. 9 shows a measurement system used to examine formation of polysilane in exhaust gas. FIG. 9 shows a CCD camera 901, a bandpass filter 902…”, Fig 9).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Takai and include the feature of obtaining the one or more images using one or more filters for avoiding direct sun glare for detecting End-of-Arm Tooling (EOAT), thereby precisely detecting the EOAT and reducing cost (See at least Para [0043] “Another object of the present invention is to reduce manufacturing costs of semiconductor devices…”).
Tokiwa teaches wherein obtaining the one or more images includes using a ring light for locating a nut, and the computer vision locates the nut (See at least Col 11 Lines 37-42 “…The tool approach confirmation point P.sub.3 is provided to confirm whether or not the center of the fastening nut 28 has been located on the light path L of light beams of the light sensors 48 and 49 and the arm (the ring holder) of the robot has properly entered a predetermined position in the tool-exchange process.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Tokiwa and include the feature of obtaining the one or more images using a ring light for locating a nut, thereby providing precision and accuracy during the solar installation process and increasing productivity (See at least Col 15 Lines 21-23 “Thus, a process of automatically exchanging polishing tools can be performed without difficulty, and the productivity of the mold producing operation is enhanced.”).
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Schneider et al. (US 2021/0379757 A1) (Hereinafter Schneider) in view of Janthori et al. (T. Janthori, B. Lertpornsuksawat and T. Sapsaman, "Automatic Data Collection for Ariel Thermographic Inspections of Photovoltaic Modules," 2022 Research, Invention, and Innovation Congress: Innovative Electricals and Electronics (RI2C), Bangkok, Thailand, 2022, pp. 278-283) (Hereinafter Janthori), Nakamoto et al. (US 2017/0106534 A1) (Hereinafter Nakamoto), and further in view of Lin et al. (US 10011022 B1) (Hereinafter Lin).
Regarding Claim 19, modified Schneider teaches all the elements of claim 11. Schneider further teaches the system of claim 11, wherein at least one robot comprises a first assembly moving robot including a first end-of-arm assembly tool that includes … and a plurality of attachment devices (See at least Para [0050] “…The EOAT 240 is using a series of vacuum cups to grab the PV module 120 A. In this example, the robotic arm 230 has been positioned over the ADV 130 A by the AWP 210 …”)…, and wherein the first assembly moving robot is configured to position the first end-of-arm assembly tool relative to the installation structure (See at least Para [0052] “FIG. 3C illustrates the PV installation system 200 performing the final positioning of the PV module 120 A over the racking system 160 A…”, Fig 3C); and …
However, Schneider does not explicitly spell out … a frame … coupled to the frame…
a second assembly moving robot including a second end-of-arm assembly tool that includes a clamp interface structure and a clamp tightening structure having a pivot socket and a forward biasing assembly, and the second assembly moving robot is configured to position the second end-of-arm assembly tool relative to the installation structure.
Nakamoto teaches … a frame (See at least Para [0037] “As shown in FIG. 2, the end effector 13 of this embodiment includes a frame 21, a suction unit 22, a joint 23, and an actuator 24.”) … coupled to the frame (See at least Para [0037] “As shown in FIG. 2, the end effector 13 of this embodiment includes a frame 21, a suction unit 22, a joint 23, and an actuator 24.”) …
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Nakamoto and include the feature of a frame that holds a plurality of attachment devices, thereby making it easy to hold the attachment devices, which makes the assembly process accurate and efficient.
Lin teaches …
a second assembly moving robot including a second end-of-arm assembly tool that includes a clamp interface structure and a clamp tightening structure having a pivot socket and a forward biasing assembly (See at least col 2 Lines 11-19 “A swing lock assembly for an end-effector includes a clamp configured to movably secure a swing arm to a branch rail. The clamp includes an arm portion and a base portion having a protrusion on an end face thereof. The swing arm has a recess correspondingly shaped with the protrusion. A pivot shaft extends through the arm and base portions of the clamp and is configured to rotationally secure the swing arm to the clamp. A swing plate is arranged on the arm portion of the clamp and is keyed to the pivot shaft.”, Col 9 Lines 3-17 “The locking fastener 112 further includes a shaft 126 and a head 128 coupled to the shaft 126. The head 128 is arranged to protrude from the swing plate 108, while the shaft 126 is partially disposed in a blind hole 130 extending through the pivot shaft 104. The blind hole 130 of the pivot shaft 104 is internally threaded and is configured, shaped, and sized to mate with an external thread of the shaft 126. As a result, rotating the locking fastener 112 causes the split flange base 114 and the split flange arm 116 to move together or apart, thereby tightening or loosening the wrap-around clamp 106 with respect to the branch rail 54 and the swing arm 68, concurrently. 
Specifically, rotating the locking fastener 112 in a first rotational direction (e.g., clockwise) threads the locking fastener 112 into the pivot shaft 104 and locks the wrap-around clamp 106…”), and the second assembly moving robot is configured to position the second end-of-arm assembly tool relative to the installation structure (See at least Col 4 Lines 45-49 “…The controller 18 generates or receives input signals (arrow 30) informing the controller 18 as to the required work task(s) to perform on the corresponding workpiece(s) and outputs control signals (arrow 32) to the robot 12 to command the required actions from the robot 12.”, Col 5 Lines 64-67 – Col 6 Lines 1-11 “As described below with particular reference to FIGS. 2 and 3, the branch rails 54 with attached tool modules 56 are automatically repositionable by the robot 12 using the configuration tool 14 and instructions executed by the controller 18. Accordingly, the tool branches 52 may be arranged as desired to permit the tool modules 56, or more precisely, the individual end tools 66 of the tool modules 56, to attach to or otherwise interact with a given workpiece. In a non-limiting body panel example, the corresponding end tools 66 as shown in the various figures are configured as pneumatic suction cups or grippers of the type commonly used to secure and move automotive or other body panels without marring cosmetic show surfaces. However, other end tools 66, such as pinchers, clamps, spray nozzles, may be used...”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Lin and include the feature of a second assembly moving robot including a second end-of-arm assembly tool that includes a clamp interface structure and a clamp tightening structure having a pivot socket and a forward biasing assembly, and the second assembly moving robot being configured to position the second end-of-arm assembly tool relative to the installation structure, thereby enhancing installation efficiency (See at least Col 7 Lines 1-3 “In this way, manufacturing flexibility and efficiency can be enhanced, while reducing tooling costs and system downtime.”).
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Schneider et al. (US 2021/0379757 A1) (Hereinafter Schneider) in view of Janthori et al. (T. Janthori, B. Lertpornsuksawat and T. Sapsaman, "Automatic Data Collection for Ariel Thermographic Inspections of Photovoltaic Modules," 2022 Research, Invention, and Innovation Congress: Innovative Electricals and Electronics (RI2C), Bangkok, Thailand, 2022, pp. 278-283) (Hereinafter Janthori), Takai et al. (US 2008/0096291 A1) (Hereinafter Takai), Tokiwa (US 6247999 B1), Nakamoto et al. (US 2017/0106534 A1) (Hereinafter Nakamoto), and further in view of Lin et al. (US 10011022 B1) (Hereinafter Lin).
Regarding Claim 21, modified Schneider has all the elements of claim 11. Schneider further
discloses … and panel pose estimation using predetermined 3D panel geometry and corner locations (See at least Para [0096] “In example 22, the subject matter of any one of examples 8-21 includes identifying the PV module by detecting corners of the PV module within a 2D image or a 3D image.”, [0047] “…As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module…”) …
wherein the installation structure comprises one or more torque tubes (See at least Para [0034] “… Securing the PV module can include operating a tool to torque clamps into a secure position….”, Para [0081] “In example 6, the subject matter of any one of examples 1-5 includes, wherein the end of arm tooling includes a device used to secure a PV module to the racking system. In some examples, the device is a torquing device.”);
wherein the at least one controller is configured to locate the one or more torque tubes and/or the clamp position to estimate the panel poses (See at least Para [0038] “…The perception system can include sensors such as, optical cameras (monocular and stereo), infrared (IR) sensors, pressure sensors, inertial measurement units (IMUs), LIDAR cameras, and rangefinder sensors (sonar, laser, structured light), among others…”, Para [0047] “…The system 200, for example one of the control systems 216 and 232, will receive and process a combination of 2D and 3D sensor data to map the environment. For 2D sensing high resolution color imagery can be captured with an optical camera, such as optical camera 262. The 2D data can then be correlated with concurrently captured 3D depth and point cloud data, captured from sensors such as multiple IR cameras 263, a lidar sensor 266 and/or rangefinder sensors 267 (such as time-of-flight sensors)… This allows determination of the 3D position and orientation of the structure with respect to the sensor, the robotic system, and also the grasped PV modules. This information informs the manipulator, such as robotic manipulator 230, how to pose in order to place the panel correctly with respect to the racking system orientation.”, Para [0053] “…The system can include a perception system performing related and/or supporting operations 450, the operations can include receiving sensor data at 451, identifying a target object at 452, extracting features of a target object at 453, determining a location at 454, and determining an orientation at 455. In some examples, the target object can include a PV module, a racking system, and an installation location on the racking system.”, Para [0098] “In example 24, the subject matter of any one of examples 8-23 includes identifying the installation location by detecting a position and orientation of a structural feature of a PV module racking system.”), and …
wherein at least one robot comprises a first assembly moving robot including a first end-of-arm assembly tool that includes … and a plurality of attachment devices (See at least Para [0050] “…The EOAT 240 is using a series of vacuum cups to grab the PV module 120 A. In this example, the robotic arm 230 has been positioned over the ADV 130 A by the AWP 210 …”)…, and wherein the first assembly moving robot is configured to position the first end-of-arm assembly tool relative to the installation structure (See at least Para [0052] “FIG. 3C illustrates the PV installation system 200 performing the final positioning of the PV module 120 A over the racking system 160 A…”, Fig 3C); and …
However, Schneider does not explicitly spell out the system of claim 11, wherein the at least one controller is configured to use (i) a model for semantic segmentation for identifying a solar panel segment, and (ii) a model for instance segmentation for identifying a plurality of solar panel segments,
wherein the at least one controller is configured to use a Mask R-CNN framework for instance segmentation,
wherein the at least one controller is configured to use one or more computer vision algorithms for post-processing, Hough transform, filtering and segmentation of Hough lines, finding horizontal and/or vertical Hough line intersections, …
wherein the one or more images is of a clamp and/or a center structure for the in-progress solar installation, and the computer vision locates the clamps and/or the center structures to estimate the panel poses,
wherein the camera system is configured to obtain the one or more images using one or more filters for avoiding direct sun glare for detecting End-of-Arm Tooling (EOAT),
…
wherein the camera system includes a ring light for locating a nut,
wherein the at least one controller is configured to locate the nut,
… a frame … coupled to the frame…
wherein the at least one robot comprises a second assembly moving robot including a second end-of-arm assembly tool that includes a clamp interface structure and a clamp tightening structure having a pivot socket and a forward biasing assembly, and the second assembly moving robot is configured to position the second end-of-arm assembly tool relative to the installation structure.
Janthori teaches … the system of claim 11, wherein automatically detecting uses a trained neural network comprises (i) a model for semantic segmentation for identifying a solar panel segment (See at least Page 280 Col 1 “D. Panel Segmentation Most industrial photovoltaic panels are designed with a frame, commonly in white color. Taking advantage of this characteristic, the lines of frames can be detected from the images using Hough Line Transformation…”), and (ii) a model for instance segmentation for identifying a plurality of solar panel segments (See at least Page 280 Col 1 “C. Array Segmentation Utilizing mask region-based Convolutional Neural Network (Mask-R-CNN) [4], the photovoltaic panel array, Fig 6, is segmented from the background for further process.”),
wherein the trained neural network uses a Mask R-CNN framework for instance segmentation (See at least Page 280 Col 1 “C. Array Segmentation Utilizing mask region-based Convolutional Neural Network (Mask-R-CNN) [4], the photovoltaic panel array, Fig 6, is segmented from the background for further process.”),
wherein the initial determination and the subsequent determination use computer vision that comprises one or more computer vision algorithms for post-processing, Hough transform, filtering and segmentation of Hough lines, finding horizontal and/or vertical Hough line intersections (See at least Page 280 Col 1 “D. Panel Segmentation Most industrial photovoltaic panels are designed with a frame, commonly in white color. Taking advantage of this characteristic, the lines of frames can be detected from the images using Hough Line Transformation…”, Fig. 1 shows using computer vision for image processing during automatic data collection process), …
wherein the one or more images is of a clamp and/or a center structure for the in-progress solar installation, and the computer vision locates the clamps and/or the center structures to estimate the panel poses (See at least Page 280 Col 2 “B. Photovoltaic Array Detection The object detection, YOLOv5 [5], is used for detection of the photovoltaic array. The detection visualizes as a region of interest of the object, Fig. 9, that can be modify for other usage, such as finding area of the region or position of the center of contour.”),
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Janthori and include the features of automatically detecting using a trained neural network that comprises (i) a model for semantic segmentation for identifying a solar panel segment and (ii) a model for instance segmentation for identifying a plurality of solar panel segments, wherein the trained neural network uses a Mask R-CNN framework for instance segmentation; wherein the initial determination and the subsequent determination use computer vision that comprises one or more computer vision algorithms for post-processing, Hough transform, filtering and segmentation of Hough lines, and finding horizontal and/or vertical Hough line intersections; and wherein the one or more images is of a clamp and/or a center structure for the in-progress solar installation, and the computer vision locates the clamps and/or the center structures to estimate the panel poses, thereby providing precise solar panel detection, which reduces labor cost and installation time and improves the overall quality control of photovoltaic installations (See at least Page 283 Col 2 “VI. CONCLUSION - … reduce the labor, cost, and time of the inspection compared to the previous inspection method and improve the overall quality control of photovoltaic installations.”).
Takai teaches wherein obtaining the one or more images includes using one or more filters for avoiding direct sun glare for detecting End-of-Arm Tooling (EOAT) (See at least Para [0106] “FIG. 9 shows a measurement system used to examine formation of polysilane in exhaust gas. FIG. 9 shows a CCD camera 901, a bandpass filter 902…”, Fig 9).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Takai and include the feature of obtaining the one or more images using one or more filters for avoiding direct sun glare for detecting End-of-Arm Tooling (EOAT), thereby precisely detecting the EOAT and reducing cost (See at least Para [0043] “Another object of the present invention is to reduce manufacturing costs of semiconductor devices…”).
Tokiwa teaches wherein obtaining the one or more images includes using a ring light for locating a nut, and the computer vision locates the nut (See at least Col 11 Lines 37-42 “…The tool approach confirmation point P.sub.3 is provided to confirm whether or not the center of the fastening nut 28 has been located on the light path L of light beams of the light sensors 48 and 49 and the arm (the ring holder) of the robot has properly entered a predetermined position in the tool-exchange process.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Tokiwa and include the feature of obtaining the one or more images using a ring light for locating a nut, thereby providing precision and accuracy during the solar installation process and increasing productivity (See at least Col 15 Lines 21-23 “Thus, a process of automatically exchanging polishing tools can be performed without difficulty, and the productivity of the mold producing operation is enhanced.”).
Nakamoto teaches … a frame (See at least Para [0037] “As shown in FIG. 2, the end effector 13 of this embodiment includes a frame 21, a suction unit 22, a joint 23, and an actuator 24.”) … coupled to the frame (See at least Para [0037] “As shown in FIG. 2, the end effector 13 of this embodiment includes a frame 21, a suction unit 22, a joint 23, and an actuator 24.”) …
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Nakamoto and include the feature of a frame that holds a plurality of attachment devices, thereby making it easy to hold the attachment devices, which makes the assembly process accurate and efficient.
Lin teaches …
a second assembly moving robot including a second end-of-arm assembly tool that includes a clamp interface structure and a clamp tightening structure having a pivot socket and a forward biasing assembly (See at least col 2 Lines 11-19 “A swing lock assembly for an end-effector includes a clamp configured to movably secure a swing arm to a branch rail. The clamp includes an arm portion and a base portion having a protrusion on an end face thereof. The swing arm has a recess correspondingly shaped with the protrusion. A pivot shaft extends through the arm and base portions of the clamp and is configured to rotationally secure the swing arm to the clamp. A swing plate is arranged on the arm portion of the clamp and is keyed to the pivot shaft.”, Col 9 Lines 3-17 “The locking fastener 112 further includes a shaft 126 and a head 128 coupled to the shaft 126. The head 128 is arranged to protrude from the swing plate 108, while the shaft 126 is partially disposed in a blind hole 130 extending through the pivot shaft 104. The blind hole 130 of the pivot shaft 104 is internally threaded and is configured, shaped, and sized to mate with an external thread of the shaft 126. As a result, rotating the locking fastener 112 causes the split flange base 114 and the split flange arm 116 to move together or apart, thereby tightening or loosening the wrap-around clamp 106 with respect to the branch rail 54 and the swing arm 68, concurrently. 
Specifically, rotating the locking fastener 112 in a first rotational direction (e.g., clockwise) threads the locking fastener 112 into the pivot shaft 104 and locks the wrap-around clamp 106…”), and the second assembly moving robot is configured to position the second end-of-arm assembly tool relative to the installation structure (See at least Col 4 Lines 45-49 “…The controller 18 generates or receives input signals (arrow 30) informing the controller 18 as to the required work task(s) to perform on the corresponding workpiece(s) and outputs control signals (arrow 32) to the robot 12 to command the required actions from the robot 12.”, Col 5 Lines 64-67 – Col 6 Lines 1-11 “As described below with particular reference to FIGS. 2 and 3, the branch rails 54 with attached tool modules 56 are automatically repositionable by the robot 12 using the configuration tool 14 and instructions executed by the controller 18. Accordingly, the tool branches 52 may be arranged as desired to permit the tool modules 56, or more precisely, the individual end tools 66 of the tool modules 56, to attach to or otherwise interact with a given workpiece. In a non-limiting body panel example, the corresponding end tools 66 as shown in the various figures are configured as pneumatic suction cups or grippers of the type commonly used to secure and move automotive or other body panels without marring cosmetic show surfaces. However, other end tools 66, such as pinchers, clamps, spray nozzles, may be used...”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Lin and include the feature of a second assembly moving robot including a second end-of-arm assembly tool that includes a clamp interface structure and a clamp tightening structure having a pivot socket and a forward biasing assembly, and the second assembly moving robot being configured to position the second end-of-arm assembly tool relative to the installation structure, thereby enhancing installation efficiency (See at least Col 7 Lines 1-3 “In this way, manufacturing flexibility and efficiency can be enhanced, while reducing tooling costs and system downtime.”).
Claim(s) 31-44 are rejected under 35 U.S.C. 103 as being unpatentable over Schneider et al. (US 2021/0379757 A1) (Hereinafter Schneider) in view of Janthori et al. (T. Janthori, B. Lertpornsuksawat and T. Sapsaman, "Automatic Data Collection for Ariel Thermographic Inspections of Photovoltaic Modules," 2022 Research, Invention, and Innovation Congress: Innovative Electricals and Electronics (RI2C), Bangkok, Thailand, 2022, pp. 278-283) (Hereinafter Janthori), and further in view of Bommes et al. (Lukas Bommes, Claudia Buerhop-Lutz, Tobias Pickel, Jens Hauch, Christoph Brabec, Ian Marius Peters, Georeferencing of Photovoltaic Modules from Aerial Infrared Videos using Structure-from-Motion) (Hereinafter Bommes).
Regarding Claim 31, modified Schneider teaches all the elements of claim 1. Schneider further teaches the method of claim 1, wherein the at least one aspect of the one or more solar panels is … determined using different methodologies (See at least Para [0044] “… The controls system 232 can also (or alternatively) receive three dimensional (3D) data from a stereo pair of optical cameras, such as optical cameras 262, or IR cameras, such as IR cameras 263. The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location. The control system 232 can operate as a perception system for the robotic manipulator 230 to assist in picking up and placing a PV module on a racking system as discussed herein. The perception system (e.g., control system 232 in this example) can access other sensors, such as a force sensor 264, a lidar sensor 266 and/or a rangefinder sensor 267 to provide additional 2D and 3D information about the surroundings and operation of the linear slide 220, EMT 240, and the robotic manipulator 230.”, Para [0111] “In example 37, the subject matter of any one of examples 25-36 includes using the perception system to receive information from sensors including a two-dimensional (2D) sensor that provides data in two orthogonal dimensions and a three-dimensional sensor that provides data in three orthogonal dimensions.”, Para [0058] “Operations discussed in reference to technique 400 can be repeated as need to populate the entire solar generation plant with PV modules…”)….
However, Schneider does not explicitly disclose that the at least one aspect of the one or more solar panels is iteratively determined to reduce error.
Bommes teaches … the at least one aspect of the one or more solar panels is iteratively determined … to reduce the error (See at least Page 3 Col 2 Para 3 “2) Initialization of the reconstruction: One frame pair with sufficient parallax is selected for initialization of the reconstruction. The pose of the first frame is set as world coordinate origin. The pose of the second frame relative to the first frame is estimated with the five-point algorithm”, Page 3 Col 2 Para 4 “3) Iterative reconstruction: Starting from the initial frame pair the other key frames are added incrementally to the reconstruction. In each iteration the frame with most matches to any of the reconstructed frames is selected. Its pose is estimated from observed 3D scene points in the reconstruction and their corresponding 2D projections in the frame by solving the perspective-n-point problem…”, Page 2 Col 1 Para 2 “This improves robustness to GPS measurement errors and allows to use standard GPS instead of RTK-GPS…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Bommes and include the feature wherein the at least one aspect of the one or more solar panels is iteratively determined using different methodologies in order to reduce the error, thereby improving PV detection accuracy and the overall quality control of photovoltaic installations.
Regarding Claim 32, modified Schneider teaches all the elements of claim 31. Schneider further teaches the method of claim 31, wherein the different methodologies comprise different computer vision techniques (See at least Para [0044] “… The controls system 232 can also (or alternatively) receive three dimensional (3D) data from a stereo pair of optical cameras, such as optical cameras 262, or IR cameras, such as IR cameras 263. The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location. The control system 232 can operate as a perception system for the robotic manipulator 230 to assist in picking up and placing a PV module on a racking system as discussed herein. The perception system (e.g., control system 232 in this example) can access other sensors, such as a force sensor 264, a lidar sensor 266 and/or a rangefinder sensor 267 to provide additional 2D and 3D information about the surroundings and operation of the linear slide 220, EMT 240, and the robotic manipulator 230.”, Para [0111] “In example 37, the subject matter of any one of examples 25-36 includes using the perception system to receive information from sensors including a two-dimensional (2D) sensor that provides data in two orthogonal dimensions and a three-dimensional sensor that provides data in three orthogonal dimensions.”).
Regarding Claim 33, modified Schneider teaches all the elements of claim 31. Schneider further teaches the method of claim 31, wherein the different methodologies comprise the initial determination using one or more neural networks and the subsequent determination using computer vision (See at least Para [0025] “O-AMPP—The inventors' solution to the problems discussed herein, titled “Outdoor Autonomous Manipulation of Photovoltaic Panels (O-AMPP)”, fuses computer vision and machine learning algorithms, a customized sensor solution, customized End-Of-Arm Tooling (EOAT), industry-leading power-dense, outdoor-rated robotic arm technology, and best-in-class outdoor unmanned ground vehicles (UGVs), to produce an innovative solution that will reduce the cost and time for producing a new solar plant.”, Para [0033] “… To pick up PV modules first perception sensors and computer vision is used to detect and locate key objects of interest including the modules and the racking system 160A…”, Para [0047] “…As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed)”)…
Also, Janthori teaches the subsequent determination using computer vision (See at least Fig. 1, which shows image processing using computer vision).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Janthori and include the feature of the subsequent determination of the PV modules using computer vision, thereby ensuring precise detection of solar panels and improving the overall quality control of photovoltaic installations (See at least Page 283 “VI. CONCLUSION … Moreover, it is possible to reduce the labor, cost, and time of the inspection compared to the previous inspection method and improve the overall quality control of photovoltaic installations.”).
Regarding Claim 34, modified Schneider teaches all the elements of claim 31. Schneider further teaches the method of claim 31, wherein the initial determination comprises a coarse determination of the at least one aspect of the one or more solar panels (See at least Para [0044] “… The control system 232 can received a two dimensional (2D) image from an optical camera 262 for processing to roughly locate a PV module using common machine vision techniques such as edge detection or blob analysis…”, Para [0047] “… As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed)…”);…
However, Schneider does not explicitly disclose … and wherein the subsequent determination comprises a precise determination of the at least one aspect of the one or more solar panels.
Janthori teaches … and wherein the subsequent determination comprises a precise determination of the at least one aspect of the one or more solar panels (See at least Page 282 Col 2 Para 1 “Array Segmentation Initially, this process would utilize Mask-R-CNN for the segmentation. However, the output of the segmented stitched image is not favorable, Fig. 21. Further experiment was attempted to improve the segmentation of the panel by adjusting training batch size, resolution, and number of epochs. The new result shows improvement to the array segmentation process”, Page 283 Col 1 Para 1 “Panel Segmentation This process identifies frame lines on the image and draws over them, which then used to identify each individual panel.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Janthori and include the feature of the subsequent determination comprising a precise determination of the at least one aspect of the one or more solar panels, thereby ensuring precise detection of solar panels and improving the overall quality control of photovoltaic installations (See at least Page 283 “VI. CONCLUSION … Moreover, it is possible to reduce the labor, cost, and time of the inspection compared to the previous inspection method and improve the overall quality control of photovoltaic installations.”).
Regarding Claim 35, modified Schneider teaches all the elements of claim 34. Schneider further teaches the method of claim 34, wherein the at least one aspect of the one or more solar panels comprises at least a part of an edge of the one or more solar panels (See at least [0047] “… As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed)…”, Para [0097] “In example 23, the subject matter of any one of examples 8-22 includes identifying the PV module by detecting edges of the PV module within a 2D image or a 3D image.”).
Regarding Claim 36, modified Schneider teaches all the elements of claim 1. Schneider further teaches the method of claim 1, wherein the one or more solar panels are positioned on the installation structure (See at least Para [0099] “…Placing the PV module in position on the racking system is done using the robotic arm…”, Para [0057]);
further comprising … determining one or more positions of the installation structure based on a coarse determination (See at least Para [0047] “… This allows determination of the 3D position and orientation of the structure with respect to the sensor, the robotic system, and also the grasped PV modules. This information informs the manipulator, such as robotic manipulator 230, how to pose in order to place the panel correctly with respect to the racking system orientation…”, Para [0058] “Operations discussed in reference to technique 400 can be repeated as need to populate the entire solar generation plant with PV modules…”) … ; and
wherein the at least one robot is automatically controlled using the control signals generated based on the estimated one or more panel poses and the one or more positions of the installation structure (See at least Para [0044] “…The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location...”, Para [0047] “… This allows determination of the 3D position and orientation of the structure with respect to the sensor, the robotic system, and also the grasped PV modules. This information informs the manipulator, such as robotic manipulator 230, how to pose in order to place the panel correctly with respect to the racking system orientation.”).
However, Schneider does not explicitly disclose … iteratively determining one or more positions of the installation structure … and a fine determination.
Janthori teaches … and a fine determination (See at least Page 282 Col 2 Para 1 “Array Segmentation Initially, this process would utilize Mask-R-CNN for the segmentation. However, the output of the segmented stitched image is not favorable, Fig. 21. Further experiment was attempted to improve the segmentation of the panel by adjusting training batch size, resolution, and number of epochs. The new result shows improvement to the array segmentation process”, Page 283 Col 1 Para 1 “Panel Segmentation This process identifies frame lines on the image and draws over them, which then used to identify each individual panel.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Janthori and include the feature of a fine determination of the solar panels, thereby ensuring precise detection of solar panels and improving the overall quality control of photovoltaic installations (See at least Page 283 “VI. CONCLUSION … Moreover, it is possible to reduce the labor, cost, and time of the inspection compared to the previous inspection method and improve the overall quality control of photovoltaic installations.”).
Bommes teaches … iteratively determining one or more positions of the installation structure (See at least Page 3 Col 2 Para 3 “2) Initialization of the reconstruction: One frame pair with sufficient parallax is selected for initialization of the reconstruction. The pose of the first frame is set as world coordinate origin. The pose of the second frame relative to the first frame is estimated with the five-point algorithm”, Page 3 Col 2 Para 4 “3) Iterative reconstruction: Starting from the initial frame pair the other key frames are added incrementally to the reconstruction. In each iteration the frame with most matches to any of the reconstructed frames is selected. Its pose is estimated from observed 3D scene points in the reconstruction and their corresponding 2D projections in the frame by solving the perspective-n-point problem…”, Page 2 Col 1 Para 2 “This improves robustness to GPS measurement errors and allows to use standard GPS instead of RTK-GPS…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Bommes and include the feature of iteratively determining one or more positions of the installation structure, thereby improving PV detection accuracy and the overall quality control of photovoltaic installations.
Regarding Claim 37, modified Schneider teaches all the elements of claim 36. Schneider further teaches the method of claim 36, wherein the installation structure comprises torque tubes and clamps (See at least Para [0034] “… In certain examples, the EOAT can be configured to perform both positioning of the PV module and securing the PV module to the racking system. Securing the PV module can include operating a tool to torque clamps into a secure position.”); and
wherein the one or more positions of one or both of the torque tubes or the clamps are … determined (See at least Para [0034] “… In certain examples, the EOAT can be configured to perform both positioning of the PV module and securing the PV module to the racking system. Securing the PV module can include operating a tool to torque clamps into a secure position.”, Para [0058] “Operations discussed in reference to technique 400 can be repeated as need to populate the entire solar generation plant with PV modules…”).
However, Schneider does not explicitly disclose … positions of one or both of the torque tubes or the clamps are iteratively determined.
Bommes teaches positions of one or both of the torque tubes or the clamps are iteratively determined (See at least Page 3 Col 2 Para 3 “2) Initialization of the reconstruction: One frame pair with sufficient parallax is selected for initialization of the reconstruction. The pose of the first frame is set as world coordinate origin. The pose of the second frame relative to the first frame is estimated with the five-point algorithm”, Page 3 Col 2 Para 4 “3) Iterative reconstruction: Starting from the initial frame pair the other key frames are added incrementally to the reconstruction. In each iteration the frame with most matches to any of the reconstructed frames is selected. Its pose is estimated from observed 3D scene points in the reconstruction and their corresponding 2D projections in the frame by solving the perspective-n-point problem…”, Page 2 Col 1 Para 2 “This improves robustness to GPS measurement errors and allows to use standard GPS instead of RTK-GPS…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Bommes and include the feature of iteratively determining the positions of one or both of the torque tubes or the clamps, thereby improving PV detection accuracy and the overall quality control of photovoltaic installations.
Regarding Claim 38, modified Schneider teaches all the elements of claim 11. Schneider further teaches the system of claim 11, wherein the at least one controller is configured to … determine the at least one aspect of the one or more solar panels using different methodologies (See at least Para [0044] “… The controls system 232 can also (or alternatively) receive three dimensional (3D) data from a stereo pair of optical cameras, such as optical cameras 262, or IR cameras, such as IR cameras 263. The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location. The control system 232 can operate as a perception system for the robotic manipulator 230 to assist in picking up and placing a PV module on a racking system as discussed herein. The perception system (e.g., control system 232 in this example) can access other sensors, such as a force sensor 264, a lidar sensor 266 and/or a rangefinder sensor 267 to provide additional 2D and 3D information about the surroundings and operation of the linear slide 220, EMT 240, and the robotic manipulator 230.”, Para [0111] “In example 37, the subject matter of any one of examples 25-36 includes using the perception system to receive information from sensors including a two-dimensional (2D) sensor that provides data in two orthogonal dimensions and a three-dimensional sensor that provides data in three orthogonal dimensions.”, Para [0058] “Operations discussed in reference to technique 400 can be repeated as need to populate the entire solar generation plant with PV modules…”)….
However, Schneider does not explicitly disclose that the at least one aspect of the one or more solar panels is iteratively determined to reduce error.
Bommes teaches … the at least one aspect of the one or more solar panels is iteratively determined … to reduce the error (See at least Page 3 Col 2 Para 3 “2) Initialization of the reconstruction: One frame pair with sufficient parallax is selected for initialization of the reconstruction. The pose of the first frame is set as world coordinate origin. The pose of the second frame relative to the first frame is estimated with the five-point algorithm”, Page 3 Col 2 Para 4 “3) Iterative reconstruction: Starting from the initial frame pair the other key frames are added incrementally to the reconstruction. In each iteration the frame with most matches to any of the reconstructed frames is selected. Its pose is estimated from observed 3D scene points in the reconstruction and their corresponding 2D projections in the frame by solving the perspective-n-point problem…”, Page 2 Col 1 Para 2 “This improves robustness to GPS measurement errors and allows to use standard GPS instead of RTK-GPS…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Bommes and include the feature wherein the at least one aspect of the one or more solar panels is iteratively determined to reduce the error, thereby improving PV detection accuracy and the overall quality control of photovoltaic installations.
Regarding Claim 39, modified Schneider teaches all the elements of claim 38. Schneider further teaches the system of claim 38, wherein the different methodologies comprise different computer vision techniques (See at least Para [0044] “… The controls system 232 can also (or alternatively) receive three dimensional (3D) data from a stereo pair of optical cameras, such as optical cameras 262, or IR cameras, such as IR cameras 263. The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location. The control system 232 can operate as a perception system for the robotic manipulator 230 to assist in picking up and placing a PV module on a racking system as discussed herein. The perception system (e.g., control system 232 in this example) can access other sensors, such as a force sensor 264, a lidar sensor 266 and/or a rangefinder sensor 267 to provide additional 2D and 3D information about the surroundings and operation of the linear slide 220, EMT 240, and the robotic manipulator 230.”, Para [0111] “In example 37, the subject matter of any one of examples 25-36 includes using the perception system to receive information from sensors including a two-dimensional (2D) sensor that provides data in two orthogonal dimensions and a three-dimensional sensor that provides data in three orthogonal dimensions.”).
Regarding Claim 40, modified Schneider teaches all the elements of claim 38. Schneider further teaches the system of claim 38, wherein the different methodologies comprise the initial determination using one or more neural networks and the subsequent determination using computer vision (See at least Para [0025] “O-AMPP—The inventors' solution to the problems discussed herein, titled “Outdoor Autonomous Manipulation of Photovoltaic Panels (O-AMPP)”, fuses computer vision and machine learning algorithms, a customized sensor solution, customized End-Of-Arm Tooling (EOAT), industry-leading power-dense, outdoor-rated robotic arm technology, and best-in-class outdoor unmanned ground vehicles (UGVs), to produce an innovative solution that will reduce the cost and time for producing a new solar plant.”, Para [0033] “… To pick up PV modules first perception sensors and computer vision is used to detect and locate key objects of interest including the modules and the racking system 160A…”, Para [0047] “…As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed)”)…
Also, Janthori teaches the subsequent determination using computer vision (See at least Fig. 1, which shows image processing using computer vision).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Janthori and include the feature of the subsequent determination of the PV modules using computer vision, thereby ensuring precise detection of solar panels and improving the overall quality control of photovoltaic installations (See at least Page 283 “VI. CONCLUSION … Moreover, it is possible to reduce the labor, cost, and time of the inspection compared to the previous inspection method and improve the overall quality control of photovoltaic installations.”).
Regarding Claim 41, modified Schneider teaches all the elements of claim 38. Schneider further teaches the system of claim 38, wherein the initial determination comprises a coarse determination of the at least one aspect of the one or more solar panels (See at least Para [0044] “… The control system 232 can received a two dimensional (2D) image from an optical camera 262 for processing to roughly locate a PV module using common machine vision techniques such as edge detection or blob analysis…”, Para [0047] “… As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed)…”);…
However, Schneider does not explicitly disclose … and wherein the subsequent determination comprises a precise determination of the at least one aspect of the one or more solar panels.
Janthori teaches … and wherein the subsequent determination comprises a precise determination of the at least one aspect of the one or more solar panels (See at least Page 282 Col 2 Para 1 “Array Segmentation Initially, this process would utilize Mask-R-CNN for the segmentation. However, the output of the segmented stitched image is not favorable, Fig. 21. Further experiment was attempted to improve the segmentation of the panel by adjusting training batch size, resolution, and number of epochs. The new result shows improvement to the array segmentation process”, Page 283 Col 1 Para 1 “Panel Segmentation This process identifies frame lines on the image and draws over them, which then used to identify each individual panel.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Janthori and include the feature of the subsequent determination comprising a precise determination of the at least one aspect of the one or more solar panels, thereby ensuring precise detection of solar panels and improving the overall quality control of photovoltaic installations (See at least Page 283 “VI. CONCLUSION … Moreover, it is possible to reduce the labor, cost, and time of the inspection compared to the previous inspection method and improve the overall quality control of photovoltaic installations.”).
Regarding Claim 42, modified Schneider teaches all the elements of claim 41. Schneider further teaches the system of claim 41, wherein the at least one aspect of the one or more solar panels comprises at least a part of an edge of the one or more solar panels (See at least [0047] “… As an example, detection of PV modules can involve processing 2D image data for a large, mostly black, reflective rectangle using a blob analysis algorithm, and then employ an edge detection mechanism to project the sides and corners of the PV module. The 2D edge and corner data can then be correlated to the 3D data to project a plane for the top surface of the PV module. Once a 3D plane is determined for the PV module, the EOAT 240 can be positioned with the robotic manipulator 230 using this information to pick up the PV module. The perception system, operating within control system 216 and/or 232, can use algorithms to detect the edges, corners, and other uniquely identifiably features of the PV module racking system to define the 3D representation of the targeted mounting structure of the racking system (i.e., where the module is to be placed)…”, Para [0097] “In example 23, the subject matter of any one of examples 8-22 includes identifying the PV module by detecting edges of the PV module within a 2D image or a 3D image.”).
Regarding Claim 43, modified Schneider teaches all the elements of claim 11. Schneider further teaches the system of claim 11, wherein the one or more solar panels are positioned on the installation structure (See at least Para [0099] “…Placing the PV module in position on the racking system is done using the robotic arm…”, Para [0057]);
wherein the at least one controller is further configured to … determine one or more positions of the installation structure based on a coarse determination (See at least Para [0047] “…This allows determination of the 3D position and orientation of the structure with respect to the sensor, the robotic system, and also the grasped PV modules. This information informs the manipulator, such as robotic manipulator 230, how to pose in order to place the panel correctly with respect to the racking system orientation…”, Para [0058] “Operations discussed in reference to technique 400 can be repeated as need to populate the entire solar generation plant with PV modules…”) … ; and
wherein the at least one robot is automatically controlled using the control signals generated based on the estimated one or more panel poses and the one or more positions of the installation structure (See at least Para [0044] “…The 3D data can be analyzed to map precise location and orientation of the PV modules, as well as portions of the racking system, such as the mounting location...”, Para [0047] “… This allows determination of the 3D position and orientation of the structure with respect to the sensor, the robotic system, and also the grasped PV modules. This information informs the manipulator, such as robotic manipulator 230, how to pose in order to place the panel correctly with respect to the racking system orientation.”).
However, Schneider does not explicitly disclose … iteratively determine one or more positions of the installation structure … and a fine determination.
Janthori teaches … and a fine determination (See at least Page 282 Col 2 Para 1 “Array Segmentation Initially, this process would utilize Mask-R-CNN for the segmentation. However, the output of the segmented stitched image is not favorable, Fig. 21. Further experiment was attempted to improve the segmentation of the panel by adjusting training batch size, resolution, and number of epochs. The new result shows improvement to the array segmentation process”, Page 283 Col 1 Para 1 “Panel Segmentation This process identifies frame lines on the image and draws over them, which then used to identify each individual panel.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Janthori and include the feature of fine determination of the solar panels, thereby ensuring precise detection of solar panels and improving the overall quality control of photovoltaic installations (See at least Page 283 “VI. CONCLUSION … Moreover, it is possible to reduce the labor, cost, and time of the inspection compared to the previous inspection method and improve the overall quality control of photovoltaic installations.”).
Bommes teaches … iteratively determining one or more positions of the installation structure (See at least Page 3 Col 2 Para 3 “2) Initialization of the reconstruction: One frame pair with sufficient parallax is selected for initialization of the reconstruction. The pose of the first frame is set as world coordinate origin. The pose of the second frame relative to the first frame is estimated with the five-point algorithm”, Page 3 Col 2 Para 4 “3) Iterative reconstruction: Starting from the initial frame pair the other key frames are added incrementally to the reconstruction. In each iteration the frame with most matches to any of the reconstructed frames is selected. Its pose is estimated from observed 3D scene points in the reconstruction and their corresponding 2D projections in the frame by solving the perspective-n-point problem…”, Page 2 Col 1 Para 2 “This improves robustness to GPS measurement errors and allows to use standard GPS instead of RTK-GPS…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Bommes and include the feature of iteratively determining one or more positions of the installation structure, thereby improving PV detection accuracy and the overall quality control of photovoltaic installations.
Regarding Claim 44, modified Schneider teaches all the elements of claim 43. Schneider further teaches the system of claim 43, wherein the installation structure comprises torque tubes and clamps (See at least Para [0034] “… In certain examples, the EOAT can be configured to perform both positioning of the PV module and securing the PV module to the racking system. Securing the PV module can include operating a tool to torque clamps into a secure position.”); and
wherein the at least one controller is configured to … determine the one or more positions of one or both of the torque tubes or the clamps (See at least Para [0034] “… In certain examples, the EOAT can be configured to perform both positioning of the PV module and securing the PV module to the racking system. Securing the PV module can include operating a tool to torque clamps into a secure position.”, Para [0058] “Operations discussed in reference to technique 400 can be repeated as need to populate the entire solar generation plant with PV modules…”).
However, Schneider does not explicitly disclose … iteratively determine the one or more positions of one or both of the torque tubes or the clamps.
Bommes teaches … iteratively determine the one or more positions of one or both of the torque tubes or the clamps (See at least Page 3 Col 2 Para 3 “2) Initialization of the reconstruction: One frame pair with sufficient parallax is selected for initialization of the reconstruction. The pose of the first frame is set as world coordinate origin. The pose of the second frame relative to the first frame is estimated with the five-point algorithm”, Page 3 Col 2 Para 4 “3) Iterative reconstruction: Starting from the initial frame pair the other key frames are added incrementally to the reconstruction. In each iteration the frame with most matches to any of the reconstructed frames is selected. Its pose is estimated from observed 3D scene points in the reconstruction and their corresponding 2D projections in the frame by solving the perspective-n-point problem…”, Page 2 Col 1 Para 2 “This improves robustness to GPS measurement errors and allows to use standard GPS instead of RTK-GPS…”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Schneider with the teachings of Bommes and include the feature of iteratively determining the one or more positions of one or both of the torque tubes or the clamps, thereby improving PV detection accuracy and the overall quality control of photovoltaic installations.
Conclusion
32. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Kunz et al. (US 2023/0238919 A1) teaches methods and apparatus for measuring a photoluminescence (PL) response, preferably a spatially resolved image of a PL response, from an object exposed to solar irradiation.
33. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAHEDA HOQUE whose telephone number is (571)270-5310. The examiner can normally be reached Monday-Friday 8:00 am- 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramon Mercado can be reached at 571-270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHAHEDA HOQUE/Examiner, Art Unit 3658
/Ramon A. Mercado/Supervisory Patent Examiner, Art Unit 3658