Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 01/19/2026 (regarding newly added claims 17-18) concerning the operation of the borescope as compared to the teachings of Pathak et al are persuasive.
Applicant's arguments filed 01/19/2026 (regarding the independent claims and newly added claims 19-20) have been fully considered but they are not persuasive.
To summarize the below response to arguments:
Regarding the independent claims, the arguments concerning the BRI of the original claim language are not persuasive; however, the amended language is rendered obvious over Finn in view of Pathak (Pathak teaches the use of drones in general which are located within the interior of the turbine).
The arguments regarding claims 19-20 are not persuasive. Regarding claim 19, "for navigating" is intended-use phrasing; as such, Finn's teachings of SLAM do teach such a map and its generation. Claim 20 is similar to claim 19 but requires active navigation using the map; however, such navigation is obvious to one of ordinary skill in the art in view of Finn's teachings of SLAM and background knowledge, as evidenced by the Wikipedia article on Simultaneous Localization and Mapping.
Regarding the arguments concerning the BRI of "fly into a gas turbine engine" (the original wording of the independent claims): these arguments are not persuasive because, while Finn et al teaches that the embodiment of fig. 3 is "external" and "unattached", this does not mean that it does not "fly into" the gas turbine; from [0055], "external" is meant to connote that the inspection system is not mechanically part of the engine, as opposed to the "built-in" embodiment of fig. 5.
From [0055], as cited in the previous rejection, the external, unattached embodiment is described as follows: "The sensor 12, is shown as a mobile video camera system 12 configured to capture video images of an entire forward surface 32 of at least one gas turbine engine blade 34. The camera 12 can be mobile (shown as arrows), such that the camera can move, pan, slide or otherwise reposition to capture the necessary image data 14 of the entire forward surface 32. The mobile video camera system 12 can be moved through location and pose variation to image the entire forward surface 32 of each of the blades 34 of the gas turbine engine 38." While this does not explicitly state that the repositioning options include moving "into" the engine, when looking at the figures, in particular fig. 5, which depicts the engine with its nacelle, it can be seen that if the unattached embodiment were to remain external, the images it captures during operation would either have views of the outermost section of the fan blades blocked by the nacelle and/or would have to be taken of those parts of the fan blades at oblique angles (which would degrade the ability to detect small cracks compared to direct images). As such, the previously cited portions teach "flying into" at least the portion of the engine in front of the blades but past the entrance of the nacelle.
[Image: media_image1.png (greyscale)]
As such, the applicant's arguments concerning the previous rejection are not persuasive; however, the amended claim language recites that the drone flies "into the interior of" the gas turbine, and the applicant is correct that Finn does not teach the external, unattached device being positioned/moved into the "interior of" the turbine.
An updated search and consideration was performed in view of the amendments. US 20190294883 A1, Pathak et al, "BEST IMAGE GRAB FROM VIDEO WITH DIGITAL GRID ASSISTANCE FOR AVIATION ENGINE BORESCOPE INSPECTION", was found to render obvious the use of robots which travel into the interior of an aircraft's gas turbine engine and obtain images of its components throughout the engine, which includes the use of borescopes equipped on those drones to obtain the images. ([0028] In various embodiments, the grid component 108 can generate a digital grid and visual layer overlay on a raw video feed from borescope inspections. A digital grid can be a shape outline of an engine component. The digital grid can overlay on top of a raw video feed (e.g., original video) of the inspection. It is appreciated that the raw video feed can be received from robots, drones or other types of device that can acquire a video recording of an engine components. The robots and/or drones can include micro-video cameras and can be placed throughout an entire engine. The robots and/or drones can be crawlers that crawl inside an engine to provide a live video feed, e.g., raw video feed. Different engine components or parts can have different grids. Alternatively, an engine component or part can also have multiple digital grids for different viewing angle and zoom level.)
While Pathak et al does not teach that its drones are unmanned aerial drones, instead teaching crawler drones, the substitution of a flying drone for a crawler would not change the underlying principles of operation of the drones of Pathak (they would still travel within the interior of the engine and capture images for inspection); it would only substitute one means of travel (crawling) for another (flight), with the underlying improvement being ease of repositioning between engines/inspection areas (as modified by Claybourgh in the original non-final rejection).
Regarding the newly added claims 19-20: respectfully, the arguments concerning the teachings (or lack thereof) of the use of SLAM are not persuasive, as they rely on a piecemeal analysis of each individual reference, whereas a 103 rejection is based on the teachings which flow from the combination of the references. In the current case, the 103 rejection, updated below to reflect the new claims, is over Finn, which does teach the use of SLAM (Finn [0031]-[0034]) to generate a map of the surroundings of the device from captured images; further, Finn teaches that using SLAM is well known and applied in the field of robotics.
Additionally, it is noted that claim 19 as currently worded only recites the generation of the map "for navigation"; as such, Finn's teachings of SLAM (simultaneous localization and mapping) do teach such a map in that "for navigation" is an intended use in claim 19, and from the "localization" part of SLAM it is known that the map is/can be used for navigating a robot for at least the localization part (i.e. determining current position) of navigation.
Regarding claim 20, while similar to claim 19, it does require actively navigating using the map; however, using a map generated via SLAM to navigate is well understood, routine, and conventional in the field of robotics, as evidenced by the Wikipedia article. As such, while Finn does not explicitly state that SLAM/its corresponding map is used to navigate/reposition the "external", "unattached" device/sensor, from the teachings of the device performing SLAM and the teachings of the device repositioning to various points, one of ordinary skill in the art would make the logical inference that SLAM is used for navigating the device. (One of ordinary skill in the art being a person of ordinary creativity, not an automaton, who possesses the ordinary background knowledge of the field.)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2, 6-7, 10, 13-14, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20190338666 A1, "SYSTEM AND METHOD FOR IN SITU AIRFOIL INSPECTION", Finn et al (while assigned to United Technologies Corporation (merged with Raytheon, which is now RTX Corporation), the publication date of November 7th, 2019 is outside the one-year grace period and thus its disclosure is valid prior art for the U.S.), in view of US 20180131864 A1, "IMAGE PARAMETER-BASED SPATIAL POSITIONING", Bisti, further in view of US 20210261251 A1, "MOTORIZED FLYING CRAFT FOR MEASURING THE RELIEF OF SURFACES OF A PREDETERMINED OBJECT AND METHOD FOR CONTROLLING SUCH A CRAFT", Claybourgh, and further in view of US 20190294883 A1, "BEST IMAGE GRAB FROM VIDEO WITH DIGITAL GRID ASSISTANCE FOR AVIATION ENGINE BORESCOPE INSPECTION", Pathak.
Regarding Claim 1, Finn et al teaches "A system for inspection of a gas turbine engine, the system comprising:" an external mobile image capturing system equipped to move ([0055] Referring also to FIG. 3 an exemplary automated damage detection system 10 can be seen. FIG. 3 depicts an external, unattached inspection system. In this disclosure, the unattached inspection system depicted in FIG. 3 is considered to be in-situ. In another exemplary embodiment, the system 10 can include an optical in-situ, i.e., built-in, system for a gas turbine engine blade inspection.) "and equipped with an imaging device and a light source" ([0031] A sensor 12 may include a one-dimensional (1D), 2D, 3D sensor (depth sensor) and/or a combination and/or array thereof. Sensor 12 may be operable in the electromagnetic or acoustic spectrum capable of producing a 3D point cloud, occupancy grid or depth map of the corresponding dimension(s). Sensor 12 may provide various characteristics of the sensed electromagnetic or acoustic spectrum including intensity, spectral characteristics, polarization, etc. In various embodiments, sensor 12 may include a distance, range, and/or depth sensing device. Various depth sensing sensor technologies and devices include, but are not limited to, a structured light measurement, phase shift measurement, time of flight measurement, stereo triangulation device, sheet of light triangulation device, light field cameras, coded aperture cameras, computational imaging techniques, simultaneous localization and mapping (SLAM), imaging radar, imaging sonar, echolocation, laser radar, scanning light detection and ranging (LIDAR), flash LIDAR, or a combination comprising at least one of the foregoing. Different technologies can include active (transmitting and receiving a signal) or passive (only receiving a signal) and may operate in a band of the electromagnetic or acoustic spectrum such as visual, infrared, ultrasonic, etc. In various embodiments, sensor 12 may be operable to produce depth from defocus, a focal stack of images, or structure from motion."); "at least one processor configured to: operate the at least one [drone]", i.e., move the capturing system, "into" ([0055] "The camera 12 can be mobile (shown as arrows), such that the camera can move, pan, slide or otherwise reposition to capture the necessary image data 14 of the entire forward surface 32. The mobile video camera system 12 can be moved through location and pose variation to image the entire forward surface 32 of each of the blades 34 of the gas turbine engine 38. The imaging of the blade 34 of the gas turbine engine 38 can be done either continuously or intermittently. In another exemplary embodiment, the imaging is conducted during gas turbine engine operational conditions such as coasting, spool-up, and spool-down, including at least one complete revolution." While not explicitly remarked on as being controlled via a processor/computer, given that the external system operates during turbine operating conditions (such as coasting, spool-up/down, etc.), this implicitly teaches to one of ordinary skill in the art that a human is not holding/moving the camera due to its proximity to an operational turbine (i.e.
for safety reasons a human is not the holder of the camera); thus, implicitly, the "external unattached" system is automatically operating to move itself as needed. This implicit teaching is further reinforced as [0057] teaches creating/using SLAM during the operation and, from [0032], SLAM is taught as being a part of robotic (UAV) operations, i.e. controlled/implemented via a processor); "operate the imaging device and the light source to take an image of a target surface of the component from the first position," ([0055] Referring also to FIG. 3 an exemplary automated damage detection system 10 can be seen. FIG. 3 depicts an external, unattached inspection system. In this disclosure, the unattached inspection system depicted in FIG. 3 is considered to be in-situ. In another exemplary embodiment, the system 10 can include an optical in-situ, i.e., built-in, system for a gas turbine engine blade inspection. The component 20 can be a blade of a fan, a vane, a blade of a compressor, a vane of a compressor, a blade of a turbine, or a vane of a turbine. The exemplary embodiment shown in FIG. 3 includes a fan as the component 20. The sensor 12, is shown as a mobile video camera system 12 configured to capture video images of an entire forward surface 32 of at least one gas turbine engine blade 34. The camera 12 can be mobile (shown as arrows), such that the camera can move, pan, slide or otherwise reposition to capture the necessary image data 14 of the entire forward surface 32. The mobile video camera system 12 can be moved through location and pose variation to image the entire forward surface 32 of each of the blades 34 of the gas turbine engine 38." + see figure 3 reproduced below, which shows the depicted unattached external inspection system.)
[Image: media_image2.png (greyscale), Finn fig. 3]
Finn et al, however, is silent as to the specific construction of its "external unattached inspection system"; thus it does not explicitly teach that this system is "at least one drone operable for flight" (or that its subsequent controlled movement for capturing images is flight).
Further Finn et al does not teach “identify whether the image includes an obstruction blocking a portion of the target surface from view of the imaging device, in response to identifying the obstruction, operate the at least one drone to fly to a second position from which there is a line-of-sight to the target surface without the obstruction, and operate the imaging device and the light source to obtain an unobstructed image of the target surface from the second position.”
Claybourgh et al teaches an unmanned aerial vehicle for inspecting surfaces of aircraft ([0014] The invention thus aims to provide a device for measuring the contour (depression or bump) of a surface of a large predetermined object, such as an aircraft, a wind turbine, a watercraft, a large engineering structure, etc.) using an imaging sensor and light source (Abstract; [0097] The apparatus 14 also comprises a source 15 for emitting a reference wave, and a matrix receiver 16 of a wave reflected by the region of interest targeted by the apparatus, and a processing unit 38, shown schematically in FIG. 5, configured to be able to determine a measurement of the contour of the region of interest targeted by the apparatus 14 from the analysis of the reference wave and of the reflected wave.
[0098] According to one embodiment, the emission source 15 is a camera for emitting structured light, and the matrix receiver 16 is an image acquisition camera. The reference waves and reflected waves are therefore images. According to this embodiment, the camera 15 for emitting structured light can be of any known type, such that the structured light can be a light with a particular pattern (lines, dots, a grid, etc.). The image acquisition camera 16 then acquires images of the various patterns projected onto the region of interest. The processing unit 38 then makes it possible to determine the deformation of the pattern. The analysis of the deformation of the pattern makes it possible to estimate the depth of the surface onto which the structured light is projected. Various technical and software solutions are available on the market for estimating depth from images acquired from structured light projection and are not described in detail here.).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the application, to modify Finn et al to utilize the UAV and flight controller of Claybourgh as the platform to implement the "external unattached inspection system" taught in Finn et al. One would be motivated to implement the UAV of Claybourgh to allow for a system which can easily acquire information from multiple engines/points of interest of the aircraft as opposed to fixed sensors (such as that which is disclosed in figure 5/its embodiment of Finn). Claybourgh et al teaches this motivation in ([0015] The invention aims in particular to provide a device for measuring the contour of a surface which does not require a human presence in the vicinity of the inspected regions of the surface.
[0016] The invention also aims to provide, in at least one embodiment of the invention, a measurement device which can acquire a plurality of measurements of the contour of a plurality of regions of interest of a surface quickly and repeatably. The aim of the invention is in particular to allow the acquisition of three-dimensional measurements of an entire surface of the object, or even of the object in its entirety, in which case said plurality of regions of interest forms the entire surface of the object or the whole object.)
The combination, however, would still lack teachings that the imaging system and navigation include "identify whether the image includes an obstruction blocking a portion of the target surface from view of the imaging device, in response to identifying the obstruction, operate the at least one drone to fly to a second position from which there is a line-of-sight to the target surface without the obstruction, and operate the imaging device and the light source to obtain an unobstructed image of the target surface from the second position."
Bisti et al teaches an image capturing system for a UAV which includes, for a given image, "identify whether the image includes an obstruction blocking a portion of the target surface from view of the imaging device, in response to identifying the obstruction, operate the at least one drone to fly to a second position from which there is a line-of-sight to the target surface without the obstruction, and operate the imaging device and the light source to obtain an unobstructed image of the target surface from the second position" ([0023] Image parameter-based positioning program 110 sends positional adjustment instructions meeting the image parameters (step 208). In an embodiment, image parameter-based positioning program 110 may send positional adjustment instructions meeting the image parameters to multiple imaging devices. For example, image parameter-based positioning program 110 may simultaneously send positional adjustment instructions to increase the height of the imaging device and downward angle of an image in response to an obstruction blocking the subject, such as a tree. Image parameter-based positioning program 110 may subsequently send positional adjustment instructions to imaging devices containing lighting elements to increase the lumen output to a level that compensates for the decreased ambient light resulting from the shadow of the tree. In yet another embodiment, image parameter-based positioning program 110 may independently adjust the image parameters to the setting as close to the user-specified settings given an unideal environment. For example, image parameter-based positioning program 110 may be programmed to send an instruction for an imaging device to take a photograph of a subject from a particular position despite the subject being partially obscured by foliage if no better alternative exists given the current conditions." Here Bisti teaches that, in response to detecting a blocking (of the subject/target in the image), a positional adjustment is made (flying to a second position with better (unobstructed) views of the target).)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the application, to further modify Finn et al to include the detection of an obstruction (in an image) and the subsequent position correction to obtain a clear image as taught by Bisti et al. One would be motivated to implement this image analysis and response to a blocking object to improve the quality of the obtained images such that they are suitable for inspection/use. (Implicit to the purpose of [0023] "meeting the image parameters", i.e. to ensure the necessary quality/user satisfaction in the obtained images.)
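For illustration of the straightforward, software-level nature of this detect-and-reposition behavior, a non-limiting sketch follows (Python; the interfaces, function names, and threshold are hypothetical and are not drawn from Finn, Bisti, or Claybourgh):

    # Illustrative sketch only: detect an obstruction of the target surface in a
    # captured image, then reposition until an essentially unobstructed view is found.
    import numpy as np

    def obstructed_fraction(target_mask: np.ndarray, obstruction_mask: np.ndarray) -> float:
        """Fraction of target-surface pixels hidden behind a detected obstruction."""
        blocked = np.logical_and(target_mask, obstruction_mask)
        return float(blocked.sum()) / max(int(target_mask.sum()), 1)

    def inspect(candidate_positions, capture, threshold=0.02):
        """Fly through candidate positions until an essentially unobstructed image is found.
        `capture(pos)` is a hypothetical callback returning (image, target_mask,
        obstruction_mask) for the view from position `pos`."""
        best = None
        for pos in candidate_positions:
            image, target_mask, obstruction_mask = capture(pos)   # image + segmentation
            frac = obstructed_fraction(target_mask, obstruction_mask)
            if best is None or frac < best[1]:
                best = (image, frac, pos)
            if frac <= threshold:          # clear line of sight: stop repositioning
                return image, pos
        return best[0], best[2]            # fall back to the least-obstructed view

    # Toy usage: the first position has 30% of the target blocked, the second is clear.
    def fake_capture(pos):
        target = np.ones((10, 10), dtype=bool)
        obstruction = np.zeros((10, 10), dtype=bool)
        if pos == "first":
            obstruction[:3, :] = True      # e.g. a vane blocking part of the view
        return np.zeros((10, 10)), target, obstruction

    img, pos = inspect(["first", "second"], fake_capture)
    print("unobstructed image taken from:", pos)   # -> second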
The above combination, however, would still not teach flying into the "interior" of the gas turbine.
Pathak teaches a piece of analogous art in which drones (crawler robots) are deployed into the interior of an aircraft turbine engine in order to provide inspection images and/or videos ([0028] In various embodiments, the grid component 108 can generate a digital grid and visual layer overlay on a raw video feed from borescope inspections. A digital grid can be a shape outline of an engine component. The digital grid can overlay on top of a raw video feed (e.g., original video) of the inspection. It is appreciated that the raw video feed can be received from robots, drones or other types of device that can acquire a video recording of an engine components. The robots and/or drones can include micro-video cameras and can be placed throughout an entire engine. The robots and/or drones can be crawlers that crawl inside an engine to provide a live video feed, e.g., raw video feed. Different engine components or parts can have different grids. Alternatively, an engine component or part can also have multiple digital grids for different viewing angle and zoom level.)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the application, to further modify Finn et al to include navigating/positioning itself within the interior sections of the engine turbine as taught by Pathak. One would be motivated to implement this interior navigation/positioning in order to provide a more complete inspection of the engine, improving safety of operation compared to only inspecting the forward surface (fan blades). This modification would fall under the KSR rationale of "Use of Known Technique To Improve Similar Devices (Methods, or Products) in the Same Way". (I) Finn (in view of Bisti and Claybourgh) teaches the base device, in comparison with which the claimed device is improved in that the claimed device flies into the interior of the turbine. (II) Pathak teaches a similar device (a crawler drone as opposed to a UAV) which teaches the improvement of operating within the interior of the engine turbine; as such, the general improvement of autonomous robots/drones taking inspection images within a turbine is known. (III) Applying the improvement would only require software modifications of the navigation control algorithms to include instructions to reposition the device in areas past the fan blades (i.e. command the device of Finn, with the UAV of Claybourgh, to travel into the turbine). (IV) Further, Pathak, while not explicitly teaching UAVs, does teach that the (internal) placement of the cameras is applicable to a wide range of devices, [0028] "It is appreciated that the raw video feed can be received from robots, drones or other types of device that can acquire a video recording of an engine components." As such, the modification of using an unattached camera (Finn), more specifically on a UAV (Claybourgh), instead of crawler drones falls within the general teachings/spirit of Pathak in that both the original device of Finn and the modification to use a UAV of Claybourgh fall within the general scope of "robots, drones or other types of device that can acquire a video recording of an engine components". Thus the modification would have a reasonable expectation of success.
The resulting modified Finn et al teaches all aspects of claim 1.
Regarding Claim 2, Finn as modified in claim 1 teaches "The system as recited in claim 1, wherein the component is a fan blade" (Finn "[0002] Gas turbine engine components, such as blades or vanes, may suffer irregularities from manufacturing or wear and damage during operation, for example, due to erosion, hot corrosion (sulfidation), cracks, dents, nicks, gouges, and other damage, such as from foreign object damage. Detecting this damage may be achieved by images, videos, or depth data for aircraft engine blade inspection, power turbine blade inspection, internal inspection of mechanical devices, and the like.") "and the obstruction is an inlet guide vane." ([0002] … or vanes, may suffer irregularities from manufacturing or wear and damage during operation, for example, due to erosion, hot corrosion (sulfidation), cracks, dents, nicks, gouges, and other damage, such as from foreign object damage." Here Finn teaches that gas turbine parts include both blades and vanes (guide vanes); as such, as modified with the teachings of Bisti, the logic naturally flows that when the target surface of Finn is a blade of the turbine, other parts of the turbine would be an "obstruction" as in the teachings of Bisti, and such other components of the turbine include guide vanes.)
Regarding Claim 6, as modified in claim 1, the combination would not teach "wherein the at least one drone includes first and second drones, the imaging device of the first drone taking the image of the target surface from the first position and the imaging device of the second drone taking the unobstructed image from the second position."; Finn as modified by Claybourgh only teaches a single drone. ([0023] Image parameter-based positioning program 110 sends positional adjustment instructions meeting the image parameters (step 208). In an embodiment, image parameter-based positioning program 110 may send positional adjustment instructions meeting the image parameters to multiple imaging devices. For example, image parameter-based positioning program 110 may simultaneously send positional adjustment instructions to increase the height of the imaging device and downward angle of an image in response to an obstruction blocking the subject, such as a tree. Image parameter-based positioning program 110 may subsequently send positional adjustment instructions to imaging devices containing lighting elements to increase the lumen output to a level that compensates for the decreased ambient light resulting from the shadow of the tree. In yet another embodiment, image parameter-based positioning program 110 may independently adjust the image parameters to the setting as close to the user-specified settings given an unideal environment. For example, image parameter-based positioning program 110 may be programmed to send an instruction for an imaging device to take a photograph of a subject from a particular position despite the subject being partially obscured by foliage if no better alternative exists given the current conditions." This teaches that, in response to detecting a blocking (of the subject/target in the image), a positional adjustment is made (flying to a second position with better (unobstructed) views of the target).)
Bisti, in addition to teaching the position correction for a single drone/imaging system, teaches that this system can be used to coordinate multiple imaging systems (drones), which includes, based on a first image from a first drone, moving a second drone to obtain a second image from a second position which is unobstructed by the obstacle detected in the first image. ([0022] Image parameter-based positioning program 110 receives a first image (step 206). In some embodiments, the first image establishes the position of an imaging device in relation to an imaged subject. In the embodiments, image parameter-based positioning program 110 may use the first image to determine the positional adjustments needed to enable the imaging device to meet the image parameters. An exemplary embodiment is discussed in further detail with regards to FIG. 3A and FIG. 3B. In another embodiment, image parameter-based positioning program 110 uses the first image to determine the positional adjustments needed for multiple imaging devices with multiple image parameters. For example, image parameter-based positioning program 110 may use the first images from multiple unmanned imaging vehicles 104 and send out positional adjustment instructions to each of the multiple unmanned imaging vehicles 104 based on the particular role of each unmanned imaging vehicle 104, such as lighting the subject in a particular way, taking an image with a particular focal length, and taking an image at a particular angle. In yet another embodiment, image parameter-based positioning program 110 may receive a continuous stream of images enabling image parameter-based positioning program 110 to determine and send positional adjustment instructions to one or more unmanned imaging vehicles 104 with minimal latency. In yet another embodiment, image parameter-based positioning program 110 may receive one or more imaging devices' three dimensional positions from sensors incorporated into the one or more imaging devices to determine whether the position of the imaging device meets image parameters.)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the application, to further modify Finn et al to include the multiple cooperative imaging platforms (i.e. to utilize multiple "external unattached inspection systems") and the correction/adjustment of a second platform's position and the taking of a second, unobstructed image in response to detecting an obstruction in a first platform's image, as taught by Bisti. One would be motivated to implement the multiple UAVs to allow for a system which increases the reliability/ability to obtain images of the proper quality by allowing for independent adjustment/optimization of the imaging and lighting (i.e. allows for role specialization of the UAVs for a given scenario/environment) (Bisti [0023] Image parameter-based positioning program 110 sends positional adjustment instructions meeting the image parameters (step 208). In an embodiment, image parameter-based positioning program 110 may send positional adjustment instructions meeting the image parameters to multiple imaging devices. For example, image parameter-based positioning program 110 may simultaneously send positional adjustment instructions to increase the height of the imaging device and downward angle of an image in response to an obstruction blocking the subject, such as a tree. Image parameter-based positioning program 110 may subsequently send positional adjustment instructions to imaging devices containing lighting elements to increase the lumen output to a level that compensates for the decreased ambient light resulting from the shadow of the tree.)
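For illustration only, coordinating multiple platforms amounts to the same adjustment logic distributed over several role-specific devices; the following non-limiting sketch (Python; the identifiers, poses, and lighting values are hypothetical and are not drawn from Bisti) shows a coordinator issuing positional/lighting adjustments to other drones based on analysis of a first drone's image:

    # Illustrative sketch only: role-specific adjustments sent to multiple drones
    # after analyzing the first drone's image.
    from dataclasses import dataclass

    @dataclass
    class Adjustment:
        drone_id: str
        move_to: tuple          # new (x, y, z) pose in a hypothetical coordinate frame
        lumen_output: float     # lighting level for drones carrying a light source

    def plan_adjustments(first_image_obstructed: bool, shadowed: bool):
        """Given analysis of the first drone's image, instruct the other drones."""
        adjustments = []
        if first_image_obstructed:
            # Second (imaging) drone moves to a pose with an unobstructed line of sight.
            adjustments.append(Adjustment("imager-2", move_to=(1.2, 0.0, 0.5), lumen_output=0.0))
        if shadowed:
            # Lighting drone raises output to compensate for the darker viewpoint.
            adjustments.append(Adjustment("light-1", move_to=(0.8, 0.3, 0.4), lumen_output=900.0))
        return adjustments

    for adj in plan_adjustments(first_image_obstructed=True, shadowed=True):
        print(adj)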
Regarding Claim 7, modified Finn does not teach the use of a borescope as part of the taking of the images.
Pathak et al teaches an aircraft engine inspection system which includes the use of borescopes ([0039] A borescope inspection can be performed on an aircraft without requiring the engine to be brought into a repair shop, pulling the engine apart and identifying the defects. The borescope inspection can be performed between flights. For example, an aircraft can be scheduled to have a borescope inspection every 1,000 cycles between flights, e.g., on-wing inspection. The maintenance component 302 can notify that a borescope inspection is due at a specified time prior to the due date.) equipped on drones which take images of/inspect the aircraft engine (i.e. a gas turbine) ([0028] "In various embodiments, the grid component 108 can generate a digital grid and visual layer overlay on a raw video feed from borescope inspections. A digital grid can be a shape outline of an engine component. The digital grid can overlay on top of a raw video feed (e.g., original video) of the inspection. It is appreciated that the raw video feed can be received from robots, drones or other types of device that can acquire a video recording of an engine components. The robots and/or drones can include micro-video cameras and can be placed throughout an entire engine. The robots and/or drones can be crawlers that crawl inside an engine to provide a live video feed, e.g., raw video feed. Different engine components or parts can have different grids. Alternatively, an engine component or part can also have multiple digital grids for different viewing angle and zoom level.")
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the application to modify the unattached external inspection system of Finn et al to utilize borescopes as the cameras for image taking. One would be motivated to implement borescopes in particular as they allow for access to parts and surfaces of the engine further into the turbine allowing for a more complete inspection of the turbine for damage ([0039] “A borescope inspection can be performed on an aircraft without requiring the engine to be brought into a repair shop, pulling the engine apart and identifying the defects. The borescope inspection can be performed between flights.”)
Regarding Claim 10, it is the method equivalent of the system of claim 1; it has the same citations, combination, and motivation as claim 1 for its rejection.
Regarding Claim 13, it is the method equivalent of claim 6 above; it has the same grounds of rejection as its equivalent.
Regarding Claim 14, it has the same overall rejection, combination, and motivation for combination as claim 7; the additional (compared to claim 7) limitation of deploying the borescope to take the unobstructed image naturally flows from the combination in claim 7, as the borescope is being used to take the images in the first place.
Regarding Claim 19, modified Finn teaches “The system as recited in claim 1, wherein the at least one processor is configured, based on images of the gas turbine engine, to generate a spatial map of the gas turbine engine for navigation of the engine.”( Finn [0032] In an exemplary embodiment, Simultaneous Localization and Mapping (SLAM) constructs a map of an unknown environment (for example, a fan and the space in front of it) while simultaneously keeping track of the camera's location within that environment. SLAM has been used unmanned aerial, ground, and underwater vehicles; planetary rovers; and domestic robots.
[0033] In an exemplary embodiment, one or more 2D cameras are employed in a visual SLAM (VSLAM) approach. In an alternative embodiment, various 3D sensors (also called depth sensors) may be employed in place of a 2D camera. In SLAM, probabilistic equations for the camera's (or depth sensor's) location and the fan blades' locations are updated using Bayes' rule and the new images (or depth data). In one embodiment, the SLAM algorithm develops a high-resolution grid map (also called an occupancy grid). The resolution of the grid may be sufficiently small that damage is detectable as presence or absence of physical material in a specific grid. Alternatively, a lower-resolution grid may be employed along with low-order physical modeling of the grid data." Here Finn teaches the generation of a spatial map, and from the "localization" of the device within that map, the map is "for navigation".)
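For illustration of what such an occupancy-grid spatial map entails, a non-limiting sketch follows (Python; the grid dimensions, cell size, and update values are hypothetical and are not taken from Finn):

    # Illustrative sketch only: accumulating range measurements from successive
    # camera poses into a coarse 2-D occupancy grid, the kind of spatial map SLAM
    # maintains alongside the camera's estimated pose.
    import numpy as np

    GRID_SIZE, CELL = 50, 0.1                      # 50 x 50 cells, 0.1 m per cell
    log_odds = np.zeros((GRID_SIZE, GRID_SIZE))    # occupancy belief per cell

    def to_cell(x: float, y: float):
        """World coordinates (metres) -> grid indices, origin at the grid centre."""
        return int(x / CELL) + GRID_SIZE // 2, int(y / CELL) + GRID_SIZE // 2

    def integrate_hit(sensor_xy, hit_xy, occupied_update=0.85, free_update=-0.4):
        """Mark the cell containing a range return as more likely occupied and the
        cell under the sensor as more likely free (a heavily simplified inverse model)."""
        i, j = to_cell(*hit_xy)
        log_odds[i, j] += occupied_update
        i0, j0 = to_cell(*sensor_xy)
        log_odds[i0, j0] += free_update

    # Toy usage: two poses of the camera each observe a blade surface near x = 1.0 m.
    integrate_hit(sensor_xy=(0.0, 0.0), hit_xy=(1.0, 0.0))
    integrate_hit(sensor_xy=(0.0, 0.2), hit_xy=(1.0, 0.05))
    occupied = log_odds > 0.5
    print("occupied cells:", int(occupied.sum()))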
Claim(s) 3-5, 11-12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Modified Finn as applied to claim 1 above, and further in view of US 10520943 B2, “Unmanned Aerial Image Capture Platform”, Martirosyan et al.
Regarding Claim 3, modified Finn, while teaching a neural network to identify cracks and anomalies in the surface of the blades ([0052]), does not teach using a neural network to identify whether an obstruction is in the captured image (nor does Bisti teach such a neural network for detecting an obstruction).
Martirosyan et al teaches an autonomous vehicle imaging and object tracking system which includes using a neural network to identify and classify detected objects in images, and subsequently using those identifications to determine relative position/adjustments for tracking a target object (Column 6, lines 24-41, "According to some embodiments, an image capture device of UAV 100 may be a single camera (i.e. a non-stereoscopic camera). Here, computer vision algorithms may identify the presence of an object and identify the object as belonging to a known type with particular dimensions. In such embodiments, an object may be identified by comparing the captured image to stored two-dimensional (2D) and/or three dimensional (3D) appearance models. For example, through computer vision, the subject 102 may be identified as an adult male human. In some embodiments the 2D and/or 3D appearance models may be represented as a trained neural network that utilizes deep learning to classify objects in images according to detected patterns With this recognition data, as well as other position and/or orientation data for the UAV 100 (e.g. data from GPS, WiFi, Cellular, and/or IMU, as discussed above), UAV 100 may estimate a relative position and/or orientation of the subject 102.") and includes detecting/determining that a field of view to a target object is obscured and adjusting the trajectory (the UAV's position) such that it is no longer obstructed (Columns 2-3, lines 42-03, "To improve the quality of image capture (objectively and/or subjectively), one or more criteria may be specified that define how UAV 100 is to respond to given conditions while autonomously capturing images over a physical environment. In other words, to satisfy the specified one or more criteria, UAV 100 may be configured to automatically adjust image capture, which may in some cases include adjusting its flight path. As an illustrative example, consider an example criterion that states that while tracking and capturing images of a subject in motion, the UAV 100 is to always (or at least within a threshold tolerance) maintain a clear line of sight with the subject. In other words, it is not enough to stay within a maximum separation distance. If the line of sight with the subject becomes obstructed by another object in the physical environment, the UAV may automatically adjust its flight path to alleviate the obstruction. The particular maneuver required in any given situation depends on the geometric configuration of the subject and the UAV within the physical environment. As an illustrative example, consider a UAV 100 tracking a human subject in motion. As the human subject moves under a tree, the view from the UAV 100 located overhead becomes obstructed by the leaves of the tree. To satisfy the specified criterion (of maintaining clear line of sight) a processing unit (located on board the UAV or remotely and in communication with the UAV) may generate commands configured to adjust image capture, for example, by causing the UAV 100 to reduce altitude below the level of the leaves to alleviate the obstruction in the view.")
It would have been obvious to one of ordinary skill in the art to further modify Finn et al to include the neural network for determining the captured object and detecting obstructions (a non-target object detected/classified) in the image as taught by Martirosyan et al. One would be motivated to implement the neural network based obstruction detection as taught by Martirosyan et al to allow for a system which can improve performance over time (i.e. update/adapt the neural network's weights to improve performance). This improvement/motivation is implicit to the teachings of using a neural network as taught in the cited column 6 section of Martirosyan above.
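For illustration only, a classifier of the general kind discussed above might be sketched as follows (Python/PyTorch; the architecture, class labels, and patch size are hypothetical and are not the networks of Finn or Martirosyan):

    # Illustrative sketch only: a small convolutional classifier that labels an
    # image patch as target surface, obstruction, or background.
    import torch
    import torch.nn as nn

    CLASSES = ["blade_surface", "obstruction", "background"]

    class PatchClassifier(nn.Module):
        def __init__(self, num_classes: int = len(CLASSES)):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(x).flatten(1))

    # Toy usage: classify one 64x64 RGB patch; an "obstruction" prediction would
    # trigger the repositioning behavior discussed above.
    model = PatchClassifier()
    logits = model(torch.zeros(1, 3, 64, 64))
    print(CLASSES[int(logits.argmax(dim=1))])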
Regarding Claim 4, modified Finn teaches “The system as recited in claim 3, wherein the one or more neural networks is configured to identify an abnormality in the target surface from the image.”( Finn [0052] Various types of damage such as missing material, cracks, delamination, creep, spallation, and the like can be detected automatically by using a deep learning classifier trained from available data, such as a library of user characterized damage examples, by using statistical estimation algorithms, by image or video classification algorithms, and the like. Deep learning is the process of training or adjusting the weights of a deep neural network. In an embodiment the deep neural network is a deep convolutional neural network. Deep convolutional neural networks are trained by presenting an error map or partial error map to an input layer and, a damage/no-damage label (optionally, a descriptive label, e.g., missing material, crack, spallation, and the like), to an output layer. The training of a deep convolutional network proceeds layer-wise and does not require a label until the output layer is trained. The weights of the deep network's layers are adapted, typically by a stochastic gradient descent algorithm, to produce a correct classification. The deep learning training may use only partially labeled data, only fully labeled data, or only implicitly labeled data, or may use unlabeled data for initial or partial training with only a final training on labeled data.)
Regarding Claim 5, as modified in claim 3, modified Finn teaches "The system as recited in claim 3, wherein the one or more neural networks is configured to navigate the at least one drone." (Column 6, lines 24-41, "According to some embodiments, an image capture device of UAV 100 may be a single camera (i.e. a non-stereoscopic camera). Here, computer vision algorithms may identify the presence of an object and identify the object as belonging to a known type with particular dimensions. In such embodiments, an object may be identified by comparing the captured image to stored two-dimensional (2D) and/or three dimensional (3D) appearance models. For example, through computer vision, the subject 102 may be identified as an adult male human. In some embodiments the 2D and/or 3D appearance models may be represented as a trained neural network that utilizes deep learning to classify objects in images according to detected patterns With this recognition data, as well as other position and/or orientation data for the UAV 100 (e.g. data from GPS, WiFi, Cellular, and/or IMU, as discussed above), UAV 100 may estimate a relative position and/or orientation of the subject 102." Martirosyan et al teaches using a neural network to determine recognition data which is used for flight control purposes; from column 14, lines 27-39, "As previously described, in response to estimating the motions of the UAV 100 and the subject 102, a computing system (e.g. a flight controller associated with UAV 100) may generate control commands to dynamically adjust image capture to satisfy a specified criterion related to a quality of the image capture. It is generally understood that the quality of image capture in any given situation can depend on a number of different factors. For example, if the image capture is of a particular subject (e.g. a human, an animal, a vehicle, a building, or any other object), a basic determination on the quality of image capture may be whether the subject remains in view, in focus, properly framed, etc.")
Regarding Claims 11-12, they are effectively the method equivalents of claims 3-4 above; they have the same grounds of rejection, combination, and motivation for combination as their equivalents above.
Claim(s) 8 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over modified Finn as applied to claims 1 and 10 above, and further in view of US 20180101173 A1, Banerjee et al.
Regarding Claim 8, modified Finn does not teach the use of docking stations on the aircraft for the unattached external inspection system (or the UAV of Claybourgh).
Banerjee et al teaches a docking station and landing control system for UAVs (Abstract) which includes attaching the docking stations onto aircraft. ([0040] In some configurations, the apparatus 102 may be a drone, may be included in a drone, and/or may be coupled to a drone. In other configurations, the apparatus 102 may be an electronic device that is remote from the drone. For example, the apparatus 102 may be an electronic device (e.g., computer, integrated circuit, etc.) in communication with the drone. In some examples, the apparatus 102 may be implemented in a moving base (e.g., a vehicle, car, truck, train, aircraft, another drone, etc.))
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the application, to further modify Finn to include the docking stations on the aircraft and the landing control algorithms of Banerjee et al as part of the drones (Claybourgh). One would be motivated to implement these to allow the UAVs to recharge as needed during the inspections, thereby allowing for smaller batteries (and by extension more compact drones) ([0136] "The access door 544 on landing pad C 534c may allow recovery of the drone or a package delivered by the drone. For example, once a drone has landed, the access door 544 may be opened to allow the drone to be recovered into the moving base (e.g., vehicle), may allow a package delivered by the drone to be recovered, may allow for battery recharging or replacement, and/or may allow for other drone maintenance to be performed, etc.")
Regarding Claim 15 it has the same grounds of rejection as claim 8 above.
Claim(s) 9 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over modified Finn as applied to claims 1 and 10 above, and further in view of US 20180330027 A1, “SYSTEM AND METHOD PROVIDING SITUATIONAL AWARENESS FOR AUTONOMOUS ASSET INSPECTION ROBOT MONITOR”, SEN et al.
Regarding Claim 9, modified Finn does not teach “further comprising an operator interface configured to permit an operator to take images using the imaging device.”
Sen et al teaches an asset inspection system (using UAVs) which includes a user interface which allows the operator to control the drones/modify their operations and cameras (i.e. "permit an operator to take images using the imaging device") ([0047] For example, FIG. 5 is an interactive user interface display 500 in accordance with some embodiments. The display 500 might be associated with an inspection as a service process and include a representation of an industrial asset model 510, points of interest ("X"), the nearby environment, etc. According to some embodiments, the display 500 further includes results 540 of a forward simulation of robot operation from a current location though a pre-determined time window (e.g., the next thirty seconds). By removing extraneous data (e.g., past movement of the robot, conditional branch paths that will not be taken by the robot, movements too far in the future, etc.), the results 540 improve the situational awareness of a human monitor interacting with the display 500. According to some embodiments, the display 500 includes additional information, such as a live view 520 of the area, a street view 530 of the industrial asset, battery power 570 of one or more autonomous inspection robots, etc. The display 500 might further include icons 560 that, when selected by a human monitor (e.g., via a computer mouse or touchscreen), alter the inspection process (e.g., by stopping the process, transferring control to the human monitor, etc.). In some cases, selection of an element on the display 500 might result in further information being provided about that element (e.g., in a "pop-up" window), adjust display parameters (e.g., by zooming a portion of the display 500 in or out), etc.)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the application, to modify Finn et al to include the operator interface and control overrides during asset inspection as taught by Sen et al. One would be motivated to implement the interface with operator take-over to improve the efficiency and safety of operation in case of unexpected conditions (e.g. a type of damage different than expected/not recognized by the camera, dangerous flight controls, etc.). Sen et al teaches this motivation in ([0035] According to some embodiments, the system may also visually provide results of the forward simulation to a human inspection monitor via an interactive display along with the sensor data, indications of the points of interest, and a representation of the three-dimensional model. As a result, the amount of data presented to the human monitor may be limited allowing him or her to better process and respond to the information. According to some embodiments, the interactive display can be utilized by the inspection monitor to pause inspection, resume inspection, abort inspection, adjust the path of movement, pilot the inspection robot, etc. In this way, the human monitor can effectively respond to the current situation and avoid mission critical (e.g., preventing effective data gathering) or safety critical (e.g., resulting in asset damage or injury) failures.)
Regarding Claim 16, it has the same overall combination and motivation as claim 9.
Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over modified Finn as applied to claim 10 above, and further in view of the Wikipedia article, “Simultaneous localization and mapping”.
Regarding Claim 20, while Finn et al does teach the use of SLAM (generation of a spatial map) ([0032]” In an exemplary embodiment, Simultaneous Localization and Mapping (SLAM) constructs a map of an unknown environment (for example, a fan and the space in front of it) while simultaneously keeping track of the camera's location within that environment. SLAM has been used unmanned aerial, ground, and underwater vehicles; planetary rovers; and domestic robots.”) it does not explicitly state that the SLAM map is then used to navigate the camera device; instead only explicitly teaching that the SLAM is for tracking the location of the camera.
From the Wikipedia article it can be seen that navigating a robotic device using a map generated via SLAM by that robotic device is a well understood, routine, and conventional use of SLAM. ("SLAM algorithms are based on concepts in computational geometry and computer vision, and are used in robot navigation, robotic mapping and odometry for virtual reality or augmented reality.")
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the application, to further modify Finn et al to utilize the map generated via the SLAM called for in Finn to navigate the camera system (the UAV from the modification with Claybourgh) to fly into the interior of the turbine (Pathak). Such a modification would be obvious under the KSR rationale of "Combining Prior Art Elements According to Known Methods To Yield Predictable Results". (I) Modified Finn teaches the base device and Wikipedia teaches the use of SLAM for navigating robotic devices. (II) The combination would be achieved through a software-based implementation; Finn (and Claybourgh) both teach the navigation/movement of their respective devices but are silent as to how specifically they navigate.
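For illustration of the routine manner in which an occupancy map produced by SLAM is used for navigation, a non-limiting sketch follows (Python; a simple breadth-first search over free grid cells, with hypothetical map values, not drawn from Finn or the Wikipedia article):

    # Illustrative sketch only: plan a collision-free path through the free cells
    # of a SLAM-generated occupancy grid.
    from collections import deque

    def plan_path(occupancy, start, goal):
        """Breadth-first search over free (False) cells of a 2-D occupancy grid.
        Returns a list of cells from start to goal, or None if no free path exists."""
        rows, cols = len(occupancy), len(occupancy[0])
        parents, frontier = {start: None}, deque([start])
        while frontier:
            cell = frontier.popleft()
            if cell == goal:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parents[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and not occupancy[nr][nc] \
                        and (nr, nc) not in parents:
                    parents[(nr, nc)] = cell
                    frontier.append((nr, nc))
        return None

    # Toy usage: a 4x4 map where True marks an occupied cell.
    grid = [[False, False, False, False],
            [True,  True,  False, True ],
            [False, False, False, False],
            [False, True,  True,  False]]
    print(plan_path(grid, start=(0, 0), goal=(3, 3)))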
Allowable Subject Matter
Claims 17 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: For claims 17 and 18, no prior art was found to teach the imaging device including a borescope which extends between a stowed position and a deployed position.
The closest piece of prior art found to this feature is US 20220194578 A1, "SYSTEMS AND METHODS FOR INSPECTING STRUCTURES WITH AN UNMANNED AERIAL VEHICLE", Litton et al. It teaches an inspection UAV which includes a borescope attachment; however, it does not teach a "stowed" position for the borescope; it teaches that the borescope has a variable length, but is silent as to a compact stowed position. ([0046] "As illustrated in FIG. 1F, the borescope 128 can be configured to detachably attach to the UAV 100 (directly to the UAV 100, via a sensor assembly 120 or another tool). The borescope 128 can be configured to obtain image data corresponding to images of a target area and/or object and output the image data for the controller. The image data can relate to still images, or the image data can relate to video images. The borescope 128 can be configured to obtain images in the visible spectrum, the infrared spectrum, the ultraviolet spectrum, or any other light spectrum. The borescope 128 can be configured to insert into an inspection hole (e.g., an inspection hole created by the drill) and obtain images from within the inspection hole. In this way, the borescope 128 can help determine the condition of a utility pole at various locations and heights along the utility pole. To help obtain clear images within the inspection hole, the borescope 128 can include one or more light sources (e.g., LEDs). The shaft of the borescope 128 can be rigid, or the shaft can be flexible. Optionally, the shaft of the borescope 128 can be extendable such that the length of the borescope's 128 shape can be varied (e.g., based on instructions from the controller).")
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20180290748 A1.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KENNETH MICHAEL DUNNE whose telephone number is (571)270-7392. The examiner can normally be reached Mon-Thurs 8:30-6:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Navid Z Mehdizadeh can be reached at (571) 272-7691. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KENNETH M DUNNE/ Primary Examiner, Art Unit 3669