Prosecution Insights
Last updated: April 19, 2026
Application No. 18/835,277

MOVING WIND TURBINE BLADE INSPECTION

Final Rejection — §103, §112
Filed: Aug 01, 2024
Examiner: HAGHANI, SHADAN E
Art Unit: 2485
Tech Center: 2400 — Computer Networks
Assignee: Thales Holdings UK PLC
OA Round: 2 (Final)
Grant Probability: 60% (Moderate)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 11m
Grant Probability With Interview: 79%

Examiner Intelligence

Grants 60% of resolved cases.

Career Allow Rate: 60% (221 granted / 366 resolved; +2.4% vs TC avg)
Interview Lift: +18.6% higher allow rate for resolved cases with an interview (a strong lift of roughly +19 points)
Avg Prosecution: 2y 11m typical timeline (33 applications currently pending)
Total Applications: 399 across all art units (career history)
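The headline numbers above are simple arithmetic on the examiner's career counts. Below is a minimal sketch reproducing them; the additive percentage-point model for the interview figure is an assumption about how the tool derives its 79%, not a documented methodology:

```python
# Reproduce the dashboard's headline examiner stats from the career counts
# shown above. The additive percentage-point model for the interview figure
# is an assumption, not the tool's documented method.

granted = 221           # applications granted by this examiner
resolved = 366          # total resolved cases (grants + abandonments)
interview_lift = 18.6   # percentage-point lift reported by the tool

allow_rate = 100 * granted / resolved           # 60.4%, displayed as "60%"
with_interview = allow_rate + interview_lift    # 79.0%, displayed as "79%"

print(f"Career allow rate: {allow_rate:.1f}%")      # 60.4%
print(f"With interview:    {with_interview:.1f}%")  # 79.0%
```

221/366 is 60.4%, which rounds to the displayed 60%, and adding the 18.6-point lift gives exactly the 79% "With Interview" figure, so the additive model reproduces the dashboard.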

Statute-Specific Performance

§101: 2.1% (-37.9% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§103: 60.3% (+20.3% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 366 resolved cases
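The "vs TC avg" deltas can be checked against the rates they accompany. A small sketch backing out the implied Tech Center baseline for each statute, assuming each delta is a simple percentage-point difference (an assumption; the tool does not state its methodology):

```python
# Back out the implied Tech Center baseline for each statute, assuming each
# "vs TC avg" figure is a percentage-point difference (an assumption; the
# tool does not state its methodology).

examiner_rate = {"§101": 2.1, "§102": 13.8, "§103": 60.3, "§112": 16.1}
delta_vs_tc   = {"§101": -37.9, "§102": -26.2, "§103": 20.3, "§112": -23.9}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]  # delta = examiner rate - TC average
    print(f"{statute}: examiner {rate:.1f}%, implied TC avg {tc_avg:.1f}%")
```

Every row implies the same 40.0% baseline, which suggests the tool compares each statute against a single Tech Center aggregate rather than per-statute averages.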

Office Action

§103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant's election without traverse of Claims 28-35 and 42 in the reply filed on 10/21/2025 is acknowledged.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Regarding claim 33, the phrase "for example" renders the claim indefinite because it is unclear whether the limitation(s) following the phrase are part of the claimed invention. See MPEP § 2173.05(d).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 28-32 are rejected under 35 U.S.C. 103 as being unpatentable over Kaufmann (US PG Publication 2020/0260013) in view of Tremblay (US 20210080260 A1) and Alley (US PG Publication 2009/0015674).

Regarding Claim 28, Kaufmann discloses a method for imaging (optically monitoring [0039], Fig. 1) the moving (moving components [0039]) blades (rotor blades [0039]) of a wind turbine (rotor star [0039]) using an imaging system comprising a wider field-of-view (WFoV) camera (second camera 12 has an image capture region 120; image capture region 120 has a larger image angle [0044]) and a narrower field-of-view (NFoV) camera (first camera 11 having image capture region 110 is comparatively narrow, i.e., the first camera 11 can represent a comparatively small area with a high resolution [0040]), wherein the NFoV camera has a narrower field of view than the WFoV camera (inherent, definition), and wherein the method comprises: … using the WFoV camera (second camera 12 is also mounted on a tripod and has an image capture region 120; image capture region 120 has a larger image angle [0044]) and the NFoV camera (first camera 11 having image capture region 110 is comparatively narrow, i.e., the first camera 11 can represent a comparatively small area with a high resolution [0040]) to image a plurality of regions of each of the moving blades (horizontally displacing the image capture region 110 by the tracking device 2 so that the surface of the component 3 can be completely captured [0048], Fig. 2) by: sequentially scanning the FoV of the NFoV camera across a plurality of radial positions relative to an axis of rotation of the moving blades (horizontally displacing the image capture region 110 by the tracking device 2 [0048], Fig. 2); and using the WFoV and NFoV cameras to image, for each radial position, a corresponding region of each moving blade (the surface of the component 3 can be completely captured [0048]),

wherein using the WFoV and NFoV cameras to image, at any one of the radial positions (the second camera 12 catches a larger section of the component 3 [0044]; horizontally displacing the image capture region 110 by the tracking device 2. If such recordings are captured on both the upstream and downstream sides of the wind turbine, the surface of the component 3 can be completely captured [0048]), the corresponding region of any one of the moving blades comprises: directing the FoV of the NFoV camera toward the radial position (swivel the image capture region from a first position 115a through an angle 116 into a second position 115b, along the entire length of the component 3 [0055]-[0056]); using the WFoV camera (second camera 12 having image capture region 120 [0044]) to capture a plurality of WFoV (image capture region 120 having a larger image angle [0044]) images (image data of the second camera 12 [0045]) of at least part of the moving blade in the FoV of the WFoV camera (the second camera 12 catches a larger section of the component 3 [0044]); using the captured plurality of WFoV images (image data of the second camera 12 [0045]) of at least part of (the second camera 12 catches a larger section of the component 3 [0044]) the moving blade (moving rotor blades of the rotor star [0039]) to determine a trigger time (calculates a movement prediction for the component 3 to be monitored [0045]; directs the image capturing region of the first camera 11 at predetermined times [0045]) when an edge (the leading edge of the rotor blade is visible in the image capture region 110a; the trailing edge is visible in the image capturing region 110b in addition to the outer surface of the rotor blade [0047]) of the moving blade is, or will be, in a triggering region (knows where the component 3 to be monitored or the portion of the component 3 currently to be captured will be located at the recording time [0045]); using the determined trigger time (at the recording time [0045]) and a known spatial relationship between the triggering region and a FoV of the NFoV camera (inherent: tracking device 2 [0041] is controlled to move the image capturing region 110 of first camera 11 [0040] to the portion of rotor star 3 to be captured [0045]; this is impossible if the spatial relationship between the camera and rotor star is unknown) to calculate one or more NFoV image capture times (output a control signal to the tracking device, which directs the image capturing region of the first camera 11 at predetermined times to the respective portion of the component 3 to be monitored [0045]) when the edge of the moving blade, or a body of the moving blade, is, or will be (the leading edge of the rotor blade is visible in the image capture region 110a; the trailing edge is visible in the image capturing region 110b in addition to the outer surface of the rotor blade [0047]), in the FoV of the NFoV camera (horizontally displacing the image capture region 110 by the tracking device 2 so that the surface of the component 3 can be completely captured [0048], Fig. 2); continuing to direct the FoV of the NFoV camera toward the radial position (swivel the image capture region from a first position 115a through an angle 116 into a second position 115b, along the entire length of the component 3 [0055]-[0056], capturing each image/FoV 110, Fig. 2; plurality of individual images combined to form a high-resolution recording of the component 3 to be monitored by horizontally displacing the image capture region 110 [0048]) until the calculated one or more NFoV image capture times (direct the camera 11 at predetermined times to the portion of component 3 to be monitored [0045]); and using the NFoV camera (first camera 11 having image capture region 110 [0040]) to capture one or more NFoV images of the region (horizontally displacing the image capture region 110 by the tracking device 2 so that the surface of the component 3 can be completely captured [0048], Fig. 2) of the moving blade (moving rotor blades of the rotor star [0039]) at the calculated one or more NFoV image capture times (at the recording time [0045]).

Kaufmann does not disclose, but Tremblay (US 20210080260 A1) teaches, stabilizing the WFoV and NFoV cameras against the motion of the imaging system (Inside housing 1224 are mechanical stabilizer 1230 and imaging module 1232. Mechanical stabilizer 1230 may be configured to provide mechanical roll stabilization [0127]). Although Kaufmann can be interpreted to disclose this limitation, Alley (US PG Publication 2009/0015674) also teaches continuing to direct the FoV of the NFoV camera … until (Step Stare Mode, which focuses on a particular ROI for a few seconds [0088]). One of ordinary skill in the art before the application was filed would have been motivated to supply the cameras of Kaufmann, as modified for drone flight, with the stabilization of Tremblay because it is known that stabilization can assist with vision systems: it can reduce smear and blur in captured images and improve feature tracking between images of moving objects. One of ordinary skill in the art before the application was filed would have been motivated to operate the NFoV camera of Kaufmann in step-stare mode because Alley teaches that this tracking mode enables stabilization, pointing, and greater information content without increasing the communication bandwidth [0087].

Regarding Claim 29, Kaufmann discloses the method as claimed in claim 28, wherein using the WFoV and NFoV cameras to image the plurality of regions of each moving blade comprises using the WFoV and NFoV cameras to image the plurality of regions of one of the moving blades (from the image data of the second camera 12, calculates a movement prediction for the component 3 and knows where the portion of component 3 to be captured will be at the recording time, and controls the first camera 11 [0045]; horizontally displacing the image capture region 110, of the first camera 11, by the tracking device 2 so that the surface of the component 3 can be completely captured [0048], Fig. 2) and then using the WFoV and NFoV cameras to image the plurality of regions of a different one of the moving blades until the plurality of different regions of each of the moving blades have been imaged (component to be monitored is the entire rotor star [0009]).
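The heart of the claim 28 mapping is a timing computation: WFoV frames are used to estimate blade motion, a trigger time is predicted for when a blade edge enters the triggering region, and the known geometry converts that trigger time into NFoV capture times. Below is a minimal sketch of one way such a computation could work, assuming roughly constant angular velocity between WFoV frames; all names and angle conventions are hypothetical and are taken from neither the claims nor the cited references:

```python
import math

def estimate_angular_velocity(theta_a: float, theta_b: float, dt: float) -> float:
    """Estimate blade angular velocity (rad/s) from the blade-edge angle
    measured in two WFoV frames taken dt seconds apart."""
    # Wrap the difference so it is the shortest signed rotation.
    dtheta = (theta_b - theta_a + math.pi) % (2 * math.pi) - math.pi
    return dtheta / dt

def trigger_time(theta_now: float, theta_trigger: float, omega: float,
                 t_now: float) -> float:
    """Predict when the blade edge is, or will be, in the triggering region."""
    dtheta = (theta_trigger - theta_now) % (2 * math.pi)
    if omega < 0:                     # blade rotating the other way
        dtheta -= 2 * math.pi
    return t_now + dtheta / omega

def nfov_capture_times(t_trigger: float, trigger_to_fov_angle: float,
                       omega: float, n_blades: int = 3) -> list[float]:
    """Convert the trigger time into NFoV capture times using the known
    angular offset between the triggering region and the NFoV field of
    view, one capture per blade spaced by the blade-passing period."""
    t_first = t_trigger + trigger_to_fov_angle / omega
    blade_period = (2 * math.pi / n_blades) / abs(omega)
    return [t_first + k * blade_period for k in range(n_blades)]
```

With omega estimated from two WFoV frames, the NFoV camera can keep staring at one radial position and fire as each blade sweeps through its FoV, which is the behavior the rejection maps onto Kaufmann's capture "at predetermined times."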
Regarding Claim 30, Kaufmann discloses the method as claimed in claim 28, wherein using the WFoV and NFoV cameras to image the plurality of regions of each moving blade comprises using the WFoV and NFoV cameras to image the corresponding regions of each of the moving blades at one radial position and then using the WFoV and NFoV cameras to image the corresponding regions of each of the moving blades at a different one of the radial positions until the plurality of regions of each of the moving blades have been imaged (horizontally displacing the image capture region 110, of the first camera 11, by the tracking device 2 so that the surface of the component 3 can be completely captured [0048], Fig. 2; both on the leading edge and trailing edge side [0047]).

Regarding Claim 31, Kaufmann discloses the method as claimed in claim 28, wherein sequentially scanning the FoV of the NFoV camera across the plurality of radial positions comprises sequentially re-orienting the NFoV camera so as to sequentially scan the FoV of the NFoV camera across the plurality of radial positions (horizontally displacing the image capture region 110, of the first camera 11 [0048]).

Regarding Claim 32, Kaufmann discloses the method as claimed in claim 28, comprising performing the sequential scanning of the field-of-view of the NFoV camera and the imaging of the plurality of regions of each moving blade autonomously according to a pre-programmed sequence (the open/closed loop control device will output a control signal to the tracking device, which directs the image capturing region of the first camera 11 at predetermined times to the respective portion of the component 3 to be monitored [0045], to horizontally displace the image capture region 110, of the first camera 11, by the tracking device 2 so that the surface of the component 3 can be completely captured [0048], Fig. 2; both on the leading edge and trailing edge side [0047]).

Claims 33-35 and 42 are rejected under 35 U.S.C. 103 as being unpatentable over Kaufmann (US PG Publication 2020/0260013) in view of Tremblay (US 20210080260 A1), Alley (US PG Publication 2009/0015674), and Smart (GB 2580639).

Regarding Claim 33, Kaufmann discloses the method as claimed in claim 28. Kaufmann does not disclose, but Tremblay teaches, the method comprising stabilising the [] camera[] against motion of the imaging system (Inside housing 1224 are mechanical stabilizer 1230 and imaging module 1232. Mechanical stabilizer 1230 may be configured to provide mechanical roll stabilization [0127]); and the method comprising stabilising the enclosure against motion of the imaging system (Inside housing 1224 are mechanical stabilizer 1230 and imaging module 1232. Mechanical stabilizer 1230 may be configured to provide mechanical roll stabilization [0127]). Kaufmann does not disclose, but Smart (GB 2580639) teaches, wherein the imaging system comprises an enclosure (a body in the form of a housing 12, P. 9 line 30 - P. 10 line 5), wherein the WFoV (wider field-of-view (FOV), lower resolution, imaging arrangement generally designated 20, P. 9 line 30 - P. 10 line 5) and NFoV (second, higher resolution, narrower FOV, imaging arrangement generally designated 30, P. 9 line 30 - P. 10 line 5) cameras are both located within, and fixed to, the enclosure (a body in the form of a housing 12. The first and second imaging arrangements 20, 30 are each located within, and fixed to, the housing 12, P. 9 line 30 - P. 10 line 5), for example wherein the enclosure is sealed so as to isolate the WFoV and NFoV cameras from an environment external to the enclosure (the claim does not make clear whether this is a limitation, as it is just an example). One of ordinary skill in the art before the application was filed would have been motivated to supply the cameras of Kaufmann, as modified for drone flight, with the stabilization of Tremblay because it is known that stabilization can assist with vision systems: it can reduce smear and blur in captured images and improve feature tracking between images of moving objects. One of ordinary skill in the art before the application was filed would have been motivated to install the cameras of Kaufmann together and construct them to be carried by a drone, as in Smart, so that the cameras can be used for off-shore turbines and improve their usability (P. 1 lines 19-23).

Regarding Claim 34, Kaufmann discloses the method as claimed in claim 28, comprising using the WFoV and NFoV cameras to image one or both sides of each moving blade … or to image one or both edges of each moving blade (both on the leading edge and trailing edge side [0047]). Kaufmann does not disclose, but Smart teaches, translating the WFoV and NFoV cameras together along a path around the wind turbine (mounting the system 10 on an airborne platform of any kind and using the airborne platform to fly the system 10 by, or around, the wind turbine 2, P. 15 lines 20-25) and using the WFoV and NFoV cameras (first, wider field-of-view (FOV), lower resolution, imaging arrangement generally designated 20 and a second, higher resolution, narrower FOV, imaging arrangement generally designated 30; a body in the form of a housing 12; the first and second imaging arrangements 20, 30 are each located within, and fixed to, the housing 12, P. 9 line 30 - P. 10 line 5) to image each moving blade from one or more predetermined different vantage points on the path (the wind turbine 2 can be circled quickly, during which time all imagery is captured, P. 11 lines 5-6), for example translating the WFoV and NFoV cameras together along a path around the wind turbine (mounting the system 10 on an airborne platform of any kind and using the airborne platform to fly the system 10 by, or around, the wind turbine 2, P. 15 lines 20-25) and using the WFoV and NFoV cameras to image one or both sides of each moving blade (the wind turbine 2 can be circled quickly, during which time all imagery is captured, P. 11 lines 5-6) from the one or more predetermined different vantage points on the path (the wind turbine 2 can be circled quickly, during which time all imagery is captured, P. 11 lines 5-6) and/or to image … each moving blade (track the motion of the moving blades 4a, 4b, 4c and image one or more regions of interest of the moving blades 4a, 4b, 4c at a higher resolution, P. 11 lines 1-5) from the one or more predetermined different vantage points on the path (the wind turbine 2 can be circled quickly, during which time all imagery is captured, P. 11 lines 5-6),
and optionally, wherein the method comprises translating the WFoV and NFoV cameras together along the path around the wind turbine autonomously and/or receiving a signal including information relating to the wind direction and/or the direction in which the wind turbine is pointing and determining the path around the wind turbine based on the wind direction and/or the direction in which the wind turbine is pointing, a known or stored position of the wind turbine, and a known or stored length of the blades of the wind turbine (because this limitation is optional, the broadest reasonable interpretation is that it is an option that is not performed/executed, and it is examined as an option not exercised). One of ordinary skill in the art before the application was filed would have been motivated to install the cameras of Kaufmann together and construct them to be carried by a drone, as in Smart, so that the cameras can be used for off-shore turbines and improve their usability (P. 1 lines 19-23).

Regarding Claim 35, Kaufmann discloses the method as claimed in claim 34, wherein … vantage point is located at a position at or around the same level as a base of the wind turbine (see the height of the tripod relative to the height of the wind turbine, Fig. 1), wherein the position defines an acute angle relative to a plane of rotation of the moving blades of the wind turbine (the camera 11 with the tracking device 2 has a lower height than the hub height of the wind turbine [0047]) and, optionally, wherein the acute angle is in the region of 45 degrees (since this is optional, it has no patentable weight, assuming the option is not exercised). Kaufmann does not disclose, but Smart teaches, wherein each predetermined different vantage point is located at a position at or around … the wind turbine (mounting the system 10 on an airborne platform of any kind and using the airborne platform to fly the system 10 by, or around, the wind turbine 2, P. 15 lines 20-25; the wind turbine 2 can be circled quickly, during which time all imagery is captured, P. 11 lines 5-6). One of ordinary skill in the art before the application was filed would have been motivated to install the cameras of Kaufmann together and construct them to be carried by a drone, as in Smart, so that the cameras can be used for off-shore turbines and improve their usability (P. 1 lines 19-23).

Regarding Claim 42, Kaufmann discloses a processing resource (open-loop or closed-loop control device 5 [0042]) configured for communication with the WFoV camera (driving signal is generated by the open-loop or closed-loop control device 5 on the basis of the image data of a second camera 12 [0043]) and the NFoV camera (generates the driving signal of the tracking device 2 [0042], which maneuvers the first camera 11 [0041]), wherein the processing resource is configured to control the [] NFoV camera (generates the driving signal of the tracking device 2 [0042], which maneuvers the first camera 11 [0041]). Kaufmann does not disclose, but Smart teaches, wherein the processing resource is configured to control the WFoV camera (controller 40 to implement a method in which the controller 40 causes the first imaging arrangement 20 to sequentially capture a plurality of wider-FOV, lower-resolution initial images of the moving blades 4a, 4b, 4c of the wind turbine 2 at a corresponding plurality of historical instants in time, P. 10 lines 27-end). The remainder of Claim 42 is rejected on the grounds provided for Claim 28. One of ordinary skill in the art before the application was filed would have been motivated to install the cameras of Kaufmann together and construct them to be carried by a drone, as in Smart, so that the cameras can be used for off-shore turbines and improve their usability (P. 1 lines 19-23).

Response to Arguments

Applicant's remarks filed 2/2/2026 have been considered but are unpersuasive.

Applicant argues that because Kaufmann "actively tracks a point," it does not stabilize the camera. Remarks at 19. This is unpersuasive. There is no contradiction between stabilization and tracking: tracking cameras such as video cameras include stabilization. Stabilization counteracts undesired shake and jitter, not intentional panning. Kaufmann may be silent on stabilization, but silence is neither discouragement nor contradiction.

Applicant's arguments regarding stabilization and the alleged "step-stare" embodiment (Remarks at 19) suppose that Kaufmann's camera never stops moving. For example, Applicant cites paragraphs [0041] and [0051] of Kaufmann, which discuss panning and rolling, and concludes that Kaufmann fails to disclose stabilization and the "step-stare of amended claim 28." Examiner disagrees. The cited portions disclose that Kaufmann does pan and does roll the camera; they say nothing about not stopping and not staring. These citations may be silent on this feature; that does not prove its absence or contradict its existence. Contrary to Applicant's supposition, Kaufmann moves the camera at "predetermined times." Kaufmann at [0045]. The phrase "predetermined times" connotes discrete motion, not continuous motion, which means that there is a time gap between motions, i.e., a stare period.

Last, Applicant supposes that the amended claim language, "continuing to direct the … camera … until the calculated … time," narrows the claim to the "step-stare" embodiment of p. 26. Remarks at 17, 18, top of each page; Spec. at 26 ll. 1-13. The literal and broadest reasonable interpretation of the claim language is that the claimed camera moves, not that it ever stops moving. That is, the claim language is not limited to the step-stare embodiment because it does not recite that the camera stops moving during image capture. It moves "until" capture, yet under the broadest reasonable interpretation it can continue to move during and through capture. This claim language lacks specificity.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20160063350 A1 (inspecting turbine blades at multiple radial angles).

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHADAN E HAGHANI, whose telephone number is (571) 270-5631. The examiner can normally be reached M-F, 9AM-5PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jay Patel, can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHADAN E HAGHANI/
Examiner, Art Unit 2485
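The "step-stare" dispute in the Response to Arguments above is ultimately a scheduling question: whether the NFoV camera dwells at each radial position until its computed capture times fire, or moves continuously. Below is a toy schedule builder illustrating the examiner's reading of "predetermined times" as discrete moves separated by stare periods; the structure is illustrative only and is not drawn from Kaufmann or Alley:

```python
from dataclasses import dataclass

@dataclass
class Step:
    radial_position: int  # index of the radial position being imaged
    stare_until: float    # hold the FoV here until the last capture fires

def step_stare_schedule(capture_times: dict[int, list[float]]) -> list[Step]:
    """Build a step-stare schedule: point the NFoV camera at each radial
    position in turn and hold it there until the final capture time for
    that position, then step to the next. The dwell between moves is the
    'stare period' the examiner reads into 'predetermined times'."""
    return [Step(pos, max(times)) for pos, times in sorted(capture_times.items())]

# Example: three radial positions, each with three per-blade capture times (s).
schedule = step_stare_schedule({
    0: [1.10, 1.43, 1.76],
    1: [2.40, 2.73, 3.06],
    2: [3.70, 4.03, 4.36],
})
for step in schedule:
    print(f"position {step.radial_position}: stare until t={step.stare_until:.2f}s")
```

Whether the claim as amended requires the FoV to stop during those dwell windows, rather than merely to keep pointing at the radial position, is exactly what the examiner and Applicant dispute.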

Prosecution Timeline

Aug 01, 2024
Application Filed
Oct 30, 2025
Non-Final Rejection — §103, §112
Feb 02, 2026
Response Filed
Mar 02, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12604020
VIDEO DECODING METHOD AND DECODER DEVICE
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12598323
INTER PREDICTION-BASED VIDEO ENCODING AND DECODING
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12586336
WEARABLE DEVICE, METHOD, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM CONTROLLING LIGHT RADIATION OF LIGHT SOURCE
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12574549
CHROMA INTRA PREDICTION WITH FILTERING
Granted Mar 10, 2026 • 2y 5m to grant
Patent 12568225
LIMITING A NUMBER OF CONTEXT CODED BINS FOR RESIDUE CODING
Granted Mar 03, 2026 • 2y 5m to grant
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 60%
With Interview: 79% (+18.6%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 366 resolved cases by this examiner. Grant probability derived from career allow rate.
