Prosecution Insights
Last updated: April 19, 2026
Application No. 18/625,660

SYSTEMS AND METHODS UTILIZING ARTIFICIAL INTELLIGENCE FOR AUTOMATED VISUAL INSPECTION OF ROPES

Status: Non-Final OA (§103)
Filed: Apr 03, 2024
Examiner: KUDO, KEN
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Samson Rope Technologies Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 12 across all art units (12 currently pending)

Statute-Specific Performance

§101: 18.0% (-22.0% vs TC avg)
§103: 51.3% (+11.3% vs TC avg)
§102: 2.6% (-37.4% vs TC avg)
§112: 25.6% (-14.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 0 resolved cases.
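The per-statute deltas above are simple differences against a Tech Center average. The displayed figures are mutually consistent with a TC average of 40.0% for every statute, which the following sketch assumes; that average is inferred, not stated on the page:

```python
# Hypothetical reconstruction of the "vs TC avg" deltas shown above.
# The 40.0% Tech Center average is inferred from the displayed figures.
examiner_rates = {"101": 18.0, "103": 51.3, "102": 2.6, "112": 25.6}
tc_average = 40.0  # assumed; consistent with every displayed delta

deltas = {s: round(r - tc_average, 1) for s, r in examiner_rates.items()}
print(deltas)  # {'101': -22.0, '103': 11.3, '102': -37.4, '112': -14.4}
```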

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1–22 are rejected under 35 U.S.C. § 103 as being unpatentable over Mahadevappa (Mahadevappa et al., US 2020/0118259 A1, 2020) in view of Kuwertz (Kuwertz et al., US 2019/0279356 A1, 2019).
Regarding claim 1, with deficiencies of Mahadevappa noted in square brackets [], Mahadevappa teaches a computer-implemented method for inspecting a rope ([0034]: Mahadevappa describes system 100 and a method of inspecting a rope), comprising the steps of:

[using a mobile device], capturing visual data of said rope, wherein said visual data includes one or more sections of said rope along a length thereof (Mahadevappa, in [0035-0036] & [0044-0045], teaches that an image capture device monitors the rope (flexible members) and generates image data, "including the captured image" and "the location of the rope segment"; "indicates a location of the length of rope" [corresponds to a section along the length of the rope]);

analyzing said visual data using a knowledge base implemented within logic of a control system [of said mobile device] (Mahadevappa, in [0037] & [0046], teaches receiving and using reference image data and threshold setting information (defect types / classes; application / rope-type dependent), and compares current image data to reference image data to determine defects based on thresholds [corresponds to analyzing the visual data using a knowledge base]);

calculating, from said knowledge base, an expected life for said rope (Mahadevappa, in [0037], teaches that threshold setting information includes defect classes mapped to remaining life or strength of the rope [corresponds to calculating an expected life for the rope from the knowledge base]; Mahadevappa further teaches using machine learning and field service data [knowledge base] to correlate [a statistical calculation] defects and optimize remaining life and recommendations, [0047]);

generating a report [on said mobile device] displaying said expected life of said rope as calculated from said knowledge base (Mahadevappa, [0038-0039], teaches transmitting results to a display and presenting analysis results including rope health, severity of any defects, and a recommendation as to when the rope should be repaired / replaced, which corresponds to generating a report displaying the life-related outcome).

As noted above in square brackets, Mahadevappa fails to disclose, but Kuwertz teaches:

using a mobile device ([0021-0024]: Kuwertz teaches that a field operator takes a picture, a set of pictures, or video using an image capturing device such as a smartphone, and the device captures an image or video including multiple images along the rope);

of said mobile device (Kuwertz, in [0032], teaches that the mobile device 102 may include two or more connected devices, including one that "stores, processes, and/or transmits the images", and the image may be captured in response to guidance received from the processor or a program that is local to the device 102, which corresponds to processing logic implemented on the mobile device in the image capture and evaluation workflow);

on said mobile device ([0045]: Kuwertz teaches transmitting image data "for display thereon" to the device 102, which is a mobile device).

It would have been prima facie obvious to a POSITA, before the effective filing date of the claimed invention, to modify Mahadevappa's rope-inspection system to use Kuwertz's mobile-device capture, on-device processing logic, and on-device display / reporting, because Kuwertz teaches that a smartphone / tablet can capture still images or video in the field and can include local program logic for capture / processing guidance and for presenting results on the device. This would have predictably improved the portability and field usability of Mahadevappa's rope inspection (allowing an inspector to conveniently capture multiple rope sections along the rope length and receive displayed inspection outputs at the point of inspection), while using known, art-recognized mobile inspection techniques and without changing Mahadevappa's underlying defect-detection and remaining-life determination principles.
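The claim-1 mapping above walks through a capture / analyze / calculate / report pipeline. A minimal sketch of that workflow follows; every name and the toy defect-to-life mapping are illustrative assumptions, drawn from neither Mahadevappa nor Kuwertz:

```python
from dataclasses import dataclass

@dataclass
class KnowledgeBase:
    # Maps defect class -> remaining-life penalty (hours), standing in
    # for threshold setting information of the kind cited at [0037].
    # Both the classes and the penalties here are invented for illustration.
    life_penalties: dict
    baseline_life_hours: float = 10_000.0

    def detect_defects(self, frame):
        # A real system would compare the frame against reference image
        # data; here a "frame" is simply a list of already-labeled defects.
        return [d for d in frame if d in self.life_penalties]

    def remaining_life(self, defects):
        penalty = sum(self.life_penalties[d] for d in defects)
        return max(self.baseline_life_hours - penalty, 0.0)

def inspect_rope(frames, kb):
    """Capture -> analyze -> expected life -> report, per claim 1."""
    defects = [d for frame in frames for d in kb.detect_defects(frame)]
    return {"expected_life_hours": kb.remaining_life(defects),
            "defects": defects}

kb = KnowledgeBase({"abrasion": 500.0, "cut_strand": 2000.0})
report = inspect_rope([["abrasion"], [], ["cut_strand", "abrasion"]], kb)
print(report["expected_life_hours"])  # 7000.0
```

The dictionary lookup stands in for the knowledge-base comparison step; on-device capture and display (the Kuwertz additions) are outside the sketch.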
Regarding claim 2, Mahadevappa [as modified by Kuwertz] teaches the computer-implemented method of claim 1, further comprising the step of processing said visual data to ready said visual data for analysis (Mahadevappa, in [0036], teaches image processing to "proper the image" for defect detection; Kuwertz, in [0042], also teaches preprocessing and filtering images).

Regarding claim 3, Mahadevappa [as modified by Kuwertz] teaches the computer-implemented method of claim 2, wherein the step of processing said visual data further comprises the step of tiling said visual data, wherein said visual data is broken into multiple image segments along said section of said rope ([0045]: Mahadevappa teaches that each frame monitors a certain length of the rope and splits the frame into smaller frames / windows for analysis against the reference frame).

Regarding claim 4, Mahadevappa [as modified by Kuwertz] teaches the computer-implemented method of claim 3, wherein for the step of tiling said visual data, said visual data is cropped to form each said image segment with minimal background ([0036]: Mahadevappa teaches cropping image data to a region of interest, eliminating background, to prepare the image for further processing and defect detection).

Regarding claim 5, Mahadevappa [as modified by Kuwertz] teaches the computer-implemented method of claim 3, wherein for the step of tiling said visual data, contrast of each said image segment is enhanced ([0036]: Mahadevappa teaches enhancing images: sharpness, deblur, distortion removal).
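The preprocessing steps mapped in claims 3-5 (tiling along the rope length, cropping, contrast enhancement) can be illustrated with a toy sketch. The function names and the min-max stretch are assumptions for illustration, not the references' actual algorithms:

```python
# Toy illustration of claims 3-5: tile a grayscale frame along the rope
# axis, then stretch contrast. All names are hypothetical; the min-max
# stretch is one common way to realize "contrast ... enhanced".

def tile_along_length(image, n_segments):
    """Split a row-major grayscale image (list of pixel rows) into
    column-wise tiles along the rope axis. Assumes the width divides
    evenly by n_segments, for brevity."""
    step = len(image[0]) // n_segments
    return [[row[i * step:(i + 1) * step] for row in image]
            for i in range(n_segments)]

def stretch_contrast(tile):
    """Min-max stretch each pixel to the full 0-255 range."""
    pixels = [p for row in tile for p in row]
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [[0] * len(row) for row in tile]
    return [[round((p - lo) * 255 / (hi - lo)) for p in row] for row in tile]

image = [list(range(100)) for _ in range(10)]  # 10x100 synthetic frame
tiles = tile_along_length(image, 4)
print(len(tiles), len(tiles[0][0]))                     # 4 25
print(max(max(r) for r in stretch_contrast(tiles[0])))  # 255
```

Cropping to a region of interest (claim 4) would simply slice rows and columns before tiling, so it is omitted here.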
Regarding claim 6, Mahadevappa [as modified by Kuwertz] teaches the computer-implemented method of claim 3, further comprising generating and assigning a damage level scale to each said image segment along said length ([0032], [0039-0040]: Mahadevappa teaches classifying defect types and ranking severity, including defect classes used for operational clearance and health assessment; [0045]: Mahadevappa teaches that each frame monitors a certain length of the rope, which ties frames / segments to rope length [corresponds to segment along said length]).

Regarding claim 7, Mahadevappa [as modified by Kuwertz] teaches the computer-implemented method of claim 6, further comprising the step of averaging each said damage level scale (Kuwertz, in [0051], expressly teaches calculating an "average degradation value" based on a plurality of consumption amount values).

Regarding claim 8, Mahadevappa [as modified by Kuwertz] teaches the computer-implemented method of claim 2, further comprising the step of converting said visual data to grayscale ([0044]: Mahadevappa teaches converting image data to black and white for further processing, which is a grayscale conversion).

Regarding claim 9, Mahadevappa [as modified by Kuwertz] teaches the computer-implemented method of claim 1, further comprising allowing said knowledge base to be continuously supplemented from a data pipeline (in [0033], Mahadevappa teaches updating the system based on collected field failure data to improve detection efficiency; Kuwertz, in [0030], also teaches storing degradation metrics in a database to establish patterns).

Regarding claim 10, Mahadevappa [as modified by Kuwertz] teaches the computer-implemented method of claim 1, further comprising allowing said knowledge base to continuously learn to enhance calculations for said expected life (Mahadevappa, in [0040], teaches machine learning to optimize defect identification and correlation to remaining life based on field service data).
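Claims 7 and 8 reduce to simple arithmetic. A minimal sketch, assuming a hypothetical 1-5 damage scale and standard BT.601 luma weights (neither is specified in the claims or references):

```python
# Toy illustration of claims 7-8: average per-segment damage levels
# (cf. Kuwertz's "average degradation value", [0051]) and convert an
# RGB pixel to grayscale. The 1-5 scale and the luma weights are
# assumptions, not drawn from the claims.

def average_damage(segment_levels):
    """Average the damage level assigned to each tiled segment."""
    return sum(segment_levels) / len(segment_levels)

def to_grayscale(rgb_pixel):
    """ITU-R BT.601 luma weighting, one standard grayscale conversion."""
    r, g, b = rgb_pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(average_damage([1, 2, 2, 3, 1]))  # 1.8
print(to_grayscale((255, 255, 255)))    # 255
```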
Regarding claim 11, Mahadevappa [as modified by Kuwertz] teaches the computer-implemented method of claim 1, wherein for the step of capturing said visual data, said mobile device is moved along said rope while said rope remains stationary (Kuwertz, in [0032] & [0046], teaches positioning and moving the mobile video-capture device (including by a robotic arm) using direction, angle, distance, and elevation guidance, i.e., moving the mobile capture device relative to the object being imaged).

Regarding claims 12–22, the rationale provided for claims 1–11 is incorporated herein. In addition, the computer-readable storage medium of claims 12–22 corresponds to the method of claims 1–11 and performs the steps disclosed herein. Therefore, claims 12–22 are rejected on the same grounds.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEN KUDO, whose telephone number is (571) 272-4498. The examiner can normally be reached M-F, 8am - 5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent Rudolph, can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEN KUDO/
Examiner, Art Unit 2671

/VINCENT RUDOLPH/
Supervisory Patent Examiner, Art Unit 2671

Prosecution Timeline

Apr 03, 2024: Application Filed
Feb 10, 2026: Non-Final Rejection under §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
