DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-6 and 10-20 are pending.
Claims 7-9 are canceled.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 10-11 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Giering et al. (US 2018/0129974 A1) in view of Lopez et al. (US 2017/0085863 A1).
Regarding claim 10, Giering teaches a defect depth estimation system comprising:
(Giering, "FIG. 5 shows a defect detection process 500 for a boroscope DRL method.", [0050]; a defect detection system)
an image sensor configured to generate at least one 2D test image of a test object existing in real space and having a defect with a depth;
(Giering, "The sensors 205 can include one or more cameras 204 ... In some embodiments, one or more of the cameras 204 can be coupled to an imaging probe (not depicted), for instance, a boroscope camera ... The feature of interest 212 can vary depending upon the application, such as, a surface depth, a component defect, an object, a person, etc.", [0042]; an image sensor like a boroscope configured to capture images of physical test objects; the test object existing in real space has a defect with a depth; "A streaming boroscope video 501 is acquired", [0050]; generating 2D test images via the streaming video)
a processing system configured to
input the at least one 2D test image,
(Giering, "A streaming boroscope video 501 is acquired and pre-processed 502 initially ... a region-of-interest detector 504 analyzes frames of image data 503 from the streaming boroscope video 501", [0050]; acquiring/inputting the video images; inputting frames of image data, which are 2D test images, into the system for analysis)
identify and isolate a localized defect region containing the defect, and
(Giering, "a region-of-interest detector 504 analyzes frames of image data 503 from the streaming boroscope video 501 or a database to detect any regions of interest ... The patch detector 508 can detect patches (i.e., areas) of interest based on the regions of interest identified by the region-of-interest detector 504", [0050]; identifying regions of interest containing potential defects; isolating a localized defect region by detecting specific patches/areas of interest based on the identified regions)
inputting the isolated localized defect region to a trained machine learning model and
(Giering, "the machine learning 511 is applied to the one or more patches of interest using CNN 515", [0051]; inputting the isolated localized defect region (the patches of interest) to a trained machine learning model (CNN 515))
to output estimated depth information corresponding to the localized defect region of the test image and
(Giering, "The feature of interest 212 can vary depending upon the application, such as, a surface depth, a component defect", [0042]; evaluating surface depth and component defects; on the other hand, Lopez explicitly teaches outputting estimated depth information directly from the ML model for the localized region; "One or more embodiments may use machine learning to generate an object depth model ... The object depth model input may include, for example, an object mask. The corresponding output may include any information that assigns depth to one or more points within the mask ... The machine learning system may learn a function that maps object depth model inputs, such as a mask, into the corresponding object depth model outputs", [0029]; using a machine learning model to output depth information; inputting an isolated localized region (mask); the machine learning output is estimated depth information corresponding exactly to the localized region/mask; inputting an isolated localized region (mask))
indicating an estimation of the depth of the defect.
(Giering, "The feature of interest 212 can vary depending upon the application, such as, a surface depth, a component defect", [0042]; the feature being analyzed is a component defect having a depth; Lopez, "The corresponding output may include any information that assigns depth to one or more points within the mask.", [0029]; the depth information is generated by the ML model assigns depth values to points within the isolated region, which applied to Giering's defect patches, indicates an estimation of the depth of the defect)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Lopez into the system or method of Giering in order to configure the machine learning model of Giering to output an estimated depth model for the isolated defect region, thereby providing quantitative depth assignments for the defect instead of mere categorical classification, which would predictably improve the diagnostic assessment and severity measurement of the component defect.
Regarding claim 11, the combination of Giering and Lopez teaches the defect depth estimation system of claim 10, wherein the at least one 2D test image includes a 2D image frame included in a video stream captured by the image sensor.
(Giering, "A streaming boroscope video 501 is acquired and pre-processed 502 initially. A DNN (such as a CNN) can be used to detect a crack and provide image data to a visualization process 520. For example, a region-of-interest detector 504 analyzes frames of image data 503 from the streaming boroscope video 501", [0050]; the images processed for defect detection are frames of image data analyzed from a streaming video captured by the sensor)
Regarding claim 14, the combination of Giering and Lopez teaches the defect depth estimation system of claim 10, wherein the estimated depth information includes at least one of an estimated depth scalar value of the actual depth and an estimated depth map of the actual depth.
(Giering, "The feature of interest 212 can vary depending upon the application, such as, a surface depth, a component defect", [0042]; depth is as a feature of interest; Lopez, “One or more embodiments may use machine learning to generate an object depth model.", [0029]; "This step creates a 3D model or a depth map that indicates how far away each pixel within the mask is from the viewer.", [0097]; the machine learning output is a depth map; a depth map inherently contains depth values/scalars for pixels; combining Giering and Lopez is for utilizing a machine learning model that generates a comprehensive depth map for localized regions, allowing the system to provide detailed spatial information of the defect surface rather than just a categorical classification)
Regarding claim 15, the combination of Giering and Lopez teaches the defect depth estimation system of claim 10, wherein the image sensor is a borescope.
(Giering, "In some embodiments, one or more of the cameras 204 can be coupled to an imaging probe (not depicted), for instance, a boroscope camera.", [0042]; "FIG. 5 shows a defect detection process 500 for a boroscope DRL method.", [0050]; a borescope is used as the image sensor for defect detection)
Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Giering et al. (US 2018/0129974 A1) in view of Lopez et al. (US 2017/0085863 A1), and further in view of Sah et al. (US 2024/0404296 A1).
Regarding claim 12, the combination of Giering and Lopez teaches the defect depth estimation system of claim 10. The combination does not expressly disclose, but Sah teaches,
wherein the at least one 2D test image includes a video stream containing movement of the test object, and
wherein the processing system performs optical flow processing on the video stream to determine the estimated depth information of the defect.
(Sah, "the external object detector uses sequential image frames from the image sensor data to compute an optical flow, and then uses the optical flow to estimate a relative depth of different segmentations of voxels to detect the presence of a moving object", [0023]; the moving object may be the defect of Giering (Fig. 2, element 212, [0042]))
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Sah into the modified system or method of Giering and Lopez in order to measure the depth of a moving object, such as a moving defect, using optical flow.
Regarding claim 13, the combination of Giering, Lopez and Sah teaches the defect depth estimation system of claim 12, wherein the optical flow processing includes:
comparing a first image frame included in the 2D video stream to a second image frame of the 2D video stream that precedes the first frame;
determining a change in a position of the defect as the second image frame transitions to the first image frame; and
determining the estimation of the depth based on the change in the position.
(Sah, "the external object detector uses sequential image frames from the image sensor data to compute an optical flow, and then uses the optical flow to estimate a relative depth of different segmentations of voxels to detect the presence of a moving object", [0023])
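A minimal sketch of the recited optical-flow steps (compare a first frame to the preceding second frame, determine the change in position, estimate depth from that change); Farneback flow and the inverse flow-to-depth relation, which assumes translational camera motion (motion parallax), are assumptions, not Sah's disclosed implementation:

    import cv2
    import numpy as np

    def relative_depth_from_flow(second_frame, first_frame, eps=1e-6):
        """Estimate relative depth from the motion between two frames."""
        prev_gray = cv2.cvtColor(second_frame, cv2.COLOR_BGR2GRAY)  # preceding frame
        curr_gray = cv2.cvtColor(first_frame, cv2.COLOR_BGR2GRAY)
        # Compare the first frame to the second frame that precedes it.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Change in position of each point as the second frame transitions
        # to the first frame.
        displacement = np.linalg.norm(flow, axis=2)
        # Under motion parallax, nearer points move more: depth ~ 1 / motion,
        # up to an unknown scale.
        return 1.0 / (displacement + eps)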
Allowable Subject Matter
Claims 1-6 and 16-20 are allowed.
Statement of reasons for the indication of allowable subject matter: Applicant's amendment and arguments filed on 1/6/2026 are persuasive.
Response to Arguments
Applicant's arguments filed on 1/6/2026 with respect to one or more of the pending claims have been fully considered but are moot in view of the new ground(s) of rejection.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIANXUN YANG whose telephone number is (571)272-9874. The examiner can normally be reached on MON-FRI: 8AM-5PM Pacific Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached on (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JIANXUN YANG/
Primary Examiner, Art Unit 2662
3/9/2026