Prosecution Insights
Last updated: April 19, 2026
Application No. 18/029,157

SEMANTIC SEGMENTATION OF INSPECTION TARGETS

Status: Final Rejection (§103)
Filed: Mar 29, 2023
Examiner: NGUYEN, LEON VIET Q
Art Unit: 2663
Tech Center: 2600 (Communications)
Assignee: Kitov AI Ltd.
OA Round: 2 (Final)

Grant Probability: 85% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 85% (954 granted / 1122 resolved; +23.0% vs TC avg, above average)
Interview Lift: +10.2% across resolved cases with interview (moderate, roughly +10%)
Typical Timeline: 2y 8m average prosecution (26 currently pending)
Career History: 1148 total applications across all art units
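The headline numbers in this panel reduce to simple arithmetic on the raw counts shown above. A minimal Python sketch; the 62% Tech Center baseline is back-solved from the printed +23.0% delta, and all variable names are illustrative rather than part of any tool's API:

```python
# Sketch: reproducing the examiner's headline statistics from the raw counts
# shown in the panel (954 granted out of 1122 resolved cases).
granted = 954
resolved = 1122

allow_rate = granted / resolved  # career allow rate
tc_average = 0.62                # baseline implied by the printed +23.0% delta

print(f"Career allow rate: {allow_rate:.1%}")                # 85.0%
print(f"Delta vs TC avg:   {allow_rate - tc_average:+.1%}")  # +23.0%
```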

Statute-Specific Performance

§101:  4.9%  (-35.1% vs TC avg)
§102: 17.9%  (-22.1% vs TC avg)
§103: 61.5%  (+21.5% vs TC avg)
§112: 10.4%  (-29.6% vs TC avg)

Tech Center average shown is an estimate. Based on career data from 1122 resolved cases.
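The per-statute deltas can be reproduced the same way. Back-solving each printed delta yields the same 40% Tech Center baseline for every statute (e.g. 61.5% - 21.5% = 40%), consistent with the baseline being a single estimate. A sketch with illustrative names:

```python
# Sketch: recomputing the statute-specific deltas shown above.
# Examiner shares are from the panel; the 40% Tech Center baseline is
# back-solved from the printed deltas, not taken from any published source.
examiner_share = {"§101": 4.9, "§102": 17.9, "§103": 61.5, "§112": 10.4}
TC_BASELINE = 40.0  # same implied baseline for every statute

for statute, share in examiner_share.items():
    delta = share - TC_BASELINE
    print(f"{statute}: {share:.1f}%  ({delta:+.1f}% vs TC avg)")
```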

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This office action is in response to the communication filed on 10/23/2025. Claims 1-17 and 43-45 are pending in this application.

Response to Arguments

Applicant's arguments with respect to claims 1 and 43 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-17, and 43-45 are rejected under 35 U.S.C. 103 as being unpatentable over Davis et al. (US20080247635) in view of Stoppa et al. (US20180211373).

Regarding claim 1, Davis teaches a method of specifying visual inspection parameters for an item of manufacture (para. [0002]), the method comprising: accessing a plurality of enrollment images of an example of the item of manufacture (para. [0017], "An image set of an object to be inspected is acquired at 102. The image set may comprise, for example, a 3D model of the object and optionally, a plurality of image files such as high resolution 2D image files of the object"; para. [0019], "The image files may comprise high resolution images generated while scanning the object to be inspected for purposes of creating the corresponding 3D model"); for each of a plurality of regions appearing in a respective image of the plurality of enrollment images (para. [0020], "Global coordinate points of the 3D model are designated at 106 that characterize the location of interest of the modeled object that was identified at 104"), classifying the region as imaging an identified inspection target having an inspection target type (para. [0020], "Also, a markup tag of user-defined information is created at 108 that annotates the location of interest as will be described in greater detail herein"; para. [0022], "By mapping image points to corresponding global coordinate points of the associated 3D model for example, visual inspection data such as markup tags, etc., generated during the inspection of 2D optical images may be mapped to corresponding 3D surface position data of the 3D model"; para. [0052]); generating, using the regions and their classifications, a spatial model of the item of manufacture which indicates the spatial positioning of inspection targets and their respective inspection target types (para. [0022], "By mapping image points to corresponding global coordinate points of the associated 3D model for example, visual inspection data such as markup tags, etc., generated during the inspection of 2D optical images may be mapped to corresponding 3D surface position data of the 3D model. As such, collected inspection information, e.g., markup tags etc. collected from multiple views may be coalesced with a digital 3D model of an object"); and calculating camera poses for use in obtaining images appropriate to inspection of the inspection targets, based on their respective modeled spatial positions and inspection target types (para. [0070]-[0071], "The second inspection may be performed using a second view of the virtual object displayed with a surface as produced using a second type of 2D inspection data, such as relatively higher resolution black and white or color photographic data, as an example. Any combination of displays of the virtual object may be used in any order as may be found to function effectively to diagnose conditions of interest. Regions found to contain features of interest in any view may be marked as described above, with such information being saved digitally in a manner that facilitates the sorting, grouping and analyzing of such data for one or more such objects in any one of the associated views of the object").

Davis fails to teach identifying, by a circuitry, an inspection target in a region from a plurality of regions appearing in a respective image of the plurality of enrollment images; and classifying, by the circuitry, the inspection target as belonging to an inspection target type. However, Stoppa teaches identifying, by a circuitry (fig. 2), an inspection target in a region from a plurality of regions appearing in a respective image (para. [0009], [0127]) of a plurality of enrollment images (para. [0066], [0107]); and classifying, by the circuitry, the inspection target as belonging to an inspection target type (para. [0010]). Therefore, taking the combined teachings of Davis and Stoppa as a whole, it would have been obvious to one of ordinary skill in the art at the time the invention was filed to incorporate the steps of Stoppa into the method of Davis. The motivation to combine Stoppa and Davis would be to accurately detect targets (para. [0139]-[0140] of Stoppa).

Regarding claim 2, the modified method of Davis teaches a method comprising identifying a change in an initial camera pose used to obtain at least one of the plurality of enrollment images (para. [0071] of Davis, "The second inspection may be performed using a second view of the virtual object displayed with a surface as produced using a second type of 2D inspection data"), which said change potentially will provide an image with increased suitability for enrolling the identified inspection target, compared to the initial camera pose (the term "potentially" is not definitive; therefore, any image is interpreted to teach the limitation); obtaining an auxiliary enrollment image using the changed camera pose (para. [0071] of Davis, "a second type of 2D inspection data, such as relatively higher resolution black and white or color photographic data"); and using the auxiliary enrollment image in the classifying (para. [0071] of Davis, "Regions found to contain features of interest in any view may be marked as described above").

Regarding claim 3, the modified method of Davis teaches a method wherein the calculated camera poses include camera poses not used in the enrollment images used to generate the spatial model of the item of manufacture (para. [0019]-[0020] of Davis), the calculated camera poses being relatively more suitable as inspection images of the inspection targets than the camera poses used in obtaining the enrollment images (para. [0071] of Davis).

Regarding claim 5, the modified method of Davis teaches a method wherein the generating the spatial model includes using the classifications to identify regions in different images which correspond to the same portion of the spatial model (para. [0022], [0029] of Davis).

Regarding claim 6, the modified method of Davis teaches a method wherein the generating a spatial model comprises assigning geometric constraints to the identified inspection targets, based on the inspection target type classifications (para. [0053] of Davis).

Regarding claim 7, the modified method of Davis teaches a method wherein the generating uses the assigned geometric constraints for estimating surface angles of the example of the item of manufacture (para. [0053] of Davis).

Regarding claim 8, the modified method of Davis teaches a method wherein the generating uses the assigned geometric constraints for estimating orientations of the example of the item of manufacture (para. [0053] of Davis).

Regarding claim 9, the modified method of Davis teaches a method wherein the generating the spatial model includes using the assigned geometrical constraints to identify regions in different images which correspond to the same portion of the spatial model (para. [0029] of Davis).

Regarding claim 10, the modified method of Davis teaches a method wherein the enrollment images comprise 2-D images of the example of the item of manufacture (para. [0019]-[0020] of Davis).

Regarding claim 11, the modified method of Davis teaches a method wherein the classifying comprises using a machine learning product to identify the inspection target type (para. [0063] of Davis).

Regarding claim 12, the modified method of Davis teaches a method comprising imaging to produce the enrollment images (para. [0019]-[0020] of Davis).

Regarding claim 13, the modified method of Davis teaches a method comprising synthesizing a combined image from a plurality of the enrollment images, and performing the classifying and generating also using a region within the combined image spanning more than one of said plurality of the enrollment images (para. [0029] of Davis).

Regarding claim 14, the modified method of Davis teaches a method wherein the classifying comprises at least two stages of classifying for at least one of the inspection targets, and operations of the second stage of classifying are triggered by a result of the first stage of classifying (para. [0089] of Davis).

Regarding claim 15, the modified method of Davis teaches a method wherein the second stage of classifying classifies a region including at least a portion of, but different in size than, another region classified in the first stage of classifying (para. [0089] of Davis, "defects of a certain type").

Regarding claim 16, the modified method of Davis teaches a method wherein the second stage of classifying classifies a region to a more particular type belonging to a type identified in the first stage of classifying (para. [0089] of Davis, "defects of a certain type").

Regarding claim 17, the modified method of Davis teaches a method wherein the generating (para. [0022] of Davis) also uses camera pose data indicative of camera poses from which the plurality of enrollment images were imaged (para. [0070]-[0071] of Davis).

Regarding claim 43, the claim recites similar subject matter as claim 1 and is rejected for the same reasons as stated above. Regarding claim 44, the claim recites similar subject matter as claim 2 and is rejected for the same reasons as stated above. Regarding claim 45, the claim recites similar subject matter as claim 14 and is rejected for the same reasons as stated above.

Allowable Subject Matter

Claim 4 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.

In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEON VIET Q NGUYEN, whose telephone number is (571) 270-1185. The examiner can normally be reached Mon-Fri, 11 AM-7 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Gregory Morse, can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LEON VIET Q NGUYEN/
Primary Examiner, Art Unit 2663
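The reply-deadline rules in the action's conclusion (a three-month shortened statutory period, extendable under 37 CFR 1.136(a) up to an absolute six-month cap) reduce to date arithmetic. A minimal sketch using the Mar 05, 2026 mailing date from the prosecution timeline; the add-months helper is illustrative, and the advisory-action adjustment described above is not modeled:

```python
# Sketch: the reply deadlines stated in the action's conclusion, as date
# arithmetic on the Mar 05, 2026 mailing date. Illustrative only.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping the day to the target month's length."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    # Clamp the day (e.g. Jan 31 + 1 month -> Feb 28).
    for day in (d.day, 30, 29, 28):
        try:
            return date(year, month, day)
        except ValueError:
            continue

mailed = date(2026, 3, 5)
shortened_period = add_months(mailed, 3)  # three-month shortened period
statutory_max = add_months(mailed, 6)     # absolute six-month statutory cap

print(shortened_period)  # 2026-06-05
print(statutory_max)     # 2026-09-05
```

Replies filed between the two dates require an extension-of-time fee under 37 CFR 1.136(a); no reply is timely after the six-month date.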

Prosecution Timeline

Mar 29, 2023: Application Filed
Mar 29, 2023: Response after Non-Final Action
Jul 22, 2025: Non-Final Rejection (§103)
Oct 23, 2025: Response Filed
Mar 05, 2026: Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602795: FALSE POSITIVE REDUCTION OF LOCATION SPECIFIC EVENT CLASSIFICATION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597270: SYSTEMS AND METHODS FOR USING IMAGE DATA TO ANALYZE AN IMAGE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592094: METHODS AND SYSTEMS OF AUTOMATICALLY ASSOCIATING TEXT AND CONTROL OBJECTS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586235: SYSTEMS AND METHODS FOR HEAD RELATED TRANSFER FUNCTION PERSONALIZATION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586357: COLLECTING METHOD FOR TRAINING DATA (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 85%
With Interview: 95% (+10.2%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate

Based on 1122 resolved cases by this examiner. Grant probability derived from career allow rate.
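The 95% "With Interview" figure appears to be the 85% baseline plus the +10.2% interview lift, treated as percentage points and capped at 100%. A sketch of that assumed derivation; the cap and the naming are assumptions, not documented behavior of the tool:

```python
# Sketch: assumed derivation of the interview-adjusted grant probability:
# career allow rate plus the interview lift, in percentage points,
# capped at 100%. Illustrative only.
baseline = 85.0        # career allow rate, %
interview_lift = 10.2  # percentage-point lift with an interview

with_interview = min(baseline + interview_lift, 100.0)
print(f"{with_interview:.0f}%")  # 95%
```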
