Prosecution Insights
Last updated: April 19, 2026
Application No. 18/443,383

IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

Status: Non-Final OA (§103)
Filed: Feb 16, 2024
Examiner: GEBRESLASSIE, WINTA
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Keyence Corporation
OA Round: 1 (Non-Final)

Grant Probability: 76% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Grants 76% of resolved cases, above average.

Career Allow Rate: 76% (101 granted / 133 resolved), +13.9% vs TC average
Interview Lift: +24.7% higher allow rate in resolved cases with an interview
Typical Timeline: 2y 5m average prosecution; 53 applications currently pending
Career History: 186 total applications across all art units
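The headline figures above are simple ratios. A short sketch (values copied from this page; the variable names are illustrative, and the implied Tech Center baseline is back-solved from the reported delta rather than stated on the page) shows how the displayed 76% reconciles:

```python
# Career allow rate from this examiner's resolved cases (figures from this page).
granted = 101
resolved = 133

allow_rate = granted / resolved * 100      # in percent
print(round(allow_rate, 1))                # 75.9 -> displayed as 76%

# The "+13.9% vs TC avg" delta implies a Tech Center baseline of roughly:
tc_average = allow_rate - 13.9
print(round(tc_average, 1))                # 62.0
```

Reading the delta this way (examiner rate minus TC average, in percentage points) is an assumption, but it is the conventional interpretation for this kind of dashboard.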

Statute-Specific Performance

§101: 3.3% (-36.7% vs TC avg)
§103: 66.4% (+26.4% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§112: 5.0% (-35.0% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 133 resolved cases.
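Each statute's delta reads against the Tech Center average. Back-solving the baseline from the four rows above (a sketch; the dictionary layout and names are illustrative) shows that every row implies the same TC baseline of roughly 40%, i.e. the deltas are all measured against one common estimate:

```python
# Statute-specific rates and their deltas vs the Tech Center average (from this page).
stats = {
    "101": (3.3, -36.7),
    "103": (66.4, +26.4),
    "102": (16.8, -23.2),
    "112": (5.0, -35.0),
}

# delta = rate - baseline, so baseline = rate - delta; each row back-solves to ~40.0.
for statute, (rate, delta) in stats.items():
    baseline = round(rate - delta, 1)
    print(f"\u00a7{statute}: implied TC average \u2248 {baseline}%")
```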

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1, 9, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kurihara et al. (US 20200364840 A1) in view of Shimodaira et al. (US 20200364905 A1).

Regarding claim 1, Kurihara et al.
teaches an image processing device for executing image processing by a machine learning tool and image processing by a rule-based tool (see para [0009]; “a rule-based inspection mode and a learning-based inspection mode are implemented in an image inspection apparatus”), the image processing device comprising: a UI generation unit configured to generate a user interface screen for displaying a setting window for setting the image processing and to cause a display unit to display the user interface screen (see para [0087]; “The startup time user interface 100 can also be generated by an UI generating section (not illustrated) or the like. Other user interfaces described below can also be generated by the UI generating section or the like…..and an image display area 100c….. The image display area 100c can display images captured by the image-capturing unit 3”, see also para [0010]; “an inspection window setting unit that receives a setting of a rule-based inspection window”); an input unit configured to receive an input for arranging, in the setting window of the user interface screen (see para [0010]; “an image processing tool selecting unit that receives selection of an image processing tool in the setting mode in the rule-based inspection; an inspection window setting unit that receives a setting of a rule-based inspection window for defining a range”, see also para [0090]; “receives the selection of either the standard inspection mode or the learning inspection mode. The inspection type selecting section 28 generates an inspection type selection user interface 101 illustrated in FIG. 
5 and displays this interface in the display device 4”), a machine learning tool indicating image processing by a machine learning model and a rule-based tool indicating image processing according to a predetermined rule (see Abstract; “In the setting mode in a learning-based inspection, a non-defective product image and a defective product image are input to generate a distinguishing device” Note: the user's input of a non-defective product image and a defective product image to generate a distinguishing device is equivalent to the claimed feature), and an input of a common data set including a plurality of images to be referred to by the machine learning tool and the rule-based tool (see Abstract; “The rule-based inspection mode and the learning-based inspection mode are implemented in the image inspection apparatus. In the setting mode in a rule-based inspection, the user can set an imaging condition, select an image processing tool, set the application range of an image processing tool, and adjust the parameters of the image processing tool. In the setting mode in a learning-based inspection, a non-defective product image and a defective product image are input to generate a distinguishing device” Note: the "common data set... referred to by both tools" corresponds to the overall image inspection apparatus that implements both the rule-based and learning-based inspection modes. The rule-based settings (imaging conditions, selecting tools, parameters) correspond to the rule-based inspection mode. The input of non-defective and defective images to generate a distinguishing device is the definition of the learning-based inspection mode).

However, Kurihara et al. does not teach and an image processing unit configured to execute one of the image processing by the machine learning tool or the image processing by the rule-based tool on the data set, and execute the other image processing on the data set after the one image processing is executed.
In the same field of endeavor, Shimodaira et al. teaches and an image processing unit configured to execute one of the image processing by the machine learning tool or the image processing by the rule-based tool on the data set, and execute the other image processing on the data set after the one image processing is executed (see para [0005]; “When a normal inspection and an inspection through deep learning processing is applicable, high inspection accuracy is obtained while reducing a processing time. The normal inspection processing is applied to a newly acquired inspection target image… The deep learning processing is applied to the inspection target image having the characteristic amount with which the non-defective product determination or the defective product determination is not confirmable” Note: this discloses two different image processing tools (a normal inspection (rule-based) and then a deep learning process) operating on the same inspection target image data, which is sequential execution).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the apparatus creating an inspection setting window for a rule-based inspection mode and a learning-based inspection mode of Kurihara et al. in view of the image inspection apparatus that performs a quality determination of an inspection target based on an image of Shimodaira et al. in order to obtain high inspection accuracy while reducing a processing time (see para [0005]).

Regarding claim 9, the rejection of claim 1 is incorporated herein. Kurihara et al.
in the combination further teaches wherein the image processing according to the predetermined rule by the rule-based tool is pre-processing of emphasizing a feature area in an input image (see para [0176]; “when the inspection target image has more features as a defective product, the control unit 2 generates the inspection target image in which a region having a feature as a defective product is selectively enhanced. Then, the inspection target image is displayed in the display device 4”), and the image processing unit: generates a plurality of pre-processed images emphasizing a feature area of each of the plurality of images constituting the data set by executing the pre-processing on the image (see para [0177]; “when the determination axis extracting section 34 extracts color as a determination axis, enhancement can be achieved by coloring the region having the feature as a defective product in the color of the defective product or by coloring the region having the feature as a non-defective product in the color of the non-defective product”). Shimodaira et al. in the combination further teaches and executes the image processing by the machine learning tool or training of the machine learning model on the plurality of pre-processed images (see para [0005]; “the deep learning, the learning is completed by inputting images to which a plurality of non-defective product attributes is given in advance and images to which defective product attributes are given to the multi-layer neural network and adjusting a plurality of parameters within the network such that the non-defective product images and the defective product images are discriminable”).

Regarding claim 15, the scope of claim 15 is fully incorporated herein; the rejection analysis of claim 1 is equally applicable here.

Claims 2-3, 12, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Kurihara et al. in view of Shimodaira et al. as applied in claim 1 above, and further in view of Suenaga et al.
(US 20170032177 A1).

Regarding claim 2, the rejection of claim 1 is incorporated herein. The combination of Kurihara et al. and Shimodaira et al. as a whole does not teach wherein the UI generation unit generates a user interface screen for displaying, side by side with the setting window, a data set window for displaying the images included in the data set referred to by both the machine learning tool and the rule-based tool arranged in the setting window.

In the same field of endeavor, Suenaga et al. teaches wherein the UI generation unit generates a user interface screen for displaying, side by side with the setting window, a data set window for displaying the images included in the data set referred to by both the machine learning tool and the rule-based tool arranged in the setting window (see para [0148]; “As shown in FIG. 9, a main display field 411, a top sub-display field 412, and a side sub-display field 413 are displayed in the display part 400. The main display field 411 is allocated to a wide area from the left side toward the central part of the screen of the display part 400. The top sub-display field 412 is allocated to a band-like area above the main display field 411. The side sub-display field 413 is allocated to a band-like area beside the right side of the main display field 411” Note: the "data set window" for displaying images and the "setting window" are arranged within the main, top, and side sub-display fields (411, 412, 413) described in FIG. 9, creating a side-by-side (or multi-panel) view to facilitate managing data across both tool types).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the apparatus creating an inspection setting window for a rule-based inspection mode and a learning-based inspection mode of Kurihara et al.
in view of the image inspection apparatus that performs a quality determination of an inspection target based on an image of Shimodaira et al. and the image inspection device, image inspection method, and image inspection program of Suenaga et al. in order to easily and accurately inspect a shape of an inspection target (see para [0148]).

Regarding claim 3, the rejection of claim 2 is incorporated herein. Kurihara et al. in the combination further teaches wherein the image processing unit generates a plurality of processed images by executing the image processing by the rule-based tool on the plurality of images constituting the data set (see para [0015]; “when the inspection type selecting unit performs switching to the learning-based inspection and the mode switching unit selects the setting mode, the learning image registering unit registers, as non-defective product images, a plurality of images obtained by causing the imaging unit to capture the inspection target a plurality of times while changing the imaging condition”). Suenaga et al. in the combination further teaches and the UI generation unit generates a user interface screen for displaying the plurality of processed images in the data set window (see Abstract; “an image inspection method and an image inspection program which are capable of easily and accurately inspecting a shape of an inspection target… positioning image data of a setting target placed on a stage is registered… image data for alignment of the inspection target is acquired, and then aligned to image data for alignment of the setting target… The determined determination result is displayed on the display unit”).

Regarding claim 12, the rejection of claim 2 is incorporated herein. Kurihara et al.
in the combination further teaches wherein the image processing unit: executes the image processing by the machine learning tool on the plurality of images constituting the data set; and executes the image processing by the rule-based tool on the plurality of images on which the image processing by the machine learning tool is executed (see para [0015]; “when the inspection type selecting unit performs switching to the learning-based inspection and the mode switching unit selects the setting mode, the learning image registering unit registers, as non-defective product images, a plurality of images obtained by causing the imaging unit to capture the inspection target a plurality of times while changing the imaging condition”).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the apparatus creating an inspection setting window for a rule-based inspection mode and a learning-based inspection mode of Kurihara et al. in view of the image inspection apparatus that performs a quality determination of an inspection target based on an image of Shimodaira et al. and the image inspection device, image inspection method, and image inspection program of Suenaga et al. in order to easily and accurately inspect a shape of an inspection target (see para [0148]).

Regarding claim 14, the rejection of claim 12 is incorporated herein. Kurihara et al. in the combination further teaches wherein the input unit receives a user input for causing the machine learning tool and the rule-based tool to refer to the data set (see para [0082]; “the communication board 17 receives various operations by the user input”, and Abstract; “The rule-based inspection mode and the learning-based inspection mode are implemented in the image inspection apparatus”).

Claims 4-8, 10-11, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Kurihara et al. and Shimodaira et al. in view of Suenaga et al.
as applied in claims 1 and 2 above, and further in view of Shingo (WO 2020/246325 A1).

Regarding claim 4, the rejection of claim 2 is incorporated herein. Suenaga et al. in the combination further teaches wherein the image processing unit outputs an inference result of each of the plurality of processed images by executing the image processing by the machine learning tool on the processed image (see para [0024]; “Once initially trained, the system allows incremental improvements to be performed on a lower-powered inference engine”). However, the combination of Kurihara et al., Shimodaira et al. and Suenaga et al. as a whole does not teach and the UI generation unit generates a user interface screen for displaying the plurality of processed images and the inference result corresponding to each of the processed images in the data set window.

In the same field of endeavor, Shingo teaches and the UI generation unit generates a user interface screen for displaying the plurality of processed images and the inference result corresponding to each of the processed images in the data set window (see page 8, 8th para; “The UI generation unit 16 generates a UI for exchanging information between the user and the information processing device 100. Typically, the UI generation unit 16 generates a GUI (Graphical User Interface) such as a UI screen (see FIG. 5 and the like) displayed on the display unit 11 when generating a data set for machine learning. On the UI screen, for example, information to be presented to the user, an input field for the user to input information, and the like are displayed. The user can operate the operation unit (keyboard, etc.) while looking at the UI screen to specify various settings, values, and the like”).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the apparatus creating an inspection setting window for a rule-based inspection mode and a learning-based inspection mode of Kurihara et al. in view of the image inspection apparatus that performs a quality determination of an inspection target based on an image of Shimodaira et al., further in view of the image inspection device, image inspection method, and image inspection program of Suenaga et al., and the data generation unit that generates a data set for constructing a learning model from the original data set of Shingo, in order to easily generate a required amount of training data and sufficiently support the creation of the training data set (see page 8, 8th para).

Regarding claim 5, the rejection of claim 2 is incorporated herein. Shimodaira et al. in the combination further teaches wherein the image processing unit: trains the machine learning model with the plurality of images displayed in the data set window and a result of executing the image processing by the rule-based tool on the plurality of images; executes the image processing by the rule-based tool on an inspection target image in which a workpiece is imaged; and executes the image processing using the trained machine learning model on the inspection target image based on an execution result of the image processing by the rule-based tool (see Abstract; “a normal inspection and an inspection through deep learning processing is applicable, high inspection accuracy is obtained while reducing a processing time… The deep learning processing is applied to the inspection target image having the characteristic amount with which the non-defective product determination or the defective product determination is not confirmable”).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the apparatus creating an inspection setting window for a rule-based inspection mode and a learning-based inspection mode of Kurihara et al. in view of the image inspection apparatus that performs a quality determination of an inspection target based on an image of Shimodaira et al., further in view of the image inspection device, image inspection method, and image inspection program of Suenaga et al., and the data generation unit that generates a data set for constructing a learning model from the original data set of Shingo, in order to obtain high inspection accuracy (see Abstract).

Regarding claim 6, the rejection of claim 2 is incorporated herein. Kurihara et al. in the combination further teaches such that the workpiece in each of the plurality of images to be subjected to the image processing by the machine learning tool have the same position and posture; and executes the image processing by the machine learning tool on the plurality of images subjected to the position correction (see para [0106]; “The positions, sizes, and shapes of the learning inspection windows 200A and 200B can be corrected”, see also para [0109]; “The setting method for the position compensation window 201 may be the same as the setting method for the learning inspection windows 200A and 200B. An image in the position compensation window 201 is the target of position compensation processing and an image in the position compensation window 201 can be compensated to have a predetermined position and a predetermined posture by a conventionally known method”). Suenaga et al.
in the combination further teaches wherein the image processing unit: extracts a feature from each of the plurality of images included in the data set (see para [0232]; “when information of the reference plane is included in the inspection setting information, there are extracted height image data of one or more areas corresponding to one or more rectangular areas specified for registering the reference plane”); detects a position of a workpiece included in the image based on the extracted feature (see para [0232]; “the reference plane is calculated based on the extracted height image data, and is then registered. Thereafter, the height image data acquired concerning the inspection target S is corrected such that the position of each portion on the surface of the inspection target S represents the distance from the registered reference plane in a direction orthogonal to the reference plane”); executes position correction based on the detected position of the workpiece (see para [0232]; “In this case, with the height data after the correction, it is possible to easily confirm the position of each portion on the surface of the inspection target S from the reference plane”).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the apparatus creating an inspection setting window for a rule-based inspection mode and a learning-based inspection mode of Kurihara et al. in view of the image inspection apparatus that performs a quality determination of an inspection target based on an image of Shimodaira et al., further in view of the image inspection device, image inspection method, and image inspection program of Suenaga et al., and the data generation unit that generates a data set for constructing a learning model from the original data set of Shingo, in order to easily and accurately inspect a shape of an inspection target (see para [0232]).
Regarding claim 7, the rejection of claim 6 is incorporated herein. Shimodaira et al. in the combination further teaches wherein the input unit is configured to receive designation of target areas of the image processing by the machine learning tool in the images included in the data set, and executes the image processing by the machine learning tool on the target area of each image of the data set (see para [0009]; “a deep learning setting section that causes a neural network to learn by inputting a plurality of non-defective product images to which non-defective product attributes are given and/or a plurality of defective product images to which defective product attributes are given to an input layer of the neural network, and performs a setting of deep learning processing for classifying a newly input inspection target image”; see also para [0065]; “processing for performing the quality determination of the inspection target based on various characteristic amounts (color, edge, and position) of the inspection target within the image”). Suenaga et al. in the combination further teaches and the image processing unit: executes the position correction such that the position of the workpiece detected in each image is included in the designated target area (see para [0232]; “the reference plane is calculated based on the extracted height image data, and is then registered. Thereafter, the height image data acquired concerning the inspection target S is corrected such that the position of each portion on the surface of the inspection target S represents the distance from the registered reference plane in a direction orthogonal to the reference plane… In this case, with the height data after the correction, it is possible to easily confirm the position of each portion on the surface of the inspection target S from the reference plane”).

Regarding claim 8, the rejection of claim 7 is incorporated herein. Suenaga et al.
in the combination further teaches wherein the UI generation unit generates a user interface screen for displaying the plurality of images of the data set in the data set window, in a state where the workpiece detected in each image is included in the target area due to the position correction (see para [0148]; “As shown in FIG. 9, a main display field 411, a top sub-display field 412, and a side sub-display field 413 are displayed in the display part 400. … The side sub-display field 413 is allocated to a band-like area beside the right side of the main display field 411”).

Regarding claim 10, the rejection of claim 1 is incorporated herein. Shingo in the combination further teaches wherein the UI generation unit generates a user interface screen for displaying, in the setting window, an indicator indicating that the machine learning tool and the rule-based tool arranged in the setting window both refer to the data set (see page 8, 8th para; “The UI generation unit 16 generates a UI for exchanging information between the user and the information processing device 100. Typically, the UI generation unit 16 generates a GUI (Graphical User Interface) such as a UI screen (see FIG. 5 and the like) displayed on the display unit 11 when generating a data set for machine learning. On the UI screen, for example, information to be presented to the user, an input field for the user to input information, and the like are displayed. The user can operate the operation unit (keyboard, etc.) while looking at the UI screen to specify various settings, values, and the like”).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the apparatus creating an inspection setting window for a rule-based inspection mode and a learning-based inspection mode of Kurihara et al.
in view of the image inspection apparatus that performs a quality determination of an inspection target based on an image of Shimodaira et al., further in view of the image inspection device, image inspection method, and image inspection program of Suenaga et al., and the data generation unit that generates a data set for constructing a learning model from the original data set of Shingo, in order to easily generate a required amount of training data and sufficiently support the creation of the training data set (see page 8, 8th para).

Regarding claim 11, the rejection of claim 1 is incorporated herein. Shingo in the combination further teaches wherein the input unit receives user input for: arranging in the setting window a first machine learning tool and a second machine learning tool indicating image processing using a machine learning model, and a first rule-based tool and a second rule-based tool indicating image processing according to a predetermined rule (see page 2, 3rd para; “The data generation unit may generate a plurality of partial learning data sets by shifting the virtual period with respect to the period unit, and may combine the plurality of partial learning data sets to generate the learning data set. … As a result, it is possible to easily generate a required amount of training data set, and it is possible to sufficiently support the creation of the training data set”); causing the first machine learning tool and the first rule-based tool arranged in the setting window to refer to a first data set including a plurality of images; and causing the second machine learning tool and the second rule-based tool arranged in the setting window to refer to a second data set including a plurality of images (see page 2, 4th para; “The data acquisition unit 15 acquires the original data set 20. The original data set 20 is a data set that is the source of the data set for machine learning described later.
For example, the data acquisition unit 15 reads the original data set 20 specified by the user from the database 13. In the present disclosure, a data set is a collection of a plurality of data. Typically, a data set is a collection of a plurality of data samples, and each data sample is stored in association with data (item values) for each predetermined item. Therefore, it can be said that the original data set 20 is a plurality of data samples each including a plurality of data items”), and the UI generation unit generates a user interface screen for displaying, in the setting window: a first indicator indicating that the first machine learning tool and the first rule-based tool arranged in the setting window both refer to the first data set; and a second indicator indicating that the second machine learning tool and the second rule-based tool arranged in the setting window both refer to the second data set (see page 8, 8th para; “The UI generation unit 16 generates a UI for exchanging information between the user and the information processing device 100. Typically, the UI generation unit 16 generates a GUI (Graphical User Interface) such as a UI screen (see FIG. 5 and the like) displayed on the display unit 11 when generating a data set for machine learning. On the UI screen, for example, information to be presented to the user, an input field for the user to input information, and the like are displayed. The user can operate the operation unit (keyboard, etc.) while looking at the UI screen to specify various settings, values, and the like”).

Regarding claim 13, the rejection of claim 12 is incorporated herein. Shimodaira et al.
in the combination further teaches wherein the machine learning tool includes a classification tool indicating classification of classifying an input image into any one of a plurality of classes by the machine learning model, the rule-based tool includes a normal inspection tool indicating image inspection based on a predetermined rule, the input unit receives a user input for (see para [0009]; “performs a setting of deep learning processing for classifying a newly input inspection target image into the non-defective product image and the defective product image… an inspection execution section that applies the normal inspection processing to a newly acquired inspection target image, confirms the non-defective product determination or the defective product determination for the inspection target image having the characteristic amount with which the non-defective product determination”). Shingo in the combination further teaches arranging in the setting window the classification tool and the normal inspection tool corresponding to each of the plurality of classes; and causing the classification tool and the plurality of normal inspection tools arranged in the setting window to refer to the data set including the plurality of images (see page 10, last para; “In this embodiment, a check box is displayed in the "Modify with tool" item that explains the countermeasure plan. When this check box is checked (black square in the figure), the countermeasure plan is selected. In addition, when the check box is unchecked (white square in the figure), the countermeasure plan is not selected. In addition, two check boxes or the like indicating selection / non-selection of the countermeasure plan may be used. In addition, the layout of the UI screen 40 and the display method and selection method of each presentation item 43 are not limited.
For example, an icon or the like may be used to explain the item, or a detailed explanation may be displayed using a pop-up window or the like”), the image processing unit: classifies each image constituting the data set into any one of the plurality of classes by the classification tool arranged in the setting window; and executes image inspection on the classified image by the normal inspection tool corresponding to the classified class, and the UI generation unit generates a user interface screen for displaying, in the data set window, a result of the classification by the classification tool and a result of the image inspection by the normal inspection tools (see page 12, 3rd para; “When the recommended plan 46 is selected, each problem of the original data set 20, the countermeasure plan 45, and the recommended plan 46 are presented (step 204). Specifically, the UI generation unit 16 generates a UI screen 40 (see FIG. 5) showing each problem (evaluation information 44) of the original data set 20 and a countermeasure 45 thereof. As a result, the user can easily grasp the problems of the original data set 20 and their countermeasures. A UI screen 40 that displays only problems, a UI screen 40 that displays only countermeasures 45, and the like may be generated. Even in such a case, the user can edit and process the original data set 20 so as to solve the problem or according to the countermeasure 45, and can generate an appropriate input data set 30. … In this way, the UI generation unit 16 presents at least one of the evaluation information 44 and the countermeasure plan 45”).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WINTA GEBRESLASSIE whose telephone number is (571) 272-3475. The examiner can normally be reached Monday-Friday, 9:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached at 571-270-5180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /WINTA GEBRESLASSIE/ Examiner, Art Unit 2677
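The claim elements mapped above describe a two-stage arrangement: a machine-learning classification tool assigns each image in the data set to one of a plurality of classes, and a class-specific rule-based ("normal") inspection tool then inspects each image, with both results surfaced in a data-set window. A minimal sketch of that data flow, for orientation only: the `Image` feature, class names, and thresholds below are invented stand-ins and do not come from the application or the cited references.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Image:
    name: str
    brightness: float  # toy scalar feature standing in for real pixel data

def classification_tool(img: Image) -> str:
    """Stand-in for the ML classification tool: assigns each input
    image to one of a plurality of classes."""
    return "bright" if img.brightness >= 0.5 else "dark"

# One rule-based "normal inspection tool" per class, keyed by class name.
inspection_tools: Dict[str, Callable[[Image], str]] = {
    "bright": lambda img: "OK" if img.brightness <= 0.9 else "NG",  # reject overexposed
    "dark":   lambda img: "OK" if img.brightness >= 0.1 else "NG",  # reject underexposed
}

def run_pipeline(data_set: List[Image]) -> List[dict]:
    """Classify each image, then run the inspection tool for its class;
    returns per-image rows for display in a data-set window."""
    results = []
    for img in data_set:
        cls = classification_tool(img)
        verdict = inspection_tools[cls](img)
        results.append({"image": img.name, "class": cls, "result": verdict})
    return results

if __name__ == "__main__":
    data = [Image("a.png", 0.95), Image("b.png", 0.6), Image("c.png", 0.05)]
    for row in run_pipeline(data):
        print(row)
```

The routing table keyed by class name is the load-bearing detail: it is what makes the inspection tool "corresponding to the classified class," the limitation the rejection maps to the combined references.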

Prosecution Timeline

Feb 16, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579683: IMAGE VIEW ADJUSTMENT
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12573238: BIOMETRIC FACIAL RECOGNITION AND LIVENESS DETECTOR USING AI COMPUTER VISION
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12530768: SYSTEMS AND METHODS FOR IMAGE STORAGE
Granted Jan 20, 2026 (2y 5m to grant)
Patent 12524932: MACHINE LEARNING IMAGE RECONSTRUCTION
Granted Jan 13, 2026 (2y 5m to grant)
Patent 12511861: DETECTION OF ANNOTATED REGIONS OF INTEREST IN IMAGES
Granted Dec 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+24.7%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 133 resolved cases by this examiner. Grant probability derived from career allow rate.
