Prosecution Insights
Last updated: April 19, 2026
Application No. 18/499,250

Method of identifying a whiteboard based on image tokens

Non-Final OA (§103)
Filed: Nov 01, 2023
Examiner: BUDISALICH, ANDREW STEVEN
Art Unit: 2662
Tech Center: 2600 (Communications)
Assignee: Kneron (Taiwan) Co., Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
Grant Probability With Interview: 87%

Examiner Intelligence

Career Allow Rate: 78%, above average (36 granted / 46 resolved; +16.3% vs TC avg)
Interview Lift: +8.9%, a moderate lift, for resolved cases with interview
Typical Timeline: 2y 9m average prosecution; 35 applications currently pending
Career History: 81 total applications across all art units

Statute-Specific Performance

§101: 14.5% (-25.5% vs TC avg)
§103: 65.6% (+25.6% vs TC avg)
§102: 5.2% (-34.8% vs TC avg)
§112: 13.0% (-27.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 46 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/10/2026 has been entered.

Status of Claims

Claims 1-3, 7-12, and 14 are pending. Claims 4-6 and 13 are canceled.

Response to Arguments

Applicant's arguments, see pp. 5-8, filed 02/10/2026, with respect to the rejections of Claims 1-12 and 14 under 35 U.S.C. 103 have been fully considered but are moot because Applicant's amendments have altered the scope of the claims and therefore necessitated new grounds of rejection, which are presented below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 7, 11, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Oules et al. (US 11790572 B1) in view of Sasaki et al. (US 8564674 B2), Kim et al. (US 20190303708 A1), Kunio et al. (US 20220346885 A1), Taylor et al. (US 20200121228 A1), Lavallee et al. (US 20230277271 A1), Klinger et al. (US 20230196609 A1), and Alzen et al. (US 20130114785 A1).

Regarding Claim 1, Oules teaches "A method of identifying a whiteboard based on image tokens comprising: setting N image tokens on the whiteboard; obtaining an image including the whiteboard from a camera" (Oules, FIG. 4, Abstract, and Col. 6 lines 20-35, teaches that the depiction of a physical whiteboard may be captured by an image capture device to generate a virtual whiteboard, wherein one or more markers may be used to facilitate the detection/analysis of the visual content and the physical whiteboard may include four markers at its corners, i.e., identifying a whiteboard based on image tokens being the markers, wherein the image tokens are set on the whiteboard and an image including the whiteboard is obtained from a camera).

However, Oules does not explicitly teach "detecting image shaking of the image; only if image shaking of the image is not detected, detecting image tokens of the whiteboard by using a machine learning model; if only M image tokens are detected, generating N-M image tokens either automatically or after obtaining a user permission; applying token tracking to the N image tokens to enhance stability and robustness of token detection; calculating coordinates of the N image tokens; determining an optimal mapping matrix based on the coordinates of the N image tokens; mapping the image based on the optimal mapping matrix to generate a mapped image of the whiteboard; and displaying the mapped image on a screen; wherein M < N, M is a positive integer, and N is an integer larger than 1".

In an analogous field of endeavor, Sasaki teaches "detecting image shaking of the image" (Sasaki, Claim 1, teaches detecting image shaking with an image shaking detection unit that detects image shaking based on the size of a motion vector representing the movement of each pixel, detected from a plurality of successively obtained image frames, wherein the image shaking detection unit compares the motion vector with a threshold to determine whether image shaking is detected or has disappeared).
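Sasaki's test reduces to comparing motion-vector magnitudes between successive frames against a threshold. A minimal sketch of that idea follows; the function name, array layout, and threshold value are hypothetical illustrations, not taken from any cited reference:

```python
import numpy as np

def is_shaking(motion_vectors: np.ndarray, threshold: float = 2.0) -> bool:
    """Return True (shaking detected) if the mean per-pixel motion-vector
    magnitude between two successive frames exceeds the threshold.

    motion_vectors: array of shape (H, W, 2) holding per-pixel (dx, dy)
    displacements estimated between consecutive frames.
    """
    magnitudes = np.linalg.norm(motion_vectors, axis=-1)
    return float(magnitudes.mean()) > threshold

# A still scene: near-zero motion everywhere, so no shaking is detected.
still = np.zeros((4, 4, 2))
# A shaking scene: every pixel displaced by (3, 4), magnitude 5.
shaking = np.tile(np.array([3.0, 4.0]), (4, 4, 1))
```

In the claimed method, a gate like this would sit in front of the token-detection step, so the machine learning model only runs on frames judged stable.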
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Oules by including the detection of image shaking taught by Sasaki. One of ordinary skill in the art would be motivated to combine the references since it improves processing efficiency and speed (Sasaki, Col. 11 lines 20-29, teaches the motivation of combination to be to improve processing efficiency and processing speed).

However, the combination of Oules in view of Sasaki does not explicitly teach "only if image shaking of the image is not detected, detecting image tokens of the whiteboard by using a machine learning model; if only M image tokens are detected, generating N-M image tokens either automatically or after obtaining a user permission; applying token tracking to the N image tokens to enhance stability and robustness of token detection; calculating coordinates of the N image tokens; determining an optimal mapping matrix based on the coordinates of the N image tokens; mapping the image based on the optimal mapping matrix to generate a mapped image of the whiteboard; and displaying the mapped image on a screen; wherein M < N, M is a positive integer, and N is an integer larger than 1".

In an analogous field of endeavor, Kim teaches "only if image shaking of the image is not detected, detecting image tokens of the whiteboard" (Kim, Abstract and Para. 62, teaches a motion sensor that detects motion of the electronic device and checks stability of the electronic device on the basis of the motion vector, wherein when motion of the electronic device is less than or equal to a threshold value, a motion value may be transmitted to allow the image recognition engine to detect an object, i.e., detecting objects only if image shaking of the image is not detected, the shaking being motion of the image-capturing electronic device above a threshold).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Oules and Sasaki, wherein the image frames are of a whiteboard, the objects are tokens, and image shaking is detected by comparing the motion vector to a threshold, by including the detection of objects in the image only if image shaking is not detected, as taught by Kim. One of ordinary skill in the art would be motivated to combine the references since it improves user convenience (Kim, Para. 4, teaches the motivation of combination to be to improve user convenience).

However, the combination of Oules in view of Sasaki and Kim does not explicitly teach "detecting image tokens of the whiteboard by using a machine learning model; if only M image tokens are detected, generating N-M image tokens either automatically or after obtaining a user permission; applying token tracking to the N image tokens to enhance stability and robustness of token detection; calculating coordinates of the N image tokens; determining an optimal mapping matrix based on the coordinates of the N image tokens; mapping the image based on the optimal mapping matrix to generate a mapped image of the whiteboard; and displaying the mapped image on a screen; wherein M < N, M is a positive integer, and N is an integer larger than 1".

In an analogous field of endeavor, Kunio teaches "detecting image tokens of the whiteboard by using a machine learning model" (Kunio, Para. 13, teaches applying machine learning to identify one or more markers in angiography image frames, i.e., detecting image tokens using a machine learning model) and "calculating coordinates of the N image tokens" (Kunio, Para. 17, teaches outputting spatial coordinates defining the marker locations, i.e., calculating coordinates of the image tokens).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Oules, Sasaki, and Kim, wherein the image frames are of a whiteboard and the objects are only detected if image shaking is not detected, by including the detection of image tokens or markers using a machine learning model and the calculation of the coordinates of the tokens taught by Kunio. One of ordinary skill in the art would be motivated to combine the references since it improves detection (Kunio, Abstract, teaches the motivation of combination to be to improve or optimize marker detection and coregistration).

However, the combination of Oules in view of Sasaki, Kim, and Kunio does not explicitly teach "if only M image tokens are detected, generating N-M image tokens either automatically or after obtaining a user permission; applying token tracking to the N image tokens to enhance stability and robustness of token detection; determining an optimal mapping matrix based on the coordinates of the N image tokens; mapping the image based on the optimal mapping matrix to generate a mapped image of the whiteboard; and displaying the mapped image on a screen; wherein M < N, M is a positive integer, and N is an integer larger than 1".

In an analogous field of endeavor, Taylor teaches "if only M image tokens are detected, generating N-M image tokens either automatically or after obtaining a user permission" (Taylor, Paras. 92 and 99, teaches calibration frames wherein the upper two corners of each frame portion include a fiducial mark and the inner bottom corner of each frame portion also includes a fiducial mark, in which the arrangement and number of fiducials may vary and fiducials may be placed at one or more corners of frame portions, and wherein if any fiducials are not found due to error, their locations may be interpolated or extrapolated based on the locations of the discovered fiducial marks and the known geometry of the calibration frames, the corners of a missing fiducial being estimated using the intersections of lines of known fiducials along axes, i.e., if only M image tokens are detected or found, generating the remaining image tokens automatically by interpolating or extrapolating the positions of the missing fiducials from the known and detected fiducials), and "wherein M < N, M is a positive integer, and N is an integer larger than 1" (Taylor, Paras. 92 and 99, id., i.e., the number of fiducials or image tokens is an integer larger than 1 and is greater than the number of fiducials which are not found, which is a positive integer).
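Taylor's estimation of an undetected fiducial from the detected ones and the known frame geometry can be illustrated with the simplest case: a rectangular board whose four corner tokens form a parallelogram in the image, so a missing fourth corner follows from the other three. This is a sketch under that assumed geometry, not code from any cited reference:

```python
import numpy as np

def complete_fourth_corner(p_a, p_b, p_c):
    """Estimate the missing fourth corner of a parallelogram from the
    three detected corners, where p_b is adjacent to both p_a and p_c.
    Parallelogram rule: p_d = p_a + p_c - p_b.
    """
    return np.asarray(p_a) + np.asarray(p_c) - np.asarray(p_b)

# Three detected corners of a unit square; the missing corner is opposite
# p_b, so it should come out at (1, 1).
missing = complete_fourth_corner([0, 1], [0, 0], [1, 0])
```

Under perspective (rather than affine) distortion, the same idea generalizes to intersecting the lines through known fiducials, as Taylor describes.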
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Oules in view of Sasaki, Kim, and Kunio by including the automatic generation of image tokens which were not detected but expected to be present, in which the number of image tokens expected is greater than 1 and the number not detected is positive and less than the number expected, as taught by Taylor. One of ordinary skill in the art would be motivated to combine the references since it improves operation of the system (Taylor, Para. 31, teaches the motivation of combination to be to improve operation of the system by calibrating image data).

However, the combination of Oules in view of Sasaki, Kim, Kunio, and Taylor does not explicitly teach "applying token tracking to the N image tokens to enhance stability and robustness of token detection; determining an optimal mapping matrix based on the coordinates of the N image tokens; mapping the image based on the optimal mapping matrix to generate a mapped image of the whiteboard; and displaying the mapped image on a screen".

In an analogous field of endeavor, Lavallee teaches "applying token tracking to the N image tokens to enhance stability and robustness of token detection" (Lavallee, Paras. 94 and 98, teaches tracking a pattern of optical markers sufficient to allow robust and accurate detection and localization of the tracking pattern, wherein a greater number of markers may increase the robustness and accuracy of detection, i.e., applying token tracking, being the tracking pattern, to the image tokens to enhance stability and robustness of detection).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Oules, Sasaki, Kim, Kunio, and Taylor by including the token tracking to enhance stability and robustness of detection taught by Lavallee. One of ordinary skill in the art would be motivated to combine the references since it increases robustness and accuracy (Lavallee, Para. 94, teaches the motivation of combination to be to increase robustness and accuracy of the detection).

However, the combination of Oules in view of Sasaki, Kim, Kunio, Taylor, and Lavallee does not explicitly teach "determining an optimal mapping matrix based on the coordinates of the N image tokens; mapping the image based on the optimal mapping matrix to generate a mapped image of the whiteboard; and displaying the mapped image on a screen".

In an analogous field of endeavor, Klinger teaches "determining an optimal mapping matrix based on the coordinates of the N image tokens" (Klinger, Para. 10, teaches determining marker positions and determining a transformation matrix as a function of the marker positions, i.e., determining an optimal mapping matrix, being the transformation matrix, based on coordinates of the image tokens being the marker positions) and "mapping the image based on the optimal mapping matrix to generate a mapped image of the whiteboard" (Klinger, Para. 10, teaches the transformation matrix maps the markers on the object onto the marker display in the image, i.e., mapping the image based on the optimal mapping matrix, being the transformation matrix, to generate a mapped image).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Oules, Sasaki, Kim, Kunio, Taylor, and Lavallee, wherein the image is of a whiteboard, by including the mapping of the image based on a mapping matrix determined from the coordinates of the tokens to generate a mapped image, as taught by Klinger. One of ordinary skill in the art would be motivated to combine the references since it improves reliability of detection (Klinger, Para. 9, teaches the motivation of combination to be to improve the reliability of the detection of the markers under different environmental conditions).

However, the combination of Oules in view of Sasaki, Kim, Kunio, Taylor, Lavallee, and Klinger does not explicitly teach "and displaying the mapped image on a screen". In an analogous field of endeavor, Alzen teaches "and displaying the mapped image on a screen" (Alzen, Abstract, teaches image content mapped onto an observation plane is displayed with the mapped image content on a screen, i.e., displaying the mapped image on a screen).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Oules, Sasaki, Kim, Kunio, Taylor, Lavallee, and Klinger by including the display of the mapped image on a screen taught by Alzen. One of ordinary skill in the art would be motivated to combine the references since it enables example image representation (Alzen, Abstract, teaches the motivation of combination to be to enable example image representation).

Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Regarding Claim 2, the combination of Oules in view of Sasaki, Kim, Kunio, Taylor, Lavallee, Klinger, and Alzen teaches "The method of claim 1, wherein the N image tokens are magnets, magic tapes, stickers, patterns such as rectangular symbols, star symbols, diamond symbols, or line symbols drawn by a pen (a whiteboard pen or a color pen), gestures, or display of electronic screens" (Oules, FIG. 4 and Col. 6 lines 20-35, teaches the one or more markers being machine-readable optical codes such as barcodes or QR codes, i.e., the image tokens are rectangular symbols). Please note that the exemplary "such as" language does not limit the scope of the claim.
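The "optimal mapping matrix" limitation discussed above corresponds to a planar homography: a 3x3 matrix determined from the coordinates of four corner tokens that maps the skewed whiteboard region onto a rectified output. A minimal sketch of estimating one via the direct linear transform (DLT) follows; the coordinate values are made up for illustration and are not from any cited reference:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 matrix H with dst ~ H @ src in homogeneous
    coordinates, from four point correspondences, via the DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H is the null-space vector of A: the smallest right singular vector.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pt):
    """Map a 2D point through H, normalizing the homogeneous coordinate."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Detected corner tokens of a skewed whiteboard image (hypothetical values)
# mapped onto the corners of an 800x600 rectified output.
src = [(120, 80), (700, 60), (720, 540), (100, 500)]
dst = [(0, 0), (800, 0), (800, 600), (0, 600)]
H = homography_from_points(src, dst)
```

Applying `H` to every pixel of the input (what the later Peng reference calls a warp perspective transformation) then produces the mapped whiteboard image.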
Regarding Claim 3, the combination of Oules in view of Sasaki, Kim, Kunio, Taylor, Lavallee, Klinger, and Alzen teaches "The method of claim 1, wherein the machine learning model is artificial intelligence (AI) model, deep learning model, computer vision model, or you only look once (YOLO) model" (Kunio, Abstract, teaches artificial intelligence applications including deep or machine learning and computer vision, i.e., the machine learning model is an AI model, deep learning model, or computer vision model). The proposed combination, as well as the motivation for combining the Oules, Sasaki, Kim, Kunio, Taylor, Lavallee, Klinger, and Alzen references presented in the rejection of Claim 1, applies to Claim 3. Thus, the method recited in Claim 3 is met by Oules in view of Sasaki, Kunio, Taylor, Lavallee, Klinger, and Alzen.

Regarding Claim 7, the combination of Oules in view of Sasaki, Kim, Kunio, Taylor, Lavallee, Klinger, and Alzen teaches "The method of claim 1, wherein setting the N image tokens on the whiteboard is setting the N image tokens at the corners of the whiteboard" (Oules, FIG. 4 and Col. 6 lines 20-35, teaches the physical whiteboard including four markers at the corners of the physical whiteboard, i.e., setting the plurality of image tokens at the corners of the whiteboard).

Regarding Claim 11, the combination of Oules in view of Sasaki, Kim, Kunio, Taylor, Lavallee, Klinger, and Alzen teaches "The method of claim 1, wherein the whiteboard is a glass whiteboard, a mobile whiteboard, a projection screen, or a mobile green board" (Oules, FIGS. 4 and 6 and Col. 2 lines 33-46, teaches a projection component configured to project the changes to the virtual whiteboard on top of the physical whiteboard and/or other locations, i.e., the whiteboard is a projection screen displaying or projecting changes of the whiteboard).

Regarding Claim 14, the combination of Oules in view of Sasaki, Kim, Kunio, Taylor, Lavallee, Klinger, and Alzen teaches "The method of claim 1, further comprising checking if the N image tokens were detected" (Kunio, Para. 19, teaches checking whether the detected marker location defining detected results is correct or accurate, i.e., checking if the image tokens were detected). The proposed combination, as well as the motivation for combining the Oules, Sasaki, Kim, Kunio, Taylor, Lavallee, Klinger, and Alzen references presented in the rejection of Claim 1, applies to Claim 14. Thus, the method recited in Claim 14 is met by Oules in view of Sasaki, Kunio, Taylor, Lavallee, Klinger, and Alzen.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Oules in view of Sasaki, Kim, Kunio, Taylor, Lavallee, Klinger, Alzen, and Peng et al. (US 20240314277 A1).

Regarding Claim 8, the combination of Oules in view of Sasaki, Kim, Kunio, Taylor, Lavallee, Klinger, and Alzen does not explicitly teach "The method of claim 1, wherein determining an optimal mapping matrix based on the coordinates of the N image tokens is determining a warp perspective transform based on the coordinates of the N image tokens". In an analogous field of endeavor, Peng teaches "The method of claim 1, wherein determining an optimal mapping matrix based on the coordinates of the N image tokens is determining a warp perspective transform based on the coordinates of the N image tokens" (Peng, Paras. 24 and 36, teaches that a one-to-one mapping relationship between the original image and the projection position is determined by calculating the homography matrix, wherein the system may determine coordinates of the projected image, the position coordinates of the corrected distortion output corners can be calculated to compute a homography matrix, and the system may perform warp perspective transformation to correct every output projected frame, i.e., determining the optimal mapping matrix is determining a warp perspective transformation based on image coordinates).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Oules in view of Sasaki, Kim, Kunio, Taylor, Lavallee, Klinger, and Alzen, wherein the image coordinates are coordinates of the image tokens, by including the determination of a mapping matrix from coordinates as the determination of a warp perspective transform, as taught by Peng. One of ordinary skill in the art would be motivated to combine the references since it determines an improved area (Peng, Para. 25, teaches the motivation of combination to be to determine a new and improved projection area). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Oules in view of Sasaki, Kim, Kunio, Taylor, Lavallee, Klinger, Alzen, and Kumar (US 20220198622 A1).

Regarding Claim 9, the combination of Oules in view of Sasaki, Kim, Kunio, Taylor, Lavallee, Klinger, and Alzen does not explicitly teach "The method of claim 1, further comprising post-processing the mapped image of the whiteboard". In an analogous field of endeavor, Kumar teaches "The method of claim 1, further comprising post-processing the mapped image of the whiteboard" (Kumar, Para. 21, teaches a post-processing engine applying post-processing effects to each raw HDR frame to produce a complete HDR frame).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Oules in view of Sasaki, Kim, Kunio, Taylor, Lavallee, Klinger, and Alzen, wherein the frame or image is a mapped image of the whiteboard, by including the post-processing of the mapped image taught by Kumar. One of ordinary skill in the art would be motivated to combine the references since it improves the quality of the image (Kumar, Para. 21, teaches the motivation of combination to be to improve the quality of the frames). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Regarding Claim 10, the combination of Oules in view of Sasaki, Kim, Kunio, Taylor, Lavallee, Klinger, Alzen, and Kumar teaches "The method of claim 9, wherein post-processing the mapped image of the whiteboard comprises: blurring, sharpening, and/or mosaicking the mapped image of the whiteboard" (Kumar, Para. 21, teaches post-processing effects including blurring and sharpening, i.e., post-processing the image includes blurring or sharpening the image). The proposed combination, as well as the motivation for combining the Oules, Sasaki, Kim, Kunio, Taylor, Lavallee, Klinger, Alzen, and Kumar references presented in the rejection of Claim 9, applies to Claim 10. Thus, the method recited in Claim 10 is met by Oules in view of Sasaki, Kunio, Taylor, Lavallee, Klinger, Alzen, and Kumar.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Oules in view of Sasaki, Kim, Kunio, Taylor, Lavallee, Klinger, Alzen, and Li et al. (US 20240062518 A1).

Regarding Claim 12, the combination of Oules in view of Sasaki, Kim, Kunio, Taylor, Lavallee, Klinger, and Alzen does not explicitly teach "The method of claim 1, wherein the token tracking is applied based on a center point based tracking system, or an intersection over union based tracking system". In an analogous field of endeavor, Li teaches "The method of claim 1, wherein the token tracking is applied based on a center point based tracking system, or an intersection over union based tracking system" (Li, Para. 7, teaches tracking the detected objects using intersection over union of previous and current frames of the object, i.e., tracking is applied based on an intersection over union based tracking system).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Oules in view of Sasaki, Kim, Kunio, Taylor, Lavallee, Klinger, and Alzen, wherein the objects detected and tracked are tokens, by including tracking based on an intersection over union based tracking system, as taught by Li. One of ordinary skill in the art would be motivated to combine the references since it achieves higher accuracy (Li, Para. 46, teaches the motivation of combination to be to achieve higher accuracy for the model). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW STEVEN BUDISALICH, whose telephone number is (703) 756-5568. The examiner can normally be reached Monday - Friday, 8:30am-5:00pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW S BUDISALICH/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662
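The intersection-over-union test that Li applies for tracking (cited above against Claim 12) scores how much a detection in the current frame overlaps a token's box from the previous frame. A minimal sketch, with hypothetical box coordinates:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A token box in the previous frame vs. the current frame: a detection is
# treated as the same token when the overlap score is high enough.
prev_box = (0, 0, 10, 10)
curr_box = (5, 0, 15, 10)
score = iou(prev_box, curr_box)  # intersection 50, union 150, so 1/3
```

An IoU-based tracker would match each current detection to the previous token whose box gives the highest score above some threshold, stabilizing token identities across frames.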

Prosecution Timeline

Nov 01, 2023: Application Filed
Oct 23, 2025: Non-Final Rejection (§103)
Dec 08, 2025: Response Filed
Jan 07, 2026: Final Rejection (§103)
Feb 10, 2026: Request for Continued Examination
Feb 18, 2026: Response after Non-Final Action
Feb 27, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602820: METHOD AND APPARATUS WITH ATTENTION-BASED OBJECT ANALYSIS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12597106: METHOD AND APPARATUS FOR IDENTIFYING DEFECT GRADE OF BAD PICTURE, AND STORAGE MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12592078: VIDEO MONITORING DEVICE, VIDEO MONITORING SYSTEM, VIDEO MONITORING METHOD, AND STORAGE MEDIUM STORING VIDEO MONITORING PROGRAM
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12586232: METHOD FOR OBJECT DETECTION USING CROPPED IMAGES
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12567151: Microscopy System and Method for Instance Segmentation
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 87% (+8.9%)
Median Time to Grant: 2y 9m
PTA Risk: High

Based on 46 resolved cases by this examiner. Grant probability derived from career allow rate.
