Prosecution Insights
Last updated: April 19, 2026
Application No. 18/597,254

VIDEO ANOMALY DETECTION

Non-Final OA: §102, §103
Filed: Mar 06, 2024
Examiner: SANTOS, DANIEL JOSEPH
Art Unit: 2667
Tech Center: 2600 — Communications
Assignee: Milestone Systems A/S
OA Round: 1 (Non-Final)

Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (above average; 22 granted / 28 resolved; +16.6% vs TC avg)
Interview Lift: +22.9% (resolved cases with interview)
Typical Timeline: 2y 10m avg prosecution; 33 currently pending
Career History: 61 total applications across all art units

Statute-Specific Performance

§101: 13.9% (-26.1% vs TC avg)
§103: 41.0% (+1.0% vs TC avg)
§102: 18.3% (-21.7% vs TC avg)
§112: 24.4% (-15.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 28 resolved cases

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDSs) submitted on March 6, 2024 and September 12, 2024 are in compliance with 37 CFR 1.97 and 1.98 and therefore have been considered by the examiner and placed in the file.

Drawings

The drawings are objected to because lines in Figs. 3-9F are not sufficiently dense, dark and thick to give them satisfactory reproduction characteristics, as required by 37 CFR 1.84(l). Additionally, the pictures shown in Figs. 3, 5A-6C and 8-9F appear to be photographs or computer renderings that are not of sufficient quality to allow all details to be reproduced in a printed patent. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application.

Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Interpretation

The claims in this application are given their broadest reasonable interpretation (BRI) using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The BRI of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification. In the following, some of the terms in the claims have been given BRIs in light of the specification. These BRIs are used for purposes of searching for prior art and examining the claims, but cannot be incorporated into the claims. Should Applicant believe that different interpretations are appropriate, Applicant should point to the portions of the specification that clearly support a different interpretation.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-6, 15 and 17-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by U.S. Pat. No. 12,511,630 B2 by Marinkovich et al. (hereinafter referred to as “Marinkovich”).
Regarding claim 1, Marinkovich discloses a computer implemented method of Video Anomaly Detection, VAD (para. [0490]: “[i]n some embodiments, the machine learning model 3000 may be defined via anomaly detection, i.e., by identifying rare and/or outlier instances of one or more items, events and/or observations.”), the method comprising: detecting and tracking at least one object of interest across consecutive frames of video surveillance data (para. [0367]: “[f]or example, the artificial intelligence system 1160 may process image frames of the video feed to find markings (such as produce labels, SKUs, images, logos, or the like), shapes (such as packages of a particular size or shape), activities (such as loading or unloading activities) or the like that may indicate that a product has moved through the loading dock.”; see also para. [0297] discussing motion tracking of objects and people); performing VAD using a Probabilistic Graphical Model, PGM, based on the said at least one object that has been detected and tracked (para. [0483]: “[i]n some embodiments, the machine learning model 3000 may be and/or include a Bayesian network. The Bayesian network may be a probabilistic graphical model configured to represent a set of random variables and conditional independence of the set of random variables. The Bayesian network may be configured to represent the random variables and conditional independence via a directed acyclic graph. The Bayesian network may include one or both of a dynamic Bayesian network and an influence diagram.”).

Regarding claim 2, Marinkovich discloses that the probabilistic graphical model (PGM) comprises a Discrete Bayesian Network, DBN (para. [0483]; para. [11522] discusses that the Bayesian model can be a hidden Markov model, which is a discrete Bayesian model in that the hidden states are discrete).

Regarding claim 3, Marinkovich discloses that the PGM comprises a computer-readable Directed Acyclic Graph, DAG (para. [0483]: “[t]he Bayesian network may be a probabilistic graphical model configured to represent a set of random variables and conditional independence of the set of random variables. The Bayesian network may be configured to represent the random variables and conditional independence via a directed acyclic graph.”).

Regarding claim 4, the BRI for modeling the spatial dimension, based on para. [0070] of the present disclosure, is that it means modeling the position of an object being tracked across consecutive frames. The BRI for modeling the temporal dimension, based on para. [0021] of the present disclosure, is that it means modeling movement of the object over time as the object is tracked across consecutive frames. Para. [2062] of Marinkovich discusses the dynamic vision system 11300 temporally combining the outputs of sensors using conditional probabilities to create a combined view of the object being tracked across frames in terms of time, position, orientation and motion. This spatial and temporal dimension information is modeled in the PGM of Marinkovich (para. [2045] discusses modeling of the spatial and temporal dimension information with reference to Fig. 124).

Regarding claim 5, Marinkovich discloses generating bounding boxes representing at least areas in the frames where the said at least one object has been detected (para. [1566]: “[i]n embodiments, an object detection model extends the functionality of CNN based image classification neural network models by not only classifying objects but also determining their locations in an image in terms of bounding boxes.”).

Regarding claim 6, Marinkovich discloses that the PGM models at least a spatial dimension for performing VAD within each of the said consecutive frames (para. [1566], the bounding boxes used in each frame for object segmentation/classification indicate that the PGM models the spatial dimension for each of the consecutive frames) and a temporal dimension for performing VAD across the said consecutive frames (para. [2062] of Marinkovich discusses the dynamic vision system 11300 temporally combining the outputs of sensors using conditional probabilities as objects are being tracked across consecutive frames to create a combined view of the object being tracked across frames in terms of time, position, orientation and motion across consecutive frames), the method further comprising: generating bounding boxes representing at least areas in the frames where the said at least one object has been detected (para. [1566]: “[i]n embodiments, an object detection model extends the functionality of CNN based image classification neural network models by not only classifying objects but also determining their locations in an image in terms of bounding boxes.”), wherein each of the spatial and temporal dimensions is defined by a plurality of variables related to characteristics of the bounding boxes, characteristics of the respective frames in which these boxes are and/or characteristics of the object that has been detected and tracked (para. [0483], the spatial and temporal dimensions are defined in the PGM by a plurality of random variables having conditional independence modeled by the directed acyclic graph of the PGM; these variables are related to at least characteristics of the bounding boxes used to detect and track the object within a frame and across consecutive frames; para. [2062] discusses the relationship between the conditional probabilities of the PGM and the spatial and temporal dimensions of the objects being tracked using bounding boxes).
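As a concrete illustration of the kind of per-detection variables discussed above (characteristics of a bounding box and of the frame it appears in), the following Python sketch collects a few spatial-dimension variables for one detection. All names here (`BoundingBox`, `spatial_variables`) are illustrative assumptions, not drawn from Marinkovich or the claims.

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    """Axis-aligned box in pixel coordinates (hypothetical convention)."""
    x: float       # top-left corner, horizontal
    y: float       # top-left corner, vertical
    width: float
    height: float

def spatial_variables(frame_id: int, box: BoundingBox) -> dict:
    """Collect spatial-dimension variables for one detection in one frame."""
    return {
        "frame_id": frame_id,
        "box_size": box.width * box.height,       # bounding box area
        "aspect_ratio": box.width / box.height,   # width-to-height ratio
        "position": (box.x + box.width / 2,       # box centre within the frame
                     box.y + box.height / 2),
    }
```

For the temporal dimension, the same variables would be compared across consecutive frames as the object is tracked.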
Regarding claim 15, Marinkovich discloses that at least some values of the said variables are determined and discretized in order to perform VAD using the PGM (see the rejection of claims 1-3, the PGM can be implemented as a hidden Markov model for which values of the random variables of the DAG of the PGM are determined and discretized in order to enable the PGM to perform VAD).

Regarding claim 17, Marinkovich discloses using parallel processing to perform VAD (para. [1267] discloses using a plurality of neural networks in parallel in the cloud to provide “massively parallel computational capability”).

Regarding claim 18, to the extent that claim 18 recites the same limitations that are recited in claim 1, the rejection of claim 1 applies mutatis mutandis to claim 18. Claim 18 recites a computer readable storage medium storing a program for causing a computer to perform the operations recited in claims 1 and 18. Marinkovich discloses a computer readable storage medium storing a program for causing a computer to perform the operations recited in claims 1 and 18 (para. [2784]).

Regarding claim 19, the rejection of claim 1 applies mutatis mutandis to claim 19.

Regarding claim 20, this claim combines the limitations of claims 2 and 4. Accordingly, the rejections of claims 2 and 4 apply mutatis mutandis to claim 20.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 7-10 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Marinkovich in view of U.S. Pat. No. 9,483,839 B1 to Kwon et al. (hereinafter referred to as “Kwon”).

Regarding claim 7, as indicated above in the rejection of claim 5, Marinkovich discloses using bounding boxes to perform object tracking, but does not explicitly disclose dividing the consecutive frames into uniform grid structures of adjacent grid cells and determining, for each bounding box, which cells intersect with at least a part of that box, for performing VAD. Kwon, in the same field of endeavor of detecting and tracking objects in video frames, discloses dividing the consecutive frames into uniform grid structures of adjacent grid cells (Col. 8, lines 30-56, Fig. 2, block 206, Figs. 3A and 3B, the frames 120 are divided into uniform grid structures of cells 304), and determining, for each bounding box, which cells intersect with at least a part of that box (Col. 9, lines 17-62, Figs. 4A and 4B, the bounding box is shifted across the uniform grid structure and a determination is made of which cells 304 intersect the bounding box, i.e., which cells are overlapping cells 306).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to modify the tracking algorithm performed by Marinkovich to divide the image frames into a uniform grid structure of cells and to shift the bounding box across each frame to determine which cells intersect with the bounding box as taught by Kwon. One of ordinary skill in the art would have been motivated to make the modification to improve the robustness of the tracking algorithm by successfully tracking objects across frames even when the objects are partially occluded as taught by Kwon. The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods to yield predictable results (modifying the tracking software executed by the system of Marinkovich to divide image frames into a grid structure of cells, shift the bounding box across the divided grid structure and determine which cells intersect the bounding box).

Regarding claim 8, although Marinkovich discloses using bounding boxes to perform object tracking, Marinkovich does not explicitly disclose dividing the frames into uniform grid structures of cells and considering whether the whole bounding box is partially or fully intersected by the cells. Kwon discloses that for each bounding box, the whole bounding box is considered for determining which cells partially or fully intersect with that box (Col. 9, lines 17-62, Figs. 4A and 4B, once the bounding box size is chosen, the bounding box is shifted from the left top corner of the image frame across the entire image frame to the bottom right corner of the image frame by shifting the box “over and down one cell at a time” and determining the intersections between the cells and the box).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to modify the tracking algorithm performed by Marinkovich to divide the image frames into a uniform grid structure of cells and to shift the bounding box across the entirety of the grid structure of each frame to determine which cells intersect with the bounding box as taught by Kwon. One of ordinary skill in the art would have been motivated to make the modification to improve the robustness of the tracking algorithm by successfully tracking objects across frames even when the objects are partially occluded as taught by Kwon. The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods to yield predictable results (modifying the tracking software executed by the system of Marinkovich to divide image frames into a grid structure of cells, shift the bounding box across the divided grid structure and determine which cells intersect the bounding box).

Regarding claim 9, although Marinkovich discloses using bounding boxes to perform object classification and tracking, Marinkovich does not explicitly disclose dividing the frames into uniform grid structures of cells and determining intersections between only a bottom part of the bounding box and the cells. As indicated above, Kwon discloses dividing consecutive image frames into uniform grid structures of cells, choosing a suitable bounding box size and shifting the bounding box across the entirety of the grid structure while determining which cells intersect with the bounding box. Kwon does not explicitly disclose considering only the bottom part of the bounding box when determining which cells intersect the bounding box.
However, Kwon discusses using multiple sizes of bounding boxes and considerations that are taken into account when choosing the bounding box size (Col. 9, lines 1-16: “depending on various conditions such as a location (center, left, or right) and size of the object/target within the image frame, viewpoints (view angles) toward the target, or existence of occlusions and clutters, it may not be guaranteed to have cell-to-cell matching correspondence among different image frames of the same target. To make target signature matching robust, multiple sub-regions are assigned in overlapped and multiple-sized ways. An example purpose of overlapping is to include the same features in many sub-regions, and an example purpose of multiple-sizes is to consider that an effective number of cells in a sub-region varies due to background inclusion or partial occlusion in the image frame.”).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present disclosure to try only considering the bottom part of the bounding box for determining which cells intersect the bounding box in Kwon, since Kwon discusses various conditions that are taken into account when selecting configurations of the bounding boxes, since there are a finite number of bounding box configurations that can be used and since various configurations of bounding boxes can be used with a reasonable expectation of success. (See KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007) discussing “obvious to try” being a valid rationale for an obviousness finding; see also MPEP 2144.05.) A person of ordinary skill in the art would have been motivated to consider intersections of cells with only the bottom part of the box in order to reduce the number of tracking computations that have to be made and to avoid noise associated with occlusions in cases where all except the bottom part of the target image is occluded.
The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods to yield predictable results (modifying the tracking software executed by the system of Marinkovich to divide image frames into a grid structure of cells, shift the bounding box across the divided grid structure and determine which cells intersect only the bottom part of the bounding box).

Regarding claim 10, Marinkovich does not explicitly disclose that the spatial dimension is defined by a plurality of variables chosen amongst the group comprising a frame identifier, a scene identifier, a grid cell identifier, an intersection area representing an area of overlap between a bounding box and at least one grid cell, an object class representing a category of the object that has been detected and tracked, a bounding box size, and a bounding aspect ratio corresponding to a bounding box width-to-height ratio. Kwon discloses that the spatial dimension, the BRI for which is provided above, is defined by at least a grid cell identifier, an intersection area representing an area of overlap between a bounding box and at least one grid cell, and a bounding box size (Figs. 4A and 4B, Col. 9, lines 17-62 discuss the bounding box sizes and the intersection areas representing areas of overlap between the bounding box and the grid cells; since the intersections of the cells are being determined, the cells necessarily have cell identifiers associated with them in order to allow the system to determine where the intersections occurred; therefore, the spatial dimension is defined in terms of at least the bounding box sizes, the intersection areas representing areas of overlap between the bounding box and the grid cells, and the grid cell identifiers).
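The grid-cell bookkeeping described in the Kwon citations above (a frame divided into a uniform grid, with a determination of which cells a bounding box at least partially overlaps) can be sketched in Python. This is a generic illustration under assumed conventions (pixel coordinates, square cells indexed by column and row), not Kwon's actual algorithm.

```python
def intersecting_cells(box, cell_size, grid_cols, grid_rows):
    """Return (col, row) indices of every grid cell that the bounding box
    overlaps at least partially. box = (x, y, width, height) in pixels;
    the frame is divided into a uniform grid of cell_size x cell_size cells.
    Illustrative only; conventions are assumptions, not from the references."""
    x, y, w, h = box
    # Clamp the covered index range to the grid boundaries.
    first_col = max(0, int(x // cell_size))
    last_col = min(grid_cols - 1, int((x + w - 1) // cell_size))
    first_row = max(0, int(y // cell_size))
    last_row = min(grid_rows - 1, int((y + h - 1) // cell_size))
    return [(c, r)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)]
```

For example, a 10x10 box whose corner sits at (5, 5) on a grid of 10-pixel cells straddles four cells; restricting the check to the bottom part of the box, as discussed for claim 9, would simply narrow the row range considered.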
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to modify the tracking algorithm performed by Marinkovich to divide the image frames into a uniform grid structure of cells and to shift the bounding box across the entirety of the grid structure of each frame to determine which cells intersect with the bounding box as taught by Kwon. In such a modified system, the spatial dimension would necessarily be defined at least in terms of a grid cell identifier, an intersection area representing an area of overlap between a bounding box and at least one grid cell, and a bounding box size. One of ordinary skill in the art would have been motivated to make the modification to improve the robustness of the tracking algorithm by successfully tracking objects across frames even when the objects are partially occluded as taught by Kwon. The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods to yield predictable results (modifying the tracking software executed by the system of Marinkovich to divide image frames into a grid structure of cells, shift the bounding box across the divided grid structure and determine which cells intersect the bounding box).

Regarding claim 16, to the extent that claim 16 recites the same limitations that are recited in claim 7, the rejection of claim 7 applies mutatis mutandis to claim 16.
The only limitation that is recited in claim 16 that is not also recited in claim 7 is the limitation of, for at least one cell which intersects with a bounding box, displaying values of the variables in the said plurality of variables for that cell, where the variables are related to characteristics of the bounding boxes, characteristics of the respective frames in which these boxes are and/or characteristics of the object that has been detected and tracked. Marinkovich does not explicitly disclose displaying the values of these variables.

Kwon discloses that the system includes a display (Col. 4, lines 31-39, Fig. 1, system 100 includes display 125 that displays the reference image frame 120 and the candidate image frames 122) and discloses displaying values of variables related to characteristics of the sizes of the bounding boxes (top row of Fig. 4A shows a display of characteristics of the frame 120, of the 3x3 cell size of the bounding box and of the intersections between the cells and the 3x3 bounding box; bottom row of Fig. 4A shows a display of characteristics of the frame 120, of the 4x4 cell size of the bounding box and of the intersections between the cells and the 4x4 bounding box).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to modify the tracking algorithm performed by Marinkovich to divide the image frames into a uniform grid structure of cells, to shift the bounding box across the entirety of the grid structure of each frame to determine which cells intersect with the bounding box, and to display the corresponding characteristics as taught by Kwon.
One of ordinary skill in the art would have been motivated to make the modification to improve the robustness of the tracking algorithm by successfully tracking objects across frames even when the objects are partially occluded, and to display the bounding box, cell and intersection characteristics, as taught by Kwon. The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods to yield predictable results (modifying the tracking software executed by the system of Marinkovich to divide image frames into a grid structure of cells, shift the bounding box across the divided grid structure, determine which cells intersect the bounding box and display the corresponding steps and results).

Claims 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Marinkovich in view of U.S. Pat. No. 12,494,065 B2 to Fox (hereinafter referred to as “Fox”).

Regarding claim 11, as indicated above in the rejection of claim 4, Marinkovich discloses that the PGM models and tracks the spatial and temporal dimensions, and that the temporal dimension is defined by the movement direction of the object being tracked (para. [2062] of Marinkovich discusses the dynamic vision system 11300 temporally combining the outputs of sensors using conditional probabilities to create a combined view of the object being tracked across frames in terms of time, position, orientation and motion; motion of the tracked object combined with position of the tracked object provides movement direction). However, Marinkovich does not explicitly disclose that the temporal dimension is also defined by a velocity of the object that has been detected and tracked.
Fox, in the same field of endeavor of detecting and tracking objects in video frames, discloses modeling a temporal dimension that is defined in part by the velocity of the object being detected and tracked across video frames (Col. 2, line 61-Col. 3, line 62). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to modify the object detection and tracking algorithm performed by Marinkovich to include velocity of the object being detected and tracked as part of the temporal dimension being modeled by the PGM of Marinkovich. One of ordinary skill in the art would have been motivated to make the modification to improve the robustness of the tracking algorithm by also taking into account the velocity of the object being detected and tracked across consecutive video frames. The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods to yield predictable results (modifying the tracking software executed by the system of Marinkovich to calculate velocity of the object based on the object's sequential positions and the time intervals between changes of position as taught by Fox).

Regarding claim 12, as indicated above, Marinkovich discloses using bounding boxes to track objects across consecutive frames. However, Marinkovich does not explicitly disclose that the velocity is determined based on the velocity of a bounding box across consecutive images. Fox discloses that the velocity is determined based on velocity and movement of a bounding box containing the object being detected and tracked across consecutive frames (Col. 3, lines 11-40).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to modify the object detection and tracking algorithm performed by the Marinkovich system to include the velocity of the object being detected and tracked as part of the temporal dimension being modeled by the PGM of Marinkovich and to determine the velocity based on the velocity of the bounding box across consecutive frames. One of ordinary skill in the art would have been motivated to make the modification to improve the robustness of the tracking algorithm by also taking into account the velocity of the object being tracked in bounding boxes across consecutive video frames. The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods to yield predictable results (modifying the tracking software executed by the system of Marinkovich to calculate velocity of the object based on the velocity of bounding boxes in sequential positions and the corresponding time intervals between changes of position as taught by Fox).

Regarding claim 13, this claim combines the limitations of claims 7, 11 and 12. Accordingly, the rejections of claims 7, 11 and 12 apply mutatis mutandis to claim 13.

Regarding claim 14, to the extent that claim 14 recites the same limitations that are recited in claim 2, the rejection of claim 2 applies mutatis mutandis to claim 14. Regarding the limitation “wherein the DBN analyzes dependencies between the said variables by means of conditional probability distributions”, that is the function performed by DBNs. Therefore, since, as indicated above in the rejection of claim 2, Marinkovich discloses that the PGM comprises a DBN, Marinkovich discloses this limitation.
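The velocity determination discussed above for claims 11-12 (velocity derived from the movement of a bounding box across consecutive frames and the time interval between them) reduces to a displacement-over-time calculation, sketched below. The center-based formulation and function names are assumptions made for illustration, not Fox's actual computation.

```python
def box_center(box):
    """Center of an axis-aligned box given as (x, y, width, height)."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def velocity(prev_box, curr_box, dt):
    """Estimate object velocity (pixels per second) from the displacement
    of its bounding-box center between two frames dt seconds apart."""
    (px, py), (cx, cy) = box_center(prev_box), box_center(curr_box)
    return ((cx - px) / dt, (cy - py) / dt)
```

A box that moves 30 pixels to the right between frames half a second apart thus yields a horizontal velocity of 60 pixels per second, which could then be discretized (e.g., into slow/fast bins) as a temporal-dimension variable of a discrete PGM.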
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL J SANTOS whose telephone number is (571)272-2867. The examiner can normally be reached M-F 9-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matt Bella, can be reached at (571)272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL JOSEPH SANTOS/
Examiner, Art Unit 2667

/MATTHEW C BELLA/
Supervisory Patent Examiner, Art Unit 2667

Prosecution Timeline

Mar 06, 2024
Application Filed
Jan 08, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602737: RAPID RECONSTRUCTION OF HIGH RESOLUTION IMAGES FROM LOWER RESOLUTION IMAGES
2y 5m to grant • Granted Apr 14, 2026
Patent 12586385: SYSTEM AND METHOD FOR OCCLUSION DETECTION IN AUTONOMOUS VEHICLE OPERATION
2y 5m to grant • Granted Mar 24, 2026
Patent 12573174: IMAGE PROCESSING APPARATUS
2y 5m to grant • Granted Mar 10, 2026
Patent 12564307: IMAGE ANALYSIS PROCESSING APPARATUS, ENDOSCOPE SYSTEM, OPERATION METHOD OF IMAGE ANALYSIS PROCESSING APPARATUS, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
2y 5m to grant • Granted Mar 03, 2026
Patent 12561781: METHODS, SYSTEMS, AND COMPUTER PROGRAM PRODUCT FOR VALIDATING DRUG PRODUCT PACKAGE CONTENT USING TIERED EVALUATION FACTORS
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+22.9%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 28 resolved cases by this examiner. Grant probability derived from career allow rate.
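The headline grant probability can be reproduced from the career counts stated above (22 granted of 28 resolved); the exact rounding the tool applies is an assumption.

```python
# Career allow rate from the examiner's resolved-case counts shown above.
granted, resolved = 22, 28
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # prints "78.6%", shown as 79% in the summary
```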
