Prosecution Insights
Last updated: April 19, 2026
Application No. 18/624,023

METHODS AND SYSTEMS FOR COMBINING IMAGES TO DETECT MOVING OBJECTS DEPICTED IN VIDEO CAMERA DATA

Status: Non-Final OA (§103)
Filed: Apr 01, 2024
Examiner: ABDI, AMARA
Art Unit: 2668
Tech Center: 2600 — Communications
Assignee: Verkada Inc.
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 76%

Examiner Intelligence

Career Allow Rate: 83% — above average (677 granted / 816 resolved; +21.0% vs TC avg)
Interview Lift: -7.5% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 7m typical timeline; 33 applications currently pending
Total Applications: 849 (career history, across all art units)
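
As a quick sanity check, the headline probabilities follow from simple arithmetic on the career data; the projections footnote below states that grant probability is derived from the career allow rate. A minimal Python sketch of the likely calculation (the additive-lift assumption and variable names are ours, not the dashboard vendor's):

    # Hypothetical reconstruction of the dashboard's arithmetic; illustrative only.
    granted, resolved = 677, 816
    allow_rate = granted / resolved               # 0.8297 -> the "83%" shown
    interview_lift = -0.075                       # the "-7.5%" lift reported
    with_interview = allow_rate + interview_lift  # 0.7547 -> roughly the "76%" shown
    print(f"base: {allow_rate:.1%}, with interview: {with_interview:.1%}")

The small gap between the computed 75.5% and the displayed 76% is consistent with rounding of intermediate values.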

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 60.7% (+20.7% vs TC avg)
§102: 10.2% (-29.8% vs TC avg)
§112: 10.0% (-30.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 816 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 14-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Spears et al. (US-PGPUB 20230177726).

Regarding claim 1, Spears et al. discloses a non-transitory, processor-readable medium storing instructions that, when executed by a processor (see at least: Par. 0160-0161, "machine-readable medium 1022 ... capable of storing instructions"), cause the processor to: receive a video stream including a plurality of video frames that depicts an object in motion (see at least: Par. 0066-0067, image capture devices 104 (e.g., IR image capture devices) may capture images of a gas station view 600, where the input engine 202 may be configured to receive and decode images and/or video from the image capture device(s) 104 ..., and once the sets of images are stacked by the frame engine 206, areas of movement, such as gas leaks in areas 602 and 604, may be picked up [i.e., receiving a video stream, "the input engine 202 implicitly receives the video stream captured by the image capture devices 104", including a plurality of video frames that depicts an object in motion, "implicit by detecting areas of movement 602, 604, by the frame engine 206"]); encode, based on the first video frame, a first channel of a pixel included in an image, to define a first encoded channel; encode, based on the second video frame, a second channel of the pixel, to define a second encoded channel; encode, based on the third video frame, a third channel of the pixel, to define a third encoded channel (see at least: Par. 0005, 0007, and 0061, three frames passed from the frame registration engine 204 may be converted to a single-channel grayscale image ..., placing three images in a time series in different color channels, and when these images are interpreted as a standard RGB image, they show a rainbow effect in areas of movement caught within the image [i.e., encoding first channel, second channel, third channel based on placing three images in a time series in different color channels, interpreted as a standard RGB image]); and detect, using a neural network, the object in motion based on the first encoded channel, the second encoded channel, and the third encoded channel (see at least: Par. 0041, the gas monitoring system 102 utilizes one or more models from region-based convolutional neural networks for gas leak identification; and from Par. 0006, 0061, 0065, 0067, areas of the sets of images representing the stationary objects will have the same intensity in all three channels: red, green, and blue ... once the sets of images are stacked by the frame engine 206, areas of movement, such as VOC gas leaks, may be picked up by the different colored channels [i.e., detecting, using a neural network, "region-based convolutional neural networks", the object in motion based on the first encoded channel, the second encoded channel, and the third encoded channel, "areas of movement picked up by the different colored channels"]).

Spears does not expressly disclose selecting, from the plurality of video frames, a first video frame, a second video frame, and a third video frame. However, Spears discloses that the input engine 202 may select any number of the images (e.g., periodically select images) from the video images, which technically meets the limitation of selecting, from the plurality of video frames, a first video frame, a second video frame, and a third video frame (see at least: Par. 0055).

Regarding claim 2, Spears obviously discloses the limitations of claim 1. Spears further discloses wherein each of the first video frame, the second video frame, and the third video frame is associated with a different grayscale image from a plurality of grayscale images (see at least: Par. 0039, 0061, the multi-channel image is not limited to red, green, and blue, but may be any combination of colors or grayscales [i.e., the plurality of images (first, second, and third frames) implicitly corresponds to the plurality of grayscale images]).

Regarding claim 3, Spears obviously discloses the limitations of claim 1. Spears further discloses wherein: the image is an RGB image (see at least: Par. 0039, the multi-channel image is described herein as an RGB image); and the neural network is a convolutional neural network configured to process an RGB image (see at least: Par. 0041, 0070, implicit by using one or more models from region-based convolutional neural networks for gas leak identification).

Regarding claim 4, Spears obviously discloses the limitations of claim 1. Spears further discloses wherein the first video frame, the second video frame, and the third video frame are ordered consecutively within the plurality of video frames (see at least: Par. 0061, implicit by placing three images in a time series in different color channels, "ordered consecutively").

Regarding claim 5, Spears obviously discloses the limitations of claim 1. Spears further discloses wherein: the first video frame is temporally spaced, by a predefined interval, from the second video frame within the video stream; and the second video frame is temporally spaced, by the predefined interval, from the third video frame within the video stream (see at least: Par. 0062, implicit by using three versions of the time series of grayscale images from the image capture device, "the plurality of frames are temporally spaced in time series").

Regarding claim 14, Spears et al. discloses a non-transitory, processor-readable medium storing instructions that, when executed by a processor (see at least: Par. 0160-0161, "machine-readable medium 1022 ... capable of storing instructions"), cause the processor to: receive a plurality of images associated with a plurality of video frames, the plurality of images including a first image, a second image, and a third image (see at least: Par. 0038, an image capture device 104 may be an RGB camera (e.g., capable of capturing color images using a red-green-blue (RGB) image sensor); and from Par. 0055, the input engine 202 may be configured to receive and decode images from the image capture device(s) 104 [i.e., receive a plurality of images associated with a plurality of video frames, "implicit by receiving and decoding images by the input engine 202", the plurality of images including a first image, a second image, and a third image, "implicitly by using a red-green-blue (RGB) image sensor"]); generate a multi-channel image based on the first image, the second image, and the third image (see at least: Par. 0005 and Par. 0039, combining information across IR images into a multi-channel image, described herein as an RGB image [i.e., generate a multi-channel image, "multi-channel image RGB", based on the first image, the second image, and the third image, "based on the plurality of IR images"]); and train a neural network to detect an object in motion based on the multi-channel image (see at least: Par. 0041, the gas monitoring system 102 utilizes one or more models from region-based convolutional neural networks for gas leak identification; and from Par. 0061, the frame engine 206 may "stack" the three grayscale images into a single three-channel image, and when these images are interpreted as a standard RGB image, they show a rainbow effect in areas of movement caught within the image; and from Par. 0067, once the sets of images are stacked by the frame engine 206, areas of movement, such as VOC gas leaks, may be picked up by the different colored channels [i.e., implicitly training a neural network to detect an object in motion based on the multi-channel image, "implicitly training one or more models from region-based convolutional neural networks with the image data acquired by the RGB camera", "implicit by picking up areas of movement, such as VOC gas leaks, in the standard RGB image"]).

Spears does not expressly disclose using, as a ground truth image, one of the first image, the second image, or the third image in training the neural network. However, Spears discloses the gas monitoring system 102 as a training engine, which uses at least a portion of a training set of images and/or segmentation masks to train the AI modeling system to assist in identifying regions of interest within the image for creating segmentation masks to identify gas leaks associated with the segmentation mask (see at least: Par. 0076); and the gas leak detection engine 210 may receive the results from the AI engine 208 and notify the user if a colored cloud representing movement in the RGB image is classified as a gas leak (Par. 0084). The at least a portion of a training set of images and/or segmentation masks is technically equivalent to the ground truth image that trains the AI modeling system for detecting a colored cloud representing movement in the RGB image.

Regarding claim 15, Spears obviously discloses the limitations of claim 14. Spears further discloses wherein the neural network is a convolutional neural network configured to process the multi-channel image (see at least: Par. 0041, 0070, using one or more models from region-based convolutional neural networks for gas leak identification).

Regarding claim 16, Spears obviously discloses the limitations of claim 14. Spears further discloses wherein each of the first image, the second image, and the third image is a grayscale image from a plurality of grayscale images (see at least: Par. 0039, 0061-0062, three frames passed from the frame registration engine 204 may be converted to a single-channel grayscale image [i.e., each of the first, second, and third frames is implicitly a grayscale image from a plurality of grayscale images]).

Regarding claim 17, Spears obviously discloses the limitations of claim 14. Spears further discloses wherein the ground truth image is associated with a label (see at least: Par. 0082-0084, the frame engine 206 may categorize or otherwise label objects in or associated with ROIs as gas leaks based on any criteria, including or not including the segmentation mask criteria and/or any number of models, ..., the gas leak detection engine 210 may receive the results from the AI engine 208 and notify the user if a colored cloud representing movement in the RGB image is classified as a gas leak [i.e., the one or more ROIs of objects correspond to the ground truth image and are implicitly associated with the labeled objects]).

Regarding claim 19, Spears obviously discloses the limitations of claim 14. Spears further discloses wherein: the first image is temporally spaced, by a predefined interval and within the plurality of video frames, from the second image; and the second image is temporally spaced, by the predefined interval and within the plurality of video frames, from the third image (see at least: Par. 0062, implicit by using three versions of the time series of grayscale images from the image capture device, "the plurality of frames are temporally spaced in time series").

Claims 6-13, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Spears et al. (US-PGPUB 20230177726) in view of Gowda (US Patent 11,640,692).

Regarding claim 6, Spears obviously discloses the limitations of claim 1. Spears further discloses wherein the image depicts an artifact associated with the object in motion (see at least: Par. 0030, using an AI system to recognize artifacts, objects, and the like in images, "artifacts are implicitly associated with the object in motion"). Spears does not expressly disclose that the instructions to detect the object in motion include instructions to detect, using the neural network, the object in motion based on the artifact depicted in the image. However, Gowda discloses instructions to detect, using the neural network, the object in motion based on the artifact depicted in the image (see at least: col. 6, line 35 through col. 7, line 18, mask data unit 246 can include a machine learning unit trained to identify an object that is moving and creating distortion or noise in the acquired image data and determine to generate a mask for that particular object [i.e., detecting, using the neural network, "machine learning unit", the object in motion, "identify an object that is moving", based on the artifact depicted in the image, "based on distortion or noise created by the moving object in the acquired image data"]). Spears and Gowda are combinable because they are both concerned with object detection.
Therefore, it would have been obvious to a person of ordinary skill in the art to modify Spears to use the mask data unit 246 including the machine learning unit, as taught by Gowda, in order to identify an object that is moving and creating distortion or noise in the acquired image data (Gowda, col. 7, lines 12-18).

Regarding claim 7, Spears et al. discloses a non-transitory, processor-readable medium storing instructions that, when executed by a processor (see at least: Par. 0160-0161, "machine-readable medium 1022 ... capable of storing instructions"), cause the processor to: receive a video stream including a plurality of video frames that depicts an object in motion (see at least: Par. 0066-0067, "see the rejection of claim 1 for more details"); select, from the plurality of video frames, a first video frame, a second video frame, and a third video frame (see at least: Par. 0055, "see the rejection of claim 1 for more details"); generate a multi-channel image based on the first video frame, the second video frame, and the third video frame (see at least: Par. 0005 and Par. 0039, combining information across IR images into a multi-channel image, described herein as an RGB image [i.e., generate a multi-channel image, "multi-channel image RGB", based on the first video frame, the second video frame, and the third video frame, "based on the plurality of IR images"]); and detect, using a neural network, the object in motion depicted in the multi-channel image (see at least: Par. 0067, once the sets of images are stacked by the frame engine 206, areas of movement, such as VOC gas leaks, may be picked up by the different colored channels [i.e., detecting, using a neural network, the object in motion depicted in the multi-channel image, "implicit by picking up areas of movement, such as VOC gas leaks, in the standard RGB image"]). Spears does not expressly disclose that the object motion detection is based on motion blur. However, Gowda discloses that the object motion detection is based on motion blur (see at least: col. 6, line 35 through col. 7, line 18, segmentation unit 244 obtains a sequence of light intensity images (e.g., RGB) and performs a semantic segmentation algorithm to recognize objects, ... and a mask data unit 246 can include a machine learning unit trained to identify an object that is moving and creating distortion or noise in the acquired image data and determine to generate a mask for that particular object [i.e., detecting, using a neural network, "machine learning unit", the object in motion, "identify an object that is moving", based on motion blur depicted in the multi-channel image, "based on distortion or noise created by the moving object in the acquired image data (RGB image)"]). Spears and Gowda are combinable because they are both concerned with object detection. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Spears to use the mask data unit 246 including the machine learning unit, as taught by Gowda, in order to identify an object that is moving and creating distortion or noise in the acquired image data (Gowda, col. 7, lines 12-18).

Regarding claim 8, the combined teaching of Spears and Gowda as a whole discloses the limitations of claim 7. Spears further discloses wherein: each of the first video frame, the second video frame, and the third video frame includes a plurality of pixels (see at least: Par. 0042, selecting images received from the image capture device 104 (e.g., from video) and locating each pixel of every object in the image, "the images implicitly include a plurality of pixels"); and for each of the first video frame, the second video frame, and the third video frame, each pixel from the plurality of pixels for that video frame is represented by a single channel (Par. 0065, areas of the sets of images representing the stationary objects will have the same intensity in all three channels: red, green, and blue [i.e., each pixel from the plurality of pixels for that video frame is represented by a single channel, "red, green, and blue"]).

Regarding claim 9, the combined teaching of Spears and Gowda as a whole discloses the limitations of claim 7. Spears further discloses wherein: the instructions to generate the multi-channel image include instructions to: encode a first channel of each pixel of the multi-channel image based on the first video frame, to define a first encoded channel; encode a second channel of each pixel of the multi-channel image based on the second video frame, to define a second encoded channel; and encode a third channel of each pixel of the multi-channel image based on the third video frame, to define a third encoded channel (see at least: Par. 0005, 0007, and 0061, placing three images in a time series in different color channels, and when these images are interpreted as a standard RGB image, they show a rainbow effect in areas of movement caught within the image [i.e., encoding first channel, second channel, third channel based on placing three images in a time series in different color channels, interpreted as a standard RGB image]); and the instructions to detect the object in motion include instructions to detect, using the neural network, the object in motion based on a plurality of channels of at least one pixel of the multi-channel image, the at least one pixel depicting the motion blur (see at least: Par. 0030, utilizing an AI system to recognize artifacts, objects, and the like in images; and from Par. 0042, selecting images received from the image capture device 104 (e.g., from video) and locating each pixel of every object in the image; and Par. 0006, 0041, detecting, using the neural network, the object in motion based on a plurality of channels, "see the rejection of claim 7 above for more details"). On the other hand, Gowda discloses detecting, using the neural network, the object in motion based on a plurality of channels of at least one pixel of the multi-channel image, the at least one pixel depicting the motion blur (see at least: col. 6, line 35 through col. 7, line 18, detecting, using a neural network, "machine learning unit", the object in motion, "identify an object that is moving", based on a plurality of channels of at least one pixel of the multi-channel image, the at least one pixel depicting the motion blur, "based on distortion or noise created by the moving object", where the generated mask for that particular object implicitly corresponds to the at least one pixel depicting the motion blur).

Regarding claim 10, the combined teaching of Spears and Gowda as a whole discloses the limitations of claim 9. Spears further discloses wherein: the multi-channel image is an RGB image (see at least: Par. 0039, the multi-channel image is described herein as an RGB image); the instructions to encode the first channel of each pixel of the RGB image include instructions to encode the first channel of each pixel of the RGB image based on an R channel of each pixel of the first video frame; the instructions to encode the second channel of each pixel of the RGB image include instructions to encode the second channel of each pixel of the RGB image based on a G channel of each pixel of the second video frame; and the instructions to encode the third channel of each pixel of the RGB image include instructions to encode the third channel of each pixel of the RGB image based on a B channel of each pixel of the third video frame (see at least: Par. 0112, implicit by interpreting each of the three sets of frames or images with a different one of a red, green, or blue channel (or using any multi-channel image)).

Regarding claim 11, the combined teaching of Spears and Gowda as a whole discloses the limitations of claim 7. Spears further discloses wherein the neural network is a convolutional neural network (1) configured to process the multi-channel image (see at least: Par. 0041, 0070, using one or more models from region-based convolutional neural networks for gas leak identification), and (2) trained based on a grayscale image (Par. 0061, three frames passed from the frame registration engine 204 may be converted to a single-channel grayscale image; and from Par. 0041, the gas monitoring system 102 utilizes one or more models from region-based convolutional neural networks for gas leak identification [i.e., the CNN is implicitly trained based on a grayscale image, "single-channel grayscale image"]).

Regarding claim 12, the combined teaching of Spears and Gowda as a whole discloses the limitations of claim 7. Spears further discloses wherein: each of the first video frame, the second video frame, and the third video frame includes an associated color image (see at least: Par. 0038, implicit by capturing color images using a red-green-blue (RGB) image sensor); the non-transitory, processor-readable medium further stores instructions to cause the processor to: generate a first grayscale image based on the first video frame, generate a second grayscale image based on the second video frame, and generate a third grayscale image based on the third video frame (see at least: Par. 0039, the multi-channel image may be any combination of grayscales in any order [i.e., implicitly generating the first, second, and third grayscale images based on the first, second, and third frames]); and the instructions to generate the multi-channel image include instructions to generate the multi-channel image based on the first grayscale image, the second grayscale image, and the third grayscale image (see at least: Par. 0039, implicit by the multi-channel image may be any combination of grayscales).

Regarding claim 13, the combined teaching of Spears and Gowda as a whole discloses the limitations of claim 7. Gowda further discloses wherein the motion blur is a color artifact (see at least: col. 6, line 35 through col. 7, line 18, segmentation unit 244 obtains a sequence of light intensity images (e.g., RGB) ... to identify an object that is moving and creating distortion or noise in the acquired image data [i.e., implicitly detecting the motion blur that is a color artifact, based on light intensity images (e.g., RGB)]). On the other hand, Spears discloses that the multi-channel image further depicts a grayscale background (see at least: Par. 0113, although the channels are indicated as green, red, and blue, it will be appreciated that the colors may be in any order, be different colors, or be different variations of shading (e.g., in grayscale) [i.e., the multi-channel image implicitly depicts a grayscale background, "shading"]).

Regarding claim 18, Spears obviously discloses the limitations of claim 14. Spears further discloses wherein the multi-channel image depicts noise associated with the object in motion (see at least: Par. 0030, implicit by using an AI system to recognize artifacts, objects, and the like in images that indicate a leak). Spears does not expressly disclose training the neural network to detect the object in motion based on the noise depicted by the multi-channel image. However, Gowda discloses training the neural network to detect the object in motion based on the noise depicted by the multi-channel image (see at least: col. 6, line 35 through col. 7, line 18, mask data unit 246 can include a machine learning unit trained to identify an object that is moving and creating distortion or noise in the acquired image data and determine to generate a mask for that particular object [i.e., training the neural network to detect the object in motion based on the noise depicted by the multi-channel image, "implicitly by identifying an object that is moving based on distortion or noise created by the moving object in the acquired image data"]). Spears and Gowda are combinable because they are both concerned with object detection. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Spears to use the mask data unit 246 including the machine learning unit, as taught by Gowda, in order to identify an object that is moving and creating distortion or noise in the acquired image data (Gowda, col. 7, lines 12-18).

Regarding claim 20, Spears obviously discloses the limitations of claim 14. Spears further discloses wherein the instructions to generate the multi-channel image include instructions to: encode a first channel from three channels of each pixel of the multi-channel image based on the first image, to define a first encoded channel; encode a second channel from the three channels of each pixel of the multi-channel image based on the second image, to define a second encoded channel; and encode a third channel from the three channels of each pixel of the multi-channel image based on the third image, to define a third encoded channel (see at least: Par. 0005, 0007, and 0061, placing three images in a time series in different color channels, and when these images are interpreted as a standard RGB image, they show a rainbow effect in areas of movement caught within the image [i.e., encoding first channel, second channel, third channel based on placing three images in a time series in different color channels, interpreted as a standard RGB image]). Spears does not expressly disclose that the instructions to train the neural network include instructions to train the neural network based on the three channels of each pixel of the multi-channel image. However, Gowda discloses instructions to train the neural network based on the three channels of each pixel of the multi-channel image (see at least: col. 6, line 35 through col. 7, line 18, training the neural network, "machine learning unit", based on the three channels of each pixel of the multi-channel image, "based on the mask (pixel) generated for a particular object based on the sequence of light intensity images (e.g., RGB)"). Spears and Gowda are combinable because they are both concerned with object detection. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Spears to use the mask data unit 246 including the machine learning unit, as taught by Gowda, in order to identify an object that is moving in the acquired image data (Gowda, col. 7, lines 12-18).

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMARA ABDI, whose telephone number is (571) 272-0273. The examiner can normally be reached 9:00am-5:30pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMARA ABDI/
Primary Examiner, Art Unit 2668
03/19/2026
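
For orientation, the frame-stacking technique at issue in both the claims and the Spears reference (three temporally spaced grayscale frames written into the R, G, and B channels of one image, so static content stays gray while motion produces colored "rainbow" fringes) can be sketched in a few lines. This is a minimal, hypothetical illustration assuming NumPy arrays; the function names, threshold, and synthetic data are ours, not drawn from the application or from Spears:

    import numpy as np

    def stack_frames_rgb(frames, interval=1):
        # Select three frames spaced by a predefined interval (cf. claims 5 and 19)
        # and place them in the R, G, and B channels of a single image.
        f1, f2, f3 = frames[0], frames[interval], frames[2 * interval]
        return np.stack([f1, f2, f3], axis=-1)  # H x W x 3

    def motion_mask(rgb, threshold=25):
        # Stationary areas have (nearly) equal intensity in all three channels;
        # a large per-pixel channel spread marks the colored motion artifact.
        spread = rgb.max(axis=-1).astype(np.int16) - rgb.min(axis=-1).astype(np.int16)
        return spread > threshold

    # Synthetic usage: a bright square moving right across three 64x64 frames.
    frames = [np.zeros((64, 64), dtype=np.uint8) for _ in range(3)]
    for t, frame in enumerate(frames):
        frame[20:30, 10 + 5 * t : 20 + 5 * t] = 255
    rgb = stack_frames_rgb(frames)
    print(motion_mask(rgb).sum(), "pixels flagged as moving")

In the claimed system and in Spears, the stacked image is fed to a convolutional neural network (Spears uses region-based CNN models) rather than thresholded by hand; the threshold here only stands in for what the network learns from the color fringes.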

Prosecution Timeline

Apr 01, 2024 • Application Filed
Mar 19, 2026 • Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602822: METHOD DEVICE AND STORAGE MEDIUM FOR BACK-END OPTIMIZATION OF SIMULTANEOUS LOCALIZATION AND MAPPING
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12597252: METHOD OF TRACKING OBJECTS
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12576595: SYSTEMS AND METHODS FOR IMPROVED VOLUMETRIC ADDITIVE MANUFACTURING
Granted Mar 17, 2026 • 2y 5m to grant

Patent 12574469: VIDEO SURVEILLANCE SYSTEM, VIDEO PROCESSING APPARATUS, VIDEO PROCESSING METHOD, AND VIDEO PROCESSING PROGRAM
Granted Mar 10, 2026 • 2y 5m to grant

Patent 12563154: VIDEO SURVEILLANCE SYSTEM, VIDEO PROCESSING APPARATUS, VIDEO PROCESSING METHOD, AND VIDEO PROCESSING PROGRAM
Granted Feb 24, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 76% (-7.5% lift)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 816 resolved cases by this examiner. Grant probability derived from career allow rate.
