Prosecution Insights
Last updated: April 19, 2026
Application No. 18/339,843

COMPUTER VISION BASED INVENTORY SYSTEM AT INDUSTRIAL PLANTS

Non-Final OA: §101, §102, §103
Filed: Jun 22, 2023
Examiner: YU, ARIEL J
Art Unit: 3627
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Emeasurematics Inc.
OA Round: 3 (Non-Final)
Grant Probability: 40% (At Risk)
Expected OA Rounds: 3-4
To Grant: 4y 3m
With Interview: 67%

Examiner Intelligence

Career Allow Rate: 40% (155 granted / 389 resolved; -12.2% vs TC avg)
Interview Lift: +27.4% among resolved cases with an interview
Avg Prosecution: 4y 3m (41 applications currently pending)
Total Applications: 430, across all art units
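As a quick check on how these headline figures hang together, here is a minimal sketch (the raw counts and the interview lift come from the cards above; the simple additive combination is an assumption about how the tool derives the 67% with-interview figure):

```python
# Hypothetical back-of-envelope check of the dashboard's headline figures.
# The counts (155 granted, 389 resolved) and the +27.4 percentage-point
# interview lift are taken from the cards above; the additive combination
# is an assumption, not the tool's documented method.

granted = 155
resolved = 389

career_allow_rate = granted / resolved        # 0.398... -> displayed as 40%
interview_lift_pp = 27.4                      # percentage-point lift

with_interview = career_allow_rate * 100 + interview_lift_pp

print(f"Career allow rate: {career_allow_rate:.1%}")  # ~39.8%
print(f"With interview:    {with_interview:.1f}%")    # ~67.2% -> displayed as 67%
```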

Statute-Specific Performance

§101: 18.2% (-21.8% vs TC avg)
§103: 55.2% (+15.2% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 389 resolved cases
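One thing worth noticing: all four deltas are consistent with a single Tech Center baseline of 40%. A small sketch, assuming the tool benchmarks every statute against one TC-wide average (an assumption, but it reproduces all four displayed deltas exactly):

```python
# Hypothetical reconstruction of the "vs TC avg" deltas shown above.
# The per-statute rates are from the chart; the single 40% Tech Center
# baseline is an assumption that happens to match all four deltas.

TC_AVG_ESTIMATE = 40.0  # percent; one baseline for the whole chart

examiner_rates = {"101": 18.2, "103": 55.2, "102": 13.6, "112": 10.1}

for statute, rate in examiner_rates.items():
    delta = rate - TC_AVG_ESTIMATE
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```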

Office Action

§101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/11/2025 has been entered.

Response to Amendment

Applicant's "Amendment" filed on 12/11/2025 has been considered. Claims 1 and 13 are amended. Claims 3 and 14 are canceled. Claims 1-2, 4-13, and 15-20 remain pending in this application, and an action on the merits follows.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 9-12 are rejected under 35 U.S.C. 101. The claimed invention is directed to non-statutory subject matter because claim 9 is directed to an abstract idea without significantly more; claims 10-12 fail to remedy these deficiencies.

Claim 9 recites receiving location data from location sensors, receiving image data, generating a predicted location of an inventory item, determining an actual location of the inventory item based on the location data, comparing the predicted location and the actual location, and updating the location prediction application. The generating, determining, comparing, and updating steps of claim 9, as drafted, are processes that, under the broadest reasonable interpretation, cover managing personal behavior but for the recitation of generic computer components. That is, other than reciting "one or more location sensors and a camera system," nothing in the claim precludes the steps from practically being performed as a method of organizing human activity. For example, but for the "one or more location sensors and a camera system," the claim encompasses a person who manually generates and analyzes data to determine a predicted location of an inventory item, determines an actual location based on the location data, compares the predicted location and the actual location, and updates the location prediction application based on the comparison result. If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior but for the recitation of generic computer components, then it falls within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. The receiving steps are recited at a high level of generality (i.e., as a general means of receiving location data and image data) and amount to mere data gathering, which is a form of insignificant extra-solution activity, and the claims as a whole merely describe how to generally "apply" the concept of receiving, generating, determining, comparing, and updating in a computer environment.
The claimed computer components, such as the one or more location sensors and the camera system, are recited at a high level of generality and are merely invoked as tools to perform the receiving, generating, determining, comparing, and updating steps. Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Claim 9 is directed to an abstract idea.

Claim 9 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using the one or more location sensors and the camera system to perform the receiving, generating, determining, comparing, and updating steps amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Therefore, the claim does not amount to significantly more than the recited abstract idea (Step 2B: NO). Claim 9 is not patent eligible.

Claims 10-12 recite only descriptive content, such as different types of location sensors for location determination and image data processing, that further limits the abstract idea but does not make it less abstract. This judicial exception is not integrated into a practical application because the descriptive content of claims 10-12 further limits the abstract idea without making it less abstract; thus, claims 10-12 are directed to an abstract idea. There are no additional claim elements recited in claims 10-12. Therefore, the claims do not amount to significantly more than the recited abstract idea (Step 2B: NO). Claims 10-12 are not patent eligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 9-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2019/0102686 to Yang et al.

With regard to claim 9, Yang discloses a method for training a location prediction application of a deep learning module, the method including:

receiving, at the deep learning module, location data from one or more location sensors (paragraph 37, the automated training 204 includes using sensors 204a to capture sensor readings 204b.
In additional or alternative embodiments, the sensors 204a may also include other sensors, such as, for example, a global positioning system (GPS) sensor);

and receiving, at the deep learning module, image data from a camera system (paragraphs 24 and 35, the planogram compliance system 200 may begin operation by using top-down view cameras 202a to capture image frames 202b in order to create a panorama view of a store for shelving unit tracking 202c for a shelving unit. In FIG. 1, the planogram shows desired placement of denim items 102, polo shirts 104, dress shirts 106, and t-shirts 108.);

generating, via the location prediction application, a predicted location of an inventory item or transaction based on the image data (paragraphs 24 and 29-30, An example planogram is illustrated in FIG. 1. As shown, the planogram indicates dimensions of an example shelving unit and desired placement of items (e.g., apparel items in the example of FIG. 1) in locations within the shelving unit. Embodiments use a merchandise tracking model with machine learning techniques in order to learn from and make predictions on data. Embodiments provide advantages over conventional planogram compliance solutions that rely on deep learning. This is because deep learning not only requires significant amount of data, but also requires constant updating of a large training dataset in order to keep the performance up to date in response to ever-changing appearance of merchandise items. Examiner notes that desired placement of items based on the planogram images is considered as "generating, via the location prediction application, a predicted location of an inventory item or transaction based on the image data");

determining an actual location of the inventory item or transaction of the inventory item based on the location data (paragraphs 2 and 39, A planogram is a visual representation of items at a location, such as, for example, products at a store. For instance, planograms are visual representations that indicate placement of retail products in stores or on store shelves. In certain embodiments, the RFID readings may be used in conjunction with GPS readings to capture sensor readings 204b that include precise location information for merchandise items within a shelving unit.);

comparing the predicted location and the actual location (paragraph 41, As shown in FIG. 2, the automated training 204 uses the sensor readings 204b to perform merchandise-shelving unit clustering 204c. The merchandise-shelving unit clustering 204c is based on comparing the desired shelving unit location in the planogram 202e to merchandise item locations indicated by the sensor readings 204b. For instance, the merchandise-shelving unit clustering 204c may identify locations for clusters of merchandise items (as indicated by sensor readings 204b) and compare those cluster locations to shelving unit locations in the stored planogram 202e.);

and updating the location prediction application based on the comparison of the predicted location and the actual location (paragraph 36, a determination is made as to whether the location of the shelving unit has changed. If it is determined that there has been a location change, a stored planogram 202e is updated. In this way, the self-learned planogram 202 is created and saved as the stored planogram 202e that includes the current location of the shelving unit.).
With regard to claim 10, Yang discloses the one or more location sensors include at least one selected from a group consisting of a radar sensor, an infrared sensor, a lidar sensor, and a differential global positioning system (paragraph 37).

With regard to claim 11, Yang discloses confirming the actual location of the inventory item based on data from an additional location sensor different from the one or more location sensors (paragraph 37, According to additional or alternative embodiments, the sensors 204a may include sensors configured to capture sensor readings 204b from passive RFID tags.).

With regard to claim 12, Yang discloses determining an amount of the inventory item based on the image data (paragraph 25, Systems and methods for automated planogram compliance disclosed herein cover all aspects of planogram compliance and address the problem of lost sales caused by insufficient inventory levels, out-of-stock (OoS) and out of place items (e.g., misplaced merchandise items in the wrong department or shelving unit)).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8 and 13-19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2022/0011779 to Kim et al. in view of U.S. Patent Application Publication No. 2021/0374659 to Ganapathi et al.

With regard to claims 1 and 13, Kim discloses an inventory management system for a manufacturing facility, the system comprising: a camera system (paragraph 32, The image sensor 210 captures images of an environment of a storage site for navigation, localization, collision avoidance, object recognition and identification, and inventory recognition purposes.); a central management server including a transceiver (paragraphs 30 and 47, The communication engine 255 and the I/O interface 260 are communication components to allow the robot 120 to communicate with other components in the system environment 100.); and an electronic processor configured to receive identification information for an inventory item (paragraphs 32-33, The image sensor 210 may also capture pictures of labels (e.g., barcodes) on items for inventory cycle-counting purposes. a robot 120 may include a single processor or multiple processors 215 to carry out various operations. The memory 220 may also store images and videos captured by the image sensor 210.
The images may include images that capture the surrounding environment and images of the inventory such as barcodes and labels.), perform, via a vehicle configured to handle the inventory item, a transaction of the inventory item between a first location and a second location (paragraphs 20 and 43, An inventory robot may be used to track inventory items, move inventory items, and carry out other inventory management tasks. The robot 120 may receive inputs such as user commands to perform certain actions (e.g., scanning of inventory, moving an item, etc.) at certain locations.), verify the transaction based on image data from the camera system (paragraph 44, The planner 250 may plan the route of the robot 120 based on data provided by the visual reference engine 240 and the data provided by the state estimator 235. For example, the visual reference engine 240 estimates the current location of the robot 120 by tracking the number of regularly shaped structures in the storage site 110 passed by the robot 120. Based on the location information provided by the visual reference engine 240, the planner 250 determines the route of the robot 120 and may adjust the movement of the robot 120 as the robot 120 travels along the route.), and generate, based on the plurality of images, a chain of custody record for the inventory item including a current location of the inventory item (paragraphs 23 and 33, An administrator may rely on the item coordinate data in the inventory management system 140 to ensure that items are correctly placed in the storage site 110 so that the items can be readily retrieved from a storage location. The memory 220 may also store images and videos captured by the image sensor 210. The images may include images that capture the surrounding environment and images of the inventory such as barcodes and labels).

However, Kim does not disclose verify the transaction based on the plurality of images, wherein verifying the transaction includes processing the plurality of images to determine an action performed on the inventory item and change in the inventory item at each location; and operate a handling device of the vehicle, wherein operating the handling device includes determining a position of the handling device based on image data from the camera system.

However, Ganapathi teaches verify the transaction based on the plurality of images, wherein verifying the transaction includes processing the plurality of images to determine an action performed on the inventory item and change in the inventory item at each location (paragraphs 14-15, 17-18, and 117, A vehicle (such as a forklift truck, a pallet jack, an order picker, or a cart) capable of transporting the inventory and sometimes operated by a human operator (i.e., not an automatic vehicle or robot) moves throughout the warehouse and manipulates the inventory (referred to as the manipulation) or supports the manipulation of the inventory by the human operator. The manipulation is defined as one or more of the steps of moving the inventory with the vehicle or by the operator from an entry of the inventory into the warehouse, storing the inventory by the at least one vehicle at the inventory locations, picking up the inventory with the at least one vehicle from the inventory locations, to a departure of the inventory out of the warehouse. Images of the inventory are captured with at least one of the plurality of cameras on the vehicle during the manipulation of the inventory.
At least one of the captured images are digitized and unique inventory information features are extracted from the captured images of the inventory during the manipulation. Count and verify the items picked from or placed into a box or inventory location through visual imagery of the activity performed. The scene is captured from multiple cameras to cover the activity from different perspectives. The aim is to identify and subsequently verify the number of items involved in the transaction to generate any potential discrepancies.); and operate a handling device of the vehicle, wherein operating the handling device includes determining a position of the handling device based on image data from the camera system (paragraphs 70-72, These cameras and sensors are positioned strategically around the forklift so that they can capture the location of the forklift at any given instant and also the motion of the warehouse worker who is performing the picking action. As the warehouse worker drives the forklift to the location, the sensors track the location of the forklift, and when the worker arrives at the first pick location, the sensors detect that he has arrived at the location, and the cameras now start recording the motion of the worker and the items that he is picking. The image processing algorithms automatically verify that he is picking from the right location, picking the right item from the correct box, and also that he is picking the correct quantity of items from the box.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kim to include verify the transaction based on the plurality of images, wherein verifying the transaction includes processing the plurality of images to determine an action performed on the inventory item and change in the inventory item at each location; and operate a handling device of the vehicle, wherein operating the handling device includes determining a position of the handling device based on image data from the camera system, as taught in Ganapathi, in order to enhance the accuracy of inventory at a vastly reduced cost (Ganapathi, abstract).

With regard to claim 2, Kim discloses the one or more vehicles include at least one selected from a group consisting of a land-based vehicle, an air-based vehicle, and a crane system (paragraph 20, The robot 120 may take various forms such as an aerial drone, a ground robot, a vehicle, a forklift, and a mobile picking robot).

With regard to claims 3 and 14, Kim discloses the electronic processor is further configured to operate a handling device of the vehicle, wherein operating the handling device includes determining a position of the handling device based on image data from the camera system (paragraphs 20 and 32, robots 120 may specialize in moving items. The robot 120 may include a digital camera that captures optical images of the environment for the state estimator 235. For example, data captured by the image sensor 210 may also be provided to the VIO unit 236 that may be included in the state estimator 235 for localization purposes such as to determine the position and orientation of the robot 120 with respect to an inertial frame).
With regard to claims 4 and 15, the combination of references discloses the electronic processor is further configured to determine an amount of inventory item handled by the vehicle during the transaction based on the plurality of images as the change in the inventory item (Ganapathi, paragraphs 70, 117, and 130, The image processing algorithms automatically verify that he is picking from the right location, picking the right item from the correct box, and also that he is picking the correct quantity of items from the box. The image processing software automatically verifies that the correct quantity of the correct item from the correct box has been picked. this serves as an automatic Quality Control check on the pick event. Each object is tracked during the segment duration. An object that leaves the scene is counted as picked object. Objects added/returned to the scene is counted as returned item. All picked and placed items are summarized for each activity segment.).

With regard to claims 5 and 16, Kim discloses the electronic processor is further configured to, in response to determining an inconsistency within the transaction of the inventory item in verifying the transaction based on the plurality of images, perform a chain of custody correction including moving the inventory item to a third location (paragraphs 24 and 44-45, the computing server 150 may identify discrepancies in two sets of data and determine whether any items may be misplaced, lost, damaged, or otherwise should be flagged for various reasons. In turn, the computing server 150 may direct a robot 120 to remedy any potential issues such as moving a misplaced item to the correct position. For example, if the planner 250 determines that the robot 120 has passed a target aisle and traveled too far away from the target aisle, the planner 250 may send signals to the FCU 225 to try to remedy the path).

With regard to claims 6 and 17, Kim discloses performing and verifying the transaction of the inventory item includes identifying each of the plurality of images depicting a different stage of the transaction, determining, based on each of the plurality of images, inventory item information regarding at least one selected from the group consisting of a location of a stage of the transaction, a vehicle performing the transaction, and an amount of the inventory item (paragraphs 33-34, 42, and 44-45, the planner 250 determines the path of the robot 120 from a starting point to a destination and provides commands to the FCU 225. the planner 250 controls the robot 120 to move to the target location and take a picture of the inventory in the target location. When the robot 120 changes direction (e.g., rotations, transitions from horizontal movement to vertical movement, transitions from vertical movement to horizontal movement, etc.), the center offset information may be used to determine the accurate location of the robot 120 relative to an object. The images may include images that capture the surrounding environment and images of the inventory such as barcodes and labels.), and updating the chain of custody record to include the inventory item information (paragraphs 23 and 33, The inventory management system 140 may include a database that stores data regarding inventory items and the items' associated information, such as quantities in the storage site 110, metadata tags, asset type tags, barcode labels and location coordinates of the items.).
With regard to claims 7 and 18, Kim discloses the electronic processor determines the location depicted within the image based on a physical marker within an area depicted in at least one of the plurality of images (paragraphs 19 and 69, Regularly shaped structures may be structures, fixtures, equipment, furniture, frames, shells, racks, or other suitable things in the storage site 110 that have a regular shape or outline that can be readily identifiable, whether the things are permanent or temporary, fixed or movable, weight-bearing or not. The robot 120 analyzes 550 the images captured by the image sensor 210 to determine the current location of the robot 120 in the path 470 by tracking the number of regularly shaped structures in the storage site passed by the robot 120.).

With regard to claims 8 and 19, Kim discloses performing and verifying the transaction includes identifying a location of the physical marker within a first image of the plurality of images, identifying a location of the physical marker within a second image of the plurality of images, and determining an exact location of the vehicle based on a difference between a number of pixels between an edge of the first image and the location of the physical marker within the first image and a number of pixels between a same edge of the second image and the location of the physical marker within the second image (paragraphs 37, 39, and 74, The VIO unit 236 receives image data from the image sensor 210 (e.g., a stereo camera) and measurements from IMU 230 to generate localization information such as the position and orientation of the robot 120. The visual reference engine 240 may receive pixel data of a series of images and point cloud data from the image sensor 210. The location information generated by the visual reference engine 240 may include distance and yaw from an object and center offset from a target point (e.g., a midpoint of a target object). The VIO unit 236 may extract image feature points and tracks the feature points in the image sequence to generate optical flow vectors that represent the movement of edges, boundaries, surfaces of objects in the environment captured by the image sensor 210. Alternative to or in addition to using any machine learning techniques, other image segmentation algorithms such as edge detection algorithms).

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2022/0011779 to Kim et al. in view of U.S. Patent Application Publication No. 2021/0374659 to Ganapathi et al., and further in view of U.S. Patent Application Publication No. 2019/0102686 to Yang et al.

With regard to claim 20, the combination of references substantially discloses the claimed invention; however, the combination of references does not disclose a method for training a location prediction application of a deep learning module.
However, Yang teaches receiving, at the deep learning module, location data from one or more location sensors (paragraph 37); receiving, at the deep learning module, image data from a camera system (paragraphs 24 and 35); generating, via the location prediction application, a predicted location of an inventory item or transaction based on the image data (paragraphs 24 and 29-30); determining an actual location of the inventory item or transaction of the inventory item based on the location data (paragraphs 2 and 39); comparing the predicted location and the actual location (paragraph 41); and updating the location prediction application based on the comparison of the predicted location and the actual location (paragraph 36). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of references to include a method for training a location prediction application of a deep learning module, as taught in Yang, in order to improve efficiency of planogram compliance as compared to conventional vision-based methods (Yang, paragraph 29).

Response to Arguments

Applicants' arguments filed on 12/11/2025 have been fully considered, but they are not persuasive, particularly in light of the previously applied references. Applicants remark that "the combination of references does not disclose operate a handling device of the vehicle, wherein operating the handling device includes determining a position of the handling device based on image data from the camera system". Examiner directs Applicants' attention to the office action above. Applicants remark that "the combination of references does not disclose generating, via the location prediction application, a predicted location of an inventory item or transaction based on the image data". Examiner directs Applicants' attention to the office action above.

Conclusion

Please refer to form 892 for cited references. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARIEL J YU, whose telephone number is (571) 270-3312. The examiner can normally be reached 11AM-7PM (M-F). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Obeid Fahd A, can be reached at 571-270-3324. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ARIEL J YU/
Primary Examiner, Art Unit 3627
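Stripped of the legalese, the claim 9 method that the §101 and §102 rejections dissect is a supervised feedback loop: predict an item's location from camera images, treat the location-sensor reading as ground truth, and nudge the model by the difference. A toy sketch under those assumptions (all names are hypothetical and the linear model is illustrative; the claim recites no particular architecture):

```python
# Minimal sketch of the training loop recited in claim 9 (hypothetical names,
# toy linear model; the claim itself does not specify a model architecture).
import numpy as np

rng = np.random.default_rng(0)

true_W = rng.normal(size=(2, 8))  # stands in for the real image->location mapping
W = np.zeros((2, 8))              # the "location prediction application" being trained
lr = 0.01

for step in range(500):
    image_features = rng.normal(size=8)            # receiving image data (camera system)
    # actual location from location sensors (ground truth, with sensor noise)
    actual = true_W @ image_features + 0.01 * rng.normal(size=2)
    predicted = W @ image_features                  # generating a predicted location
    error = predicted - actual                      # comparing predicted and actual
    W -= lr * np.outer(error, image_features)       # updating the prediction application

print("final weight error norm:", np.linalg.norm(W - true_W))
```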

Prosecution Timeline

Jun 22, 2023
Application Filed
Apr 28, 2025
Non-Final Rejection — §101, §102, §103
Jul 08, 2025
Interview Requested
Jul 29, 2025
Examiner Interview Summary
Jul 29, 2025
Applicant Interview (Telephonic)
Jul 31, 2025
Response Filed
Sep 08, 2025
Final Rejection — §101, §102, §103
Dec 11, 2025
Request for Continued Examination
Dec 20, 2025
Response after Non-Final Action
Mar 17, 2026
Non-Final Rejection — §101, §102, §103 (current)
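For a rough sense of where this application sits against the examiner's 4y 3m median, the elapsed time from filing to the current office action can be computed from the timeline dates above (a sketch; the month arithmetic is deliberately approximate):

```python
# Rough elapsed-prosecution-time check using dates from the timeline above.
from datetime import date

filed = date(2023, 6, 22)       # Application Filed
current_oa = date(2026, 3, 17)  # latest Non-Final Rejection

days = (current_oa - filed).days
years, rem = divmod(days, 365)
print(f"Elapsed: {years}y {rem // 30}m")  # ~2y 8m, against a 4y 3m median to grant
```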

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579524
CRYPTOCURRENCY TERMINAL AND TRANSACTION PROCESSING
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12579526
TARGETED REMOTE PAYMENTS LEVERAGING ULTRA-WIDEBAND (UWB) AND MICRO-ELECTROMECHANICAL SYSTEMS (MEMS) SENSOR COMMUNICATIONS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12493916
COLLECTION OF TRANSACTION RECEIPTS USING AN ONLINE CONTENT MANAGEMENT SERVICE
Granted Dec 09, 2025 (2y 5m to grant)
Patent 12456091
Automated Package Delivery System
Granted Oct 28, 2025 (2y 5m to grant)
Patent 12456107
CUSTOMIZABLE MEDIA CONTENT FOR POINT OF SALE (POS) TRANSACTIONS
Granted Oct 28, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 40%
With Interview: 67% (+27.4%)
Median Time to Grant: 4y 3m
PTA Risk: High
Based on 389 resolved cases by this examiner. Grant probability derived from career allow rate.
