Prosecution Insights
Last updated: April 19, 2026
Application No. 18/454,245

NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING APPARATUS

Final Rejection (§103)
Filed: Aug 23, 2023
Examiner: MAIDEN, MICHAEL KIM
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: Fujitsu Limited
OA Round: 2 (Final)
Grant Probability: 93% (Favorable)
OA Rounds: 3-4
To Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 93% (67 granted / 72 resolved; +31.1% vs TC avg, above average)
Interview Lift: +8.9% (moderate), comparing resolved cases with vs. without an interview
Typical Timeline: 2y 11m average prosecution; 16 applications currently pending
Career History: 88 total applications across all art units
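
For readers checking the arithmetic, these cards reduce to simple ratios over the examiner's resolved cases. A minimal sketch in Python; the without-interview rate is an assumption implied by the lift (the page reports only the aggregate 67/72, the 99% with-interview figure, and the +8.9-point lift):

    # Career allow rate: granted / resolved (figures from this page).
    granted, resolved = 67, 72
    allow_rate = granted / resolved  # 0.9305... -> displayed as 93%

    # Interview lift: with-interview allow rate minus without-interview rate.
    # 99.0% is the page's "With Interview" figure; 90.1% is the implied
    # without-interview rate (99.0 - 8.9 points), not a reported number.
    rate_with, rate_without = 0.990, 0.901
    lift = rate_with - rate_without  # +8.9 points

    print(f"career allow rate: {allow_rate:.1%}")  # 93.1%
    print(f"interview lift: {lift:+.1%}")          # +8.9%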

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 52.1% (+12.1% vs TC avg)
§102: 29.0% (-11.0% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 72 resolved cases.
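
These deltas are internally consistent: each statute's rate minus its delta recovers the same 40.0% baseline, consistent with a single Tech Center average estimate across all four statutes. A quick verification sketch in Python:

    # (examiner rate %, delta vs TC average %) as listed above.
    stats = {
        "§101": (9.8, -30.2),
        "§103": (52.1, 12.1),
        "§102": (29.0, -11.0),
        "§112": (8.0, -32.0),
    }
    for statute, (rate, delta) in stats.items():
        tc_avg = rate - delta  # back out the baseline the delta references
        print(f"{statute}: examiner {rate}%, TC average estimate {tc_avg:.1f}%")
    # Every row recovers the same 40.0% baseline.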

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Action is made FINAL.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in Japan on 12/07/2022. It is noted, however, that no certified copy of the JP2022-195979 application has been submitted as required by 37 CFR 1.55.

Response to Arguments

Applicant's arguments have been considered but are moot in view of the new grounds of rejection based on Wen (US 20210280027 A1) and Darvish (US 20190354770 A1). Specifically, Wen is relied upon to teach the added limitations of "acquire video image data on a user who purchases a commodity product…" (Wen: ¶16 "In certain embodiments, the computer executable code is configured to, before track the product: segment the video frames such that each pixel of the video frames is labeled with hand of a customer, product in hand,").

Claim Status

Claims 1-6 and 9-13 are rejected under 35 U.S.C. 103 as being unpatentable over Goncalves (US 20100059589 A1) in view of Wen (US 20210280027 A1). Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Goncalves (US 20100059589 A1) in view of Wen (US 20210280027 A1) and in further view of Darvish (US 20190354770 A1).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6 and 9-13 are rejected under 35 U.S.C. 103 as being unpatentable over Goncalves (US 20100059589 A1) in view of Wen (US 20210280027 A1).
Regarding claim 1, Goncalves discloses specifying, by analyzing the acquired video image data (Goncalves: ¶24 "the camera (101, 101') triggers image capture based on motion detection and/or optical flow analysis of a video stream"), the commodity product (Goncalves: ¶54 "The image captured through image acquisition module (110) is processed by the object recognition module (103), which compares the image data to a visual recognition model database (104) of item information about known items, and determines the identity of the item or items in the image,") that is set for the code of the commodity product to be scanned to the accounting machine (Goncalves: ¶56 "the visual recognition model database (104) also contains UPCs corresponding to objects or items whose features are contained in the database."); acquiring, by scanning the code of the commodity product by the accounting machine, commodity product information that has been registered to the accounting machine (Goncalves: ¶56, quoted above); and generating, by comparing the acquired commodity product information with the specified commodity product, an alert connected to an abnormality of a behavior of registering the commodity product to the accounting machine (Goncalves: ¶59 "If there is a discrepancy between the sequence (105) of scanned UPCs and the sequence (109) of UPCs corresponding to visually recognized items, an exception has been detected and one of several possible actions can be executed.").

Goncalves fails to specifically disclose a non-transitory computer-readable recording medium having stored therein an information processing program that causes a computer to execute a process comprising: acquiring video image data on a user who purchases a commodity product and is scanning a code of a commodity product to an accounting machine.

In related art, Wen discloses a non-transitory computer-readable recording medium having stored therein an information processing program that causes a computer to execute a process (Wen: ¶63 "The computer programs include processor-executable instructions that are stored on a non-transitory tangible computer readable medium."); acquiring video image data (Wen: ¶8 "instruct the imaging device to capture video frames of a region of interest (ROI),") on a user who purchases a commodity product (Wen: ¶16 "In certain embodiments, the computer executable code is configured to, before track the product: segment the video frames such that each pixel of the video frames is labeled with hand of a customer, product in hand,") and is scanning a code of a commodity product to an accounting machine (Wen: ¶14 "the computer executable code is configured to initiate a self-checkout event when the customer touches "Start" on a touchscreen of the computing device, or when the customer starts scanning a product, or the customer scans his membership card."); and the commodity product that has been gripped by the user within a range of an area (Wen: ¶74 "the video frames may be simply labeled with numbers 0, 1, 2, 3, where 0 represents background, 1 represents hand or hands of a customer, either empty hand or hand holding a product").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the capturing of video of a user scanning a product, as disclosed by Wen, into the method of item recognition, correspondence, and alerting disclosed by Goncalves, in order to identify items being scanned by either a customer or a cashier and prevent fraud during checkout.

Regarding claim 2, Goncalves, as modified by Wen, discloses wherein the specifying includes specifying, by inputting the acquired video image data to a first machine learning model, the commodity product that has been gripped by the user (Wen: ¶74, quoted above) within the range of the area that is set for the code of the commodity product to be scanned to the accounting machine (Goncalves: ¶54 "the object recognition module (103) receives images from one or more image acquisition modules (110, 110') and performs a visual recognition of any number of items present in an image.").

Regarding claim 3, Goncalves, as modified by Wen, discloses wherein the specifying includes specifying, by inputting the acquired video image data to a second machine learning model, an attribute related to an external appearance of the commodity product that has been gripped by the user (Wen: ¶74, quoted above; Goncalves: ¶54 "object recognition module (103) can comprise a feature extractor (112), which generates a list of geometric point features present in even just a single image."), and the generating includes estimating an attribute related to the commodity product based on the commodity product information that has been acquired from the accounting machine (Goncalves: ¶54, quoted above), and generating the alert when the estimated attribute related to the commodity product is different from the attribute related to the commodity product gripped by the user and that has been specified by using the second machine learning model (Wen: ¶74 and Goncalves: ¶59, quoted above).

Regarding claim 4, Goncalves, as modified by Wen, discloses specifying, by inputting the acquired video image data to a third machine learning model, a first region including a hand of the user, a second region including the commodity product, and a relationship between the first region and the second region (Wen: ¶16 "each pixel of the video frames is labeled with hand of a customer, product in hand, product on table, and background; and detect the product in hand and the product on table based on the labels of the pixels. In certain embodiments, the object detection methods, e.g. Mask R-CNN, Faster R-CNN, and RefineDet, can be used to detect the product in hand or on the table"); and specifying that the user has gripped the commodity product based on the specified first region, the specified second region, and the specified relationship (Wen: ¶18 "the computer executable code is configured to track the product using tracking-by-detection and greedy search when the product is the product in hand"), wherein the specifying includes specifying, by inputting, to the first machine learning model, the video image data in which it is specified by using the third machine learning model that the user has gripped the commodity product, the commodity product that has been gripped by the user within the range of the area (Wen: ¶18 and ¶16, quoted above).

Regarding claim 5, Goncalves, as modified by Wen, discloses wherein the generating includes generating, as the alert connected to the abnormality of the behavior of registering the commodity product to the accounting machine, an alert indicating that there is a commodity product that has not yet been registered to the accounting machine by the user, or indicating that the commodity product that has been registered to the accounting machine by the user is abnormal (Goncalves: ¶59, quoted above).

Regarding claim 6, Goncalves, as modified by Wen, discloses wherein the process further includes notifying, when the alert connected to the abnormality of the behavior of registering the commodity product to the accounting machine is generated, a terminal used by a store clerk of identification information on the accounting machine and the generated alert in an associated manner (Goncalves: ¶107 "The cashier can be alerted as soon as UPC fraud or scan passing is detected according to one or more of the following options: i) generate an auditory or visual alert executed on bi-optic (flat-bed scanner) and/or POS (cash register); or ii) suspend the transaction").

Regarding claim 9, Goncalves, as modified by Wen, discloses wherein identifying, by inputting the acquired video image data to the third machine learning model, the first region, the second region, and the relationship (Wen: ¶16, quoted above), wherein the third machine learning model is a model that is used for Human Object Interaction Detection (HOID) and that is generated by performing machine learning such that a first class that indicates a user who purchases a commodity product and first region information that indicates a region in which the user appears, a second class that indicates an object including the commodity product and second region information that indicates a region in which the object appears, and an interaction between the first class and the second class are identified (Wen: ¶16, quoted above).

Regarding claim 10, Goncalves, as modified by Wen, discloses wherein the generating includes generating, when the commodity product information and the specified commodity product that has been gripped by the user (Wen: ¶74, quoted above) do not match, the alert connected to the abnormality of the behavior of registering the commodity product to the accounting machine (Goncalves: ¶59, quoted above).

Regarding claim 11, Goncalves, as modified by Wen, discloses wherein the accounting machine is a self-service checkout terminal (Wen: ¶4 "Self-checkout (also known as self-service checkout or semi-attended customer-activated terminal (SACAT)) machines provide service for customers to process their own purchases from a retailer").

Regarding claim 12, Goncalves discloses the specifying, acquiring, and generating steps for the reasons given for claim 1 (Goncalves: ¶¶24, 54, 56, 59, quoted above). Goncalves fails to specifically disclose acquiring video image data on a user who purchases a commodity product and is scanning a code of a commodity product to an accounting machine. In related art, Wen discloses these limitations for the reasons given for claim 1 (Wen: ¶¶8, 14, 16, 74, quoted above). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the references as set forth for claim 1.

Regarding claim 13, Goncalves discloses the specify, acquire, and generate limitations as set forth for claim 1 (Goncalves: ¶¶24, 54, 56, 59, quoted above). Goncalves fails to specifically disclose a memory; and a processor coupled to the memory and configured to acquire video image data on a user who purchases a commodity product and is scanning a code of a commodity product to an accounting machine. In related art, Wen discloses a memory (Wen: ¶59 "The term module may include memory") and a processor coupled to the memory (Wen: ¶59 "The term module may include memory (shared, dedicated, or group) that stores code executed by the processor."), as well as the video acquisition limitations discussed for claim 1 (Wen: ¶¶8, 14, 16, 74, quoted above). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the references as set forth for claim 1.

Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Goncalves (US 20100059589 A1) in view of Wen (US 20210280027 A1) and in further view of Darvish (US 20190354770 A1).

Regarding claim 7, Goncalves, as modified by Wen, discloses the claimed invention except for wherein the generating includes outputting, in a case where the alert connected to the abnormality of the behavior of registering the commodity product to the accounting machine has been generated, voice or a screen that makes the user located at the accounting machine aware of a registration omission of the commodity product from the accounting machine. In related art, Darvish discloses this limitation (Darvish: ¶137 "The visual warning interface 345 and loudspeaker interface 350 serve to generate an audio visual alert in the case of abnormal activity. In particular, where grievous abnormal activity is detect, such as theft, the audio visual alert serves to attract the attention of authorities while discouraging other would-be thieves of fraudsters from stealing."; ¶35 discloses that the transaction monitoring system may be used in self-checkout counters, ensuring the user purchasing a product will observe the alert). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the use of visual or audio alerts upon detecting abnormal checkout behavior, as disclosed by Darvish, into the method of item recognition, correspondence, and alerting disclosed by Goncalves, as modified by Wen, in order to halt the transaction and alert store staff upon abnormal checkout activity being detected.

Regarding claim 8, Goncalves, as modified by Wen, discloses the claimed invention except for wherein capturing, when the alert connected to the abnormality of the behavior of registering the commodity product to the accounting machine is generated, the user by a camera included in the accounting machine; and storing image data on the captured user and the alert in an associated manner in a storage. In related art, Darvish discloses both capturing the user by a camera when the alert is generated and storing the resulting image data in association with the alert (Darvish: ¶138 "For example, an alarm triggered by an alarm interface may be a local buzzer, a silent alarm to the manager or security, and indicator to cameras to start recording, or to camera systems to save/prevent discarding of recordings"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the recording of the user upon detection of abnormal activity, as disclosed by Darvish, into the method of item recognition, correspondence, and alerting disclosed by Goncalves, as modified by Wen, in order to prevent customer fraud by capturing the identity of the customer.
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Debucean (US 20210216785 A1) discloses a system for detecting scan and non-scan events in a self-checkout (SCO) process that includes a scanner for scanning objects and generating point of sale (POS) data; a video camera for generating a video of the scanning region; proximity sensors proximal to the video camera for defining an Area of Action (AoA), wherein the video camera starts capturing the scanning region when the objects enter the AoA and the POS data includes non-zero values; an artificial neural network (ANN) for receiving an image frame and generating one or more values, each indicating a probability of classification of the image frame into one or more classes respectively; and a processing unit for processing the POS data and the probabilities of the one or more classes to detect a correlation between video data and POS data, and to detect one of a scan event and a non-scan event in the image frame based on the correlation.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL KIM MAIDEN, whose telephone number is (703) 756-1264. The examiner can normally be reached Monday - Friday, 7:30 am - 5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL KIM MAIDEN/
Examiner, Art Unit 2665

/Stephen R Koziol/
Supervisory Patent Examiner, Art Unit 2665
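
The mechanism at the heart of this §103 rejection, comparing what the vision system recognized in the shopper's hand against what was actually registered at the accounting machine and alerting on any discrepancy (Goncalves ¶59), is easy to state concretely. A minimal sketch in Python; the event structure and all names are illustrative, not drawn from any cited reference:

    from dataclasses import dataclass

    @dataclass
    class ScanEvent:
        scanned_upc: str     # code registered at the accounting machine
        recognized_upc: str  # UPC of the item visually recognized in video

    def generate_alerts(events: list[ScanEvent]) -> list[str]:
        # A discrepancy between the scanned-UPC sequence and the visually
        # recognized sequence is treated as an exception (cf. Goncalves ¶59).
        alerts = []
        for i, event in enumerate(events):
            if event.scanned_upc != event.recognized_upc:
                alerts.append(
                    f"event {i}: registered {event.scanned_upc} but saw "
                    f"{event.recognized_upc}: possible mis-registration"
                )
        return alerts

    # Example: the second scan registers a different item than the one in hand.
    events = [ScanEvent("0001", "0001"), ScanEvent("0002", "0042")]
    print(generate_alerts(events))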

Prosecution Timeline

Aug 23, 2023: Application Filed
Oct 17, 2025: Non-Final Rejection (§103)
Jan 22, 2026: Response Filed
Mar 06, 2026: Final Rejection (§103), current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597290
THREE-DIMENSIONAL (3D) FACIAL FEATURE TRACKING FOR AUTOSTEREOSCOPIC TELEPRESENCE SYSTEMS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592058
DATA GENERATING METHOD, LEARNING METHOD, ESTIMATING METHOD, DATA GENERATING DEVICE, AND PROGRAM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12579654
INTERFACE DETECTION IN RECIPROCAL SPACE
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12579830
COMBINING BRIGHTFIELD AND FLUORESCENT CHANNELS FOR CELL IMAGE SEGMENTATION AND MORPHOLOGICAL ANALYSIS IN IMAGES OBTAINED FROM AN IMAGING FLOW CYTOMETER
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12561944
POINT CLOUD DATA PROCESSING APPARATUS, POINT CLOUD DATA PROCESSING METHOD, AND PROGRAM
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 93%
With Interview: 99% (+8.9%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 72 resolved cases by this examiner. Grant probability is derived from the career allow rate.
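
Since the card states its derivation, the headline projections can be reproduced in a few lines. A sketch under that stated assumption; treating the 99% "With Interview" figure as the base rate plus the interview lift, capped at 99%, is a guess on this editor's part, not something the page asserts:

    base = 67 / 72                            # career allow rate -> 93%
    with_interview = min(base + 0.089, 0.99)  # +8.9-point lift, assumed 99% cap
    print(f"{base:.0%} base, {with_interview:.0%} with interview")  # 93%, 99%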
