Prosecution Insights
Last updated: April 19, 2026
Application No. 18/785,691

WORKING ROBOT SYSTEM

Final Rejection §103
Filed
Jul 26, 2024
Examiner
WOOD, BLAKE ANDREW
Art Unit
3658
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Yamabiko Corporation
OA Round
2 (Final)
72%
Grant Probability
Favorable
3-4
OA Rounds
3y 0m
To Grant
88%
With Interview

Examiner Intelligence

Grants 72% — above average
72%
Career Allow Rate
102 granted / 142 resolved
+19.8% vs TC avg
Strong +17% interview lift
[Bar chart: allow rate without vs. with interview]
+16.7%
Interview Lift
resolved cases with interview
Typical timeline
3y 0m
Avg Prosecution
39 currently pending
Career history
181
Total Applications
across all art units
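
The headline figures in this card are plain ratios and percentage-point differences, so they can be sanity-checked directly. A minimal sketch follows; the card does not publish the with/without-interview case split, so those two counts are placeholders chosen only to reproduce the displayed 88% and +16.7-point figures.

```python
# Sanity-check of the examiner card above. Only the 102-granted /
# 142-resolved totals come from the card; the with-interview split is
# NOT shown on this page, so those counts are illustrative placeholders.

granted, resolved = 102, 142
career_allow_rate = granted / resolved          # 0.718 -> displayed "72%"

# Hypothetical split: assume 35 resolved cases involved an interview.
granted_i, resolved_i = 31, 35                  # placeholder counts
allow_with = granted_i / resolved_i             # ~0.886 -> displayed "88%"
allow_without = (granted - granted_i) / (resolved - resolved_i)  # ~0.664

# "Interview lift" is read here as a percentage-point difference between
# the with-interview rate and the career rate (~+16.7 points).
lift = allow_with - career_allow_rate

print(f"career allow rate: {career_allow_rate:.1%}")
print(f"with interview:    {allow_with:.1%}  (lift {lift:+.1%})")
```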

Statute-Specific Performance

§101
10.4%
-29.6% vs TC avg
§103
49.4%
+9.4% vs TC avg
§102
22.0%
-18.0% vs TC avg
§112
15.6%
-24.4% vs TC avg
Black line = Tech Center average estimate • Based on career data from 142 resolved cases
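
Each delta in the table is the examiner's per-statute rate minus the Tech Center baseline. A quick sketch, assuming the rates are shares of this examiner's 142 resolved cases; the TC baseline is back-solved from the displayed deltas (notably, all four deltas imply the same 40.0% estimate), not independently sourced.

```python
# Sketch reproducing the "vs TC avg" deltas in the table above. The exact
# metric definition and the 40% TC baseline are assumptions back-solved
# from the displayed numbers, not sourced from USPTO data.

examiner = {"§101": 0.104, "§103": 0.494, "§102": 0.220, "§112": 0.156}
TC_AVG = 0.400  # every displayed delta implies this same baseline

for statute, rate in examiner.items():
    print(f"{statute}: {rate:.1%} ({rate - TC_AVG:+.1%} vs TC avg)")
```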

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 13 February 2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

Claims 1 and 4-6 have been newly amended. Claims 3 and 8 have been newly canceled. No claims have been newly added. Claims 1-2, 4-7, and 9 remain pending in the present application. The previous objections to claims 4-6 and the previous 35 U.S.C. § 101 rejection of claims 1-7 and 9 have been withdrawn as a result of amendment.

Response to Arguments

Applicant's arguments filed 20 January 2026 have been fully considered but they are not persuasive.

Regarding claim 1, Applicant asserts that the previously applied prior art fails to teach each limitation of newly amended claim 1. Specifically, Applicant asserts that the prior art fails to teach at least the limitation of "wherein the controller acquires the self-position information output by the working robot at least at two points." Applicant notes that "[t]he Office Action acknowledges that the Murata publication does not disclose a working robot system having a controller that acquires self-position information of the working robot at least at two points. However, according to the Official Action, the Ohtomo publication makes up for this deficiency because paragraphs [0100] and [0103] teach acquisition of self-position information at least at two points, which can then be combined with captured images of a field in which the working robot is present."

In response, Applicant asserts that "paragraphs [0100] and [0103] of Ohtomo no such combination is possible because [sic], while Ohtomo teaches acquisition of position information at two points P1 and P2, the position information is acquired by a drone and there is no possibility of combining that information with images of a field in which drone is working [sic]. In Ohtomo, it is the drone, i.e., the self-position acquiring entity, that captures images. No images are captured of the space around the drone, much less of a "field" in the which the drone is working [sic], and therefore Ohtomo could not possibly have suggested the claimed combination of (i) self-position information [sic] acquired from a working robot, and [sic] (ii) a position of the working robot in a captured image [sic]. Since the drone is taking the images, it cannot appear in the images, and its position in the images cannot be determined."

Applicant further asserts that "[a]s explained in paragraph [0055]-[0056] of the Ohtomo publication, what acquires the self-position at two points (via GPS) is an unmanned aerial vehicle 2 equipped with a camera 7. In the present application, the working robot is a separate component form [sic] the imaging apparatus and is the subject being captured by the imaging apparatus [sic]. The drone of Ohtomo cannot be the subject of its own camera 7."
Further still, Applicant asserts that "as explained in paragraphs [0098] to [0108] of the Ohtomo publication, the reason why GPS coordinates captured by a GPS device 8 on the drone 2 are acquired at two positions is to calibrate a ground-based position measuring device 3, so that the drone can be controlled by either [sic] the GPS coordinates alone or alternatively by a ground base station 4 in which the position measuring device 3 is situated, i.e., to enable measurement taken by the position measuring device to be converted [sic] into GPS coordinates of the drone 2." Finally, Applicant points to [0107] and [0108] of Ohtomo, and concludes therefrom that "[t]his has nothing to do with the claimed use of GPS or "self-position information" to assign position information to a captured image, so that a working robot can be controlled relative to an object or target in the image [sic], and is not suggestive of modifying the system of Murata to achieve the desired control of a working robot relative to objects in the image."

The examiner respectfully disagrees. Regarding Applicant's assertion that "…no such combination is possible because … the position information is acquired by a drone and there is no possibility of combining that information with images of a field in which [the] drone is working," the examiner notes that Ohtomo is not being used for its teaching of image capturing of a field by a drone, as Murata already teaches a system including a drone that combines position information of a working vehicle with images of a field captured by the drone (see the 35 U.S.C. § 103 rejection of claim 1 below for further details). Rather, Ohtomo is used to teach wherein position information is captured at two points, as Murata only explicitly discloses the capturing of position information at a single point. Further, the examiner notes that Applicant has themselves admitted that Ohtomo does teach capturing position information at two points, as Applicant plainly states "…while Ohtomo teaches acquisition of position information at two points P1 and P2…."

Furthermore, in response to Applicant's assertion that "the working robot is a separate component form [sic] the imaging apparatus and is the subject captured by the imaging apparatus [sic] … [and] the drone of Ohtomo cannot be the subject of its own camera 7," the examiner again notes that Ohtomo was not used to teach the image capturing of a field by a drone, as Murata already teaches such an arrangement. Hence, Applicant's arguments are not persuasive.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “an imaging apparatus configured to capture an image” in claim 1.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, and 4-7 are rejected under 35 U.S.C. 103 as being unpatentable over Murata (JP2018170991A), hereafter Murata, and further in view of Ohtomo (US 20150220085 A1), hereafter Ohtomo.

Regarding claim 1, Murata discloses a working robot system comprising:

a working robot configured to output a self-position on a field (0019, within the field H1, the unmanned flying device 70 communicates wirelessly with the tractor 1, receives the driving information of the tractor 1 along with position information and time information);

an imaging apparatus configured to capture an image of the field (0048, the unmanned flying device 70 comprises a camera 71 that photographs the field H1); and

a controller configured to acquire the image of the field captured by the imaging apparatus and the self-position information output by the working robot (0063, when communication is established with the unmanned tractor 1 that is running autonomously, the unmanned flying device 70 notifies the unmanned tractor 1 of a change in the work route when the monitoring control unit 76 detects an obstacle W1 on the work route of the unmanned tractor 1 from images captured by the camera 71, at this time the monitoring control unit 75 calculates the position information of the obstacle W1 from the relative position between the obstacle W1 and the unmanned tractor 1 calculated from the captured image and the position information of the unmanned tractor 1 received by the second wireless communication interface 73, and transmits the position information of the obstacle W1 to the unmanned tractor 1 along with a notification of a change in the work route, this allows the unmanned tractor 1 to confirm the position of the obstacle W1 on the work route and change the work route to avoid collision with the obstacle W1, thereby allowing the unmanned tractor 1 to continue autonomous driving);

wherein, based on a position of the working robot on the captured image and the self-position information output by the working robot, the controller assigns position information to the remaining parts of the captured image (0063, as quoted above); and

wherein the controller controls the autonomous travel of the working robot based on the position information (0063, as quoted above).

Murata fails to explicitly disclose, however, wherein the controller acquires self-position information of the working robot at least at two points.

Ohtomo, however, in an analogous field of endeavor, does teach wherein the controller acquires self-position information of the working robot at least at two points (0100, Step 01, A position as required during the flight of the flying vehicle system 2 is set as a point P1 and GPS coordinate A1 of the point P1 is acquired by the GPS device 8. The GPS coordinate A1 thus acquired is transmitted to the ground base station 4 via the remote controller 5; 0103, Step 03, The flying vehicle system 2 is moved to a point P2 at another position as required. Here, the moving distance is calculated based on the coordinates of the point P1 and the point P2, and the length of the moving distance is determined by taking the flying height of the flying vehicle system 2 and the accuracy needed for the measurement into consideration).

Murata and Ohtomo are analogous because they are in a similar field of endeavor, e.g., vehicle localization systems. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention, with a reasonable expectation of success, to have included the acquiring of the self-position information at multiple points of Ohtomo in order to provide a means of increasing the accuracy of the self-position information.
The motivation to combine is to ensure that the position information is relevant to the robot.

Regarding claim 2, the combination of Murata and Ohtomo teaches the working robot system according to claim 1, and Murata further teaches wherein the self-position information is position information of an actual coordinate output by a self-position detector of the working robot (0018, the reference station 60 calculates position information from the satellite signal and the vehicle signal using the reference station communication device 62 by using the RTK positioning method or the like, and transmits it to the tractor 1 via the wireless communication antenna 64. The tractor 1 corrects the satellite positioning information measured by the positioning antenna 6 using correction information transmitted from the reference station 60 to obtain the current position information of the tractor 1, e.g., latitude information and longitude information).

Regarding claim 4, the combination of Murata and Ohtomo teaches the working robot system according to claim 1, and Ohtomo further teaches wherein position information at the two points are acquired as the working robot moves (0100 and 0103, as quoted above). Murata and Ohtomo are analogous because they are in a similar field of endeavor, e.g., vehicle localization systems. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention, with a reasonable expectation of success, to have included the position information at the two points being acquired while moving of Ohtomo in order to provide a means of capturing position information at multiple points. The motivation to combine is to ensure that the position information is relevant to the robot.

Regarding claim 5, the combination of Murata and Ohtomo teaches the working robot system according to claim 1, and Ohtomo further teaches wherein the self-position information at the at least two points are acquired at different times (0100 and 0103, as quoted above; Examiner's note: if the position information is taken at two different points spaced apart in space, they must necessarily be taken at different times if the position is based off of the position of a single working vehicle).
Murata and Ohtomo are analogous because they are in a similar field of endeavor, e.g., vehicle localization systems. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention, with a reasonable expectation of success, to have included the position information at the two points being acquired at different times of Ohtomo in order to provide a means of capturing position information at multiple points. The motivation to combine is to ensure that the position information is relevant to the robot.

Regarding claim 6, the combination of Murata and Ohtomo teaches the working robot system according to claim 1, and Ohtomo further teaches wherein the self-position information at the at least two points are acquired from one working robot or different working robots (0100 and 0103, as quoted above). Murata and Ohtomo are analogous because they are in a similar field of endeavor, e.g., vehicle localization systems. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention, with a reasonable expectation of success, to have included the position information at the two points being acquired by one working robot of Ohtomo in order to provide a means of capturing position information at multiple points. The motivation to combine is to ensure that the position information is relevant to the robot.

Regarding claim 7, the combination of Murata and Ohtomo teaches the working robot system according to claim 1, and Ohtomo further teaches wherein an imaging condition of the imaging apparatus can be adjusted (0056, A control box 31 is provided on a lower end of the shaft 6. Inside the control box 31, the control unit 35 is accommodated. A camera holder 32 is disposed on the lower surface of the control box 31, and the camera 7 is provided on the camera holder 32 via a horizontal axis 33. The camera 7 is rotatable around the horizontal shaft 33 as the center and an image pickup direction changing motor (not shown) for rotating the camera 7 is installed via the horizontal shaft 33. A reference posture of the camera 7 is maintained with an optical axis in vertical direction, and the image pickup direction changing motor rotates the camera 7 at an angle as required with respect to the vertical direction according to an instruction from the control unit 35). Murata and Ohtomo are analogous because they are in a similar field of endeavor, e.g., vehicle localization systems. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention, with a reasonable expectation of success, to have included the imaging condition adjustment of Ohtomo in order to provide a means of changing imaging conditions.
The motivation to combine is to ensure that the imaging device is able to obtain information relevant to the position of the vehicle.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Murata in view of Ohtomo, and further in view of Uemura (US 20190113928 A1), hereafter Uemura.

Regarding claim 9, the combination of Murata and Ohtomo teaches the working robot system according to claim 1, but fails to explicitly teach wherein the controller outputs the captured image having the position information to a display device. Uemura, however, in an analogous field of endeavor, does teach wherein the controller outputs the captured image having the position information to a display device (0019-0024, a work area determination program for an autonomous traveling work vehicle, the program comprising: an image acquisition function for acquiring a photographic image photographed by a photographing device of a predetermined area including a work area, a photographing position acquisition function for acquiring position information indicative of a position where the photographic image was acquired, a map generation function for generating a map based on the photographic image and the position information, a displaying function for displaying the map in the displaying section, and a work area determination function for determining the work area where the autonomous traveling work vehicle is to work, based on an area designation for the map displayed by the displaying function). Murata, Ohtomo, and Uemura are analogous because they are in a similar field of endeavor, e.g., work vehicle localization systems. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention, with a reasonable expectation of success, to have modified Murata to have included the outputting of the image to a display device of Uemura in order to provide a means for a user to view the captured image. The motivation to combine is to allow a user to monitor the environment of the working vehicle.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kinoshita (US 20230322423 A1, having an effective filing date of at least 27 December 2021) teaches an agricultural work system including an agricultural ground work vehicle and an unmanned aerial vehicle.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BLAKE A WOOD whose telephone number is (571)272-6830. The examiner can normally be reached M-F, 8:00 AM to 4:30 PM Eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thomas Worden, can be reached at (571) 272-4876. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BLAKE A WOOD/
Examiner, Art Unit 3658
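
To make the dispute concrete: claim 1's controller matches the robot's self-reported positions at two points against the robot's pixel positions in the drone's image at those same moments, then assigns world coordinates to the rest of the frame (e.g., to an obstacle) and steers the robot accordingly. Below is a minimal sketch of that two-point alignment, assuming a nadir-looking camera over a flat field so that a 2-D similarity transform suffices; all names and coordinates are illustrative, not taken from the application or the cited references.

```python
# Hedged sketch of the claimed two-point georeferencing. Assumes a nadir
# camera over a flat field, and image axes oriented like the ground axes
# (an orientation-reversing camera would need the conjugate form
# world = a * conj(pixel) + b). Two pixel/world correspondences determine
# a 2-D similarity transform (uniform scale + rotation + translation)
# exactly. All coordinates are illustrative.

def fit_similarity(p1, w1, p2, w2):
    """Solve world = a*pixel + b over complex numbers from two pairs."""
    p1, p2 = complex(*p1), complex(*p2)
    w1, w2 = complex(*w1), complex(*w2)
    a = (w2 - w1) / (p2 - p1)     # scale and rotation
    b = w1 - a * p1               # translation
    return a, b

def pixel_to_world(pix, a, b):
    w = a * complex(*pix) + b
    return (w.real, w.imag)

# Robot's self-reported positions (e.g., local ENU metres) at two points,
# and where the robot appears in the drone image at those moments:
a, b = fit_similarity(p1=(120, 340), w1=(10.0, 52.0),
                      p2=(610, 295), w2=(58.9, 57.1))

# Any other pixel in the frame -- an obstacle, a row end -- now gets world
# coordinates, which is what the route controller consumes:
print(pixel_to_world((400, 100), a, b))
```

Two correspondences pin down scale, rotation, and translation exactly, which is one way to read why the "at least at two points" limitation matters: a single reported position, as in Murata alone, leaves the scale and rotation of the pixel-to-world mapping underdetermined.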

Prosecution Timeline

Jul 26, 2024
Application Filed
Oct 15, 2025
Non-Final Rejection — §103
Jan 20, 2026
Response Filed
Apr 01, 2026
Final Rejection — §103 (current)
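
The final action above sets a three-month shortened statutory period running from the Apr 01, 2026 mailing date shown in this timeline. A short sketch of the resulting dates, computed from the rules as stated in the action (3-month SSP, extensions under 37 CFR 1.136(a), hard 6-month cap); this is an illustration, not docketing advice.

```python
# Reply-deadline arithmetic for this case's final rejection, per the rules
# quoted in the Office Action above.
from datetime import date

def add_months(d: date, months: int) -> date:
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

mailed = date(2026, 4, 1)                 # Final Rejection mailing date
ssp = add_months(mailed, 3)               # shortened statutory period
statutory_max = add_months(mailed, 6)     # absolute 6-month cutoff

print("reply due (no extension):", ssp)            # 2026-07-01
print("last day with extensions:", statutory_max)  # 2026-10-01

# Two-month rule: if the first reply is filed by this date and the advisory
# action mails after the SSP, extension fees run from the advisory action's
# mailing date instead of from the SSP.
print("file by for advisory-date benefit:", add_months(mailed, 2))  # 2026-06-01
```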

Precedent Cases

Applications granted by this examiner involving similar technology

Patent 12600269
Vehicle and Method for Adjusting a Position of a Display in the Vehicle
2y 5m to grant • Granted Apr 14, 2026
Patent 12588955
COMPUTER-ASSISTED SURGERY SYSTEM
2y 5m to grant • Granted Mar 31, 2026
Patent 12591256
WORK UNIT REPLACEMENT SYSTEM AND WORK UNIT REPLACEMENT STATION
2y 5m to grant • Granted Mar 31, 2026
Patent 12591255
MOBILE ROBOT AND CONTROL METHOD THEREFOR
2y 5m to grant • Granted Mar 31, 2026
Patent 12569985
RUNTIME ASSESSMENT OF SUCTION GRASP FEASIBILITY
2y 5m to grant • Granted Mar 10, 2026
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Powered by AI — typically takes 5-10 seconds

Prosecution Projections

3-4
Expected OA Rounds
72%
Grant Probability
88%
With Interview (+16.7%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 142 resolved cases by this examiner. Grant probability derived from career allow rate.
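
For reference, the with-interview figure is consistent with adding the interview lift directly to the base rate. A one-liner, assuming that additive model; the tool's actual formula is not documented on this page.

```python
# Assumed additive model for the projection card: base grant probability
# (career allow rate) plus the interview lift, in percentage points.
base = 102 / 142                 # 71.8%, displayed as "72%"
with_interview = base + 0.167    # interview lift from the examiner stats
print(f"{with_interview:.1%}")   # 88.5%, displayed as "88%"
```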
