Prosecution Insights
Last updated: April 19, 2026
Application No. 16/553,693

METHODS AND APPARATUSES FOR COLLECTION OF ULTRASOUND DATA

Status: Non-Final OA (§103), flagged At Risk
Filed: Aug 28, 2019
Examiner: YIP, JACK
Art Unit: 3715
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: BFLY Operations, Inc.
OA Round: 7 (Non-Final)
Grant Probability: 33% (70% with interview)
Expected OA Rounds: 7-8
Median Time to Grant: 4y 1m

Examiner Intelligence

Career Allow Rate: 33% (229 granted / 702 resolved; -37.4% vs TC avg)
Interview Lift: +37.6% allow-rate gain on resolved cases with an interview
Avg Prosecution: 4y 1m typical timeline; 51 applications currently pending
Total Applications: 753 across all art units
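These headline figures are plain ratios over the examiner's resolved cases. As a minimal sketch of how they could be recomputed from raw docket records (the record fields and function names here are hypothetical, not this dashboard's actual schema):

    from dataclasses import dataclass

    @dataclass
    class ResolvedCase:
        granted: bool        # application issued as a patent
        had_interview: bool  # at least one examiner interview was held

    def allow_rate(cases: list[ResolvedCase]) -> float:
        """Share of resolved cases that were granted
        (e.g., 229 / 702 = 32.6%, displayed as 33%)."""
        return sum(c.granted for c in cases) / len(cases)

    def interview_lift(cases: list[ResolvedCase]) -> float:
        """Allow-rate gain, in percentage points, for cases with an
        interview versus cases without one (shown above as +37.6%)."""
        with_iv = [c for c in cases if c.had_interview]
        without_iv = [c for c in cases if not c.had_interview]
        return 100 * (allow_rate(with_iv) - allow_rate(without_iv))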

Statute-Specific Performance

§101: 22.8% (-17.2% vs TC avg)
§103: 42.4% (+2.4% vs TC avg)
§102: 15.0% (-25.0% vs TC avg)
§112: 12.4% (-27.6% vs TC avg)
Tech Center average is an estimate • Based on career data from 702 resolved cases
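Each delta is simply the examiner's per-statute rate minus the Tech Center average for that statute. A quick check (a sketch; the dictionaries only restate the figures shown above) shows that all four deltas imply the same TC baseline of 40.0%:

    # Restating the chart's figures: examiner rate and delta vs TC average.
    examiner_rate = {"§101": 22.8, "§103": 42.4, "§102": 15.0, "§112": 12.4}
    delta_vs_tc   = {"§101": -17.2, "§103": 2.4, "§102": -25.0, "§112": -27.6}

    for statute, rate in examiner_rate.items():
        tc_avg = rate - delta_vs_tc[statute]   # rate = TC avg + delta
        print(f"{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}%")
    # Every statute implies a TC average of 40.0%, consistent with a single
    # Tech Center baseline estimate behind the original chart.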

Office Action

Basis: §103 (Non-Final)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 6/20/2025 has been entered. Claims 22 – 39 are pending. Claims 1 – 21 have been cancelled.

Claim Objections

Claim 31 is objected to because of the following informalities: a comma (,) is missing from the step: “transmitting from the operator processing device to an instructor processing device, the received ultrasound image data”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 22 – 39 are rejected under 35 U.S.C. 103 as being unpatentable over Rothberg et al. (US 2017/0360401 A1) in view of Jung et al. (US 2015/0005630 A1).

Re claims 22, 31:

Rothberg teaches 22. A system (Rothberg, Abstract) comprising: an ultrasound imaging device having a body and a transducer head (Rothberg, fig. 1), an operator processing device including processing circuitry configured to implement a trained statistical model for identifying a pose of the ultrasound imaging device (Rothberg, [0006], “the App may leverage state-of-the-art machine learning technology, such as deep learning. In these embodiments, the App may employ a trained model, such as a trained neural network, that is configured to generate instructions to provide to the operator”; machine learning and deep learning are trained statistical models), and transmit circuitry configured to transmit data representing ultrasound images and the pose (Rothberg, [0003], “learn how to appropriately position an ultrasound device on a subject to capture an ultrasound image in various anatomical views”; [0006]), an instructor processing device configured to receive from a user, a selection of a subset of ultrasound images from data transmitted from the operator processing device, and transmit an indication of the selection to the operator processing device (Rothberg, [0210], “FIGS. 7A-7H show an example user interface for a diagnostic application that is configured to assist an operator determine whether a subject is experiencing heart failure. The diagnostic application may be designed to be used by, for example, a health care professional such as a doctor, a nurse, or a physician assistant”; [0218]; [0224]; [0260], “The trained clinician may add a qualitative label to each of the sample patient images”); wherein the operator processing device is configured to implement the trained statistical model to determine a target pose for the ultrasound imaging device which corresponds to the pose which produced the ultrasound images from the selected subset (Rothberg, [0006], “the trained model may recognize these medically irrelevant images and generate an instruction regarding how the operator should reposition the ultrasound device to capture a medically relevant ultrasound image”; [0023]; [0145], “the computing device may analyze a captured ultrasound image using deep learning techniques to determine whether the ultrasound image contains the target anatomical view … the computing device may instruct the operator how to reposition the ultrasound device (e.g., “MOVE UP,” “MOVE LEFT,” “MOVE RIGHT,” “ROTATE CLOCKWISE,” “ROTATE COUNTER-CLOCKWISE,” or “MOVE DOWN”) to capture an ultrasound image that contains the target anatomical view”), and determine with the statistical model the current pose of the ultrasound imaging device and generate an instruction for a user to move the ultrasound imaging device from the current pose to the target pose (Rothberg, [0006]; [0023]; [0145]; [0164]; [0167]; [0185], “computing device 104 may use a machine learning technique (such as a deep learning technique) to directly map the ultrasound image 110 to an output to provide to the user such as an indication of proper positioning or an instruction to reposition the ultrasound device 102 (e.g., instruction 108)”; [0229], “identifying an anatomical view contained in the ultrasound image (e.g., using deep learning techniques) and map the identified anatomical view to a position on the subject. The target position may be identified by, for example, mapping the target anatomical view to a position on the subject”).

Rothberg teaches 31. A method of instructing a user (Rothberg, Abstract) comprising: imaging with an ultrasound device, a patient at a series of positions (Rothberg, [0153], “The instructions may be pre-recorded and determined by comparing the current positioning of the ultrasound device relative to one or more prior positions of the ultrasound device”), receiving with an operator processing device, ultrasound image data from the ultrasound device at each position in the series of positions (Rothberg, [0026], “the guidance plan comprises a sequence of instructions to guide the operator of the ultrasound device to move the ultrasound device to a target location”; [0154], “generate a guidance plan for how to guide the operator to move the ultrasound device from an initial position on the subject to a target position on the subject. The guidance plan may comprise a series of simple instructions or steps (e.g., “MOVE UP,” “MOVE DOWN,” “MOVE LEFT,” or “MOVE RIGHT”) to guide the operator from the initial position to the target position”; fig. 16, “transducer array(s)”; [0320]), determining, with a statistical model operated by the operator processing device, the pose of the ultrasound device which produced the images of the selected subset as a target pose of the ultrasound device based on the indication of the selected subset (Rothberg, [0006], “the App may leverage state-of-the-art machine learning technology, such as deep learning. In these embodiments, the App may employ a trained model, such as a trained neural network, that is configured to generate instructions to provide to the operator”; machine learning and deep learning are trained statistical models; [0003], “learn how to appropriately position an ultrasound device on a subject to capture an ultrasound image in various anatomical views”; [0006]), determining with the statistical model, a current pose of the ultrasound device and comparing the current pose of the ultrasound device to the target pose for the ultrasound device (Rothberg, [0006]; [0023]; [0145]; [0164]; [0167]; [0185], “computing device 104 may use a machine learning technique (such as a deep learning technique) to directly map the ultrasound image 110 to an output to provide to the user such as an indication of proper positioning or an instruction to reposition the ultrasound device 102 (e.g., instruction 108)”; [0229], “identifying an anatomical view contained in the ultrasound image (e.g., using deep learning techniques) and map the identified anatomical view to a position on the subject. The target position may be identified by, for example, mapping the target anatomical view to a position on the subject”), and generating with the operator processing device, an instruction for the user to move the ultrasound device from the current pose of the ultrasound device to the target pose (Rothberg, [0006], “the trained model may recognize these medically irrelevant images and generate an instruction regarding how the operator should reposition the ultrasound device to capture a medically relevant ultrasound image”; [0023]; [0145], “the computing device may analyze a captured ultrasound image using deep learning techniques to determine whether the ultrasound image contains the target anatomical view … the computing device may instruct the operator how to reposition the ultrasound device (e.g., “MOVE UP,” “MOVE LEFT,” “MOVE RIGHT,” “ROTATE CLOCKWISE,” “ROTATE COUNTER-CLOCKWISE,” or “MOVE DOWN”) to capture an ultrasound image that contains the target anatomical view”).

Rothberg does not explicitly disclose an instructor processing device configured to receive from a user, a selection of a subset of ultrasound images from data transmitted from the operator processing device, and transmit an indication of the selection to the operator processing device; instead, Rothberg teaches that a diagnostic application may be designed to be used by, for example, a health care professional such as a doctor, a nurse, or a physician assistant (Rothberg, [0210], “FIGS. 7A-7H show an example user interface for a diagnostic application that is configured to assist an operator determine whether a subject is experiencing heart failure. The diagnostic application may be designed to be used by, for example, a health care professional such as a doctor, a nurse, or a physician assistant”; [0218]; [0224]; [0260], “The trained clinician may add a qualitative label to each of the sample patient images”).

Jung et al. (US 2015/0005630 A1) teaches a method of sharing information about an ultrasound image including acquiring an ultrasound image of an object; identifying a sharing level associated with an external device for sharing the ultrasound image; and transmitting ultrasound information about the ultrasound image to the external device according to the identified sharing level (Jung, Abstract).

Jung teaches (claim 1) an instructor processing device configured to receive from a user, a selection of a subset of ultrasound images from data transmitted from the operator processing device, and transmit an indication of the selection to the operator processing device (Jung, [0274] – [0275], “The second device 2000 may control the ultrasound apparatus 3000 to display a certain ultrasound image, and thus, the medical expert may select and display a desired ultrasound image from the thumbnail images or review window displayed on the ultrasound apparatus 3000, and then, check the selected ultrasound image”; [0352], “the user of the ultrasound apparatus may communicate with the medical expert about the ultrasound images in real-time, the doctor may provide an expeditious feedback and may direct appropriate operations … the doctor located in a remote location may check the image and remotely direct the ultrasound apparatus to re-take the image, change the scan direction, take a different image, perform calibration, change imaging parameters, etc. (operations S2420 and S2422) … the doctor may control the scan direction and movement of the ultrasound probe, by operating the remote controller provided via the doctor's device”); and (claim 2) transmitting from the operator processing device to an instructor processing device, the received ultrasound image data displaying, on the instructor processing device, the ultrasound image data as a series of ultrasound images, recording with the instructor processing device, a selection from a user of a subset of the series of ultrasound images (Jung, [0274] – [0275], “The second device 2000 may control the ultrasound apparatus 3000 to display a certain ultrasound image, and thus, the medical expert may select and display a desired ultrasound image from the thumbnail images or review window displayed on the ultrasound apparatus 3000, and then, check the selected ultrasound image”; [0352], “the user of the ultrasound apparatus may communicate with the medical expert about the ultrasound images in real-time, the doctor may provide an expeditious feedback and may direct appropriate operations … the doctor located in a remote location may check the image and remotely direct the ultrasound apparatus to re-take the image, change the scan direction, take a different image, perform calibration, change imaging parameters, etc. (operations S2420 and S2422) … the doctor may control the scan direction and movement of the ultrasound probe, by operating the remote controller provided via the doctor's device”), transmitting to the operator processing device from the instructor processing device, an indication of the selected subset (Jung, [0262], “the ultrasound apparatus 3000 may display the first pointer 1630 of the ultrasound apparatus 3000 and the second pointer 1620 of the second device 2000 on the ultrasound image 1600 displayed on the ultrasound apparatus 3000. Therefore, the technician and the doctor may identify areas of interest of each other by locating the pointer on a point of interest via the bi-directional chatting, in real time”; [0352], “the user of the ultrasound apparatus may communicate with the medical expert about the ultrasound images in real-time, the doctor may provide an expeditious feedback and may direct appropriate operations … the doctor located in a remote location may check the image and remotely direct the ultrasound apparatus to re-take the image, change the scan direction, take a different image, perform calibration, change imaging parameters, etc. (operations S2420 and S2422) … the doctor may control the scan direction and movement of the ultrasound probe, by operating the remote controller provided via the doctor's device”).

Therefore, in view of Jung, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method described in Rothberg, by reviewing ultrasound images by remote physician as taught by Jung, so that the medical expert may select and display a desired ultrasound image from the thumbnail images or review window displayed on the ultrasound apparatus, and then, check the selected ultrasound image. The medical expert may expand another ultrasound image while reviewing the selected ultrasound image. The medical expert may control the ultrasound apparatus remotely to store the ultrasound image again, re-take the image, or may directly correct the report generated by the technician (Jung, [0274] – [0275]; [0352]).

Re claims 23 – 24:

23. The system of claim 22 wherein; the pose of the ultrasound imaging device comprises pose data including a relative angle and a position of the ultrasound imaging device (Rothberg, [0269], “The set of instructions may include: (1) tilt the ultrasound device inferomedially, (2) rotate the ultrasound device counterclockwise, (3) rotate the ultrasound device clockwise, (4) move the ultrasound device one intercostal space down, (5) move the ultrasound device one intercostal space up, and (6) slide the ultrasound device medially”; [0236], “configured to detect movement (e.g., accelerometers, gyroscopes, compasses, and/or inertial measurement units)”).

24. The system of claim 23 wherein; the relative angle and position of the ultrasound imaging device may be relative to the operator processing device or to a body of a patient (Rothberg, [0269], “The set of instructions may include: (1) tilt the ultrasound device inferomedially, (2) rotate the ultrasound device counterclockwise, (3) rotate the ultrasound device clockwise, (4) move the ultrasound device one intercostal space down, (5) move the ultrasound device one intercostal space up, and (6) slide the ultrasound device medially”; [0236], “configured to detect movement (e.g., accelerometers, gyroscopes, compasses, and/or inertial measurement units)”).

Re claim 25:

25. The system of claim 23 wherein the pose data comprises numerical values representing the relative angle and position of the ultrasound imaging device (Rothberg, [0269], “The set of instructions may include: (1) tilt the ultrasound device inferomedially, (2) rotate the ultrasound device counterclockwise, (3) rotate the ultrasound device clockwise, (4) move the ultrasound device one intercostal space down, (5) move the ultrasound device one intercostal space up, and (6) slide the ultrasound device medially”; [0236], “configured to detect movement (e.g., accelerometers, gyroscopes, compasses, and/or inertial measurement units)”; sensors output digital values).

Re claim 26:

26. The system of claim 22 wherein the trained statistical model is trained to be configured to identify the pose of an ultrasound imaging device based on ultrasound images wherein a pose of an ultrasound imaging device which produced the image is identified (Rothberg, [0006]; [0023]; [0145]; [0164]; [0167]; [0185], “computing device 104 may use a machine learning technique (such as a deep learning technique) to directly map the ultrasound image 110 to an output to provide to the user such as an indication of proper positioning or an instruction to reposition the ultrasound device 102 (e.g., instruction 108)”; [0229], “identifying an anatomical view contained in the ultrasound image (e.g., using deep learning techniques) and map the identified anatomical view to a position on the subject. The target position may be identified by, for example, mapping the target anatomical view to a position on the subject”).

Re claim 27:

27. The system of claim 22 wherein the trained statistical model is trained to identify the pose of the ultrasound imaging device based on images on an ultrasound imaging device in contact with a patient, wherein a pose of the ultrasound device is identified (Rothberg, [0006], “the App may leverage state-of-the-art machine learning technology, such as deep learning. In these embodiments, the App may employ a trained model, such as a trained neural network, that is configured to generate instructions to provide to the operator”; machine learning and deep learning are trained statistical models; [0003], “learn how to appropriately position an ultrasound device on a subject to capture an ultrasound image in various anatomical views”; [0006]).

Re claim 28:

28. The system of claim 22 wherein the instruction comprises a graphical or textual element displayed on the operator processing device (Rothberg, figs. 1, 7A – 7F; fig. 8D).

Re claim 29:

29. The system of claim 22 wherein the instruction comprises instructing a user to move the ultrasound imaging device along a path across a body of a patient where the path includes the target pose (Rothberg, figs. 1, 7A – 7F; fig. 8D; [0118]; [0145]).

Re claim 30:

30. The system of claim 22 wherein the instruction comprises instructing the user to rotate, tilt, and/or translate the ultrasound imaging device (Rothberg, figs. 1, 7A – 7F; fig. 8D; [0118]; [0145]).

Re claim 32:

32. The method of claim 31 wherein the pose of the ultrasound device comprises the relative angle and position of the ultrasound device (Rothberg, [0269], “The set of instructions may include: (1) tilt the ultrasound device inferomedially, (2) rotate the ultrasound device counterclockwise, (3) rotate the ultrasound device clockwise, (4) move the ultrasound device one intercostal space down, (5) move the ultrasound device one intercostal space up, and (6) slide the ultrasound device medially”; [0236], “configured to detect movement (e.g., accelerometers, gyroscopes, compasses, and/or inertial measurement units)”).

Re claim 33:

33. The method of claim 32 wherein the relative angle and position of the ultrasound device may be relative to the operator processing device or to the body of the patient (Rothberg, [0269], “The set of instructions may include: (1) tilt the ultrasound device inferomedially, (2) rotate the ultrasound device counterclockwise, (3) rotate the ultrasound device clockwise, (4) move the ultrasound device one intercostal space down, (5) move the ultrasound device one intercostal space up, and (6) slide the ultrasound device medially”; [0236], “configured to detect movement (e.g., accelerometers, gyroscopes, compasses, and/or inertial measurement units)”).

Re claim 34:

34. The method of claim 32 wherein the pose data comprises numerical values representing the relative angle and position of the ultrasound device (Rothberg, [0269], “The set of instructions may include: (1) tilt the ultrasound device inferomedially, (2) rotate the ultrasound device counterclockwise, (3) rotate the ultrasound device clockwise, (4) move the ultrasound device one intercostal space down, (5) move the ultrasound device one intercostal space up, and (6) slide the ultrasound device medially”; [0236], “configured to detect movement (e.g., accelerometers, gyroscopes, compasses, and/or inertial measurement units)”; sensors output digital values).

Re claim 35:

35. The method of claim 31 wherein the trained statistical model is trained to identify the pose of the ultrasound imaging device based on ultrasound images wherein the pose data of an ultrasound device which produced the image is identified (Rothberg, [0006]; [0023]; [0145]; [0164]; [0167]; [0185], “computing device 104 may use a machine learning technique (such as a deep learning technique) to directly map the ultrasound image 110 to an output to provide to the user such as an indication of proper positioning or an instruction to reposition the ultrasound device 102 (e.g., instruction 108)”; [0229], “identifying an anatomical view contained in the ultrasound image (e.g., using deep learning techniques) and map the identified anatomical view to a position on the subject. The target position may be identified by, for example, mapping the target anatomical view to a position on the subject”).

Re claim 36:

36. The method of claim 31 wherein the trained statistical model is trained to identify the pose of the ultrasound device based on images on an ultrasound device in contact with a patient, wherein the pose data of the ultrasound device is identified (Rothberg, [0006], “the App may leverage state-of-the-art machine learning technology, such as deep learning. In these embodiments, the App may employ a trained model, such as a trained neural network, that is configured to generate instructions to provide to the operator”; machine learning and deep learning are trained statistical models; [0003], “learn how to appropriately position an ultrasound device on a subject to capture an ultrasound image in various anatomical views”; [0006]).

Re claim 37:

37. The method of claim 31 wherein the instruction comprises a graphical or textual element displayed on the operator processing device (Rothberg, figs. 1, 7A – 7F; fig. 8D).

Re claim 38:

38. The method of claim 31 wherein the instruction comprises instructing a user to move the ultrasound device along a path across the body of a patient where the path includes the target pose (Rothberg, figs. 1, 7A – 7F; fig. 8D; [0118]; [0145]).

Re claim 39:

39. The method of claim 31 wherein the instruction comprises instructing the user to rotate, tilt, and/or translate the ultrasound device (Rothberg, figs. 1, 7A – 7F; fig. 8D; [0118]; [0145]).

Response to Arguments

Applicant’s arguments with respect to claim(s) 22 – 39 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACK YIP whose telephone number is (571)270-5048. The examiner can normally be reached Monday thru Friday; 9:00 AM - 5:00 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, XUAN THAI, can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACK YIP/
Primary Examiner, Art Unit 3715
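To make the disputed limitation concrete: the §103 mapping above reads Rothberg's guidance loop onto the claimed step of comparing the current probe pose to a target pose and issuing a repositioning instruction. Below is a minimal sketch of that loop, assuming a simplified pose representation; all names, the coordinate convention, and the thresholds are hypothetical and come from neither reference:

    from dataclasses import dataclass

    @dataclass
    class Pose:
        x: float      # lateral position on the patient
        y: float      # longitudinal position
        angle: float  # probe rotation in degrees

    def guidance_instruction(current: Pose, target: Pose, tol: float = 0.5) -> str:
        """Map the current-vs-target pose delta to one of the simple
        repositioning instructions quoted from Rothberg [0145]/[0154]."""
        if abs(target.angle - current.angle) > tol:
            return "ROTATE CLOCKWISE" if target.angle > current.angle \
                else "ROTATE COUNTER-CLOCKWISE"
        if abs(target.y - current.y) > tol:
            return "MOVE UP" if target.y > current.y else "MOVE DOWN"
        if abs(target.x - current.x) > tol:
            return "MOVE RIGHT" if target.x > current.x else "MOVE LEFT"
        return "HOLD"  # within tolerance of the target pose (hypothetical)

In the pending claims, the target pose is the pose that produced the instructor-selected image subset, so the same loop would be seeded by the instructor's selection rather than by a preset anatomical view.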

Prosecution Timeline

Aug 28, 2019
Application Filed
Apr 20, 2021
Non-Final Rejection — §103
Oct 04, 2021
Examiner Interview Summary
Oct 04, 2021
Applicant Interview (Telephonic)
Oct 25, 2021
Response Filed
Jan 29, 2022
Final Rejection — §103
May 03, 2022
Request for Continued Examination
May 08, 2022
Response after Non-Final Action
Jun 18, 2022
Non-Final Rejection — §103
Aug 04, 2022
Applicant Interview (Telephonic)
Aug 04, 2022
Examiner Interview Summary
Sep 23, 2022
Response Filed
Dec 30, 2022
Final Rejection — §103
Mar 01, 2023
Response after Non-Final Action
Mar 10, 2023
Response after Non-Final Action
Apr 04, 2023
Request for Continued Examination
Apr 11, 2023
Response after Non-Final Action
Sep 25, 2023
Non-Final Rejection — §103
Jan 29, 2024
Response Filed
May 16, 2024
Final Rejection — §103
Nov 20, 2024
Notice of Allowance
Jun 20, 2025
Request for Continued Examination
Jun 24, 2025
Response after Non-Final Action
Dec 12, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12588859
SYSTEM AND METHOD FOR INTERACTING WITH HUMAN BRAIN ACTIVITIES USING EEG-FNIRS NEUROFEEDBACK
2y 5m to grant • Granted Mar 31, 2026
Patent 12592160
System and Method for Virtual Learning Environment
2y 5m to grant • Granted Mar 31, 2026
Patent 12558290
BLOOD PRESSURE LOWERING TRAINING DEVICE
2y 5m to grant • Granted Feb 24, 2026
Patent 12525140
SYSTEMS AND METHODS FOR PROGRAM TRANSMISSION
2y 5m to grant • Granted Jan 13, 2026
Patent 12512012
SYSTEM FOR EVALUATING RADAR VECTORING APTITUDE
2y 5m to grant • Granted Dec 30, 2025
Based on the examiner's 5 most recent grants. Study what changed in each case to get past this examiner.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 33% (70% with interview; a +37.6-point lift)
Median Time to Grant: 4y 1m
PTA Risk: High
Based on 702 resolved cases by this examiner. Grant probability derived from career allow rate.
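As a quick arithmetic check, the with-interview figure is the career allow rate plus the additive interview lift (a sketch; the dashboard's exact rounding rules are an assumption):

    base = 229 / 702             # career allow rate -> 0.326, displayed as 33%
    lift = 0.376                 # interview lift: +37.6 percentage points
    print(f"{base:.1%}")         # 32.6%
    print(f"{base + lift:.1%}")  # 70.2%, displayed as 70%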
